Extend the buildMachines option to support specification of
supportedFeatures and mandatoryFeatures in order to support all
configuration options of the nix.machines file.
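A build machine entry could then look roughly like this (the attributes
surrounding the two new ones are just a typical nix.buildMachines entry,
not taken verbatim from the change):
nix.buildMachines = [
  {
    hostName = "builder.example.org";
    system = "x86_64-linux";
    maxJobs = 4;
    # new: corresponds to the feature columns of the nix.machines file
    supportedFeatures = [ "kvm" "big-parallel" ];
    mandatoryFeatures = [ ];
  }
];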
When apcupsd has initiated a shutdown, systemd always ends up waiting
for it to stop ("A stop job is running for UPS daemon"). This is weird,
because in the journal one can clearly see that apcupsd has received the
SIGTERM signal and has already quit (or so it seems). This reduces the
wait time from 90 seconds (default) to just 5. Then systemd kills it
with SIGKILL.
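In NixOS terms this boils down to something like the following sketch
(whether the module uses TimeoutSec or the stop-specific TimeoutStopSec
is an implementation detail):
systemd.services.apcupsd.serviceConfig = {
  # don't wait the default 90 s for a daemon that has apparently
  # already exited; give up (and SIGKILL) after 5 s
  TimeoutStopSec = 5;
};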
This adds a special systemd service that calls "apcupsd --killpower"
(put UPS in hibernate mode) just before shutting down the system.
Without this command, the UPS will stay on until the battery is
completely empty.
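Sketched as a NixOS unit, it looks roughly like this (unit name,
ordering and the exact invocation are illustrative, not copied from the
module):
systemd.services.apcupsd-killpower = {
  description = "Put UPS into hibernate mode";
  # run very late in the shutdown sequence
  wantedBy = [ "shutdown.target" ];
  after = [ "shutdown.target" ];
  before = [ "final.target" ];
  unitConfig.DefaultDependencies = "no";
  serviceConfig = {
    Type = "oneshot";
    ExecStart = "${pkgs.apcupsd}/bin/apcupsd --killpower";
  };
};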
Each attribute in this option should name an apcupsd event, and the
string value it contains will be executed in a shell in response to
that event. See "man apccontrol" for the list of events and what they
represent.
Now it is easy to hook into the apcupsd event system:
services.apcupsd.hooks = {
  onbattery = ''# shell commands to run when the onbattery event is emitted'';
  doshutdown = ''# shell commands to notify that the computer is shutting down'';
};
This option allows administrators to add verbatim text to the generated
config file. I use this feature, for instance, to disable the default
route normally added by dhcpcd for certain interfaces.
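Assuming the option ends up as networking.dhcpcd.extraConfig, that use
case could look like this (the interface block and nogateway are plain
dhcpcd.conf syntax):
networking.dhcpcd.extraConfig = ''
  # don't install a default route learned on this interface
  interface eth1
  nogateway
'';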
apcupsd is a daemon for controlling APC UPSes. It is very simple to
configure. If you have a USB-based UPS, the default settings should be
usable without further adjustments:
services.apcupsd.enable = true;
This will give you autodetection of USB UPSes, network access limited
to localhost (for security), and a shutdown sequence that starts when
the battery level drops below 50 percent or when the UPS has calculated
that it has 5 minutes or less of remaining power-on time.
You can provide your own configuration file contents with this option:
services.apcupsd.configText = "contents of apcupsd.conf";
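As a rough idea of what such contents could look like, here is a config
matching the defaults described above (values are illustrative; see
"man apcupsd.conf"):
services.apcupsd.configText = ''
  UPSCABLE usb
  UPSTYPE usb
  DEVICE
  BATTERYLEVEL 50
  MINUTES 5
'';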
Bug/annoyance 1: When apcupsd calls "wall" (on powerfail etc. events),
it prints an error message because stdout is not connected to a tty (it
is connected to the journal):
wall: cannot get tty name: Inappropriate ioctl for device
The message still gets through though, to ctrl-alt-f[1-6] terminals.
Bug/annoyance 2: apcupsd tries to call "mail" (on powerfail etc.
events), and that fails because I'm not passing in any mail program at
the moment (because that would require more configuration options). A
solution to this would be to simply let the user fully configure the
apcupsd event handling logic in nix.
The README of nfs-utils explains that we must not notify clients
before nfsd is running, otherwise they may fail to reclaim their
locks. OTOH it's allowed but not required to start "rpc.statd
--no-notify" before nfsd. So for simplicity we do both after starting
nfsd.
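In systemd terms the intended ordering is roughly this (unit names are
illustrative and the rest of the unit definitions is left out):
# sm-notify must only run once nfsd is up, otherwise clients may fail
# to reclaim their locks; statd may start earlier, but doesn't have to
systemd.services.statd.after = [ "nfsd.service" ];
systemd.services.sm-notify.after = [ "nfsd.service" "statd.service" ];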
Just like in the MySQL service module, it really makes sense to provide
a way to inject SQL on the first start of the database cluster.
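For example, assuming an initialScript option analogous to the MySQL
one (the option name and the PostgreSQL target are assumptions, not
quoted from the change):
services.postgresql.initialScript = pkgs.writeText "init.sql" ''
  CREATE ROLE myapp LOGIN PASSWORD 'secret';
  CREATE DATABASE myapp OWNER myapp;
'';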
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This should integrate the logging more tightly into systemd, so for
example "systemctl status mysql" actually gives an overview about what's
actually going on.
This removes the logError option attribute. In case you still want to
write to a logfile, I've introduced an option called extraOptions, which
you can use like this:
services.mysql.extraOptions = ''
  log-error = /var/log/mysql_err.log
'';
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
lighttpd doesn't support loading a module more than once. If you attempt
to load a module again, lighttpd prints an error message:
(plugin.c.131) Cannot load plugin mod_cgi more than once, please fix your config (we may not accept such configs in future releases
And it's not just the error message. The module isn't loaded (or is
messed up somehow) so that neither sub-service will work properly after
this.
This is bad news for the current approach to sub-services, where each
sub-service lists the needed modules in a server.modules += (...) block.
When two sub-services need the same module we get the above issue. (And,
AFAIK, there is no way to check if a module is already loaded either.)
First I thought about an approach where each sub-service specifies the
list of plugins it needs, and that a common server.modules = (...) list
is built from the union of those lists. That would loosely couple the
sub-services to the main lighttpd NixOS module expression. But I think
this is a bad idea because lighttpd module loading order matters[1], and
the module order in the global server.modules = (...) list would be
somewhat cumbersome to control.
Here is an example:
Sub-service A needs mod_fastcgi. Sub-service B needs mod_auth and
mod_fastcgi. Note that mod_auth must be loaded *before* mod_fastcgi to
take effect. The union of those modules may either be ["mod_auth"
"mod_fastcgi"] or ["mod_fastcgi" "mod_auth"] depending on the evaluation
order. The first order will work, the latter will not.
So instead of the above, this commit moves the modules from
server.modules += (...) snippets in each sub-service to a global
server.modules = (...) list in the main lighttpd module expression. The
module loading order is fixed and each module is included only if any of
the sub-services that needs it is enabled.
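A sketch of the resulting structure, using the example above
(sub-service names, option paths and the config builder are made up;
only the shape matters):
configText = let
  # fixed load order: mod_auth deliberately comes before mod_fastcgi
  modules =
    lib.optional cfg.subserviceB.enable "mod_auth"
    ++ lib.optional (cfg.subserviceA.enable || cfg.subserviceB.enable) "mod_fastcgi";
in ''
  server.modules = (
    ${lib.concatMapStringsSep ",\n  " (m: ''"${m}"'') modules}
  )
'';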
The downside to this approach is that adding a sub-service now requires
a (tiny) change to the main lighttpd NixOS module expression. But I
think it is the only sane way to do it (as long as lighttpd is written
the way it is).
References:
[1] http://redmine.lighttpd.net/projects/1/wiki/Server_modulesDetails
[2] http://redmine.lighttpd.net/issues/2337
This is because it's quite commonly used in the wild, especially by
some "weird" server hosting providers (no names here) that don't allow
changing the baud rate of their serial consoles.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>