Ensure permission bits are (re)set on each system activation with an
explicit chmod call.
mkdir -m MODE PATH only sets the permission bits if PATH is *created*,
which means users coming from old NixOS versions will keep the old 700
permissions on /var/log/journal until they chmod it manually. With this
commit the permissions are set to 755 on every system activation.
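For illustration, a minimal sketch of such an activation snippet
(attribute name and script text assumed, not the literal commit):
system.activationScripts.journalPermissions = ''
  # (Re)apply the intended mode on every activation, not only on creation.
  mkdir -p /var/log/journal
  chmod 0755 /var/log/journal
'';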
When apcupsd has initiated a shutdown, systemd always ends up waiting
for it to stop ("A stop job is running for UPS daemon"). This is weird,
because in the journal one can clearly see that apcupsd has received the
SIGTERM signal and has already quit (or so it seems). This reduces the
wait time from the default of 90 seconds to just 5, after which systemd
kills it with SIGKILL.
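In NixOS terms this amounts to something like the following sketch
(assuming the unit is named apcupsd):
systemd.services.apcupsd.serviceConfig.TimeoutStopSec = 5;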
This adds a special systemd service that calls "apcupsd --killpower"
(put UPS in hibernate mode) just before shutting down the system.
Without this command, the UPS will stay on until the battery is
completely empty.
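A rough sketch of what such a unit can look like (unit name and exact
dependencies are assumptions, not the literal commit):
systemd.services.apcupsd-killpower = {
  description = "Put UPS into hibernate mode just before shutdown";
  wantedBy = [ "shutdown.target" ];
  after = [ "shutdown.target" ];
  before = [ "final.target" ];
  unitConfig.DefaultDependencies = "no";
  serviceConfig = {
    Type = "oneshot";
    ExecStart = "${pkgs.apcupsd}/bin/apcupsd --killpower";
  };
};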
Each attribute in this option should name an apcupsd event and the
string value it contains will be executed in a shell in response to that
event. See "man apccontrol" for the list of events and what they
represent.
Now it is easy to hook into the apcupsd event system:
services.apcupsd.hooks = {
  onbattery = ''# shell commands to run when the onbattery event is emitted'';
  doshutdown = ''# shell commands to notify that the computer is shutting down'';
};
This option allows administrators to add verbatim text to the generated
config file. I use this feature, for instance, to disable the default
route normally added by dhcpcd for certain interfaces.
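For example (a sketch, assuming the option is
networking.dhcpcd.extraConfig; "nogateway" is the dhcpcd.conf directive
that suppresses the default route):
networking.dhcpcd.extraConfig = ''
  # Don't install a default route for this interface.
  interface eth1
  nogateway
'';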
This makes the system journal readable by users in the
systemd-journal, wheel and adm groups. It also allows users to read
their own journals.
Note that this doesn't change the permissions of existing journals.
apcupsd is a daemon for controlling APC UPSes. It is very simple to
configure. If you have a USB-based UPS, the default settings should be
usable without further adjustments:
services.apcupsd.enable = true;
This will give you autodetection of USB UPSes, network access limited to
localhost (for security), and a shutdown sequence that is started when
the battery level drops below 50 percent, or when the UPS has calculated
that it has 5 minutes or less of remaining power-on time.
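These defaults correspond roughly to the following apcupsd.conf
directives (a sketch; the actually generated file may differ):
UPSCABLE usb
UPSTYPE usb
NISIP 127.0.0.1
BATTERYLEVEL 50
MINUTES 5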
You can provide your own configuration file contents with this option:
services.apcupsd.configText = "contents of apcupsd.conf";
Bug/annoyance 1: When apcupsd calls "wall" (on powerfail etc. events),
it prints an error message because stdout is not connected to a tty (it
is connected to the journal):
wall: cannot get tty name: Inappropriate ioctl for device
The message still gets through though, to ctrl-alt-f[1-6] terminals.
Bug/annoyance 2: apcupsd tries to call "mail" (on powerfail etc.
events), and that fails because I'm not passing in any mail program at
the moment (because that would require more configuration options). A
solution to this would be to simply let the user fully configure the
apcupsd event handling logic in Nix.
This is in preparation of making a stable release/branch. The version
number is <YY>.<MM>, Ubuntu style, denoting the intended release
year/month. It also has a release codename ("Aardvark").
The README of nfs-utils explains that we must not notify clients
before nfsd is running, otherwise they may fail to reclaim their
locks. OTOH it's allowed but not required to start "rpc.statd
--no-notify" before nfsd. So for simplicity we do both after starting
nfsd.
Turns out that remote-fs-pre.target is not actually "wanted" anywhere,
so statd is not started before remote filesystems are mounted. But
remote filesystems do "want" network-online.target, so we can use that
to pull in statd and idmapd.
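Expressed as a NixOS sketch (unit and target names assumed):
systemd.targets.network-online.wants = [ "statd.service" "idmapd.service" ];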
Not sure if this is really the right thing to do, but it works for
now. Background:
https://bugzilla.redhat.com/show_bug.cgi?id=787314
http://hydra.nixos.org/build/5542230
When nixos-rebuild grabs a new kernel, it will build new spl/zfs
modules, which will change the service. On completion, NixOS will try to
restart the services, which will try to import the pools again, and
generally fail.
The pools are already imported; we don't need to do it again.
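One way to express this (a sketch; the service name is assumed) is to
tell NixOS not to restart the import service when it changes:
systemd.services.zfs-import.restartIfChanged = false;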
Just like in the MySQL service module it really makes sense to provide a
way to inject SQL on the first start of the database cluster.
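A minimal sketch of how this could look, assuming the target module is
PostgreSQL and the option mirrors MySQL's initialScript (names assumed):
services.postgresql.initialScript = pkgs.writeText "init.sql" ''
  CREATE ROLE myapp LOGIN;
  CREATE DATABASE myapp OWNER myapp;
'';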
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This should integrate the logging more tightly into systemd, so that for
example "systemctl status mysql" gives an overview of what's actually
going on.
This removes the logError option attribute, so in case you still want to
write into a logfile, I've introduced an option called extraOptions, so
you can use something like:
services.mysql.extraOptions = ''
  log-error = /var/log/mysql_err.log
'';
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
GRUB uses mdadm to find out the device it is on, especially when mdadm
itself resides in a separate boot partition. When bootstrapping from a
NixOS installation CD, this isn't a big issue, because the paths from
the Nix store of the installation CD usually match the ones in the
chrooted environment.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This allows adding additional raw disk images to the VM, which are then
available as /dev/vdb, /dev/vdc, /dev/vde and so on. This can be
especially useful when testing partitioning.
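For example (a sketch; assuming the option is named
virtualisation.emptyDiskImages and takes sizes in MiB):
virtualisation.emptyDiskImages = [ 512 512 ];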
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This works around a bug in infinality that causes broken rendering in
some cases. Issue NixOS/nixpkgs#663.
Upstream suggests that "slight" is a better/safer default in any case.
It also looks better, IMHO, YMMV.
lighttpd doesn't support loading a module more than once. If you attempt
to load a module again, lighttpd prints an error message:
(plugin.c.131) Cannot load plugin mod_cgi more than once, please fix your config (we may not accept such configs in future releases)
And it's not just the error message. The module isn't loaded (or is
messed up somehow) so that neither sub-service will work properly after
this.
This is bad news for the current approach to sub-services, where each
sub-service lists the needed modules in a server.modules += (...) block.
When two sub-services need the same module we get the above issue. (And,
AFAIK, there is no way to check if a module is already loaded either.)
First I thought about an approach where each sub-service specifies the
list of plugins it needs, and that a common server.modules = (...) list
is built from the union of those lists. That would loosely couple the
sub-services with the main lighttpd nixos module expression. But I think
this is a bad idea because lighttpd module loading order matters[1], and
the module order in the global server.modules = (...) list would be
somewhat cumbersome to control.
Here is an example:
Sub-service A needs mod_fastcgi. Sub-service B needs mod_auth and
mod_fastcgi. Note that mod_auth must be loaded *before* mod_fastcgi to
take effect. The union of those modules may either be ["mod_auth"
"mod_fastcgi"] or ["mod_fastcgi" "mod_auth"], depending on the
evaluation order. The first ordering will work; the second will not.
So instead of the above, this commit moves the modules from
server.modules += (...) snippets in each sub-service to a global
server.modules = (...) list in the main lighttpd module expression. The
module loading order is fixed, and each module is included only if one
of the sub-services that needs it is enabled.
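A sketch of the idea (all names assumed): keep one order-sensitive
master list in the main module and filter it by what the enabled
sub-services request, so the relative order can never flip:
let
  # Fixed, order-sensitive list of every module a sub-service may need.
  masterOrder = [ "mod_auth" "mod_fastcgi" "mod_cgi" ];
  # Union of the modules requested by the enabled sub-services.
  needed = [ "mod_fastcgi" "mod_auth" ];
  enabledModules = lib.filter (m: lib.elem m needed) masterOrder;
in ''
  server.modules = (
    ${lib.concatMapStringsSep ",\n    " (m: ''"${m}"'') enabledModules}
  )
''
Here enabledModules always comes out as ["mod_auth" "mod_fastcgi"], no
matter in which order the sub-services were evaluated.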
The downside to this approach is that sub-services need a (tiny) bit of
change to the main lighttpd nixos module expression. But I think it is
the only sane way to do it (as long as lighttpd is written the way it
is).
References:
[1] http://redmine.lighttpd.net/projects/1/wiki/Server_modulesDetails
[2] http://redmine.lighttpd.net/issues/2337
This is because it's quite commonly used in the wild, especially at some
"weird" server hosters (no names here) that don't allow changing the
baud rate of their serial consoles.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Starting with Zabbix 2.0 the order of data imports is important[*];
importing in the wrong order will lead to errors. Zabbix 1.8 works fine
with the swapped order as well, so this change shouldn't affect any
pre-2.0 users.
[*] https://www.zabbix.com/documentation/2.0/manual/appendix/install/db_scripts
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Quoting from the manual about DBHost:
```
In case of MySQL localhost or empty string results in using a socket. In case of
PostgreSQL only empty string results in attempt to use socket.
```
https://www.zabbix.com/documentation/2.0/manual/appendix/config/zabbix_server
With this commit we should avoid some race conditions in systemd,
because if the host is set to "", there is no constraint that PostgreSQL
has to be started prior to the Zabbix server.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
At least the Zabbix 2.x web installer requires max_input_time to be set to 300
seconds. As it doesn't hurt to set it for the 1.x versions, I'm including it
here.
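A sketch of how this can be set, assuming PHP runs under Apache and the
services.httpd.phpOptions option is used (an assumption, not the literal
commit):
services.httpd.phpOptions = ''
  max_input_time = 300
'';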
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
If the option is left at its default value, the behaviour is the same as
before: the configuration file created by the web interface is used.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is to avoid (in some cases) constant restarting of the Zabbix
server, which causes odd bugs and crashes in the exit handler (if it
happens too early during startup).
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
If we only need to generate a GRUB boot menu, we don't need GRUB
itself. This cuts 38 MiB from EC2 system closures (in particular
because it gets rid of the need for the 32-bit Glibc).
(cgit is "a hyperfast web frontend for git repositories written in C")
cgit is enabled like this (assuming lighttpd is already enabled):
services.lighttpd.cgit.enable = true;
and configured verbatim like this (contents of the cgitrc file):
services.lighttpd.cgit.configText = ''
  cache-size=1000
  scan-path=/srv/git
'';
cgit will be available from this URL: http://yourserver/cgit
In lighttpd, I've ensured that the cache dir for cgit is created if cgit
is enabled.
apparmor's systemd service wasn't working when multiple profiles were
defined, because the ExecStart commands in the service file were broken
into multiple lines instead of being separated by ';'.
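A sketch of the fixed form (package path and attribute names assumed):
join all parser invocations with ';' into a single ExecStart value:
serviceConfig.ExecStart = lib.concatMapStringsSep " ; "
  (profile: "${pkgs.apparmor}/sbin/apparmor_parser -r ${profile}")
  cfg.profiles;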