If you want minidlna to accept connections from the rest of the world, please
add
networking.firewall.allowedTCPPorts = [ 8200 ];
networking.firewall.allowedUDPPorts = [ 1900 ];
to /etc/nixos/configuration.nix.
See <http://lists.science.uu.nl/pipermail/nix-dev/2013-November/011997.html>
for the discussion that led to this.
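For example, a complete (hypothetical) configuration.nix fragment that
enables the service and opens both ports could look like this:

{ config, pkgs, ... }:

{
  services.minidlna.enable = true;

  # 8200/TCP is minidlna's HTTP/streaming port; 1900/UDP is SSDP discovery.
  networking.firewall.allowedTCPPorts = [ 8200 ];
  networking.firewall.allowedUDPPorts = [ 1900 ];
}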
If you want x11vnc to receive TCP connections from the rest of the world,
please add
networking.firewall.allowedTCPPorts = [ 5900 ];
to /etc/nixos/configuration.nix.
See <http://lists.science.uu.nl/pipermail/nix-dev/2013-November/011997.html>
for the discussion that led to this.
If you want CUPS to receive UDP printer announcements from the rest of the
world, please add
networking.firewall.allowedUDPPorts = [ 631 ];
to /etc/nixos/configuration.nix.
See <http://lists.science.uu.nl/pipermail/nix-dev/2013-November/011997.html>
for the discussion that led to this.
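Note that within a single configuration.nix each option can only be defined
once, so if you need several of the ports above, merge them into one list:

networking.firewall.allowedTCPPorts = [ 8200 5900 ];
networking.firewall.allowedUDPPorts = [ 1900 631 ];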
ntopng is a high-speed web-based traffic analysis and flow collection
tool. Enable it by adding this to configuration.nix:
services.ntopng.enable = true;
Open a browser at http://localhost:3000 and log in with the default
username/password: admin/admin.
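If the web interface should also be reachable from other machines, its port
has to be opened in the firewall as well; a sketch, assuming the default
port 3000:

services.ntopng.enable = true;
networking.firewall.allowedTCPPorts = [ 3000 ];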
Postgres was taking a long time to shut down. This is because we were
sending SIGINT to all processes, apparently confusing the autovacuum
launcher. Instead, the signal should be sent only to the main process
(which takes care of shutting down the others).
The downside is that systemd will also send the final SIGKILL only to
the main process, so other processes in the cgroup may be left behind.
There should be an option for this...
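In systemd terms this amounts to signalling only the main process. A minimal
sketch of the relevant unit settings, expressed as a NixOS override (assuming
the unit is named "postgresql"):

systemd.services.postgresql.serviceConfig = {
  # Deliver the shutdown signal only to the main process; it takes care
  # of stopping its children itself.
  KillMode = "process";
  KillSignal = "SIGINT";
};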
libvirtd puts the full path of the emulator binary in the machine config
file. But this path can unfortunately be garbage collected while still
being used by the virtual machine. Then this happens:
Error starting domain: Cannot check QEMU binary /nix/store/z5c2xzk9x0pj6x511w0w4gy9xl5wljxy-qemu-1.5.2-x86-only/bin/qemu-kvm: No such file or directory
The fix is to update the emulator path to something valid on each service
startup (by re-scanning $PATH).
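A hypothetical sketch of that approach (the file locations and the sed
pattern are assumptions for illustration):

systemd.services.libvirtd.preStart = ''
  # Point every stored domain definition at the qemu-kvm currently on
  # $PATH instead of a possibly garbage-collected store path.
  for f in /var/lib/libvirt/qemu/*.xml; do
    sed -i "s|<emulator>.*</emulator>|<emulator>$(type -P qemu-kvm)</emulator>|" "$f"
  done
'';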
You can now say:
systemd.containers.foo.config =
  { services.openssh.enable = true;
    services.openssh.ports = [ 2022 ];
    users.extraUsers.root.openssh.authorizedKeys.keys = [ "ssh-dss ..." ];
  };
which defines a NixOS instance with the given configuration running
inside a lightweight container.
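After the next "nixos-rebuild switch" the container will be running; since it
shares the host's network devices (see below), the sshd defined above should
then be reachable with something like:

$ ssh -p 2022 root@localhost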
You can also manage the configuration of the container independently
from the host:
systemd.containers.foo.path = "/nix/var/nix/profiles/containers/foo";
where "path" is a NixOS system profile. It can be created/updated by
doing:
$ nix-env --set -p /nix/var/nix/profiles/containers/foo \
    -f '<nixos>' -A system -I nixos-config=foo.nix
The container configuration (foo.nix) should define
boot.isContainer = true;
to optimise away the building of a kernel and initrd. This is done
automatically when using the "config" route.
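A minimal foo.nix might thus look like this (contents assumed purely for
illustration):

{ config, pkgs, ... }:

{
  boot.isContainer = true;
  services.openssh.enable = true;
}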
On the host, a lightweight container appears as the service
"container-<name>.service". The container is like a regular NixOS
(virtual) machine, except that it doesn't have its own kernel. It has
its own root file system (by default /var/lib/containers/<name>), but
shares the Nix store of the host (as a read-only bind mount). It also
has access to the network devices of the host.
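For example, assuming the container "foo" from above, its service can be
inspected and controlled like any other unit on the host:

$ systemctl status container-foo.service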
Currently, if the configuration of the container changes, running
"nixos-rebuild switch" on the host will cause the container to be
rebooted. In the future we may want to send some message to the
container so that it can activate the new container configuration
without rebooting.
Containers are not perfectly isolated yet. In particular, the host's
/sys/fs/cgroup is mounted (writable!) in the guest.