This is achieved by having multiple lines per storage file, one for each user (if the feature is enabled); each of these
lines has the same format as in the userless case, except that it is prefixed with the
SHA-512 of the user's id.
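For illustration, the per-user prefix could be derived like this (a hedged sketch; the exact
storage-file layout is defined by the implementation, and the user id below is hypothetical):

  # Hypothetical user id; the real value comes from the implementation.
  user_id="alice"
  # SHA-512 of the user's id, used as the prefix of that user's line.
  printf '%s' "$user_id" | openssl dgst -sha512 -r | cut -d' ' -f1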
This implementation follows the specification 'YubiKey Integration for Full Disk Encryption Pre-Boot Authentication' (Copyright Yubico, 2011, Version: 1.1).
Used binaries:
* uuidgen - for generation of random sequence numbers
* ykchalresp - for challenging a Yubikey
* ykinfo - to check if a Yubikey is plugged in at boot (fallback to passphrase authentication otherwise)
* openssl - for calculation of SHA-1, HMAC-SHA-1, as well as AES-256-CTR (de/en)cryption
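A rough sketch of how these fit together at boot (hedged; the challenge-response slot and the
variables are assumptions, not taken from the actual implementation):

  # Fall back to passphrase authentication when no Yubikey is plugged in.
  if ykinfo -v > /dev/null 2>&1; then
    # $challenge would be read from the storage file (hypothetical variable);
    # using slot 2 for challenge-response is an assumption.
    response="$(ykchalresp -2 "$challenge")"
  else
    echo "No Yubikey found, falling back to passphrase" >&2
  fi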
Main differences to the specification mentioned above:
* No user management (yet), only one password+yubikey per LUKS device
* SHA-512 instead of CRC-16 for checksum
Main differences to the previous implementation:
* Instead of changing the key slot of the LUKS device on each boot,
the key for the LUKS device is now itself stored in encrypted form
* Since the response for the new challenge is now calculated
locally with openssl, the USB MITM attack by which an attacker
could previously obtain the new response (used as the new
encryption key for the LUKS device) by listening to the
Yubikey should now be useless (as long as uuidgen can
successfully generate new random sequence numbers).
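A hedged sketch of the 'calculated locally' step (variable and file names are assumptions; only
the tools correspond to the list above):

  # New random sequence number / challenge for the next boot.
  new_challenge="$(uuidgen)"
  # Compute the expected HMAC-SHA-1 response locally, without asking the Yubikey
  # ($hmac_key is the Yubikey's configured secret; hypothetical variable).
  new_response="$(printf '%s' "$new_challenge" \
    | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$hmac_key" -r | cut -d' ' -f1)"
  # Re-encrypt the LUKS key with the new response via AES-256-CTR.
  openssl enc -aes-256-ctr -pass "pass:$new_response" -in luks.key -out luks.key.enc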
Remarks:
* This is not backwards compatible with the previous implementation
This will allow overriding package-provided units, or overriding only a
specific instance of a unit template.
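For example, overriding just one instance of a template could look like this (a hypothetical
sketch; the systemd.units option name and its attributes are assumed, not taken from this change):

  systemd.units."getty@tty1.service".text = ''
    [Unit]
    Description=Override for a single instance of the getty@.service template
  '';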
Signed-off-by: Shea Levy <shea@shealevy.com>
This required some changes to systemd unit handling:
* Add an option to specify that a unit is just a symlink
* Allow units specified in the NixOS configuration to override systemd-provided ones
* Have gettys.target require autovt@1.service instead of getty@1.service
Signed-off-by: Shea Levy <shea@shealevy.com>
You can now say:
  systemd.containers.foo.config =
    { services.openssh.enable = true;
      services.openssh.ports = [ 2022 ];
      users.extraUsers.root.openssh.authorizedKeys.keys = [ "ssh-dss ..." ];
    };
which defines a NixOS instance with the given configuration running
inside a lightweight container.
You can also manage the configuration of the container independently
from the host:
  systemd.containers.foo.path = "/nix/var/nix/profiles/containers/foo";
where "path" is a NixOS system profile. It can be created/updated by
doing:
  $ nix-env --set -p /nix/var/nix/profiles/containers/foo \
      -f '<nixos>' -A system -I nixos-config=foo.nix
The container configuration (foo.nix) should define
  boot.isContainer = true;
to optimise away the building of a kernel and initrd. This is done
automatically when using the "config" route.
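A minimal foo.nix for the command above might look like this (a sketch; the openssh line is just
carried over from the earlier example):

  # foo.nix
  { config, pkgs, ... }:
  { boot.isContainer = true;
    services.openssh.enable = true;
  }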
On the host, a lightweight container appears as the service
"container-<name>.service". The container is like a regular NixOS
(virtual) machine, except that it doesn't have its own kernel. It has
its own root file system (by default /var/lib/containers/<name>), but
shares the Nix store of the host (as a read-only bind mount). It also
has access to the network devices of the host.
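For example, with the container "foo" defined above, the host-side unit can be inspected with the
usual systemd tooling (just an illustration of the naming scheme):

  $ systemctl status container-foo.service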
Currently, if the configuration of the container changes, running
"nixos-rebuild switch" on the host will cause the container to be
rebooted. In the future we may want to send some message to the
container so that it can activate the new container configuration
without rebooting.
Containers are not perfectly isolated yet. In particular, the host's
/sys/fs/cgroup is mounted (writable!) in the guest.