which NixOS should be built. This is useful in NixOS network
specifications, because it allows machines in the network to have
different system types, e.g.,
{
  machine1 =
    { config, pkgs, ... }:
    { nixpkgs.system = "i686-linux";
      # ... other config ...
    };

  machine2 =
    { config, pkgs, ... }:
    { nixpkgs.system = "x86_64-linux";
      # ... other config ...
    };
}
It can also be useful in distributed NixOS tests.
svn path=/nixos/trunk/; revision=24823
problem is that configuration values below a mkIf are evaluated
strictly even if the condition is false. Thus "${luksRoot}" causes
an evaluation error. As a workaround, use the empty string instead
of `null' as the default value. However, we should really fix the
laziness of mkIf. It's likely that NixOS evaluation would be much
faster if it didn't have to evaluate disabled configuration values.
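To illustrate the failure mode, here is a hypothetical sketch (the option names are invented for the example, not actual NixOS options):

# Sketch only: boot.luksRoot is a hypothetical option.  If its default
# is `null', the interpolation below is evaluated even when the mkIf
# condition is false, and interpolating null is an evaluation error.
# Defaulting boot.luksRoot to "" sidesteps the problem.
boot.initrd.postDeviceCommands = mkIf (config.boot.luksRoot != "") ''
  cryptsetup luksOpen ${config.boot.luksRoot} root
'';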
svn path=/nixos/trunk/; revision=24477
- Added a backdoor option to the interactive run-vms script. This allows me to integrate the virtual network approach with Disnix
- Small documentation fixes
Some explanation:
The nixos-build-vms command-line tool builds a virtual network from a network.nix specification.
For example, a network configuration (network.nix) could look like this:
{
  test1 =
    { pkgs, config, ... }:
    {
      services.openssh.enable = true;
      # ...
    };

  test2 =
    { pkgs, config, ... }:
    {
      services.openssh.enable = true;
      services.xserver.enable = true;
    };
}
By typing the following command:
$ nixos-build-vms -n network.nix
a virtual network is built, which can be started by typing:
$ ./result/bin/run-vms
It is also possible to enable a backdoor. In this case, *.socket files are stored in the current directory,
which the end-user can use to invoke instructions remotely on a VM in the network through a Unix
domain socket.
For example, build the network with the following command:
$ nixos-build-vms -n network.nix --use-backdoor
and launch the virtual network:
$ ./result/bin/run-vms
You will then find two socket files in your current directory: test1.socket and test2.socket.
These Unix domain sockets can be used to remotely administer the test1 and test2 machines
in the virtual network.
For example, by running:
$ socat ./test1.socket stdio
ls /root
you can retrieve the contents of the /root directory of the virtual machine with identifier test1.
svn path=/nixos/trunk/; revision=24410
default.
It does not look very modular, and the manual may not look very good, but I think it
works better than before. And setting cron.enable = false and fcron.enable = true works fine.
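For instance, a configuration along these lines should now evaluate as expected (a minimal sketch; the exact option paths are assumed to be services.cron.enable and services.fcron.enable):

{ config, pkgs, ... }:

{
  # Disable the default cron daemon and use fcron instead.
  services.cron.enable = false;
  services.fcron.enable = true;
}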
svn path=/nixos/trunk/; revision=24199
{
  test1 =
    { pkgs, config, ... }:
    {
      # NixOS config of machine test1
      # ...
    };

  test2 =
    { pkgs, config, ... }:
    {
      # NixOS config of machine test2
      # ...
    };
}
And an infrastructure expression, e.g.:
{
  test1 = {
    hostName = "test1.example.org";
    system = "i686-linux";
  };

  test2 = {
    hostName = "test2.example.org";
    system = "x86_64-linux";
  };
}
By executing:
$ nixos-deploy-network -n network.nix -i infrastructure.nix
the system configurations in the network expression are built, transferred to the machines in the network, and finally activated.
svn path=/nixos/trunk/; revision=24146
in /etc/xen/auto at boot time, to save all running domains during
shutdown, and to restore all saved domains at boot time.
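Roughly, the save/restore behaviour corresponds to this shell sketch (not the actual NixOS job; the saved-state directory and the xm output parsing are assumptions):

# On shutdown: save every running domain except Domain-0.
mkdir -p /var/lib/xen/saved
xm list | tail -n +2 | awk '$1 != "Domain-0" { print $1 }' |
while read dom; do
  xm save "$dom" "/var/lib/xen/saved/$dom"
done

# On boot: restore previously saved domains, then start the domains
# whose configuration files are symlinked in /etc/xen/auto.
for f in /var/lib/xen/saved/*; do
  [ -e "$f" ] && xm restore "$f" && rm "$f"
done
for cfg in /etc/xen/auto/*; do
  [ -e "$cfg" ] && xm create "$cfg"
done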
svn path=/nixos/trunk/; revision=24121