Thus
networking.interfaces = [ { name = "eth0"; ipAddress = "192.168.15.1"; } ];
can now be written as
networking.interfaces.eth0.ipAddress = "192.168.15.1";
The old notation still works though.
since the latter is rather deprecated and has been unmaintained
since 2001. Note that "ip" doesn't know about classful addressing,
so you can no longer get away with not specifying the subnet mask
for explicitly configured interfaces. So if you had
  networking.interfaces =
    [ { name = "eth0"; ipAddress = "192.168.1.1"; } ];

this should be changed to

  networking.interfaces =
    [ { name = "eth0";
        ipAddress = "192.168.1.1";
        subnetMask = "255.255.255.0";
      }
    ];
otherwise you end up with a subnet mask of 255.255.255.255.
svn path=/nixos/trunk/; revision=26279
to run the VMs of a test. Instead, you can do
$ nix-build tests -A foo.driver
$ ./result/bin/nixos-run-vms
This uses the test driver infrastructure, which is necessary in
order to set up the VDE switches.
svn path=/nixos/trunk/; revision=25586
It turns out that all network interfaces in all VMs had the same
Ethernet address (52:54:00:12:34:56) because we didn't specify any
with the macaddr=... option. This can obviously lead to great
confusion. For instance, when a router forwards a packet, it can
actually end up sending the packet to itself because the target
machine has the same Ethernet address (causing a loop until the TTL
expires), while the target *also* receives the packet. It's amazing
anything worked at all, really.
So now we just set the Ethernet addresses to 52:54:00:12:<virtual
network number>:<machine number>.
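Under that scheme, for example, the interface of machine 2 on virtual
network 1 gets the address 52:54:00:12:01:02, so interfaces on the same
virtual network can no longer collide.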
svn path=/nixos/trunk/; revision=25020
- Added a backdoor option to the interactive run-vms script. This allows me to integrate the virtual network approach with Disnix.
- Small documentation fixes
Some explanation:
The nixos-build-vms command-line tool can be used to build a virtual network from a network.nix specification.
For example, a network configuration (network.nix) could look like this:
  {
    test1 =
      {pkgs, config, ...}:
      {
        services.openssh.enable = true;
        ...
      };

    test2 =
      {pkgs, config, ...}:
      {
        services.openssh.enable = true;
        services.xserver.enable = true;
      };
  }
By typing the following instruction:

  $ nixos-build-vms -n network.nix

a virtual network is built, which can then be started by typing:

  $ ./result/bin/run-vms
It is also possible to enable a backdoor. In that case, *.socket files are stored in the current
directory, which the end-user can use to remotely run commands on a VM in the network through a
Unix domain socket.
For example, after building the network with the following instruction:

  $ nixos-build-vms -n network.nix --use-backdoor

and launching the virtual network:

  $ ./result/bin/run-vms

you will find two socket files in your current directory, namely test1.socket and test2.socket.
These Unix domain sockets can be used to remotely administer the test1 and test2 machines
in the virtual network.
For example, by running:

  $ socat ./test1.socket stdio
  ls /root

you can retrieve the contents of the /root directory of the virtual machine with identifier test1.
svn path=/nixos/trunk/; revision=24410
function argument, so that the test script can refer to computed
values such as the assigned IP addresses of the virtual machines.
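A sketch of what this enables (the attribute names and driver calls here
are illustrative, not taken from this commit):

  testScript = { nodes, ... }:
    ''
      # Refer to the address assigned to the "server" node; the exact
      # option path is an assumption.
      $client->mustSucceed("ping -c 1 ${nodes.server.config.networking.primaryIPAddress} >&2");
    '';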
svn path=/nixos/trunk/; revision=21939
machine can now declare an option `virtualisation.vlans' that causes
it to have network interfaces connected to each listed virtual
network. For instance,
virtualisation.vlans = [ 1 2 ];
causes the machine to have two interfaces (in addition to eth0, used
by the test driver to control the machine): eth1 connected to
network 1 with IP address 192.168.1.<i>, and eth2 connected to
network 2 with address 192.168.2.<i> (where <i> is the index of the
machine in the `nodes' attribute set). On the other hand,
virtualisation.vlans = [ 2 ];
causes the machine to only have an eth1 connected to network 2 with
address 192.168.2.<i>. So each virtual network <n> is assigned the
IP range 192.168.<n>.0/24.
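For instance (a sketch, not taken from the commit), a two-machine test
might declare:

  nodes =
    { router = { config, pkgs, ... }:
        { virtualisation.vlans = [ 1 2 ]; };  # eth1 on network 1, eth2 on network 2
      client = { config, pkgs, ... }:
        { virtualisation.vlans = [ 1 ]; };    # eth1 on network 1 only
    };

so the router gets addresses in both 192.168.1.0/24 and 192.168.2.0/24,
while the client only gets one in 192.168.1.0/24.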
Each virtual network is implemented using a separate multicast
address on the host, so guests really cannot talk to networks to
which they are not connected.
* Added a simple NAT test to demonstrate this.
* Added an option `virtualisation.qemu.options' to specify QEMU
command-line options. Used to factor out some commonality between
the test driver script and the interactive test script.
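For instance, something along these lines (a sketch; whether the option
takes a list of flags or a single string is an assumption, and the flag
itself is just an example):

  virtualisation.qemu.options = [ "-vga std" ];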
svn path=/nixos/trunk/; revision=21928
console. This uses the `sendkey' command in the QEMU monitor.
* For the block/unblock primitives, use the `set_link' command in the
QEMU monitor.
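For instance, typing `sendkey ctrl-alt-delete' at the monitor prompt
injects that key combination into the guest console, and `set_link
<device> off' / `set_link <device> on' toggles the virtual cable of the
given network device (the device name shown here is a placeholder).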
svn path=/nixos/trunk/; revision=19854
* Factored out some commonality between tests to make them a bit
simpler to write. A test is a function { pkgs, ... }: -> { nodes,
testScript } or { machine, testScript }. So it's no longer
necessary to have a "vms" attribute in every test.
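For instance, a single-machine test can now be as small as this (a
sketch; the service and the driver primitive shown are only
placeholders):

  { pkgs, ... }:

  {
    machine = { config, pkgs, ... }:
      { services.openssh.enable = true; };

    testScript = ''
      $machine->mustSucceed("echo hello >&2");
    '';
  }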
svn path=/nixos/trunk/; revision=19220
feature is hard to maintain and because it is a potential source of error.
Imports are only accepted inside named modules, where the system has some
control over mutual inclusion.
svn path=/nixos/trunk/; revision=17143
lib/build-vms.nix contains a function `buildVirtualNetwork' that
takes a specification of a network of machines (as an attribute set
of NixOS machine configurations) and builds a script that starts
each configuration in a separate QEMU/KVM VM and connects them
together in a virtual network. This script can be run manually to
test the VMs interactively. There is also a function `runTests'
that starts and runs the virtual network in a derivation, and
then executes a test specification that tells the VMs to do certain
things (e.g., letting one VM send an HTTP request to a webserver on
another VM). The tests are written in Perl (for now).
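Such a network specification is just an attribute set mapping machine
names to NixOS configuration modules, roughly (a sketch, not taken from
the actual code):

  {
    webserver = { config, pkgs, ... }:
      { services.httpd.enable = true; };

    client = { config, pkgs, ... }:
      { };
  }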
tests/subversion.nix shows a simple example, namely a network of two
machines: a webserver that runs the Subversion subservice, and a
client. Apache, Subversion and a few other packages are built with
coverage analysis instrumentation. For instance,
$ nix-build tests/subversion.nix -A vms
$ ./result/bin/run-vms
starts two QEMU/KVM instances. When they have finished booting, the
webserver can be accessed from the host through
http://localhost:8081/.
It also has a small test suite:
$ nix-build tests/subversion.nix -A report
This runs the VMs in a derivation, runs the tests, and then produces
a distributed code coverage analysis report (i.e. it shows the
combined coverage on both machines).
The Perl test driver program is in lib/test-driver. It executes
commands on the guest machines by connecting to a root shell running
on port 514 (provided by modules/testing/test-instrumentation.nix).
The VMs are connected together in a virtual network using QEMU's
multicast feature. This isn't very secure. At the very least,
other processes on the same machine can listen to or send packets on
the virtual network. On the plus side, we don't need to be root to
set up a multicast virtual network, so we can do it from a
derivation. Maybe we can use VDE instead.
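(Concretely, each VM joins its virtual networks with something along the
lines of QEMU's `-net socket,mcast=<address>:<port>' option, one
multicast address/port pair per virtual network, so only VMs given the
same pair see each other's traffic. The exact flags are an illustration,
not a quote from the code.)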
(Moved from the vario repository.)
svn path=/nixos/trunk/; revision=16899