This has the advantage that it doesn't depend on networking being
up.
* Move common QEMU/KVM guest configuration to profiles/qemu-guest.nix.
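  A minimal sketch of a guest configuration pulling in the shared profile
  (the exact path under modules/profiles/ and the use of "require" here
  are assumptions, shown only for illustration):

    { config, pkgs, ... }:

    {
      # Common QEMU/KVM guest settings (e.g. virtio kernel modules) live
      # in the shared profile; the path below is illustrative.
      require = [ <nixos/modules/profiles/qemu-guest.nix> ];
    }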
svn path=/nixos/trunk/; revision=26421
since the latter is rather deprecated and has been unmaintained
since 2001. Note that "ip" doesn't know about classful addressing,
so you can no longer get away with not specifying the subnet mask
for explicitly configured interfaces. So if you had
    networking.interfaces =
      [ { name = "eth0"; ipAddress = "192.168.1.1"; } ];
this should be changed to
    networking.interfaces =
      [ { name = "eth0";
          ipAddress = "192.168.1.1";
          subnetMask = "255.255.255.0";
        }
      ];
otherwise you end up with a subnet mask of 255.255.255.255.
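  For comparison, the same difference shows up on the command line:
  ifconfig would infer a netmask from the address class, whereas ip
  needs an explicit prefix length (invocations shown for illustration
  only):

    $ ifconfig eth0 192.168.1.1            # netmask inferred from class (/24 here)
    $ ip addr add 192.168.1.1/24 dev eth0  # prefix length must be given explicitly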
svn path=/nixos/trunk/; revision=26279
so the startup synchronisation didn't work, causing spurious QEMU
failures ("Device 'vde' could not be initialized").
svn path=/nixos/trunk/; revision=26055
to run the VMs of a test. Instead, you can do
$ nix-build tests -A foo.driver
$ ./result/bin/nixos-run-vms
This uses the test driver infrastructure, which is necessary in
order to set up the VDE switches.
svn path=/nixos/trunk/; revision=25586
together into virtual networks. This has several advantages:
- It's more secure because the QEMU instances use Unix domain
sockets to talk to the switch.
- It doesn't depend on the host's network interfaces. (Local
multicast fails if there is no default gateway, so for instance it
fails if a laptop is not connected to any network.)
- VDE devices can be connected together to form arbitrary network
topologies.
- VDE has a "wirefilter" tool to emulate delays and packet loss,
  which are useful for network testing (see the sketch below).
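  A rough sketch of what that looks like with the standard vde2 tools;
  the specific options and socket paths are assumptions:

    $ vde_switch -s /tmp/net1.ctl -d   # -d: run as daemon
    $ vde_switch -s /tmp/net2.ctl -d
    # add 100 ms of delay and 5% packet loss on the link between the switches
    $ dpipe vde_plug /tmp/net1.ctl = wirefilter -d 100 -l 5 = vde_plug /tmp/net2.ctl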
svn path=/nixos/trunk/; revision=25526
is now made available in the interactive test driver. For instance,
you can do
$ nix-build tests/ -A quake3.driver
$ ./result/bin/nixos-test-driver
> eval $ENV{'testScript'};
... see VMs + X11 + Quake get started, bots running around ...
>
So after this you can run commands interactively on the VMs in the
state they were in after the conclusion of the test script.
svn path=/nixos/trunk/; revision=25158
interactively on a network specification. For instance:
$ nix-build tests/ -A quake3.driver
$ ./result/bin/nixos-test-driver
> startAll;
client1: starting vm
client1: QEMU running (pid 14971)
server: starting vm
server: QEMU running (pid 14982)
...
> $client1->execute("quake3 ...");
* Use the GNU readline library in interactive mode.
svn path=/nixos/trunk/; revision=25156
It turns out that all network interfaces in all VMs had the same
Ethernet address (52:54:00:12:34:56) because we didn't specify any
with the macaddr=... option. This can obviously lead to great
confusion. For instance, when a router forwards a packet, it can
actually end up sending the packet to itself because the target
machine has the same Ethernet address (causing a loop until the TTL
expires), while the target *also* receives the packet. It's amazing
anything worked at all, really.
So now we just set the Ethernet addresses to 52:54:00:12:<virtual
network number>:<machine number>.
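  The generated QEMU flags then look roughly like this (the VDE socket
  path and the exact -net arguments are illustrative):

    # machine 2 on virtual network 1
    $ qemu-kvm ... -net nic,macaddr=52:54:00:12:01:02 -net vde,sock=/tmp/vde1.ctl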
svn path=/nixos/trunk/; revision=25020
which NixOS should be built. This is useful in NixOS network
specifications, because it allows machines in the network to have
different types, e.g.,
  {
    machine1 =
      { config, pkgs, ... }:
      { nixpkgs.system = "i686-linux";
        ... other config ...
      };

    machine2 =
      { config, pkgs, ... }:
      { nixpkgs.system = "x86_64-linux";
        ... other config ...
      };
  }
It can also be useful in distributed NixOS tests.
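For the test case, a hedged sketch of a two-machine test mixing system
types; the nodes/testScript layout is assumed to follow the test
driver's convention, and the names are illustrative:

  {
    nodes = {
      client = { config, pkgs, ... }: { nixpkgs.system = "i686-linux"; };
      server = { config, pkgs, ... }: { nixpkgs.system = "x86_64-linux"; };
    };
    testScript = "startAll;";
  }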
svn path=/nixos/trunk/; revision=24823
- Added a backdoor option to the interactive run-vms script. This allows
  me to integrate the virtual network approach with Disnix.
- Small documentation fixes
Some explanation:
The nixos-build-vms command-line tool can be used to build a virtual network from a network.nix specification.
For example, a network configuration (network.nix) could look like this:
  {
    test1 =
      {pkgs, config, ...}:
      {
        services.openssh.enable = true;
        ...
      };

    test2 =
      {pkgs, config, ...}:
      {
        services.openssh.enable = true;
        services.xserver.enable = true;
      };
  }
Running the following command:
  $ nixos-build-vms -n network.nix
builds a virtual network, which can then be started by running:
$ ./result/bin/run-vms
It is also possible to enable a backdoor. In this case, *.socket files are created in the current
directory, which the end user can use to run commands on a VM in the network through a Unix
domain socket.
For example, after building the network with:
$ nixos-build-vms -n network.nix --use-backdoor
and launching the virtual network:
$ ./result/bin/run-vms
you will find two socket files in the current directory: test1.socket and test2.socket.
These Unix domain sockets can be used to remotely administer the test1 and test2 machines
in the virtual network.
For example, running:
$ socat ./test1.socket stdio
ls /root
retrieves the contents of the /root directory of the virtual machine with identifier test1.
svn path=/nixos/trunk/; revision=24410