Each machine can now declare an option `virtualisation.vlans' that
causes it to have network interfaces connected to each listed virtual
network.  For instance,
virtualisation.vlans = [ 1 2 ];
causes the machine to have two interfaces (in addition to eth0, used
by the test driver to control the machine): eth1 connected to
network 1 with IP address 192.168.1.<i>, and eth2 connected to
network 2 with address 192.168.2.<i> (where <i> is the index of the
machine in the `nodes' attribute set). On the other hand,
virtualisation.vlans = [ 2 ];
causes the machine to have only an eth1, connected to network 2 with
address 192.168.2.<i>.  In general, each virtual network <n> is
assigned the IP range 192.168.<n>.0/24.
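For example, a two-machine network might be declared like this (the
node names are illustrative):

  nodes = {
    router = { config, pkgs, ... }:
      { virtualisation.vlans = [ 1 2 ]; };
    client = { config, pkgs, ... }:
      { virtualisation.vlans = [ 2 ]; };
  };

Here `router' gets interfaces on both networks, while `client' can
only talk to network 2.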
Each virtual network is implemented using a separate multicast
address on the host, so guests really cannot talk to networks to
which they are not connected.
* Added a simple NAT test to demonstrate this.
* Added an option `virtualisation.qemu.options' to specify QEMU
  command-line options.  This is used to factor out some commonality
  between the test driver script and the interactive test script.
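  For instance (a sketch, assuming the option accepts a list of raw
  QEMU flags; the flags shown are arbitrary):

    virtualisation.qemu.options = [ "-vga std" ];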
svn path=/nixos/trunk/; revision=21928
* Added support for sending keys to a machine's virtual console.
  This uses the `sendkey' command in the QEMU monitor.
* For the block/unblock primitives, use the `set_link' command in the
QEMU monitor.
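  In a test script this might look as follows (a sketch; the method
  names are taken from the primitives above, but the details are
  assumptions):

    testScript = ''
      $machine->block;    # `set_link ... off' in the QEMU monitor
      # (verify here that the machine is unreachable)
      $machine->unblock;  # `set_link ... on'
    '';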
svn path=/nixos/trunk/; revision=19854
account of the VM. However, it doesn't work yet (the machine
doesn't boot properly and there is no console output). So use a
hard-coded password for now (very dangerous!).
svn path=/nixos/trunk/; revision=19589
* Added a test to verify whether the reverse proxy works correctly
  when the back-ends go down and come up.  (Moved from the varia
  repo.)
svn path=/nixos/trunk/; revision=19356
This shouldn't be necessary, because waitForJob shouldn't return
until Postgres is up and running, but we still get errors like this:
postgresql: running command: initctl status postgresql
postgresql: exit status 0
postgresql: running command: createdb trac
postgresql# createdb: could not connect to database postgres: FATAL: the database system is starting up
postgresql: exit status 1
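One possible workaround (a sketch, assuming the test driver provides
a retrying primitive along the lines of `waitUntilSucceeds'; the name
is an assumption):

  testScript = ''
    $postgresql->waitForJob("postgresql");
    # waitUntilSucceeds is an assumed retry primitive
    $postgresql->waitUntilSucceeds("createdb trac");
  '';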
svn path=/nixos/trunk/; revision=19329
Fix intermittent test failures like this:
machine: running command: parted /dev/vda -- mkpart primary 1M 2048M
machine: exit status 0
machine: running command: parted /dev/vda -- set 1 lvm on
machine: exit status 1
machine: output:
Warning: WARNING: the kernel failed to re-read the partition table on /dev/vda
(Device or resource busy). As a result, it may not reflect all of your changes
until after reboot.
command `parted /dev/vda -- set 1 lvm on' did not succeed (exit code 1) at Machine.pm line 212, <GEN2> line 24.
svn path=/nixos/trunk/; revision=19328
This is done by instantiating a webserver that simulates nixos.org.
Using nix-push we create a channel that contains some stuff (namely
the GNU Hello source tarball and the rlwrap program). This was a
bit tricky because nix-push requires a writable Nix store. Using
AUFS this is possible, but not on recent Linux kernels (AUFS1 over
CIFS fails).
svn path=/nixos/trunk/; revision=19327
* Added support for logging in a user automatically.  This is mostly
  useful for testing.  (KDM also has this feature, but it's nice not
  to depend on KDE for non-KDE tests.)
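  Enabling it might look like this in a machine configuration (a
  sketch assuming the SLiM display manager; the option names and the
  user are illustrative):

    services.xserver.displayManager.slim.autoLogin = true;   # names assumed
    services.xserver.displayManager.slim.defaultUser = "alice";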
svn path=/nixos/trunk/; revision=19239
* Factored out some commonality between tests to make them a bit
simpler to write. A test is a function { pkgs, ... }: -> { nodes,
testScript } or { machine, testScript }. So it's no longer
necessary to have a "vms" attribute in every test.
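  For instance, a single-machine test can now be written as (a
  minimal sketch):

    { pkgs, ... }:

    {
      machine = { config, pkgs, ... }:
        { services.postgresql.enable = true; };

      testScript = ''
        $machine->waitForJob("postgresql");
      '';
    }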
svn path=/nixos/trunk/; revision=19220
It fails because the nixbldN users don't belong to the nixbld group.
Manually removing the socket file; somehow the socket is not always
created when rebooting the second time (?).  I'll have to look into
that later.
svn path=/nixos/trunk/; revision=18984
* Expose makeInfo (now used by the test).
* Expose the config hack.
* Added tests to release.nix.
* Fixes.
* Removed the dependency on Perl.
Refactoring details: moved all configuration modules used by the
NixOS installation test script into one directory.
svn path=/nixos/trunk/; revision=18982
You can run the KVM NixOS installation test with:
nix-build --no-out-link tests/test-nixos-install-from-cd.nix
It boots the installed system.
It still fails because sshd isn't started (yet).
Added nixos-bootstrapping-archive: you can now install NixOS easily
using any live CD.  See README-BOOTSTRAP-NIXOS.
svn path=/nixos/trunk/; revision=18950
The VM is run using qemu_kvm.  Installation doesn't take place yet;
the VM is started and prints a remote-controlled "Hello".  This
serves as an example of how to run a VM within a build job.
svn path=/nixos/trunk/; revision=18887
- NFS mounts are created at startup (see the sketch below).
- Played a bit with the waiting times in order to capture a nice
  screenshot.
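Such a mount might be declared like this (a sketch; the server name
and paths are made up):

  fileSystems = [
    { mountPoint = "/data";
      device = "server:/data";
      fsType = "nfs";
    }
  ];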
svn path=/nixos/trunk/; revision=16964
a Quake 3 server and two clients.  The clients connect to the server
and do nothing (except getting blown up by the bots).  After a few
seconds we verify that the clients indeed connected successfully, and
take a screenshot of the clients' X displays.
svn path=/nixos/trunk/; revision=16951
lib/build-vms.nix contains a function `buildVirtualNetwork' that
takes a specification of a network of machines (as an attribute set
of NixOS machine configurations) and builds a script that starts
each configuration in a separate QEMU/KVM VM and connects them
together in a virtual network. This script can be run manually to
test the VMs interactively. There is also a function `runTests'
that starts and runs the virtual network in a derivation, and
then executes a test specification that tells the VMs to do certain
things (e.g., letting one VM send an HTTP request to a webserver on
another VM). The tests are written in Perl (for now).
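A sketch of such a network specification (the machine names and
configuration are illustrative):

  buildVirtualNetwork {
    webserver = { config, pkgs, ... }:
      { services.httpd.enable = true;
        services.httpd.adminAddr = "root@example.org";
      };
    client = { config, pkgs, ... }:
      { };
  }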
tests/subversion.nix shows a simple example, namely a network of two
machines: a webserver that runs the Subversion subservice, and a
client. Apache, Subversion and a few other packages are built with
coverage analysis instrumentation. For instance,
$ nix-build tests/subversion.nix -A vms
$ ./result/bin/run-vms
starts two QEMU/KVM instances. When they have finished booting, the
webserver can be accessed from the host through
http://localhost:8081/.
It also has a small test suite:
$ nix-build tests/subversion.nix -A report
This runs the VMs in a derivation, runs the tests, and then produces
a distributed code coverage analysis report (i.e. it shows the
combined coverage on both machines).
The Perl test driver program is in lib/test-driver. It executes
commands on the guest machines by connecting to a root shell running
on port 514 (provided by modules/testing/test-instrumentation.nix).
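For instance, a test might let the client fetch a page from the
webserver with something like this (a sketch; the `mustSucceed'
primitive name is an assumption):

  $client->mustSucceed("curl --fail http://webserver/ > /dev/null");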
The VMs are connected together in a virtual network using QEMU's
multicast feature. This isn't very secure. At the very least,
other processes on the same machine can listen to or send packets on
the virtual network. On the plus side, we don't need to be root to
set up a multicast virtual network, so we can do it from a
derivation. Maybe we can use VDE instead.
(Moved from the varia repository.)
svn path=/nixos/trunk/; revision=16899