grub.cfg before the menu entries. (This could also be done using
`extraEntriesBeforeNixOS', but then you can't have entries *after*
the main entry anymore; see the sketch below.)
* In the installer test, redirect GRUB output to the serial port.
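A hedged sketch of the `extraEntriesBeforeNixOS' alternative mentioned
in the first item; the `extraEntries' option and the GRUB 2 menuentry
body are illustrative assumptions, not necessarily what this change
touches:

  # Place the extra entries before the NixOS entries:
  boot.loader.grub.extraEntriesBeforeNixOS = true;
  # The extraEntries option and this menuentry body are illustrative:
  boot.loader.grub.extraEntries = ''
    menuentry "Other OS" {
      set root=(hd0,2)
      chainloader +1
    }
  '';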
svn path=/nixos/branches/boot-order/; revision=22300
shutdown. (Portmap and statd are needed during shutdown to unmount
NFS volumes but have open files in /var/run.)
* In the shutdown job, don't kill PIDs belonging to Upstart jobs that
are still running. If they don't stop on the "starting shutdown"
event, then they're needed during shutdown (such as portmap and
statd).
* NFS test: test whether the shutdown quickly unmounts NFS volumes
(i.e. whether portmap and statd are still running).
svn path=/nixos/branches/boot-order/; revision=22204
function argument, so that the test script can refer to computed
values such as the assigned IP addresses of the virtual machines.
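A hedged sketch of what this enables; the machine names, the succeed
call, and the exact attributes passed to the function are illustrative
assumptions:

  # The machine names and the option path are illustrative:
  testScript = { nodes, ... }:
    ''
      my $ip = "${nodes.router.config.networking.ifaces.eth2.ipAddress}";
      $client->succeed("ping -c 1 $ip");
    '';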
svn path=/nixos/trunk/; revision=21939
interface name through the derived option networking.ifaces. This
makes it easier to get information about specific interfaces
(e.g. `nodes.router.config.networking.ifaces.eth2.ipAddress').
Really networking.interfaces should be an attribute set.
svn path=/nixos/trunk/; revision=21938
behind a NAT router and verifying that another client can connect to
it through the NAT (using a UPnP-IGD mapping created automatically
by miniupnpd).
svn path=/nixos/trunk/; revision=21932
machine can now declare an option `virtualisation.vlans' that causes
it to have network interfaces connected to each listed virtual
network. For instance,
virtualisation.vlans = [ 1 2 ];
causes the machine to have two interfaces (in addition to eth0, used
by the test driver to control the machine): eth1 connected to
network 1 with IP address 192.168.1.<i>, and eth2 connected to
network 2 with address 192.168.2.<i> (where <i> is the index of the
machine in the `nodes' attribute set). On the other hand,
virtualisation.vlans = [ 2 ];
causes the machine to only have an eth1 connected to network 2 with
address 192.168.2.<i>. So each virtual network <n> is assigned the
IP range 192.168.<n>.0/24.
Each virtual network is implemented using a separate multicast
address on the host, so guests really cannot talk to networks to
which they are not connected. (See the configuration sketch after
this list.)
* Added a simple NAT test to demonstrate this.
* Added an option `virtualisation.qemu.options' to specify QEMU
command-line options. Used to factor out some commonality between
the test driver script and the interactive test script.
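A hedged sketch combining both options; the machine names and the QEMU
flag are illustrative, and the exact type of
`virtualisation.qemu.options' is an assumption:

  nodes = {
    client = { config, pkgs, ... }: {
      virtualisation.vlans = [ 1 ];     # eth1 on 192.168.1.<i>
    };
    server = { config, pkgs, ... }: {
      virtualisation.vlans = [ 1 2 ];   # eth1 on 192.168.1.<i>, eth2 on 192.168.2.<i>
      # Extra QEMU flag; the list type is an assumption:
      virtualisation.qemu.options = [ "-m 512" ];
    };
  };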
svn path=/nixos/trunk/; revision=21928
console. This uses the `sendkey' command in the QEMU monitor.
* For the block/unblock primitives, use the `set_link' command in the
QEMU monitor.
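A hedged sketch of driving this from a test script; the
sendMonitorCommand helper and the key name are assumptions, not
necessarily the primitives this change adds:

  testScript = ''
    # sendMonitorCommand and the key name are assumptions:
    $machine->sendMonitorCommand("sendkey ctrl-alt-f1");
  '';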
svn path=/nixos/trunk/; revision=19854
account of the VM. However, it doesn't work yet (the machine
doesn't boot properly and there is no console output). So use a
hard-coded password for now (very dangerous!).
svn path=/nixos/trunk/; revision=19589
verify whether the reverse proxy works correctly if the back-ends go
down and come up. (Moved from the varia repo.)
svn path=/nixos/trunk/; revision=19356
be necessary, because waitForJob shouldn't return until Postgres is
up and running, but we still get errors like this:
postgresql: running command: initctl status postgresql
postgresql: exit status 0
postgresql: running command: createdb trac
postgresql# createdb: could not connect to database postgres: FATAL: the database system is starting up
postgresql: exit status 1
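Not necessarily what this change does, but one hedged way to make such
a step robust in a test script (the waitUntilSucceeds helper is an
assumption):

  testScript = ''
    $postgresql->waitForJob("postgresql");
    # waitUntilSucceeds (assumed) would retry until createdb succeeds:
    $postgresql->waitUntilSucceeds("createdb trac");
  '';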
svn path=/nixos/trunk/; revision=19329
failures like this:
machine: running command: parted /dev/vda -- mkpart primary 1M 2048M
machine: exit status 0
machine: running command: parted /dev/vda -- set 1 lvm on
machine: exit status 1
machine: output:
Warning: WARNING: the kernel failed to re-read the partition table on /dev/vda
(Device or resource busy). As a result, it may not reflect all of your changes
until after reboot.
command `parted /dev/vda -- set 1 lvm on' did not succeed (exit code 1) at Machine.pm line 212, <GEN2> line 24.
svn path=/nixos/trunk/; revision=19328
is done by instantiating a webserver that simulates nixos.org.
Using nix-push we create a channel that contains some stuff (namely
the GNU Hello source tarball and the rlwrap program). This was a
bit tricky because nix-push requires a writable Nix store. Using
AUFS this is possible, but not on recent Linux kernels (AUFS1 over
CIFS fails).
svn path=/nixos/trunk/; revision=19327
automatically. This is mostly useful for testing. (KDM also has
this feature, but it's nice not to depend on KDE for non-KDE tests.)
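A hedged sketch, assuming the SLiM display manager is the one gaining
this feature (option names as in later NixOS; the user name is
illustrative):

  # Option names are assumptions; the user name is illustrative:
  services.xserver.displayManager.slim.autoLogin = true;
  services.xserver.displayManager.slim.defaultUser = "alice";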
svn path=/nixos/trunk/; revision=19239
* Factored out some commonality between tests to make them a bit
simpler to write. A test is a function { pkgs, ... }: -> { nodes,
testScript } or { machine, testScript }. So it's no longer
necessary to have a "vms" attribute in every test.
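A minimal sketch of a single-machine test in the new style (the
service and the Upstart job name are illustrative):

  { pkgs, ... }:

  {
    machine = { config, pkgs, ... }: {
      services.openssh.enable = true;   # illustrative service
    };

    testScript = ''
      $machine->waitForJob("sshd");     # illustrative Upstart job name
    '';
  }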
svn path=/nixos/trunk/; revision=19220
It fails because the nixbldN users don't belong to the nixbld group.
Manually removing the socket file: somehow the socket is not always created
when rebooting the second time (?). I have to look into that later.
svn path=/nixos/trunk/; revision=18984
Expose makeInfo (now used by the test).
Expose the config hack.
* Adding tests to release.nix.
* Fixes.
* Removing the dependency on Perl.
Refactoring details: move all configuration modules used by the NixOS
installation test script into one directory.
svn path=/nixos/trunk/; revision=18982
You can run the KVM NixOS installation test with:
  nix-build --no-out-link tests/test-nixos-install-from-cd.nix
It boots the installed system.
It still fails because sshd isn't started (yet).
Adding nixos-bootstrapping-archive:
You can now install NixOS easily using any live CD.
See README-BOOTSTRAP-NIXOS.
svn path=/nixos/trunk/; revision=18950
qemu_kvm. Installation doesn't take place yet. The VM is started,
printing a remote-controlled "Hello".
This serves as an example of how to run a VM within a build job.
svn path=/nixos/trunk/; revision=18887