were obtained from the NixOS channel. "nixos-install" copies this
to the installed system as well.
* In the installation CD, set GC_INITIAL_HEAP_SIZE to a low value for
the benefit of memory-constrained environments.
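  For illustration, a setting of this kind could be expressed in the CD's
  configuration roughly as follows (the option and value shown here are only
  a sketch, not the actual change):

    environment.shellInit =
      ''
        # Keep the Boehm GC's initial heap small so that Nix still works
        # in memory-constrained environments (e.g. VMs with little RAM).
        export GC_INITIAL_HEAP_SIZE=100000
      '';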
svn path=/nixos/trunk/; revision=33887
two minutes on my laptop. Note that due to sparse allocation, raw
disks don't actually take up more space than qcow2 disks (and
they're temporary anyway).
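  As an aside, the effect of sparse allocation is easy to see with standard
  tools; this is just an illustration:

    $ qemu-img create -f raw disk.img 10G
    $ du -h --apparent-size disk.img   # logical size: 10G
    $ du -h disk.img                   # blocks actually allocated: almost none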
svn path=/nixos/trunk/; revision=33746
nfsd, as suggested by the nfs-utils README.
Also, rather than relying on Upstart events (which have all sorts of
problems, especially if you have jobs that have multiple
dependencies), we now just let jobs start their own prerequisites.
That is, nfsd starts mountd in its preStart script; mountd starts
statd; statd starts portmap. Likewise, mountall starts statd to
ensure that it can mount NFS filesystems. This means that doing
something like "start nfsd" from the command line will Do The Right
Thing and start the dependencies of nfsd.
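  Schematically, the pattern is just a preStart that brings up the
  prerequisite; a rough sketch (the option names and the exact command shown
  here are illustrative, not the literal code):

    jobs.nfsd.preStart =
      ''
        # Make sure mountd (and, transitively, statd and portmap) is
        # running before nfsd itself starts.
        start mountd || true
      '';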
svn path=/nixos/trunk/; revision=33172
TCP port on the guest. Useful during testing (e.g. to access a web
server in the guest through a web browser on the host).
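  For illustration, the underlying QEMU mechanism is presumably a user-mode
  networking hostfwd rule; invoked by hand it would look roughly like this
  (forwarding host port 8080 to port 80 in the guest):

    $ qemu-kvm -net user,hostfwd=tcp::8080-:80 ...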
svn path=/nixos/trunk/; revision=26987
This has the advantage that it doesn't depend on networking being
up.
* Move common QEMU/KVM guest configuration to profiles/qemu-guest.nix.
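  A guest configuration can then pull the shared settings in with an import
  along these lines (the path and option name here are indicative only):

    { imports = [ <nixos/modules/profiles/qemu-guest.nix> ]; }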
svn path=/nixos/trunk/; revision=26421
since the latter is rather deprecated and has been unmaintained
since 2001. Note that "ip" doesn't know about classful addressing,
so you can no longer get away with not specifying the subnet mask
for explicitly configured interfaces. So if you had
  networking.interfaces =
    [ { name = "eth0"; ipAddress = "192.168.1.1"; } ];
this should be changed to
  networking.interfaces =
    [ { name = "eth0";
        ipAddress = "192.168.1.1";
        subnetMask = "255.255.255.0";
      }
    ];
otherwise you end up with a subnet mask of 255.255.255.255.
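  For illustration, "ip" expects an address together with its prefix length;
  omitting it yields a /32, which is exactly the 255.255.255.255 case above:

    $ ip addr add 192.168.1.1/24 dev eth0   # explicit subnet mask
    $ ip addr add 192.168.1.1 dev eth0      # defaults to /32 (255.255.255.255)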
svn path=/nixos/trunk/; revision=26279
so the startup synchronisation didn't work, causing spurious QEMU
failures ("Device 'vde' could not be initialized").
svn path=/nixos/trunk/; revision=26055
to run the VMs of a test. Instead, you can do
$ nix-build tests -A foo.driver
$ ./result/bin/nixos-run-vms
This uses the test driver infrastructure, which is necessary in
order to set up the VDE switches.
svn path=/nixos/trunk/; revision=25586
together into virtual networks. This has several advantages:
- It's more secure because the QEMU instances use Unix domain
sockets to talk to the switch.
- It doesn't depend on the host's network interfaces. (Local
multicast fails if there is no default gateway, so for instance it
fails if a laptop is not connected to any network.)
- VDE devices can be connected together to form arbitrary network
topologies.
- VDE has a "wirefilter" tool to emulate delays and packet loss,
  which are useful for network testing (see the sketch after this list).
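  For instance, a rough sketch of inserting a slow, lossy link between two
  switch sockets (the exact wirefilter flags may differ from what's shown):

    $ dpipe vde_plug /tmp/sw1 = wirefilter --delay 100 --loss 5 = vde_plug /tmp/sw2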
svn path=/nixos/trunk/; revision=25526
is now made available in the interactive test driver. For instance,
you can do
$ nix-build tests/ -A quake3.driver
$ ./result/bin/nixos-test-driver
> eval $ENV{'testScript'};
... see VMs + X11 + Quake get started, bots running around ...
>
So after this you can run commands interactively on the VMs in the
state they were in after the conclusion of the test script.
svn path=/nixos/trunk/; revision=25158
interactively on a network specification. For instance:
$ nix-build tests/ -A quake3.driver
$ ./result/bin/nixos-test-driver
> startAll;
client1: starting vm
client1: QEMU running (pid 14971)
server: starting vm
server: QEMU running (pid 14982)
...
> $client1->execute("quake3 ...");
* Use the GNU readline library in interactive mode.
svn path=/nixos/trunk/; revision=25156