This makes it possible to build several NixOS systems that use different
nixpkgs trees in the same nix-build invocation. Previously that was
impossible, because the <nixpkgs> path reference was hard-coded in
lib/eval-config.nix.
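
A rough sketch of what this enables (the `nixpkgs` argument name, the
file paths and the exact calling convention are assumptions, not
necessarily the final interface):

  let
    sysA = import ./lib/eval-config.nix {
      nixpkgs = /path/to/nixpkgs-a;   # one nixpkgs tree...
      modules = [ ./machine-a.nix ];
    };
    sysB = import ./lib/eval-config.nix {
      nixpkgs = /path/to/nixpkgs-b;   # ...and a different one
      modules = [ ./machine-b.nix ];
    };
  in [ sysA.config.system.build.toplevel
       sysB.config.system.build.toplevel ]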
Thus

  networking.interfaces = [ { name = "eth0"; ipAddress = "192.168.15.1"; } ];

can now be written as

  networking.interfaces.eth0.ipAddress = "192.168.15.1";

The old notation still works, though.
GRUB 2 doesn't want to boot off an LVM disk:
machine# installing the GRUB 2 boot loader on /dev/vda...
machine# Path `/boot/grub' is not readable by GRUB on boot. Installation is impossible. Aborting.
machine# /nix/store/7yc535h1lim1a5gkhjb3fr6c8193dv8w-install-grub.pl: installation of GRUB on /dev/vda failed
In theory GRUB 2 supports booting from LVM, but we probably need to
generate the right grub.conf (see
https://wiki.archlinux.org/index.php/GRUB2#LVM and the sketch below).
http://hydra.nixos.org/build/2904680
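
Per the Arch wiki page above, the generated config presumably needs to
load GRUB's lvm module and name the root device in (lvm/...) form,
roughly (volume group and logical volume names are illustrative):

  insmod lvm
  set root=(lvm/myvg-root)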
were obtained from the NixOS channel. "nixos-install" copies this
to the installed system as well.
* In the installation CD, set GC_INITIAL_HEAP_SIZE to a low value for
  the benefit of memory-constrained environments (see the sketch below).
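
(For reference: assuming Nix is built with the Boehm GC, this variable
sets the collector's initial heap size in bytes; the value here is
illustrative.)

  $ export GC_INITIAL_HEAP_SIZE=100000
  $ nix-env -i somepackage   # evaluator starts with a small initial heap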
svn path=/nixos/trunk/; revision=33887
two minutes on my laptop. Note that due to sparse allocation, raw
disks don't actually take up more space than qcow2 disks (and
they're temporary anyway).
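
You can see the effect of sparse allocation by comparing a disk image's
apparent size with the blocks it actually occupies (file name
illustrative):

  $ ls -lh disk0.img   # apparent size, e.g. 512M
  $ du -h disk0.img    # blocks actually written, typically far less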
svn path=/nixos/trunk/; revision=33746
nfsd, as suggested by the nfs-utils README.
Also, rather than relying on Upstart events (which have all sorts of
problems, especially if you have jobs that have multiple
dependencies), we now just let jobs start their own prerequisites.
That is, nfsd starts mountd in its preStart script; mountd starts
statd; statd starts portmap. Likewise, mountall starts statd to
ensure that it can mount NFS filesystems. This means that doing
something like "start nfsd" from the command line will Do The Right
Thing and start the dependencies of nfsd.
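
A hedged sketch of the pattern in a NixOS Upstart job definition (the
option names and the use of the plain "start" command are assumptions,
not the literal code):

  jobs.nfsd = {
    name = "nfsd";
    preStart = ''
      # Bring up our own prerequisite; mountd's preStart in turn
      # starts statd, whose preStart starts portmap.
      start mountd || true
    '';
  };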
svn path=/nixos/trunk/; revision=33172
TCP port on the guest. Useful during testing (e.g. to access a web
server in the guest through a web browser on the host).
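
The mechanism underneath is presumably QEMU's user-mode networking port
forwarding, along these lines (port numbers illustrative):

  # expose guest port 80 on host port 8080
  $ qemu-kvm -net nic -net user,hostfwd=tcp::8080-:80 disk.img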
svn path=/nixos/trunk/; revision=26987
This has the advantage that it doesn't depend on networking being
up.
* Move common QEMU/KVM guest configuration to profiles/qemu-guest.nix
  (see the sketch below).
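
A guest configuration would then presumably just import the profile (the
relative path is an assumption):

  { config, pkgs, ... }:
  { imports = [ ../profiles/qemu-guest.nix ]; }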
svn path=/nixos/trunk/; revision=26421
since the latter is rather deprecated and has been unmaintained
since 2001. Note that "ip" doesn't know about classful addressing,
so you can no longer get away with not specifying the subnet mask
for explicitly configured interfaces. So if you had

  networking.interfaces =
    [ { name = "eth0"; ipAddress = "192.168.1.1"; } ];

this should be changed to

  networking.interfaces =
    [ { name = "eth0";
        ipAddress = "192.168.1.1";
        subnetMask = "255.255.255.0";
      }
    ];

otherwise you end up with a subnet mask of 255.255.255.255.
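
At the command level the difference looks like this (addresses
illustrative):

  # ifconfig inferred a classful netmask when none was given:
  $ ifconfig eth0 192.168.1.1            # implies 255.255.255.0 (class C)
  # "ip" makes no such inference; omitting the prefix yields a /32:
  $ ip addr add 192.168.1.1 dev eth0     # netmask 255.255.255.255
  $ ip addr add 192.168.1.1/24 dev eth0  # explicit, and usually what you want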
svn path=/nixos/trunk/; revision=26279
so the startup synchronisation didn't work, causing spurious QEMU
failures ("Device 'vde' could not be initialized").
svn path=/nixos/trunk/; revision=26055
to run the VMs of a test. Instead, you can do

  $ nix-build tests -A foo.driver
  $ ./result/bin/nixos-run-vms

This uses the test driver infrastructure, which is necessary in
order to set up the VDE switches.
svn path=/nixos/trunk/; revision=25586
together into virtual networks. This has several advantages:
- It's more secure because the QEMU instances use Unix domain
sockets to talk to the switch.
- It doesn't depend on the host's network interfaces. (Local
multicast fails if there is no default gateway, so for instance it
fails if a laptop is not connected to any network.)
- VDE devices can be connected together to form arbitrary network
topologies.
- VDE has a "wirefilter" tool to emulate delays and packet loss,
  which are useful for network testing (see the sketch below).
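
For illustration, roughly how the pieces fit together (socket paths,
delay, and loss figures are illustrative):

  $ vde_switch -sock /tmp/switch1 -daemon
  $ vde_switch -sock /tmp/switch2 -daemon
  # emulate a slow, lossy wire between the two switches
  $ wirefilter -v /tmp/switch1:/tmp/switch2 -d 100 -l 5
  # attach a QEMU guest to a switch over a Unix domain socket
  $ qemu-kvm -net nic -net vde,sock=/tmp/switch1 disk.img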
svn path=/nixos/trunk/; revision=25526