Flushing is bad if the Nix store is on a remote filesystem accessed
over that interface.
http://hydra.nixos.org/build/3184162
Also added an interface option ‘prefixLength’ as a better alternative
to ‘subnetMask’.
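As a rough sketch (assuming the list-style ‘networking.interfaces’
options of this period; the interface name and address are purely
illustrative), a static address can now be written as:
  networking.interfaces =
    [ { name = "eth0";
        ipAddress = "192.168.1.2";
        prefixLength = 24;  # instead of subnetMask = "255.255.255.0"
      }
    ];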
Using BusyBox instead of Bash plus a bunch of other tools gives us a
much more featureful, yet smaller, initrd. In particular, BusyBox
contains networking commands such as ip and a DHCP client, which are
useful for NFS boots. It's also much more convenient for rescue
situations because the shell has built-in readline support and there
are many more tools available (including vi).
After the change from revision 30103, nixos-rebuild suddenly consumed
freaky amounts of memory. I had to abort the process after it had
allocated well in excess of 30GB(!) of RAM. I'm not sure what is causing
this behavior, but undoing that assignment fixes the problem. The other
two commits needed to be reverted, too, because they depend on 30103.
svn path=/nixos/trunk/; revision=30127
This has the advantage that it doesn't depend on networking being
up.
* Move common QEMU/KVM guest configuration to profiles/qemu-guest.nix.
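  A guest configuration can then pull in the shared settings with a
  single import, roughly like this (the relative path is shown only
  for illustration):
    { imports = [ ./profiles/qemu-guest.nix ]; }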
svn path=/nixos/trunk/; revision=26421
now kills its process group when it exits. Without setsid, this
ends up killing the parent (i.e., the builder).
* Use port 445 instead of 139 because the CIFS kernel module tries
port 445 first. If there is an actual Samba running on the host, it
would end up connecting to that one instead of our own and fail.
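  For reference, the guest-side CIFS mount then looks roughly like
  this (server address, mount point and extra options are
  illustrative only):
    $ mount -t cifs //10.0.2.4/store /nix/store -o port=445,guest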
svn path=/nixos/trunk/; revision=25016
- Added a backdoor option to the interactive run-vms script. This allows me to integrate the virtual network approach with Disnix
- Small documentation fixes
Some explanation:
The nixos-build-vms command-line tool can be used to build a virtual network from a network.nix specification.
For example, a network configuration (network.nix) could look like this:
{
  test1 =
    { pkgs, config, ... }:
    {
      services.openssh.enable = true;
      ...
    };

  test2 =
    { pkgs, config, ... }:
    {
      services.openssh.enable = true;
      services.xserver.enable = true;
    };
}
By typing the following instruction:
$ nixos-build-vms -n network.nix
a virtual network is built, which can be started by typing:
$ ./result/bin/run-vms
It is also possible to enable a backdoor. In this case, *.socket files are stored in the current
directory; the end user can use these to invoke remote instructions on a VM in the network
through a Unix domain socket.
For example, after building the network with the following instruction:
$ nixos-build-vms -n network.nix --use-backdoor
and launching the virtual network:
$ ./result/bin/run-vms
you will find two socket files in your current directory, namely test1.socket and test2.socket.
These Unix domain sockets can be used to remotely administer the test1 and test2 machines
in the virtual network.
For example, by running:
$ socat ./test1.socket stdio
ls /root
you can retrieve the contents of the /root directory of the virtual machine with identifier test1.
svn path=/nixos/trunk/; revision=24410
init script. This removes the need for the `systemConfig' boot
parameter; `init=<stage-2-init>' is enough. However, the GRUB menu
builder still needs to add `systemConfig' to the kernel command line
for compatibility with old configurations.
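For illustration, a new-style kernel command line thus needs only
  init=<stage-2-init>
while entries generated for old configurations still also carry
systemConfig=<system path>.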
svn path=/nixos/trunk/; revision=23775
like `build-vm', but boots using the regular boot loader (i.e. GRUB
1 or 2) rather than booting directly from the kernel/initrd. Thus
it allows testing of GRUB.
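Assuming this is exposed as a nixos-rebuild operation alongside
‘build-vm’, usage would be roughly:
  $ nixos-rebuild build-vm-with-bootloader
  $ ./result/bin/run-*-vm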
svn path=/nixos/trunk/; revision=23747
higher than 800x600 work.
* Add a "Monitor" statement to the "Screen" section, because otherwise
the Monitor section is ignored.
svn path=/nixos/trunk/; revision=23068
guest connect to a Unix domain socket on the host rather than the
other way around. The former is a QEMU feature (guestfwd to a
socket) while the latter requires a patch (which we can now get rid
of).
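The general shape of the QEMU option is roughly the following (guest
address, port and forwarding command are purely illustrative):
  -net 'user,guestfwd=tcp:10.0.2.4:445-cmd:socat STDIO UNIX-CONNECT:./samba.sock'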
svn path=/nixos/branches/boot-order/; revision=22331
caching. This makes a huge performance difference (e.g. from 4 MB/s
`dd' throughput to 140 MB/s on the Hydra machines). As the QEMU
manual says: "Some block drivers perform badly with
‘cache=writethrough’, most notably, qcow2."
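The corresponding drive option looks roughly like this (image name
and interface are illustrative):
  -drive file=disk.qcow2,if=virtio,cache=writeback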
svn path=/nixos/branches/boot-order/; revision=22248
When starting multiple VMs, some will have perfectly synchronised
clocks, while others will have their clocks run much slower (say, a
factor of 5).
svn path=/nixos/branches/boot-order/; revision=22195
to use the standard (coreutils) tools.
* Use util-linux's `switch_root' to switch over to the target root
FS. It automatically moves over the /dev, /proc, and /sys mounts from
stage 1, so stage 2 doesn't need to set them up again.
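  A sketch of the hand-off at the end of stage 1 under this scheme
  (variable names are illustrative):
    exec switch_root "$targetRoot" "$stage2Init"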
svn path=/nixos/trunk/; revision=22085