its default behaviour is to stop the emulator (i.e. suspend the VM).
For automated tests, this is bad, because it makes the VM appear to
hang without any error message. The "werror=report" flag causes
QEMU to report the problem to the VM. As a side effect, QEMU exits
very elegantly:
  [ 2.308668] end_request: I/O error, dev vda, sector 534400
  [ 2.309611] Buffer I/O error on device vda, logical block 66800
  ...
  *** glibc detected *** /nix/store/yhngqrww53j0aw7z7v4bv948x5g5fc3d-qemu-kvm-0.12.1.2/bin/qemu-system-x86_64: double free or corruption (!prev): 0x08e3e040 ***
  Aborted
So I guess we now depend on a bug in QEMU :-)
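For reference, "werror" is a sub-option of QEMU's -drive flag; an
invocation along these lines (the image path and other options are
illustrative) enables it:
  $ qemu-system-x86_64 -drive file=./machine.qcow2,if=virtio,werror=report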
svn path=/nixos/trunk/; revision=19703
* Factored out some commonality between tests to make them a bit
simpler to write. A test is a function { pkgs, ... }: -> { nodes,
testScript } or { machine, testScript }. So it's no longer
necessary to have a "vms" attribute in every test.
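For instance, a single-machine test in the new format might look
like this sketch (the service and the driver method shown are
illustrative, not taken from an actual test):
  { pkgs, ... }:
  { machine =
      { config, pkgs, ... }:
      { services.openssh.enable = true; };
    testScript =
      ''
        $machine->waitForJob("sshd");
      '';
  }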
svn path=/nixos/trunk/; revision=19220
write some magic string to ttyS0. This removes the dependency on
having a CIFS mount.
* Use a thread to process the stdout/stderr of each QEMU instance.
* Add a kernel command line parameter "stage1panic" to tell stage 1 to
panic if an error occurs. This is faster than waiting until
connect() times out.
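The parameter just has to end up on the guest's kernel command
line; one way to get it there from a NixOS configuration (not
necessarily how the test driver passes it) is
  boot.kernelParams = [ "stage1panic" ];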
svn path=/nixos/trunk/; revision=19212
feature is hard to maintain and because it is a potential source of error.
Imports are only accepted inside named modules where the system has some
control over mutual inclusion.
svn path=/nixos/trunk/; revision=17143
lib/build-vms.nix contains a function `buildVirtualNetwork' that
takes a specification of a network of machines (as an attribute set
of NixOS machine configurations) and builds a script that starts
each configuration in a separate QEMU/KVM VM and connects them
together in a virtual network. This script can be run manually to
test the VMs interactively. There is also a function `runTests'
that starts and runs the virtual network in a derivation, and
then executes a test specification that tells the VMs to do certain
things (e.g., letting one VM send an HTTP request to a webserver on
another VM). The tests are written in Perl (for now).
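Such a network specification is just an attribute set mapping
machine names to NixOS configuration functions; a minimal sketch
(the configurations are hypothetical) is
  { webserver =
      { config, pkgs, ... }:
      { services.httpd.enable = true; };
    client =
      { config, pkgs, ... }:
      { };
  }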
tests/subversion.nix shows a simple example, namely a network of two
machines: a webserver that runs the Subversion subservice, and a
client. Apache, Subversion and a few other packages are built with
coverage analysis instrumentation. For instance,
  $ nix-build tests/subversion.nix -A vms
  $ ./result/bin/run-vms
starts two QEMU/KVM instances. When they have finished booting, the
webserver can be accessed from the host through
http://localhost:8081/.
It also has a small test suite:
  $ nix-build tests/subversion.nix -A report
This runs the VMs in a derivation, runs the tests, and then produces
a distributed code coverage analysis report (i.e. it shows the
combined coverage on both machines).
The Perl test driver program is in lib/test-driver. It executes
commands on the guest machines by connecting to a root shell running
on port 514 (provided by modules/testing/test-instrumentation.nix).
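A test specification is then just a Perl script driving the
machines through these shells; a hypothetical fragment (the machine
names and driver methods are illustrative) might read
  startAll;
  $webserver->waitForJob("httpd");
  $client->mustSucceed("curl --fail http://webserver/ > /dev/null");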
The VMs are connected together in a virtual network using QEMU's
multicast feature. This isn't very secure. At the very least,
other processes on the same machine can listen to or send packets on
the virtual network. On the plus side, we don't need to be root to
set up a multicast virtual network, so we can do it from a
derivation. Maybe we can use VDE instead.
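For reference, QEMU's multicast mode is selected with the
`-net socket,mcast=...' option; each VM presumably gets flags along
these lines (the address and port are illustrative):
  $ qemu-system-x86_64 ... -net nic,model=virtio -net socket,mcast=230.0.0.1:1234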
(Moved from the vario repository.)
svn path=/nixos/trunk/; revision=16899
into one argument "modules".
* release.nix: fixed the manual job.
* ISO generation: break an infinite recursion. Don't know why this
suddenly happens. Probably because of the nixpkgs.config change,
but I don't see why. Maybe the option evaluation is too strict.
svn path=/nixos/trunk/; revision=16878
be set from the NixOS configuration. For instance, you can say
  nixpkgs.config.firefox.enableGeckoMediaPlayer = true;
  environment.systemPackages = [ pkgs.firefox ];
but the more interesting application is to apply global overrides to
Nixpkgs throughout NixOS, e.g.
  nixpkgs.config.packageOverrides = pkgs:
    { glibc = pkgs.glibc27;
      gcc = pkgs.gcc42;
    };
would build the whole system with Glibc 2.7 and GCC 4.2. (There are
some issues with "useFromStdenv" in all-packages.nix that need to be
fixed for packages in the stdenv bootstrap though.)
The implementation of this option is kind of evil, though, due to the
need to prevent a circularity between the evaluation of
nixpkgs.config and the "pkgs" module argument.
svn path=/nixos/trunk/; revision=16866
machine containing a replica (minus the state) of the system
configuration. This is mostly useful for testing configuration
changes prior to doing an actual "nixos-rebuild switch" (or even
"nixos-rebuild test"). The VM can be started as follows:
  $ nixos-rebuild build-vm
  $ ./result/bin/run-*-vm
which starts a KVM/QEMU instance. Additional QEMU options can be
passed through the QEMU_OPTS environment variable
(e.g. QEMU_OPTS="-redir tcp:8080::80" to forward a host port to the
guest). The fileSystems attribute of the regular system
configuration is ignored (using mkOverride), because obviously we
can't allow the VM to access the host's block devices. Instead, at
startup the VM creates an empty disk image in ./<hostname>.qcow2 to
store the VM's root filesystem.
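For example, to try the port forwarding mentioned above:
  $ nixos-rebuild build-vm
  $ QEMU_OPTS="-redir tcp:8080::80" ./result/bin/run-*-vm
after which port 80 of the guest is reachable from the host as
http://localhost:8080/.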
Building a VM in this way is efficient because the VM shares its Nix
store with the host (through a CIFS mount). However, because the
Nix store of the host is mounted read-only in the guest, you cannot
run Nix build actions inside the VM. Therefore the VM can only be
reconfigured by re-running "nixos-rebuild build-vm" on the host and
restarting the VM.
svn path=/nixos/trunk/; revision=16662
for a separate tree.
* Pass the path of the modules tree to modules so that you don't have
to write absolute paths, e.g. you can say
require = [ "${modulesPath}/hardware/network/intel-3945abg.nix" ];
instead of
  require = [ /etc/nixos/nixos/hardware/network/intel-3945abg.nix ];
The latter is bad because it makes it hard to build from a different
NixOS source tree.
svn path=/nixos/branches/modular-nixos/; revision=16350