or Google Earth) on 64-bit NixOS on NVIDIA hardware. The 32-bit
OpenGL library is symlinked from /var/run/opengl-driver-32, which is
added to LD_LIBRARY_PATH so that 32-bit binaries can find it.
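For illustration, on a running system the mechanism amounts to
roughly this (the store path is an example, not a literal one):
  $ readlink /var/run/opengl-driver-32
  /nix/store/...-nvidia-x11-.../lib
  $ echo $LD_LIBRARY_PATH
  ...:/var/run/opengl-driver-32:...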
svn path=/nixos/trunk/; revision=22062
function argument, so that the test script can refer to computed
values such as the assigned IP addresses of the virtual machines.
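For example, something along these lines becomes possible (a sketch;
the attribute path used for the address is hypothetical):
  testScript = { nodes, ... }:
    ''
      $client->mustSucceed("ping -c 1 ${nodes.server.config.networking.primaryIPAddress}");
    '';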
svn path=/nixos/trunk/; revision=21939
machine can now declare an option `virtualisation.vlans' that causes
it to have network interfaces connected to each listed virtual
network. For instance,
  virtualisation.vlans = [ 1 2 ];
causes the machine to have two interfaces (in addition to eth0, used
by the test driver to control the machine): eth1 connected to
network 1 with IP address 192.168.1.<i>, and eth2 connected to
network 2 with address 192.168.2.<i> (where <i> is the index of the
machine in the `nodes' attribute set). On the other hand,
  virtualisation.vlans = [ 2 ];
causes the machine to have only an eth1, connected to network 2 with
address 192.168.2.<i>. So each virtual network <n> is assigned the
IP range 192.168.<n>.0/24.
Each virtual network is implemented using a separate multicast
address on the host, so guests really cannot talk to networks to
which they are not connected.
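For example (illustrative; this assumes the machines are numbered in
the order in which they appear in `nodes'):
  nodes = {
    client = { config, pkgs, ... }:
      { virtualisation.vlans = [ 1 ]; };    # eth1 = 192.168.1.1
    router = { config, pkgs, ... }:
      { virtualisation.vlans = [ 1 2 ]; };  # eth1 = 192.168.1.2,
                                            # eth2 = 192.168.2.2
  };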
* Added a simple NAT test to demonstrate this.
* Added an option `virtualisation.qemu.options' to specify QEMU
command-line options. Used to factor out some commonality between
the test driver script and the interactive test script.
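Hypothetical usage (assuming the option takes a list of strings):
  virtualisation.qemu.options = [ "-vga std" ];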
svn path=/nixos/trunk/; revision=21928
What I want with this derivation is to let the Sheevaplug NixOS
build a tarball with all the files needed to boot. This tarball can
then be unpacked onto an SD card, or onto an NFS/TFTP server, and
the user can boot the system with the help of the U-Boot console.
So far, I have only tried to build the tarball on a PC, in order
to develop the Nix expressions more quickly.
There is nothing written specially for the Sheevaplug in any of
this so far.
svn path=/nixos/trunk/; revision=20035
console. This uses the `sendkey' command in the QEMU monitor.
* For the block/unblock primitives, use the `set_link' command in the
QEMU monitor.
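From a test script this looks roughly as follows (the method names on
the machine objects are assumptions):
  $machine->sendKeys("alt-f2");  # uses 'sendkey' in the QEMU monitor
  $machine->block();             # uses 'set_link' to take the link down
  $machine->unblock();           # uses 'set_link' to bring it back up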
svn path=/nixos/trunk/; revision=19854
it special commands such as "screendump", "sendkey" and so on.
* Take screenshots using the "screendump" command. This has the
advantage over "scrot" that it also supports taking a picture of the
console, and is not affected by weird X visuals.
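For example (method name assumed):
  $machine->screenshot("login_prompt");  # 'screendump' under the hood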
svn path=/nixos/trunk/; revision=19837
its default behaviour is to stop the emulator (i.e. suspend the VM).
For automated tests, this is bad, because it makes the VM appear to
hang without any error message. The "werror=report" flag causes
QEMU to report the problem to the VM. As a side effect, QEMU exits
very elegantly:
  [ 2.308668] end_request: I/O error, dev vda, sector 534400
  [ 2.309611] Buffer I/O error on device vda, logical block 66800
  ...
  *** glibc detected *** /nix/store/yhngqrww53j0aw7z7v4bv948x5g5fc3d-qemu-kvm-0.12.1.2/bin/qemu-system-x86_64: double free or corruption (!prev): 0x08e3e040 ***
  Aborted
So I guess we now depend on a bug in QEMU :-)
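For reference, werror is a suboption of -drive, so the invocation
looks roughly like this (other flags elided):
  $ qemu-system-x86_64 -drive file=root.img,if=virtio,werror=report ...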
svn path=/nixos/trunk/; revision=19703
* Factored out some commonality between tests to make them a bit
simpler to write. A test is now a function from { pkgs, ... } to
either { nodes, testScript } or { machine, testScript }, so it's no
longer necessary to have a "vms" attribute in every test.
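Illustratively, a single-machine test now has roughly this shape (the
body is a sketch, not a real test):
  { pkgs, ... }:
  {
    machine = { config, pkgs, ... }:
      { services.httpd.enable = true; };
    testScript = ''
      $machine->mustSucceed("curl --fail http://localhost/");
    '';
  }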
svn path=/nixos/trunk/; revision=19220
write some magic string to ttyS0. This removes the dependency on
having a CIFS mount.
* Use a thread to process the stdout/stderr of each QEMU instance.
* Add a kernel command line parameter "stage1panic" to tell stage 1 to
panic if an error occurs. This is faster than waiting until
connect() times out.
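For instance, one way to set it (illustrative; the test driver may
also pass it directly on the QEMU command line):
  boot.kernelParams = [ "stage1panic" ];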
svn path=/nixos/trunk/; revision=19212
feature is hard to maintain and because it is a potential source of
errors. Imports are only accepted inside named modules, where the
system has some control over mutual inclusion.
svn path=/nixos/trunk/; revision=17143
lib/build-vms.nix contains a function `buildVirtualNetwork' that
takes a specification of a network of machines (as an attribute set
of NixOS machine configurations) and builds a script that starts
each configuration in a separate QEMU/KVM VM and connects them
together in a virtual network. This script can be run manually to
test the VMs interactively. There is also a function `runTests'
that starts and runs the virtual network in a derivation, and
then executes a test specification that tells the VMs to do certain
things (e.g., letting one VM send an HTTP request to a webserver on
another VM). The tests are written in Perl (for now).
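Such a test reads roughly like this (a sketch; the exact method names
are assumptions):
  $webserver->waitForOpenPort(80);
  $client->mustSucceed("svn co http://webserver/repos/trunk wc");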
tests/subversion.nix shows a simple example, namely a network of two
machines: a webserver that runs the Subversion subservice, and a
client. Apache, Subversion and a few other packages are built with
coverage analysis instrumentation. For instance,
  $ nix-build tests/subversion.nix -A vms
  $ ./result/bin/run-vms
starts two QEMU/KVM instances. When they have finished booting, the
webserver can be accessed from the host through
http://localhost:8081/.
It also has a small test suite:
  $ nix-build tests/subversion.nix -A report
This runs the VMs in a derivation, runs the tests, and then produces
a distributed code coverage analysis report (i.e. it shows the
combined coverage on both machines).
The Perl test driver program is in lib/test-driver. It executes
commands on the guest machines by connecting to a root shell running
on port 514 (provided by modules/testing/test-instrumentation.nix).
The VMs are connected together in a virtual network using QEMU's
multicast feature. This isn't very secure. At the very least,
other processes on the same machine can listen to or send packets on
the virtual network. On the plus side, we don't need to be root to
set up a multicast virtual network, so we can do it from a
derivation. Maybe we can use VDE instead.
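For reference, QEMU's multicast mode joins the VMs' network devices
to a shared multicast UDP socket, along the lines of:
  $ qemu-system-x86_64 ... -net nic -net socket,mcast=230.0.0.1:1234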
(Moved from the vario repository.)
svn path=/nixos/trunk/; revision=16899
into one argument "modules".
* release.nix: fixed the manual job.
* ISO generation: break an infinite recursion. Don't know why this
suddenly happens. Probably because of the nixpkgs.config change,
but I don't see why. Maybe the option evaluation is too strict.
svn path=/nixos/trunk/; revision=16878
be set from the NixOS configuration. For instance, you can say
  nixpkgs.config.firefox.enableGeckoMediaPlayer = true;
  environment.systemPackages = [ pkgs.firefox ];
but the more interesting application is to apply global overrides to
Nixpkgs throughout NixOS, e.g.
  nixpkgs.config.packageOverrides = pkgs:
    { glibc = pkgs.glibc27;
      gcc = pkgs.gcc42;
    };
would build the whole system with Glibc 2.7 and GCC 4.2. (There are
some issues with "useFromStdenv" in all-packages.nix that need to be
fixed for packages in the stdenv bootstrap though.)
The implementation of this option is kind of evil, though, because
of the need to prevent a circularity between the evaluation of
nixpkgs.config and the "pkgs" module argument.
svn path=/nixos/trunk/; revision=16866
machine containing a replica (minus the state) of the system
configuration. This is mostly useful for testing configuration
changes prior to doing an actual "nixos-rebuild switch" (or even
"nixos-rebuild test"). The VM can be started as follows:
  $ nixos-rebuild build-vm
  $ ./result/bin/run-*-vm
which starts a KVM/QEMU instance. Additional QEMU options can be
passed through the QEMU_OPTS environment variable
(e.g. QEMU_OPTS="-redir tcp:8080::80" to forward a host port to the
guest). The fileSystems attribute of the regular system
configuration is ignored (using mkOverride), because obviously we
can't allow the VM to access the host's block devices. Instead, at
startup the VM creates an empty disk image in ./<hostname>.qcow2 to
store the VM's root filesystem.
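Sketch of that override (the priority value and file system
attributes are assumptions):
  fileSystems = mkOverride 50
    [ { mountPoint = "/"; device = "/dev/vda"; } ];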
Building a VM in this way is efficient because the VM shares its Nix
store with the host (through a CIFS mount). However, because the
Nix store of the host is mounted read-only in the guest, you cannot
run Nix build actions inside the VM. Therefore the VM can only be
reconfigured by re-running "nixos-rebuild build-vm" on the host and
restarting the VM.
svn path=/nixos/trunk/; revision=16662
for a separate tree.
* Pass the path of the modules tree to modules so that you don't have
to write absolute paths, e.g. you can say
require = [ "${modulesPath}/hardware/network/intel-3945abg.nix" ];
instead of
  require = [ /etc/nixos/nixos/hardware/network/intel-3945abg.nix ];
The latter is bad because it makes it hard to build from a different
NixOS source tree.
svn path=/nixos/branches/modular-nixos/; revision=16350
iso-image.nix contains the minimal stuff necessary to build a
bootable ISO image containing the given configuration. The idea is
that this can be customised by providing additional modules, e.g. to
add extra packages to the image.
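Such a customisation module could look like this (a sketch; the
package choice is arbitrary):
  { config, pkgs, ... }:
  {
    require = [ ./iso-image.nix ];
    environment.systemPackages = [ pkgs.emacs ];
  }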
The ISO image is exported in the configuration attribute
system.build.isoImage. So it can be built as follows:
  $ nix-build lib/eval-config.nix \
      --arg configuration 'import ./modules/installer/cd-dvd/iso-image.nix' \
      -A config.system.build.isoImage
svn path=/nixos/branches/modular-nixos/; revision=15871