cards because the default X config contains the Intel driver.
Likewise, there is no need for the "vesa" default.
* nixos-hardware-scan: Clean up the output a bit.
svn path=/nixos/trunk/; revision=26423
This is a follow-up to r23775 ("Substitute the path of the system
derivation directly in the stage 2 init script.").
svn path=/nixos/trunk/; revision=25582
The hardware scan was generating a hardware.nix containing
"pkgs.linuxPackages" without having "pkgs" in scope. Also, it
shouldn't define boot.kernelPackages.
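As an illustrative sketch (not part of this change): a module such as a generated hardware.nix can only refer to pkgs when it is brought into scope through the module's argument list, e.g.:

{ config, pkgs, ... }:

{
  # Hypothetical scanner output; the module names are placeholders.
  boot.initrd.kernelModules = [ "ahci" "ehci_hcd" ];

  # An expression such as pkgs.linuxPackages only evaluates because
  # "pkgs" is listed in the arguments above; without it, evaluation
  # fails with an undefined variable error.
}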
svn path=/nixos/trunk/; revision=25192
attribute name of the machine in the model. This allows
networking.hostName and deployment.targetHost to be omitted for
typical networks.
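For illustration only (a sketch, not taken from this commit): with this change a typical network model can consist of nothing but the machine configurations, the attribute name supplying the host name and deployment target:

{
  # The attribute name "test1" provides the defaults for
  # networking.hostName and deployment.targetHost, so neither
  # has to be set explicitly.
  test1 =
    {pkgs, config, ...}:
    {
      services.openssh.enable = true;
    };
}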
svn path=/nixos/trunk/; revision=25125
- implemented the --no-out-link option so that invoking these tools from scripts leaves no garbage behind (see the example after this list)
- some misc. cleanups
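A hypothetical invocation from a script, assuming nixos-build-vms is one of the tools that accepts the new flag:

$ nixos-build-vms -n network.nix --no-out-link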
svn path=/nixos/trunk/; revision=25019
- Added a backdoor option to the interactive run-vms script. This allows me to integrate the virtual network approach with Disnix
- Small documentation fixes
Some explanation:
The nixos-build-vms command-line tool can be used to build a virtual network from a network.nix specification.
For example, a network configuration (network.nix) could look like this:
{
  test1 =
    {pkgs, config, ...}:
    {
      services.openssh.enable = true;
      ...
    };

  test2 =
    {pkgs, config, ...}:
    {
      services.openssh.enable = true;
      services.xserver.enable = true;
    };
}
By typing the following command:
$ nixos-build-vms -n network.nix
a virtual network is built, which can be started by typing:
$ ./result/bin/run-vms
It is also possible to enable a backdoor. In that case, *.socket files are stored in the current directory,
which can be used by the end user to invoke remote instructions on a VM in the network through a Unix
domain socket.
For example, by building the network with the following command:
$ nixos-build-vms -n network.nix --use-backdoor
and launching the virtual network:
$ ./result/bin/run-vms
you will find two socket files in the current directory, namely test1.socket and test2.socket.
These Unix domain sockets can be used to remotely administer the test1 and test2 machines
in the virtual network.
For example, by running:
$ socat ./test1.socket stdio
ls /root
you can retrieve the contents of the /root directory of the virtual machine with identifier test1.
svn path=/nixos/trunk/; revision=24410
Given a network expression (e.g. network.nix) such as:

{
  test1 = {pkgs, config, ...}:
    {
      # NixOS config of machine test1
      ...
    };

  test2 = {pkgs, config, ...}:
    {
      # NixOS config of machine test2
      ...
    };
}
And an infrastructure expression, e.g.:
{
  test1 = {
    hostName = "test1.example.org";
    system = "i686-linux";
  };

  test2 = {
    hostName = "test2.example.org";
    system = "x86_64-linux";
  };
}
Then, by executing:

$ nixos-deploy-network -n network.nix -i infrastructure.nix
the system configurations in the network expression are built, transferred to the machines in the network, and finally activated.
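Roughly, and only as a sketch of what this amounts to per machine (the exact commands are an assumption; $toplevel stands for the built system configuration of that machine):

# copy the system closure to the target machine
$ nix-copy-closure --to test1.example.org $toplevel
# activate the new configuration on the target
$ ssh test1.example.org $toplevel/bin/switch-to-configuration switch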
svn path=/nixos/trunk/; revision=24146