That's confusing and wrong: nixos-hardware-scan should just enable
support for the detected hardware, not enable lots of software (let
alone KDE).
svn path=/nixos/trunk/; revision=30325
After the change from revision 30103, nixos-rebuild suddenly consumed
freaky amounts of memory. I had to abort the process after it had
allocated well in excess of 30GB(!) of RAM. I'm not sure what is causing
this behavior, but undoing that assignment fixes the problem. The other
two commits needed to be reverted, too, because they depend on 30103.
svn path=/nixos/trunk/; revision=30127
was never intended as a generic "check out anything" script; it's
just a convenience script to obtain the NixOS trunk after
installation. So that's what it should do.
svn path=/nixos/trunk/; revision=27005
cards because the default X config contains the Intel driver.
Likewise, there is no need for the "vesa" default.
* nixos-hardware-scan: Clean up the output a bit.
svn path=/nixos/trunk/; revision=26423
hardware scan was generating a hardware.nix containing
"pkgs.linuxPackages" without having "pkgs" in scope. Also, it
shouldn't define boot.kernelPackages.
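A hedged sketch of the problem (illustrative, not the exact generated file): the scan emitted something along these lines

{ config, ... }:

{
  boot.kernelPackages = pkgs.linuxPackages;  # pkgs is not in scope here
}

and the fix is to stop emitting the boot.kernelPackages line altogether rather than to add pkgs to the argument list.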
svn path=/nixos/trunk/; revision=25192
attribute name of the machine in the model. This allows
networking.hostName and deployment.targetHost to be omitted for
typical networks.
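For example (a hedged sketch; the machine name and options are illustrative):

{
  test1 =
    {pkgs, config, ...}:
    {
      services.openssh.enable = true;
      # networking.hostName and deployment.targetHost
      # now default to the attribute name "test1".
    };
}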
svn path=/nixos/trunk/; revision=25125
- Implemented the --no-out-link option so that invoking these tools from scripts leaves no garbage behind (see the usage sketch below)
- Some misc. cleanups
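For instance (a hedged sketch, assuming nixos-build-vms is one of the tools in question):

$ nixos-build-vms -n network.nix --no-out-link

builds the virtual network without leaving a ./result symlink behind in the working directory, which is convenient when the tool is invoked from a script.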
svn path=/nixos/trunk/; revision=25019
- Added a backdoor option to the interactive run-vms script. This allows me to integrate the virtual network approach with Disnix
- Small documentation fixes
Some explanation:
The nixos-build-vms command-line tool can be used to build a virtual network from a network.nix specification.
For example, a network configuration (network.nix) could look like this:
{
  test1 =
    {pkgs, config, ...}:
    {
      services.openssh.enable = true;
      ...
    };

  test2 =
    {pkgs, config, ...}:
    {
      services.openssh.enable = true;
      services.xserver.enable = true;
    };
}
By running the following command:
$ nixos-build-vms -n network.nix
a virtual network is built, which can be started by typing:
$ ./result/bin/run-vms
It is also possible to enable a backdoor. In this case, *.socket files are stored in the current
directory, which the end-user can use to run remote commands on a VM in the network through a Unix
domain socket.
For example, after building the network with the following command:
$ nixos-build-vms -n network.nix --use-backdoor
and launching the virtual network:
$ ./result/bin/run-vms
you will find two socket files in your current directory, namely test1.socket and test2.socket.
These Unix domain sockets can be used to remotely administer the test1 and test2 machines
in the virtual network.
For example, by running:
$ socat ./test1.socket stdio
ls /root
you can retrieve the contents of the /root directory of the virtual machine with identifier test1.
svn path=/nixos/trunk/; revision=24410
{
  test1 = {pkgs, config, ...}:
    {
      # NixOS config of machine test1
      ...
    };

  test2 = {pkgs, config, ...}:
    {
      # NixOS config of machine test2
      ...
    };
}
And an infrastructure expression, e.g.:
{
  test1 = {
    hostName = "test1.example.org";
    system = "i686-linux";
  };

  test2 = {
    hostName = "test2.example.org";
    system = "x86_64-linux";
  };
}
And by executing:
$ nixos-deploy-network -n network.nix -i infrastructure.nix
the system configurations in the network expression are built, transferred to the machines in the network, and finally activated.
svn path=/nixos/trunk/; revision=24146
devices. These are used to replace hand-made listings in the basic
installation CD.
The configuration file generated by nixos-hardware-scan enables
non-detected devices by default.
svn path=/nixos/trunk/; revision=23911
like `build-vm', but boots using the regular boot loader (i.e. GRUB
1 or 2) rather than booting directly from the kernel/initrd. Thus
it allows testing of GRUB.
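A hedged usage sketch (assuming the new mode is exposed as a nixos-rebuild action analogous to `build-vm'; the exact action name here is an assumption):

$ nixos-rebuild build-vm-with-bootloader

The resulting VM then boots through GRUB rather than directly from the kernel/initrd, so the boot loader setup itself can be exercised.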
svn path=/nixos/trunk/; revision=23747