This repository was archived by the owner on Apr 12, 2021. It is now read-only.

Comparing changes

base repository: NixOS/nixpkgs-channels
base: 3ba0d9f75ccf
head repository: NixOS/nixpkgs-channels
compare: 724dbda1e0ca
Showing 323 changed files with 6,935 additions and 2,770 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -44,9 +44,9 @@ Nixpkgs and NixOS are built and tested by our continuous integration
system, [Hydra](https://hydra.nixos.org/).

* [Continuous package builds for unstable/master](https://hydra.nixos.org/jobset/nixos/trunk-combined)
* [Continuous package builds for the NixOS 19.03 release](https://hydra.nixos.org/jobset/nixos/release-19.03)
* [Continuous package builds for the NixOS 19.09 release](https://hydra.nixos.org/jobset/nixos/release-19.09)
* [Tests for unstable/master](https://hydra.nixos.org/job/nixos/trunk-combined/tested#tabs-constituents)
* [Tests for the NixOS 19.03 release](https://hydra.nixos.org/job/nixos/release-19.03/tested#tabs-constituents)
* [Tests for the NixOS 19.09 release](https://hydra.nixos.org/job/nixos/release-19.09/tested#tabs-constituents)

Artifacts successfully built with Hydra are published to cache at
https://cache.nixos.org/. When successful build and test criteria are
15 changes: 1 addition & 14 deletions doc/languages-frameworks/beam.xml
@@ -55,20 +55,7 @@
<title>Rebar3</title>

<para>
By default, Rebar3 wants to manage its own dependencies. This is perfectly acceptable in the normal, non-Nix setup, but in the Nix world, it is not. To rectify this, we provide two versions of Rebar3:
<itemizedlist>
<listitem>
<para>
<literal>rebar3</literal>: patched to remove the ability to download anything. When not running it via <literal>nix-shell</literal> or <literal>nix-build</literal>, it's probably not going to work as desired.
</para>
</listitem>
<listitem>
<para>
<literal>rebar3-open</literal>: the normal, unmodified Rebar3. It should work exactly as would any other version of Rebar3. Any Erlang package should rely on <literal>rebar3</literal> instead. See <xref
linkend="rebar3-packages"/>.
</para>
</listitem>
</itemizedlist>
We provide a version of Rebar3, which is the normal, unmodified Rebar3, under <literal>rebar3</literal>. We also provide a helper to fetch Rebar3 dependencies from a lockfile under <literal>fetchRebar3Deps</literal>.
</para>
</section>
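As a rough illustration (a sketch, not part of this change; it assumes the project's dependencies are already vendored or pre-fetched, since the patched <literal>rebar3</literal> cannot download anything), a build can be run from a Nix shell so that the toolchain comes from the store:
<screen>
<prompt>$ </prompt>nix-shell -p erlang rebar3 --run "rebar3 compile"
</screen>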

32 changes: 26 additions & 6 deletions nixos/doc/manual/administration/boot-problems.xml
@@ -6,15 +6,22 @@
<title>Boot Problems</title>

<para>
If NixOS fails to boot, there are a number of kernel command line parameters that may help you to identify or fix the issue. You can add these parameters in the GRUB boot menu by pressing “e” to modify the selected boot entry and editing the line starting with <literal>linux</literal>. The following are some useful kernel command line parameters that are recognised by the NixOS boot scripts or by systemd:
If NixOS fails to boot, there are a number of kernel command line parameters
that may help you to identify or fix the issue. You can add these parameters
in the GRUB boot menu by pressing “e” to modify the selected boot entry
and editing the line starting with <literal>linux</literal>. The following
are some useful kernel command line parameters that are recognised by the
NixOS boot scripts or by systemd:
<variablelist>
<varlistentry>
<term>
<literal>boot.shell_on_fail</literal>
</term>
<listitem>
<para>
Start a root shell if something goes wrong in stage 1 of the boot process (the initial ramdisk). This is disabled by default because there is no authentication for the root shell.
Start a root shell if something goes wrong in stage 1 of the boot process
(the initial ramdisk). This is disabled by default because there is no
authentication for the root shell.
</para>
</listitem>
</varlistentry>
@@ -24,7 +31,10 @@
</term>
<listitem>
<para>
Start an interactive shell in stage 1 before anything useful has been done. That is, no modules have been loaded and no file systems have been mounted, except for <filename>/proc</filename> and <filename>/sys</filename>.
Start an interactive shell in stage 1 before anything useful has been
done. That is, no modules have been loaded and no file systems have been
mounted, except for <filename>/proc</filename> and
<filename>/sys</filename>.
</para>
</listitem>
</varlistentry>
@@ -44,7 +54,11 @@
</term>
<listitem>
<para>
Boot into rescue mode (a.k.a. single user mode). This will cause systemd to start nothing but the unit <literal>rescue.target</literal>, which runs <command>sulogin</command> to prompt for the root password and start a root login shell. Exiting the shell causes the system to continue with the normal boot process.
Boot into rescue mode (a.k.a. single user mode). This will cause systemd
to start nothing but the unit <literal>rescue.target</literal>, which
runs <command>sulogin</command> to prompt for the root password and start
a root login shell. Exiting the shell causes the system to continue with
the normal boot process.
</para>
</listitem>
</varlistentry>
@@ -54,7 +68,8 @@
</term>
<listitem>
<para>
Make systemd very verbose and send log messages to the console instead of the journal.
Make systemd very verbose and send log messages to the console instead of
the journal.
</para>
</listitem>
</varlistentry>
@@ -65,6 +80,11 @@
</para>

<para>
If no login prompts or X11 login screens appear (e.g. due to hanging dependencies), you can press Alt+ArrowUp. If you’re lucky, this will start rescue mode (described above). (Also note that since most units have a 90-second timeout before systemd gives up on them, the <command>agetty</command> login prompts should appear eventually unless something is very wrong.)
If no login prompts or X11 login screens appear (e.g. due to hanging
dependencies), you can press Alt+ArrowUp. If you’re lucky, this will start
rescue mode (described above). (Also note that since most units have a
90-second timeout before systemd gives up on them, the
<command>agetty</command> login prompts should appear eventually unless
something is very wrong.)
</para>
</section>
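For concreteness (a sketch, not part of this diff; the store paths are abbreviated placeholders), a GRUB <literal>linux</literal> line edited to drop into a stage-1 shell on failure might look like:
<screen>
linux /nix/store/<replaceable>...</replaceable>-linux/bzImage init=/nix/store/<replaceable>...</replaceable>-nixos-system/init boot.shell_on_fail
</screen>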
32 changes: 24 additions & 8 deletions nixos/doc/manual/administration/cleaning-store.xml
@@ -5,43 +5,59 @@
xml:id="sec-nix-gc">
<title>Cleaning the Nix Store</title>
<para>
Nix has a purely functional model, meaning that packages are never upgraded in place. Instead new versions of packages end up in a different location in the Nix store (<filename>/nix/store</filename>). You should periodically run Nix’s <emphasis>garbage collector</emphasis> to remove old, unreferenced packages. This is easy:
Nix has a purely functional model, meaning that packages are never upgraded
in place. Instead new versions of packages end up in a different location in
the Nix store (<filename>/nix/store</filename>). You should periodically run
Nix’s <emphasis>garbage collector</emphasis> to remove old, unreferenced
packages. This is easy:
<screen>
<prompt>$ </prompt>nix-collect-garbage
</screen>
Alternatively, you can use a systemd unit that does the same in the background:
Alternatively, you can use a systemd unit that does the same in the
background:
<screen>
<prompt># </prompt>systemctl start nix-gc.service
</screen>
You can tell NixOS in <filename>configuration.nix</filename> to run this unit automatically at certain points in time, for instance, every night at 03:15:
You can tell NixOS in <filename>configuration.nix</filename> to run this unit
automatically at certain points in time, for instance, every night at 03:15:
<programlisting>
<xref linkend="opt-nix.gc.automatic"/> = true;
<xref linkend="opt-nix.gc.dates"/> = "03:15";
</programlisting>
</para>
<para>
The commands above do not remove garbage collector roots, such as old system configurations. Thus they do not remove the ability to roll back to previous configurations. The following command deletes old roots, removing the ability to roll back to them:
The commands above do not remove garbage collector roots, such as old system
configurations. Thus they do not remove the ability to roll back to previous
configurations. The following command deletes old roots, removing the ability
to roll back to them:
<screen>
<prompt>$ </prompt>nix-collect-garbage -d
</screen>
You can also do this for specific profiles, e.g.
<screen>
<prompt>$ </prompt>nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old
</screen>
Note that NixOS system configurations are stored in the profile <filename>/nix/var/nix/profiles/system</filename>.
Note that NixOS system configurations are stored in the profile
<filename>/nix/var/nix/profiles/system</filename>.
</para>
<para>
Another way to reclaim disk space (often as much as 40% of the size of the Nix store) is to run Nix’s store optimiser, which seeks out identical files in the store and replaces them with hard links to a single copy.
Another way to reclaim disk space (often as much as 40% of the size of the
Nix store) is to run Nix’s store optimiser, which seeks out identical files
in the store and replaces them with hard links to a single copy.
<screen>
<prompt>$ </prompt>nix-store --optimise
</screen>
Since this command needs to read the entire Nix store, it can take quite a while to finish.
Since this command needs to read the entire Nix store, it can take quite a
while to finish.
</para>
<section xml:id="sect-nixos-gc-boot-entries">
<title>NixOS Boot Entries</title>

<para>
If your <filename>/boot</filename> partition runs out of space, after clearing old profiles you must rebuild your system with <literal>nixos-rebuild</literal> to update the <filename>/boot</filename> partition and clear space.
If your <filename>/boot</filename> partition runs out of space, after
clearing old profiles you must rebuild your system with
<literal>nixos-rebuild</literal> to update the <filename>/boot</filename>
partition and clear space.
</para>
</section>
</chapter>
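A typical sequence for the <filename>/boot</filename> situation described above (a sketch, not part of this change) is to delete old generations and then regenerate the boot entries:
<screen>
<prompt># </prompt>nix-collect-garbage -d
<prompt># </prompt>nixos-rebuild boot
</screen>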
26 changes: 21 additions & 5 deletions nixos/doc/manual/administration/container-networking.xml
@@ -6,7 +6,10 @@
<title>Container Networking</title>

<para>
When you create a container using <literal>nixos-container create</literal>, it gets its own private IPv4 address in the range <literal>10.233.0.0/16</literal>. You can get the container’s IPv4 address as follows:
When you create a container using <literal>nixos-container create</literal>,
it gets its own private IPv4 address in the range
<literal>10.233.0.0/16</literal>. You can get the container’s IPv4 address
as follows:
<screen>
<prompt># </prompt>nixos-container show-ip foo
10.233.4.2
@@ -17,21 +20,34 @@
</para>

<para>
Networking is implemented using a pair of virtual Ethernet devices. The network interface in the container is called <literal>eth0</literal>, while the matching interface in the host is called <literal>ve-<replaceable>container-name</replaceable></literal> (e.g., <literal>ve-foo</literal>). The container has its own network namespace and the <literal>CAP_NET_ADMIN</literal> capability, so it can perform arbitrary network configuration such as setting up firewall rules, without affecting or having access to the host’s network.
Networking is implemented using a pair of virtual Ethernet devices. The
network interface in the container is called <literal>eth0</literal>, while
the matching interface in the host is called
<literal>ve-<replaceable>container-name</replaceable></literal> (e.g.,
<literal>ve-foo</literal>). The container has its own network namespace and
the <literal>CAP_NET_ADMIN</literal> capability, so it can perform arbitrary
network configuration such as setting up firewall rules, without affecting or
having access to the host’s network.
</para>

<para>
By default, containers cannot talk to the outside network. If you want that, you should set up Network Address Translation (NAT) rules on the host to rewrite container traffic to use your external IP address. This can be accomplished using the following configuration on the host:
By default, containers cannot talk to the outside network. If you want that,
you should set up Network Address Translation (NAT) rules on the host to
rewrite container traffic to use your external IP address. This can be
accomplished using the following configuration on the host:
<programlisting>
<xref linkend="opt-networking.nat.enable"/> = true;
<xref linkend="opt-networking.nat.internalInterfaces"/> = ["ve-+"];
<xref linkend="opt-networking.nat.externalInterface"/> = "eth0";
</programlisting>
where <literal>eth0</literal> should be replaced with the desired external interface. Note that <literal>ve-+</literal> is a wildcard that matches all container interfaces.
where <literal>eth0</literal> should be replaced with the desired external
interface. Note that <literal>ve-+</literal> is a wildcard that matches all
container interfaces.
</para>

<para>
If you are using Network Manager, you need to explicitly prevent it from managing container interfaces:
If you are using Network Manager, you need to explicitly prevent it from
managing container interfaces:
<programlisting>
networking.networkmanager.unmanaged = [ "interface-name:ve-*" ];
</programlisting>
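Once NAT is enabled as above, a quick way to verify outbound connectivity (a sketch; <literal>foo</literal> is the example container name used earlier) is to run a command inside the container:
<screen>
<prompt># </prompt>nixos-container run foo -- ping -c 3 nixos.org
</screen>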
19 changes: 16 additions & 3 deletions nixos/doc/manual/administration/containers.xml
@@ -5,15 +5,28 @@
xml:id="ch-containers">
<title>Container Management</title>
<para>
NixOS allows you to easily run other NixOS instances as <emphasis>containers</emphasis>. Containers are a light-weight approach to virtualisation that runs software in the container at the same speed as in the host system. NixOS containers share the Nix store of the host, making container creation very efficient.
NixOS allows you to easily run other NixOS instances as
<emphasis>containers</emphasis>. Containers are a light-weight approach to
virtualisation that runs software in the container at the same speed as in
the host system. NixOS containers share the Nix store of the host, making
container creation very efficient.
</para>
<warning>
<para>
Currently, NixOS containers are not perfectly isolated from the host system. This means that a user with root access to the container can do things that affect the host. So you should not give container root access to untrusted users.
Currently, NixOS containers are not perfectly isolated from the host system.
This means that a user with root access to the container can do things that
affect the host. So you should not give container root access to untrusted
users.
</para>
</warning>
<para>
NixOS containers can be created in two ways: imperatively, using the command <command>nixos-container</command>, and declaratively, by specifying them in your <filename>configuration.nix</filename>. The declarative approach implies that containers get upgraded along with your host system when you run <command>nixos-rebuild</command>, which is often not what you want. By contrast, in the imperative approach, containers are configured and updated independently from the host system.
NixOS containers can be created in two ways: imperatively, using the command
<command>nixos-container</command>, and declaratively, by specifying them in
your <filename>configuration.nix</filename>. The declarative approach implies
that containers get upgraded along with your host system when you run
<command>nixos-rebuild</command>, which is often not what you want. By
contrast, in the imperative approach, containers are configured and updated
independently from the host system.
</para>
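As a minimal illustration of the declarative route (a sketch; the container name <literal>webserver</literal> and its service are placeholders, not part of this change), a container can be declared directly in <filename>configuration.nix</filename>:
<programlisting>
containers.webserver = {
  autoStart = true;
  config = { config, pkgs, ... }: {
    services.nginx.enable = true;
  };
};
</programlisting>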
<xi:include href="imperative-containers.xml" />
<xi:include href="declarative-containers.xml" />
33 changes: 27 additions & 6 deletions nixos/doc/manual/administration/control-groups.xml
@@ -5,10 +5,16 @@
xml:id="sec-cgroups">
<title>Control Groups</title>
<para>
To keep track of the processes in a running system, systemd uses <emphasis>control groups</emphasis> (cgroups). A control group is a set of processes used to allocate resources such as CPU, memory or I/O bandwidth. There can be multiple control group hierarchies, allowing each kind of resource to be managed independently.
To keep track of the processes in a running system, systemd uses
<emphasis>control groups</emphasis> (cgroups). A control group is a set of
processes used to allocate resources such as CPU, memory or I/O bandwidth.
There can be multiple control group hierarchies, allowing each kind of
resource to be managed independently.
</para>
<para>
The command <command>systemd-cgls</command> lists all control groups in the <literal>systemd</literal> hierarchy, which is what systemd uses to keep track of the processes belonging to each service or user session:
The command <command>systemd-cgls</command> lists all control groups in the
<literal>systemd</literal> hierarchy, which is what systemd uses to keep
track of the processes belonging to each service or user session:
<screen>
<prompt>$ </prompt>systemd-cgls
├─user
@@ -26,19 +32,34 @@
│ └─2376 dhcpcd --config /nix/store/f8dif8dsi2yaa70n03xir8r653776ka6-dhcpcd.conf
└─ <replaceable>...</replaceable>
</screen>
Similarly, <command>systemd-cgls cpu</command> shows the cgroups in the CPU hierarchy, which allows per-cgroup CPU scheduling priorities. By default, every systemd service gets its own CPU cgroup, while all user sessions are in the top-level CPU cgroup. This ensures, for instance, that a thousand run-away processes in the <literal>httpd.service</literal> cgroup cannot starve the CPU for one process in the <literal>postgresql.service</literal> cgroup. (By contrast, if they were in the same cgroup, then the PostgreSQL process would get 1/1001 of the cgroup’s CPU time.) You can limit a service’s CPU share in <filename>configuration.nix</filename>:
Similarly, <command>systemd-cgls cpu</command> shows the cgroups in the CPU
hierarchy, which allows per-cgroup CPU scheduling priorities. By default,
every systemd service gets its own CPU cgroup, while all user sessions are in
the top-level CPU cgroup. This ensures, for instance, that a thousand
run-away processes in the <literal>httpd.service</literal> cgroup cannot
starve the CPU for one process in the <literal>postgresql.service</literal>
cgroup. (By contrast, if they were in the same cgroup, then the PostgreSQL
process would get 1/1001 of the cgroup’s CPU time.) You can limit a
service’s CPU share in <filename>configuration.nix</filename>:
<programlisting>
<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.CPUShares = 512;
</programlisting>
By default, every cgroup has 1024 CPU shares, so this will halve the CPU allocation of the <literal>httpd.service</literal> cgroup.
By default, every cgroup has 1024 CPU shares, so this will halve the CPU
allocation of the <literal>httpd.service</literal> cgroup.
</para>
<para>
There also is a <literal>memory</literal> hierarchy that controls memory allocation limits; by default, all processes are in the top-level cgroup, so any service or session can exhaust all available memory. Per-cgroup memory limits can be specified in <filename>configuration.nix</filename>; for instance, to limit <literal>httpd.service</literal> to 512 MiB of RAM (excluding swap):
There also is a <literal>memory</literal> hierarchy that controls memory
allocation limits; by default, all processes are in the top-level cgroup, so
any service or session can exhaust all available memory. Per-cgroup memory
limits can be specified in <filename>configuration.nix</filename>; for
instance, to limit <literal>httpd.service</literal> to 512 MiB of RAM
(excluding swap):
<programlisting>
<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.MemoryLimit = "512M";
</programlisting>
</para>
<para>
The command <command>systemd-cgtop</command> shows a continuously updated list of all cgroups with their CPU and memory usage.
The command <command>systemd-cgtop</command> shows a continuously updated
list of all cgroups with their CPU and memory usage.
</para>
</chapter>
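To focus on memory consumers (a sketch; <literal>-m</literal> orders the output by memory usage and is a standard <command>systemd-cgtop</command> flag, not something introduced by this change):
<screen>
<prompt>$ </prompt>systemd-cgtop -m
</screen>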