Comparing changes

base repository: NixOS/nixops
base: 235a6896acbb
head repository: NixOS/nixops
compare: 2454e89633b6

  • 2 commits
  • 3 files changed
  • 2 contributors

Commits on Jun 10, 2020

  1. bc67d70
  2. Merge pull request #1362 from nlewo/intro-overview-rest

     Convert overview and introduction from DocBook to ReST
     adisbladis authored Jun 10, 2020
     2454e89
Showing with 367 additions and 0 deletions.
  1. +10 −0 doc/index.rst
  2. +70 −0 doc/introduction.rst
  3. +287 −0 doc/overview.rst
10 changes: 10 additions & 0 deletions doc/index.rst
@@ -1,5 +1,15 @@
NixOps
======
.. toctree::
   :maxdepth: 1

   introduction

.. toctree::
   :maxdepth: 1

   overview

.. toctree::
   :maxdepth: 1
   :caption: User Guides:
70 changes: 70 additions & 0 deletions doc/introduction.rst
@@ -0,0 +1,70 @@
Introduction
------------

NixOps is a tool for deploying NixOS machines in a network or cloud.
It takes as input a declarative specification of a set of “logical”
machines and then performs any necessary steps or actions to realise
that specification: instantiate cloud machines, build and download
dependencies, stop and start services, and so on. NixOps has several
nice properties:

- It’s *declarative*: NixOps specifications state the desired
configuration of the machines, and NixOps then figures out the
actions necessary to realise that configuration. So there is no
difference between doing a new deployment or doing a redeployment:
the resulting machine configurations will be the same.

- It performs *fully automated* deployment. This is a good thing
because it ensures that deployments are reproducible.

- It performs provisioning. Based on the given deployment
specification, it will start missing virtual machines, create disk
volumes, and so on.

- It’s based on the `Nix package manager <http://nixos.org/nix/>`_,
which has a *purely functional* model that sets it apart from other
package managers. Concretely this means that multiple versions of
packages can coexist on a system, that packages can be upgraded or
rolled back atomically, that dependency specifications can be
guaranteed to be complete, and so on.

- It’s based on `NixOS <http://nixos.org/nixos/>`_, which has a
declarative approach to describing the desired configuration of a
machine. This makes it an ideal basis for automated configuration
management of sets of machines. NixOS also has desirable properties
such as (nearly) atomic upgrades, the ability to roll back to
previous configurations, and more.

- It’s *multi-cloud*. Machines in a single NixOps deployment can be
deployed to different target environments. For instance, one
logical machine can be deployed to a local “physical” machine,
another to an automatically instantiated Amazon EC2 instance in the
``eu-west-1`` region, another in the ``us-east-1`` region, and so
on.

- It supports *separation of “logical” and “physical” aspects* of a
deployment. NixOps specifications are modular, and this makes it
easy to separate the parts that say *what* logical machines should
do from *where* they should do it. For instance, the former might
say that machine X should run a PostgreSQL database and machine Y
should run an Apache web server, while the latter might state that X
should be instantiated as an EC2 ``m1.large`` machine while Y should
be instantiated as an ``m1.small``. We could also have a second
physical specification that says that X and Y should both be
instantiated as VirtualBox VMs on the developer’s workstation. So
the same logical specification can easily be deployed to different
environments (a sketch of this split follows this list).

- It uses a single formalism (the Nix expression language) for package
management and system configuration management. This makes it very
easy to add ad hoc packages to a deployment.

- It combines system configuration management and provisioning.
Provisioning affects configuration management: for instance, if we
instantiate an EC2 machine as part of a larger deployment, it may be
necessary to put the IP address or hostname of that machine in a
configuration file on another machine. NixOps takes care of this
automatically.

- It can provision non-machine cloud resources such as Amazon S3
buckets and EC2 key pairs.
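
As a minimal sketch of the logical/physical split described above
(the file names and option values here are illustrative, not part of
this changeset), the two specifications might look like:

::

   # logical.nix (hypothetical): what the machines should do.
   {
     machineX = { config, pkgs, ... }: {
       services.postgresql.enable = true;
     };
     machineY = { config, pkgs, ... }: {
       services.httpd.enable = true;
       services.httpd.adminAddr = "admin@example.org";
     };
   }

   # physical-ec2.nix (hypothetical): where they should run.
   {
     machineX = { ... }: {
       deployment.targetEnv = "ec2";
       deployment.ec2.region = "eu-west-1";
       deployment.ec2.instanceType = "m1.large";
     };
     machineY = { ... }: {
       deployment.targetEnv = "ec2";
       deployment.ec2.region = "eu-west-1";
       deployment.ec2.instanceType = "m1.small";
     };
   }

Both files would then be combined into a single deployment, e.g. with
``nixops create logical.nix physical-ec2.nix``.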
287 changes: 287 additions & 0 deletions doc/overview.rst
@@ -0,0 +1,287 @@
Overview
--------

This section gives a quick overview of how to use NixOps.

.. _sec-deploying-to-physical-nixos:

Deploying to a NixOS machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To deploy to a machine that is already running NixOS, simply set
``deployment.targetHost`` to the IP address or host name of the
machine, and leave ``deployment.targetEnv`` undefined. See
:ref:`ex-physical-nixos.nix`.

.. _ex-physical-nixos.nix:

:file:`trivial-nixos.nix`: NixOS target physical network specification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

   {
     webserver =
       { config, pkgs, ... }:
       { deployment.targetHost = "1.2.3.4";
       };
   }

Accessing machines
~~~~~~~~~~~~~~~~~~

You can log in to an individual machine with :command:`nixops ssh`
*name*, where *name* is the name of the machine.

It’s also possible to run a command on all machines:

::

   $ nixops ssh-for-each -d load-balancer-ec2 -- df /tmp
   backend1...> /dev/xvdb 153899044 192084 145889336 1% /tmp
   proxy......> /dev/xvdb 153899044 192084 145889336 1% /tmp
   backend2...> /dev/xvdb 153899044 192084 145889336 1% /tmp

By default, the command is executed sequentially on each machine. You
can add the ``--parallel`` (``-p``) flag to execute it in parallel.
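
For example (deployment name as above, command illustrative):

::

   $ nixops ssh-for-each --parallel -d load-balancer-ec2 -- uptime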

Checking machine status
~~~~~~~~~~~~~~~~~~~~~~~

The command :command:`nixops check` checks the status of each machine
in a deployment. It verifies that the machine still exists
(i.e. hasn’t been destroyed outside of NixOps), is up (i.e. the
instance has been started) and is reachable via SSH. It also checks
that any attached disks (such as EBS volumes) are not in a failed
state, and prints the names of any systemd units that are in a failed
state.

For example, for the 3-machine EC2 network shown above, it might
show:

::

   $ nixops check -d load-balancer-ec2
   +----------+--------+-----+-----------+----------+----------------+---------------+-------+
   | Name     | Exists | Up  | Reachable | Disks OK | Load avg.      | Failed units  | Notes |
   +----------+--------+-----+-----------+----------+----------------+---------------+-------+
   | backend1 | Yes    | Yes | Yes       | Yes      | 0.03 0.03 0.05 | httpd.service |       |
   | backend2 | Yes    | No  | N/A       | N/A      |                |               |       |
   | proxy    | Yes    | Yes | Yes       | Yes      | 0.00 0.01 0.05 |               |       |
   +----------+--------+-----+-----------+----------+----------------+---------------+-------+

This indicates that Apache httpd has failed on ``backend1`` and that
machine ``backend2`` is not running at all. In this situation, you
should run :command:`nixops deploy --check` to repair the deployment.

Network special attributes
~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to define special options for the whole network. For
example:

::

   {
     network = {
       description = "staging environment";
       enableRollback = true;
     };

     defaults = {
       imports = [ ./common.nix ];
     };

     machine = { ... }: {};
   }

Each attribute is explained below:

- ``defaults``: applies the given NixOS module to all machines defined
  in the network.

- ``network.description``: a sentence describing the purpose of the
  network, for easier comparison when running :command:`nixops list`.

- ``network.enableRollback``: if ``true``, each deployment creates a
  new profile generation so that :command:`nixops rollback` can be
  used. Defaults to ``false``. (A usage sketch follows this list.)
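
With rollback enabled, a typical sequence might look like this (the
deployment name and generation number are illustrative):

::

   $ nixops deploy -d staging       # creates a new profile generation
   $ nixops rollback -d staging 1   # switch the network back to generation 1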

Network arguments
~~~~~~~~~~~~~~~~~

In NixOps you can pass in arguments from outside the Nix
expression. The network file can be a Nix function that takes a set of
arguments passed in externally; these can be used to change
configuration values, or even to generate a variable number of
machines in the network (a sketch of the latter appears further
below).

Here is an example of a network with network arguments:

::

   { maintenance ? false
   }:
   {
     machine =
       { config, pkgs, ... }:
       { services.httpd.enable = maintenance;
         ...
       };
   }

This network has a *maintenance* argument that defaults to
``false``. This value can be used inside the network expression to set
a NixOS option; in this case, whether or not Apache HTTPD should be
enabled on the system.
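
Arguments can also control the shape of the network itself. Here is a
minimal sketch (not part of this changeset, and assuming ``<nixpkgs>``
is available) that generates a configurable number of web servers:

::

   { nWebservers ? 2 }:
   let
     lib = (import <nixpkgs> { }).lib;
     # Build one attrset entry per machine: webserver1, webserver2, ...
     mkWebserver = i: {
       name = "webserver${toString i}";
       value = { config, pkgs, ... }: {
         services.httpd.enable = true;
         services.httpd.adminAddr = "admin@example.org";
       };
     };
   in
   builtins.listToAttrs (map mkWebserver (lib.range 1 nWebservers))

Setting ``nWebservers`` to ``4`` with :command:`nixops set-args` and
redeploying would then grow the network to four machines.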

You can pass network arguments using the :command:`nixops set-args`
command. For example, to set the *maintenance* argument to ``true`` in
the previous example, run:

::

   $ nixops set-args --arg maintenance true -d argtest

The arguments that have been set will show up in the output of
:command:`nixops info`:

::

   $ nixops info -d argtest
   Network name: argtest
   Network UUID: 634d6273-f9f6-11e2-a004-15393537e5ff
   Network description: Unnamed NixOps network
   Nix expressions: .../network-arguments.nix
   Nix arguments: maintenance = true

   +---------+---------------+------+-------------+------------+
   | Name    | Status        | Type | Resource Id | IP address |
   +---------+---------------+------+-------------+------------+
   | machine | Missing / New | none |             |            |
   +---------+---------------+------+-------------+------------+

Running :command:`nixops deploy` after changing the arguments will
deploy the new configuration.

Managing keys
~~~~~~~~~~~~~

Files in :file:`/nix/store/` are readable by every user on that host,
so storing secret keys embedded in Nix derivations is insecure. To
address this, NixOps provides the configuration option
``deployment.keys``, which NixOps manages separately from the main
configuration derivation for each machine.

Add a key to a machine as follows.

::

   {
     machine =
       { config, pkgs, ... }:
       {
         deployment.keys.my-secret.text = "shhh this is a secret";
         deployment.keys.my-secret.user = "myuser";
         deployment.keys.my-secret.group = "wheel";
         deployment.keys.my-secret.permissions = "0640";
       };
   }

This will create a file :file:`/run/keys/my-secret` with the specified
contents, ownership, and permissions.

Among the key options, only ``text`` is required. The ``user`` and
``group`` options both default to ``"root"``, and ``permissions``
defaults to ``"0600"``.

Keys from ``deployment.keys`` are stored under :file:`/run/` on a
temporary filesystem and will not persist across a reboot. To send a
rebooted machine its keys, use :command:`nixops send-keys`. Note that
all :command:`nixops` commands implicitly upload keys when
appropriate, so manually sending keys should only be necessary after
an unattended reboot.
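
For example, after an unattended reboot (the deployment name is
illustrative):

::

   $ nixops send-keys -d my-deployment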

If you have a custom service that depends on a key from
``deployment.keys``, you can opt to let systemd track that
dependency. Each key gets a corresponding systemd service
``"${keyname}-key.service"``, which is active while the key is present
and inactive while the key is absent. See :ref:`key-dependency.nix`
for how to set this up.

.. _key-dependency.nix:

:file:`key-dependency.nix`: track key dependence with systemd
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

   {
     machine =
       { config, pkgs, ... }:
       {
         deployment.keys.my-secret.text = "shhh this is a secret";

         systemd.services.my-service = {
           after = [ "my-secret-key.service" ];
           wants = [ "my-secret-key.service" ];
           script = ''
             export MY_SECRET=$(cat /run/keys/my-secret)
             run-my-program
           '';
         };
       };
   }

These dependencies will ensure that the service is only started when
the keys it requires are present. For example, after a reboot, the
services will be delayed until the keys are available, and
:command:`systemctl status` and friends will lead you to the cause.

Special NixOS module inputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~

In deployments with multiple machines, it is often convenient to
access the configuration of another node in the same network, e.g. if
you want to store a port number only once.

This is possible by using the extra NixOS module input ``nodes``.

::

   {
     network.description = "Gollum server and reverse proxy";

     gollum =
       { config, pkgs, ... }:
       {
         services.gollum = {
           enable = true;
           port = 40273;
         };
         networking.firewall.allowedTCPPorts = [ config.services.gollum.port ];
       };

     reverseproxy =
       { config, pkgs, nodes, ... }:
       let
         gollumPort = nodes.gollum.config.services.gollum.port;
       in
       {
         services.nginx = {
           enable = true;
           virtualHosts."wiki.example.net".locations."/" = {
             proxyPass = "http://gollum:${toString gollumPort}";
           };
         };
         networking.firewall.allowedTCPPorts = [ 80 ];
       };
   }

The port number can now be changed in one place, without the risk of
an inconsistent deployment.

Additional module inputs are:

- ``name``: The name of the machine.

- ``uuid``: The NixOps UUID of the deployment.

- ``resources``: NixOps resources associated with the deployment.
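
For example, ``resources`` lets a machine refer to a NixOps-managed
resource such as an EC2 key pair. A minimal sketch (names and region
are illustrative, not part of this changeset):

::

   {
     # A NixOps-managed EC2 key pair resource.
     resources.ec2KeyPairs.my-key-pair = {
       region = "us-east-1";
     };

     machine =
       { config, pkgs, resources, ... }:
       {
         deployment.targetEnv = "ec2";
         deployment.ec2.region = "us-east-1";
         # Refer to the resource defined above via the `resources` input.
         deployment.ec2.keyPair = resources.ec2KeyPairs.my-key-pair;
       };
   }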