
[WIP] ipfs-cluster: systemd service #100871

Draft · wants to merge 40 commits into master

Conversation

@lordcirth (Contributor) commented Oct 17, 2020

Making this a WIP PR to avoid someone duplicating effort while I work on it. It will need to be squashed.

Motivation for this change

ipfs-cluster is meant to be run as a daemon.

Things done

Created a systemd service for ipfs-cluster, and an -init service, with some basic settings.

  • Tested using sandboxing (nix.useSandbox on NixOS, or option sandbox in nix.conf on non-NixOS linux)
  • Built on platform(s)
    • NixOS
    • macOS
    • other Linux distributions
  • Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
  • Tested compilation of all pkgs that depend on this change using nix-shell -p nixpkgs-review --run "nixpkgs-review wip"
  • Tested execution of all binary files (usually in ./result/bin/)
  • Determined the impact on package closure size (by running nix path-info -S before and after)
  • Ensured that relevant documentation is up to date
  • Fits CONTRIBUTING.md.

@aanderse (Member) left a comment

Thanks for writing this service. ipfs seems to be a hot topic lately, so I'm sure this service will be appreciated 👍

I hope you find my suggestions helpful. Please don't hesitate to ask if anything I have written isn't clear. In addition to my comments I have a few questions:

  • Do you know if upstream provides a systemd unit at all?
  • Have you considered any systemd hardening on this service?
  • Are any firewall holes required for proper operation?
  • Does this program take a configuration file at all?


systemd.packages = [ pkgs.ipfs-cluster pkgs.coreutils ];

systemd.services.ipfs-cluster-init = {
Member:

Could this entire service be replaced with an ExecStartPre?

Contributor Author:

I don't think so? I am using unitConfig.ConditionDirectoryNotEmpty = "!${cfg.dataDir}"; on -init to prevent it from being run every time.

Member:

It seems like you could have a preStart script that just checks if the directory exists and, if not, calls the init command. That saves the hassle of a second unit. That's sorta what ExecStartPre is for.
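
A minimal sketch of that approach (the crdt choice and the assumption that IPFS_CLUSTER_PATH is set in the unit's environment are mine, not the PR's actual code):

systemd.services.ipfs-cluster.preStart = ''
  # First start only: generate service.json if it is not there yet.
  if [ ! -f "$IPFS_CLUSTER_PATH/service.json" ]; then
    ${pkgs.ipfs-cluster}/bin/ipfs-cluster-service init --consensus crdt
  fi
'';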

opt = options.services.ipfs-cluster;

# secret is by envvar, not flag
initFlags = toString [
Member:

Maybe initFlags = optionalString (cfg.initPeers != []) "--peers ${lib.concatStringsSep "," cfg.initPeers}";?

Contributor Author:

I figured there would be more flags soon enough.
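
For illustration, the list form leaves room for those extra flags (cfg.consensus here is a hypothetical option, and lib's optionalString/concatStringsSep are assumed in scope):

initFlags = toString [
  # empty strings from optionalString just add harmless whitespace
  (optionalString (cfg.initPeers != [ ]) "--peers ${concatStringsSep "," cfg.initPeers}")
  "--consensus ${cfg.consensus}"
];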

@lordcirth (Contributor Author):

Thanks for writing this service. ipfs seems to be a hot topic lately, so I'm sure this service will be appreciated 👍

I hope you find my suggestions helpful. Please don't hesitate to ask if anything I have written isn't clear. In addition to my comments I have a few questions:

  • Do you know if upstream provides a systemd unit at all?
  • Have you considered any systemd hardening on this service?
  • Are any firewall holes required for proper operation?
  • Does this program take a configuration file at all?

Thank you for your feedback!

  • Upstream does not provide a .service file
  • I have not yet done any hardening, besides ensuring it runs as "ipfs". What kind of hardening would you consider?
  • It will require firewall exceptions to be added, yes
  • "ipfs-cluster-service init" creates $IPFS_CLUSTER_PATH/service.json automatically; the only mandatory option is "--consensus", with --peers and the secret covering the common uses

Thanks for pointing out the problem with readFile; git grepping nixpkgs, I see that duplicity uses EnvironmentFile = cfg.secretFile to have systemd read the secret at runtime.
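
A sketch of that pattern here (cfg.secretFile is an assumed option name; ipfs-cluster-service picks the secret up from the CLUSTER_SECRET environment variable):

systemd.services.ipfs-cluster-init.serviceConfig = {
  # systemd loads the file at start-up, so the secret is read at runtime
  # and never copied into the world-readable Nix store.
  EnvironmentFile = cfg.secretFile;  # a file containing CLUSTER_SECRET=<hex>
};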

@AluisioASG (Contributor):

This will need a way to change arbitrary settings (in particular, all the addresses in the config). I wrote a module a while ago that uses the environment variable overrides to enable that while avoiding edits to service.json. On second thought you might be able to merge settings into it with something like jq's recursive merge operator as well.
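
A sketch of the jq idea (cfg.extraSettings is a hypothetical attribute set of overrides; * is jq's recursive merge operator):

systemd.services.ipfs-cluster.preStart = ''
  # Recursively merge declarative overrides into the generated service.json.
  ${pkgs.jq}/bin/jq -s '.[0] * .[1]' "$IPFS_CLUSTER_PATH/service.json" \
    ${pkgs.writeText "cluster-overrides.json" (builtins.toJSON cfg.extraSettings)} \
    > "$IPFS_CLUSTER_PATH/service.json.merged"
  mv "$IPFS_CLUSTER_PATH/service.json.merged" "$IPFS_CLUSTER_PATH/service.json"
'';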

@lordcirth (Contributor Author):

Rebased to current master.

@bqv (Contributor) commented Jan 8, 2021

Very cool, +1 interest

@mmahut (Member) commented Feb 12, 2021

Any updates on this PR, please?

@lordcirth (Contributor Author):

Any updates on this PR, please?

Yeah, sorry. I'm going to rebase and try to test that minimal functionality is working.

@mmahut (Member) commented Feb 17, 2021

Would there be a way to write a test for this service?

@lordcirth (Contributor Author):

Would there be a way to write a test for this service?

You'd need to spawn two containers, connect them, and pass data back and forth. I think this PR should go ahead without automated tests; it's been taking long enough. I'm trying to figure out nixos-build-vms to do some tests.
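
For reference, a rough two-node sketch of what such a test might look like (the unit name and option wiring are assumptions, and real cluster formation would need more setup; this is not a working test from the PR):

import ./make-test-python.nix ({ pkgs, ... }: {
  name = "ipfs-cluster";
  nodes = {
    alpha = {
      environment.systemPackages = [ pkgs.ipfs-cluster ];
      services.ipfs.enable = true;
      services.ipfs-cluster.enable = true;
    };
    beta = {
      environment.systemPackages = [ pkgs.ipfs-cluster ];
      services.ipfs.enable = true;
      services.ipfs-cluster.enable = true;
    };
  };
  testScript = ''
    start_all()
    alpha.wait_for_unit("ipfs-cluster.service")
    beta.wait_for_unit("ipfs-cluster.service")
    # Liveness only; actually clustering the two nodes would need a shared
    # secret and peer multiaddresses exchanged between them.
    alpha.wait_until_succeeds("ipfs-cluster-ctl id")
  '';
})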

@lordcirth (Contributor Author):

Ok, tested the following so far:

  • init without secret generates one
  • init with secretFile works
  • initPeers works (only on first run)
  • daemon comes up on reboot
  • crdt mode works

@mmahut (Member) commented Mar 11, 2021

Should we manage the user for the service?

Starting ipfs-cluster-init.service...
ipfs-cluster-init.service: Failed to determine user credentials: No such process
ipfs-cluster-init.service: Failed at step USER spawning /nix/store/yvdiqsnw2d5f04wcc6lgz5hl30711dfz-ipfs-cluster-0.13.0/bin/ipfs-cluster-service: No such process

Group = cfg.group;
};
};
networking.firewall.allowedTCPPorts = [ 9096 ];
Member:

I don't think just opening a firewall port here is a good idea.

If you want this, it should be hidden behind an openFirewall option that is off by default.
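
The conventional nixpkgs shape for that (a sketch; the exact mkIf wiring depends on the rest of the module):

options.services.ipfs-cluster.openFirewall = mkOption {
  type = types.bool;
  default = false;
  description = "Whether to open the cluster swarm port (TCP 9096) in the firewall.";
};

config = mkIf cfg.enable {
  # only punch the hole when the user opts in
  networking.firewall.allowedTCPPorts = optionals cfg.openFirewall [ 9096 ];
};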

Contributor Author:

It's a peer-to-peer service; it's expected that peers can connect. If enough other peers are available in the configured cluster, it should work without an open port, but not as well. In many configurations it just won't do anything without an open port.

Member:

@lordcirth I think it's still best practice in nixpkgs to make it explicit via an openFirewall option

@lordcirth (Contributor Author):

Should we manage the user for the service?

Starting ipfs-cluster-init.service...
ipfs-cluster-init.service: Failed to determine user credentials: No such process
ipfs-cluster-init.service: Failed at step USER spawning /nix/store/yvdiqsnw2d5f04wcc6lgz5hl30711dfz-ipfs-cluster-0.13.0/bin/ipfs-cluster-service: No such process

It uses the ipfs user by default, which is created by the ipfs service. ipfs-cluster is largely useless without an ipfs daemon anyway.
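
If the module did manage the account itself (to cover setups without services.ipfs), it could look roughly like this (a sketch, assuming cfg.user/cfg.group default to "ipfs"):

users.users.ipfs = mkIf (cfg.user == "ipfs") {
  isSystemUser = true;
  group = cfg.group;
};
users.groups.ipfs = mkIf (cfg.group == "ipfs") { };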

@aanderse (Member):

@lordcirth I present "exhibit A": security.acme.acceptTerms... which does absolutely nothing at all.

Sometimes we just need the user to acknowledge that a module is implicitly doing something, by explicitly stating such 🤷‍♂️

@mmahut (Member) commented Mar 29, 2021

What is the difference between initPeers and having bootstrap peers?

services.ipfs-cluster = {

enable = mkEnableOption
"Pinset orchestration for IPFS - requires ipfs daemon to be useful";
Contributor:

This reads like I'd have to do something with the ipfs daemon to make this work, but the daemon is started by this service, right?

While the first paragraph from their repo is not all that much more informative, it at least contains more information to understand what this is even about:

IPFS Cluster provides data orchestration across a swarm of IPFS daemons by allocating, replicating and tracking a global pinset distributed among multiple peers.

Contributor Author:

No, this starts the ipfs-cluster daemon. You would usually want to enable the ipfs daemon as well, but it's not a hard dependency as you could be running ipfs via a different module, in a separate container, etc.
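
Given that, one possible rewording of the description (just a sketch, not the PR's final text):

enable = mkEnableOption
  "IPFS Cluster, pinset orchestration across a swarm of IPFS daemons; usually run alongside services.ipfs, though that is not a hard dependency";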

@@ -82,6 +82,7 @@ in {
wantedBy = [ "default.target" ];

serviceConfig = {
# "" clears exec list (man systemd.service -> execStart)
Contributor:

This is such an obscure thing to do, seriously systemd... I had to read carefully to find that, and it seems it does not even apply to all list-type options.
https://www.freedesktop.org/software/systemd/man/systemd.service.html#ExecStart=
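
For context, the pattern being discussed, roughly: the empty string resets any ExecStart inherited from a unit shipped via systemd.packages, so the next entry fully replaces it.

serviceConfig.ExecStart = [
  ""  # clear the inherited ExecStart list (man systemd.service, ExecStart=)
  "${pkgs.ipfs-cluster}/bin/ipfs-cluster-service daemon"
];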

@fricklerhandwerk (Contributor):

LGTM

@srid (Contributor) commented Sep 3, 2021

Is there anything preventing us from getting this merged?

(I'm looking forward to using it, or even testing it!)

@mohe2015 (Contributor) commented Sep 3, 2021

Is there anything preventing us from getting this merged?

(I'm looking forward to using it, or even testing it!)

You should test this before it gets merged so we know it works :D (at least that would be great)

stale bot commented Apr 18, 2022

I marked this as stale due to inactivity.

stale bot added and then removed the 2.status: stale label Apr 18, 2022
@McSinyx (Member) commented May 2, 2022

1.0.0 has recently come out with breaking changes; does anything need to be done here aside from the version bump?

@lordcirth (Contributor Author):

1.0.0 has recently come out with breaking changes; does anything need to be done here aside from the version bump?

"There are no breaking configuration changes". Sounds like the breaking is in the API.

@1000101 (Member) commented Jun 11, 2022

👀

@1000101 (Member) commented Jul 21, 2022

Is anything blocking the merge? I've been running servers based on this PR for more than a year now.

@McSinyx (Member) commented Sep 29, 2022

@lordcirth, could you please rebase this on top of master or unstable?

@bachp (Member) commented Oct 29, 2022

@lordcirth Are you still working on this MR?

@lordcirth (Contributor Author):

@lordcirth Are you still working on this MR?

Not really, sorry.

stale bot added the 2.status: stale label Jun 18, 2023
@sorki mentioned this pull request Feb 29, 2024
stale bot removed the 2.status: stale label Mar 20, 2024