
nixos/tmp: Make /tmp on ramdisk usable again #107497

Merged
merged 1 commit into NixOS:master on Jan 8, 2021

Conversation

@dasJ (Member) commented Dec 23, 2020

@poettering decided we only need a limited number of inodes in our /tmp,
so why not limit that for every systemd user? That makes medium-sized Nix
builds impossible, so this commit restores the old behaviour, which is the
kernel default of half the number of physical RAM pages and does not
seem too unreasonable to me.
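
For scale, a rough back-of-the-envelope calculation (an editor's sketch, not part of the PR; the 128 GiB of RAM and the 4 KiB page size are example numbers):

# Kernel tmpfs default: nr_inodes = physical RAM pages / 2.
let
  ramBytes = 128 * 1024 * 1024 * 1024; # example machine with 128 GiB of RAM
  pageSize = 4096;                     # typical x86_64 page size
in
  ramBytes / pageSize / 2              # = 16777216, i.e. roughly 16.8 million inodes vs. the ~400k cap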

Motivation for this change

I want to build stuff

Things done
  • Tested using sandboxing (nix.useSandbox on NixOS, or option sandbox in nix.conf on non-NixOS linux)
  • Built on platform(s)
    • NixOS
    • macOS
    • other Linux distributions
  • Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
  • Tested compilation of all pkgs that depend on this change using nix-shell -p nixpkgs-review --run "nixpkgs-review wip"
  • Tested execution of all binary files (usually in ./result/bin/)
  • Determined the impact on package closure size (by running nix path-info -S before and after)
  • Ensured that relevant documentation is up to date
  • Fits CONTRIBUTING.md.

{
  what = "tmpfs";
  where = "/tmp";
  mountConfig.Options = [ "mode=1777" "strictatime" "rw" "nosuid" "nodev" "size=50%" "huge=within_size" ];
Contributor (review comment on the mountConfig.Options line):

Can the huge=within_size be in a separate commit at least? It feels like a performance optimization that should be done on top of a fix (as in, if it causes regressions, I want to be able to revert it separately).

I'd even argue this is something specialized enough that users should probably set it on their own (or why is it not a default in general?)

@dasJ (Member Author) replied:

I removed the option altogether since people who want to use this functionality can just use the module system's merge function to add options.
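
For illustration, a hypothetical sketch of what such a user-side addition could look like in configuration.nix (an editor's addition, not from the PR; whether a second systemd.mounts entry for /tmp merges cleanly with the module's own definition is an assumption and may need adjusting):

{ ... }:
{
  boot.tmpOnTmpfs = true;

  # Re-add the huge pages option on top of whatever the module sets for /tmp.
  systemd.mounts = [{
    what = "tmpfs";
    where = "/tmp";
    mountConfig.Options = [ "mode=1777" "strictatime" "rw" "nosuid" "nodev" "size=50%" "huge=within_size" ];
  }];
}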

@flokli flokli merged commit 964c419 into NixOS:master Jan 8, 2021
@ajs124 ajs124 deleted the fix/tmp-mount branch January 8, 2021 23:14
@cpcloud (Contributor) commented Jan 9, 2021

After applying this change to one of my machines, the tmp.mount unit fails to start with:

Jan 09 08:44:30 albatross systemd[1]: Mounting /tmp...
Jan 09 08:44:30 albatross mount[7403]: mount: /tmp: special device tmpfs does not exist.
Jan 09 08:44:30 albatross systemd[1]: tmp.mount: Mount process exited, code=exited, status=32/n/a
Jan 09 08:44:30 albatross systemd[1]: tmp.mount: Failed with result 'exit-code'.
Jan 09 08:44:30 albatross systemd[1]: Failed to mount /tmp.

This documentation (https://www.freedesktop.org/software/systemd/man/systemd.mount.html) suggests that Type isn't necessary, but maybe for some reason it is here?
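
For reference, the shape of the eventual fix is to declare the filesystem type explicitly; a minimal sketch under that assumption (the actual change is in #108860, so treat the exact attribute names here as an approximation):

{
  what = "tmpfs";
  where = "/tmp";
  # Explicit fstype, so mount(8) does not try to resolve "tmpfs" as a device path.
  type = "tmpfs";
  mountConfig.Options = [ "mode=1777" "strictatime" "rw" "nosuid" "nodev" "size=50%" ];
}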

@endocrimes (Member)

@cpcloud hah, I just lost too much of my morning to the same; filed a fix in #108860

@worldofpeace (Contributor)

> @poettering decided we only need a limited number of inodes in our /tmp,
> so why not limit that for every systemd user? That makes medium-sized Nix
> builds impossible, so this commit restores the old behaviour, which is the
> kernel default of half the number of physical RAM pages and does not
> seem too unreasonable to me.

Let's not be aggressive towards Lennart. We're not the first downstream that has had to undo this (coreos/fedora-coreos-config#685), and in NixOS alone our systemd expression already undoes a lot of stuff 😁
They've since increased it to 50%, which is similar to what this PR does (systemd/systemd#16576).

@markuskowa (Member)

Are there any plans to backport this fix to 20.09? I just ran into the same problem, wondering how a build can fail on a 128 GB tmpfs:

tmpfs on /tmp type tmpfs (rw,nosuid,nodev,nr_inodes=409600)

from (20.09.2344.a3a3dda3bac)

@poscat0x04 (Contributor)

Is it possible to add a test for this kind of issue? Because this is a pretty severe one.
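
For what it's worth, a hypothetical sketch of such a regression test in the nixos/tests style (the name, node configuration, and assertions are an editor's assumptions, not an existing test):

import ./make-test-python.nix ({ pkgs, ... }: {
  name = "tmp-on-tmpfs";

  machine = { ... }: {
    boot.tmpOnTmpfs = true;
  };

  testScript = ''
    machine.wait_for_unit("multi-user.target")
    # /tmp must actually be mounted as a tmpfs...
    machine.succeed("findmnt -t tmpfs /tmp")
    # ...and must not carry the small nr_inodes cap that broke medium-sized builds.
    machine.fail("findmnt -no OPTIONS /tmp | grep -q nr_inodes")
  '';
})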

@dasJ (Member Author) commented Jan 13, 2021

I am really sorry for the problems. My testing was based on 20.09 (which does include systemd.additionalUpstreamSystemUnits), so I missed that detail. However, @ajs124 has a promising fix in #108881.

andir pushed a commit that referenced this pull request Jan 14, 2021
#107497 broke booting on many systems that
use tmpOnTmpfs due to the lack of specifying the mount type.

This commit explicitly adds the mount type, which should fix booting
such systems.

However, the original change may want to be revisited too.
Gabriella439 pushed a commit to awakesecurity/nixpkgs that referenced this pull request Feb 7, 2022
NixOS#107497 broke booting on many systems that
use tmpOnTmpfs due to the lack of specifying the mount type.

This commit explicitly adds the mount type, which should fix booting
such systems.

However, the original change may want to be revisited too.
@vcunat (Member) commented Feb 10, 2022

Apparently this doesn't work as expected: #157512 (not sure if something else changed in the meantime)
