
Automatic GC if we drop low. Limits debatable... #61

Closed (1 commit)

Conversation

@srhb commented Nov 2, 2018

In response to a large build dying from out-of-disk conditions
NixOS/nixpkgs#49442 (comment)

@srhb (Author) commented Nov 2, 2018

@grahamc ping

@srhb (Author) commented Nov 2, 2018

Re. the limits, the question is really how much speed/load saving a small GC actually buys. If any GC is slow and expensive regardless of size (i.e. if the constant cost of running a GC at all is high), we should increase max-free a lot. This should probably be tested on a cross section of the machine types, but it's a starting point.
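For context, the thresholds being debated correspond to Nix's min-free / max-free settings in nix.conf: a GC starts when free space drops below min-free and stops once max-free is reached. A sketch with illustrative values (these are not the PR's actual numbers):

```
# /etc/nix/nix.conf — illustrative thresholds, not the limits from this PR
# Trigger a GC during builds when free disk space falls below min-free,
# and keep collecting until max-free bytes are available.
min-free = 10737418240    # 10 GiB
max-free = 107374182400   # 100 GiB
```

A large gap between the two values means fewer, bigger collections; a small gap means frequent small ones, which is the trade-off discussed above.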

@vcunat (Member) commented Nov 2, 2018

My feeling from GC on rotating drives is that the vast majority of the time is spent unlinking files. (I guess tweaking the commit=foo mount option might help overall, but I'm digressing.)

@srhb (Author) commented Nov 2, 2018

@vcunat That sounds like a small GC is indeed preferable. :)

@edolstra (Member) commented Nov 2, 2018

Seems like a good way to give the auto-GC feature some testing :-)

@vcunat (Member) commented Nov 2, 2018

BTW, on t2a I've seen the machine run out of inodes while still having plenty of free space (more than half). I plan to work around that simply by collecting everything relatively often (once a day or so), since the machine won't benefit much from caching. Only the metrics job is run there, and that's rather special; it seems unlikely to happen on other machines.
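The daily-collection workaround described above can be expressed with NixOS's built-in nix.gc options rather than this PR's threshold-based mechanism. A minimal sketch, assuming a NixOS-managed builder:

```nix
# configuration.nix (sketch) — run an unconditional daily GC as a
# workaround for inode exhaustion, independent of free-space thresholds
{
  nix.gc = {
    automatic = true;     # install a systemd timer for nix-collect-garbage
    dates = "daily";      # systemd calendar expression
    options = "-d";       # also delete old profile generations
  };
}
```

Unlike min-free/max-free, this collects on a schedule regardless of disk state, which matches the "collect everything relatively often" plan and also frees inodes, not just bytes.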

@zimbatm (Member) commented Dec 16, 2018

I want to deploy this soon. It requires some careful manual rolling deployment to avoid breaking all of the nodes.

@davidak (Member) commented Sep 16, 2020

Any update on this?

@zimbatm (Member) commented Sep 17, 2020

I haven't touched the NixOS infra in a long time now. Somebody else should probably take over.

srhb closed this on Apr 29, 2021