
nixos,nixpkgs: only build essentials on i686 #27923

Merged
merged 1 commit into NixOS:master on Aug 5, 2017

Conversation

grahamc
Member

@grahamc grahamc commented Aug 4, 2017

Motivation for this change

Only build a bare minimum of packages for i686 for 17.09 and beyond.

NixOS evaluation: https://hydra.nixos.org/jobset/nixos/grahamc-i686 (cuts out about 12,000 jobs)
Nixpkgs evaluation: https://hydra.nixos.org/jobset/nixpkgs/graham-i686 (cuts out about 21,000 jobs... is this right?)

NixOS still builds the same jobs in tested to be sure we don't kill anyone's system, but we no longer fan out and build everything.

Nixpkgs won't build i686 at all, except for jobs where it is mandatory like Skype.

cc @FRidh @domenkozar @globin @edolstra

Related discussion on mailing list: https://groups.google.com/forum/#!topic/nix-devel/m7ikFjor24Y

see #27731

Things done

Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers.

  • Tested using sandboxing (nix.useSandbox on NixOS, or option build-use-sandbox in nix.conf on non-NixOS)
  • Built on platform(s)
    • NixOS
    • macOS
    • Linux
  • Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
  • Tested compilation of all pkgs that depend on this change using nix-shell -p nox --run "nox-review wip"
  • Tested execution of all binary files (usually in ./result/bin/)
  • Fits CONTRIBUTING.md.

@mention-bot

@grahamc, thanks for your PR! By analyzing the history of the files in this pull request, we identified @edolstra, @Ericson2314 and @domenkozar to be potential reviewers.

@globin
Member

globin commented Aug 4, 2017

Looks correct; will merge once either @fpletz or @domenkozar has sanity-checked this.

@avnik
Contributor

avnik commented Aug 4, 2017

You could use wine instead of jdk for i686 -- first, it is the most useful 32-bit app for x86_64 users, and second, building it also builds the most commonly used libraries.

(all nixpkgs.emacs)
(all nixpkgs.jdk)
(all allSupportedNixpkgs.emacs)
(all allSupportedNixpkgs.jdk)
Member


This looks like the wrong place to build these packages. Shouldn't we rather move this to nixpkgs instead of having it in NixOS?

Member Author


Given my comment here: #27923 (comment) I'll expand:

My intention wasn't for any of the blocking packages to change, i.e. tested remains completely unchanged. The major change is in not fanning out and building the whole package set.

If this isn't what we want, that is okay, but for keeping the behavior the same, I think the code here does so correctly.

@@ -2,7 +2,7 @@
    the load on Hydra when testing the `stdenv-updates' branch. */

 { nixpkgs ? { outPath = (import ../../lib).cleanSource ../..; revCount = 1234; shortRev = "abcdef"; }
-, supportedSystems ? [ "x86_64-linux" "i686-linux" "x86_64-darwin" ]
+, supportedSystems ? [ "x86_64-linux" "x86_64-darwin" ]
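For context, a minimal sketch (hypothetical, not the actual release.nix machinery) of how such a release expression fans packages out over supportedSystems — each package becomes one job per listed system, so dropping "i686-linux" removes every job the fan-out would have generated for it:

```nix
# Hypothetical sketch: one job per package per entry in supportedSystems,
# so shrinking the list shrinks the job set multiplicatively.
let
  supportedSystems = [ "x86_64-linux" "x86_64-darwin" ];
  forAllSystems = f: builtins.listToAttrs (map (system: {
    name = system;
    value = f system;
  }) supportedSystems);
in {
  # e.g. jobs hello.x86_64-linux and hello.x86_64-darwin, but no i686-linux
  hello = forAllSystems (system: (import <nixpkgs> { inherit system; }).hello);
}
```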
Member


Wasn't the plan to build the nixos-small channel for i686-linux, but maybe remove it as a constituent of the tested job?

Member Author


That isn't what I was expecting, but we could. I was expecting nixos-unstable to block on i686, but no -small channels and no wide package tree -- just essentials.

@globin
Member

globin commented Aug 5, 2017

I generally agree with @fpletz's concerns, but this is a step in the right direction and doesn't block us from refining this further (thinking more about aarch64, splitting, etc.), so I'm merging this.

@globin globin merged commit 7d0b001 into NixOS:master Aug 5, 2017
@@ -54,43 +57,36 @@ let
 jobs.manual
 jobs.lib-tests
 jobs.stdenv.x86_64-linux
-jobs.stdenv.i686-linux
Member


I know I'm late, but I'd personally keep stdenv in there, as we still want to support some packages built by it (e.g. wine).

@grahamc
Member Author

grahamc commented Aug 6, 2017 via email

@vcunat
Member

vcunat commented Aug 6, 2017

That doesn't matter; I'll push that change myself, but first I'll have a look at why all aarch64-linux jobs were cut as well. (That's why significantly more jobs were removed on nixpkgs than on nixos.)

vcunat added a commit that referenced this pull request Aug 6, 2017
The typo removed also all aarch64-linux on Hydra.
@vcunat
Member

vcunat commented Sep 11, 2017

We might reconsider what to do about i686 nixos tests of big stuff. I now removed plasma5.i686-linux from tested jobset on master, and we might want to avoid other tests on i686 that have heavy build-time closures, e.g. xmonad. (EDIT: I would probably keep releasing the i686 minimal ISO.)

@vcunat
Member

vcunat commented Sep 11, 2017

Actually, there's most likely no hurry for such actions, as since this PR the x86_64+i686 queue tends to be clearly the least loaded of all. (Darwin has the longest queue now, almost all the time.)

@oxij
Member

oxij commented Sep 11, 2017 via email

@vcunat
Member

vcunat commented Sep 11, 2017

It doesn't build every commit, and Hydra is quite a strong set of machines now. (You can get an idea by looking at https://hydra.nixos.org/machines )

The largest problem here is changes in stdenv and other "mass-rebuild" packages. We have quite a lot of those nowadays, and doing one staging rebuild every two weeks would probably be inconveniently slow feedback (for the humans fixing the problems there).

The worst part of the testing is that half of the people aren't even able to build on darwin or aarch64 in any direct way.

@vcunat
Member

vcunat commented Sep 11, 2017

  • Load averages are considered via the -l${NIX_BUILD_CORES} option to make and its equivalents. I'd personally also try a jobserver, as the current approach is prone to fluctuations, but it doesn't seem like it could make a significant difference.
  • ccache could help significantly, but (1) it's slightly risky for purity (IIRC Eelco might oppose it), and (2) in the current case of quite many build slaves it wouldn't help that much on Hydra, assuming the cache would be local (a distributed ccache would be quite a nontrivial setup).
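The first point can be illustrated with a minimal sketch (an assumption for illustration, not the actual stdenv code) of how NIX_BUILD_CORES typically reaches make:

```nix
# Hypothetical derivation: -j caps the number of parallel jobs, and -l
# tells make not to start new jobs while the load average exceeds the
# given value -- both capped at Nix's per-build core count.
stdenv.mkDerivation {
  name = "load-limited-example";
  src = ./.;
  buildPhase = ''
    make -j"$NIX_BUILD_CORES" -l"$NIX_BUILD_CORES"
  '';
}
```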

@oxij
Member

oxij commented Sep 11, 2017 via email

@vcunat
Member

vcunat commented Sep 11, 2017

> Wouldn't it be nice if nix could "partially evaluate" derivations (remove comments, normalize whitespace, substitute non-recursive shell functions, etc.) before assigning them a hash?

That is done – the *.drv files are quite a lot canonicalized. With the intensional store I expect we could save the propagation of mass rebuilds in case of bit-equal outputs, but that's a far-away dream...

> Another common nix artifact I see frequently: run one nix-store --realise, and it starts working as expected; run another one with the same derivation, and it starts building something too, where I would expect it to wait with "no build slots".

These cases always wait on my machine, even when using distributed builds (iff the hashes are the same).

> I'm unsure about a good way to integrate ccache into stdenv myself.

We have ccacheStdenv, but I have no idea if it has bitrotten or something.
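For reference, a hedged sketch of what using it might look like (assuming the ccacheStdenv attribute still evaluates; untested):

```nix
# Hypothetical usage sketch: rebuild a package with the ccache-wrapped
# stdenv instead of the default one. Cache-directory setup is left to
# the ccacheStdenv wrapper itself.
with import <nixpkgs> {};
hello.override { stdenv = ccacheStdenv; }
```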

ccache over NFS could be good, but the build farm currently uses a mix of slaves that aren't co-located, so each compiler invocation would necessarily induce a noticeable latency. That might not be a problem by itself, but I expect it would be more fragile and we would probably want to increase the load, as much more waiting for network would be expected.

We might even serve the ccache, similarly to current binary cache. (!!)

@oxij
Member

oxij commented Sep 11, 2017 via email

@vcunat
Member

vcunat commented Sep 11, 2017

>> That is done – the *.drv files are quite a lot canonicalized.
>
> Not to any reasonable degree. I just checked. Adding a single empty line to any shell script in stdenv/generic causes a mass rebuild.
>
> I remember stopping myself from fixing typos in comments of expressions I read (several times) because of the mass rebuild that would cause.

Yes, but given that the builder can be any program, nix can't assume any kind of semantics in general.

> It also wouldn't work in general for packages because many packages refer to themselves [...]

The intensional store proposal does count on self-references not changing the hashes. It's all in Eelco's old thesis.

>> We might even serve the ccache, similarly to current binary cache. (!!)

> That could be cool, but the feasibility of it needs actual latency measurements.

A ping to cache.nixos.org would be expected to take a few dozen milliseconds, which seems usable to me, especially given enough build parallelism, if you have a good network.

8 participants