
pythonPackages.numpy: consistent mklSupport attribute #74894

Closed
wants to merge 1 commit

Conversation

@bhipple (Contributor) commented Dec 3, 2019

This commit makes `numpy` respect the `mklSupport` boolean attribute that is in
use in `pytorch` and other packages that have an MKL option.

While most packages that have this option depend on `numpy` directly or
transitively, not all do; and it's more straightforward to have one config
option, since we generally want this to be applied consistently across our
library stack.

Note that this change as implemented should be backwards-compatible with the
previous numpy method for specifying MKL.
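As a rough illustration, a user-side configuration under the convention this PR describes might look like the following. This is a sketch only: whether the flag is read from `nixpkgs.config` or passed per-package is an assumption here, and MKL being unfree means `allowUnfree` would also be required.

```nix
# Hypothetical user configuration, assuming the mklSupport convention
# described in this PR (not verified against the final implementation).
{
  nixpkgs.config = {
    allowUnfree = true;   # MKL is unfree software
    mklSupport = true;    # intended to be honored by numpy, pytorch, etc.
  };
}
```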

Motivation for this change
Things done
  • Tested using sandboxing (`nix.useSandbox` on NixOS, or option `sandbox` in `nix.conf` on non-NixOS Linux)
  • Built on platform(s)
    • NixOS
    • macOS
    • other Linux distributions
  • Tested via one or more NixOS test(s) if existing and applicable for the change (look inside `nixos/tests`)
  • Tested compilation of all pkgs that depend on this change using `nix-shell -p nix-review --run "nix-review wip"`
  • Tested execution of all binary files (usually in `./result/bin/`)
  • Determined the impact on package closure size (by running `nix path-info -S` before and after)
  • Ensured that relevant documentation is up to date
  • Fits `CONTRIBUTING.md`.
Notify maintainers

cc @FRidh @jonringer @alexarice @stites @markuskowa @smaret @costrouc @Ericson2314

@FRidh (Member) commented Dec 3, 2019

Let's say another blas implementation pops up. Would we then add another flag to support it?

@bhipple (Contributor, Author) commented Dec 12, 2019

If another BLAS implementation were to pop up, I'd probably prefer renaming this field everywhere to `blasProvider` and switching on it. I considered doing so now and am happy to change it if you think it's best; I wasn't sure whether it would be over-generalizing to say `blasProvider = <package>` when really we mean "do what you normally do" vs. "use MKL for BLAS/LAPACK/etc.".

@FRidh (Member) commented Dec 12, 2019

> If another blas implementation were to pop up, I'd probably prefer renaming this field everywhere to `blasProvider` and switching on it.

Right. And should the value be a package, or a string? If it's a package, how do we know which BLAS it is? Right now we know because of the `passthru` attribute that sets it. Therefore, there's no need for a special attribute.
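The distinction FRidh describes — recognizing the implementation from the package itself rather than from a separate flag — could look roughly like the following. This is a sketch only; the `implementation` passthru attribute name is illustrative, not the actual nixpkgs attribute.

```nix
# Sketch only: distinguishing BLAS providers by an assumed passthru
# attribute on the package, instead of carrying a separate boolean flag.
{ blas }:
let
  usesMkl = (blas.passthru.implementation or "") == "mkl";
in
{ inherit usesMkl; }
```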

@stites (Member) commented Dec 17, 2019

Could another option be some sort of compute-optimized stdenv? The hope is that then we can just assume LAPACK/BLAS are consistent for anything compute-related. GPUs would probably need another stdenv that wraps a cpu-only stdenv. Folks can then overlay with the compute environment of their choice and we can centralize the various hacks used to add nvidia support.

@jonringer (Contributor) commented:

on the topic of overlays, cross-referencing discussion: #74893

@espg commented Mar 12, 2020

...does this new overlay method enable MKL for `scipy` as well? I've not had a problem getting `numpy` to use MKL via an override, but `scipy` always defaults to `openblas`...
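For reference, the kind of override espg mentions might be written as an overlay like the one below. This is illustrative only: the `blas` argument name is assumed from `numpy`'s expression at the time, and whether `scipy` accepts the same argument is exactly the open question being raised.

```nix
# Illustrative overlay forcing MKL for numpy via an override.
# The `blas` parameter name is assumed, not verified.
self: super: {
  python3 = super.python3.override {
    packageOverrides = pyself: pysuper: {
      numpy = pysuper.numpy.override { blas = super.mkl; };
      # scipy = pysuper.scipy.override { blas = super.mkl; };  # if supported
    };
  };
}
```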

bhipple closed this Apr 24, 2020