
pytorch: Use nativeBuildInputs to specify binary build dependencies. #56991

Merged
merged 1 commit on Mar 7, 2019

Conversation

teh
Contributor

@teh teh commented Mar 6, 2019

Motivation for this change

Closes #56903

This just fixes the build, the onnx converter is still broken.

Things done
  • Tested using sandboxing (nix.useSandbox on NixOS, or option sandbox in nix.conf on non-NixOS)
  • Built on platform(s)
    • NixOS
    • macOS
    • other Linux distributions
  • Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
  • Tested compilation of all pkgs that depend on this change using nix-shell -p nox --run "nox-review wip"
  • Tested execution of all binary files (usually in ./result/bin/)
  • Determined the impact on package closure size (by running nix path-info -S before and after)
  • Assured whether relevant documentation is up to date
  • Fits CONTRIBUTING.md.

@dnaq
Contributor

dnaq commented Mar 8, 2019

CUDA support doesn't work with this fix. I could send a patch that makes CUDA work, but I'm not sure that the dependencies are correct (or rather, I'm pretty sure that they aren't). With that patch the test suite doesn't find CUDA, but it works after installation.

@sveitser
Contributor

sveitser commented Mar 9, 2019

@dnaq Could you share how you got CUDA support to work? If I build pytorchWithCuda the resulting package seems to lack CUDA support.

@dnaq
Contributor

dnaq commented Mar 9, 2019

@sveitser This is the Nix file that I use. It builds correctly, but for some reason the test suite doesn't find CUDA. It seems to work, though.

I haven't sent a pull request, since I don't really know why it works. It seems that if cuda/cudnn aren't in nativeBuildInputs, pytorch's build system builds everything without CUDA support, but if cuda/cudnn aren't in propagatedBuildInputs, pytorch can't load cuda/cudnn at runtime.
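The split described above might look roughly like the following fragment. This is a hypothetical sketch, not dnaq's actual file; the attribute names (cudatoolkit, cudnn, cmake, numpy) are assumptions based on common nixpkgs conventions of the time:

```nix
{
  # Build-time inputs: pytorch's build system probes for CUDA here.
  # If cudatoolkit/cudnn are missing from this list, it silently
  # configures a CPU-only build.
  nativeBuildInputs = [ cmake cudatoolkit cudnn ];

  # Run-time inputs: pytorch loads the CUDA/cuDNN libraries when
  # imported, so they must also be propagated into the runtime
  # closure, or torch.cuda.is_available() returns False.
  propagatedBuildInputs = [ numpy cudatoolkit cudnn ];
}
```

Listing the same packages in both sets is unusual but consistent with the behavior dnaq reports: one set satisfies the build-system probe, the other satisfies the runtime loader.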

@sveitser
Contributor

@dnaq Thank you! I can confirm that this builds with CUDA support and that support is there after building:

>>> import torch
>>> torch.cuda.is_available()
True

Why not open a PR with this? It might be work in progress, but it's IMO much better than the current situation.

@teh
Contributor Author

teh commented Mar 10, 2019

@dnaq Thanks for figuring that out. Can you create a PR, please? I agree with @sveitser that it's better than the current situation, which is broken for some users.

@dnaq
Contributor

dnaq commented Mar 10, 2019 via email

Successfully merging this pull request may close these issues.

pythonPackages.pytorch: build fails to build on NixOS unstable
5 participants