pytorch: fix CUDA support #57438
Conversation
This commit fixes CUDA support when building with `allowUnfree = true` and `cudaSupport = true`. The previous change to pytorch.nix built, but CUDA support didn't work at runtime. This is a work in progress: the test suite still doesn't find CUDA, so no CUDA tests are run against the compiled package. The lists of packages in `nativeBuildInputs` and `propagatedBuildInputs` are also probably wrong, owing to my incomplete understanding of those categories.
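For context, the build described here assumes a Nixpkgs configuration along these lines (a minimal sketch; the global `cudaSupport` attribute is an assumption and may differ by Nixpkgs revision):

```nix
# ~/.config/nixpkgs/config.nix -- hedged sketch; the package may instead
# need an explicit override such as
#   python3Packages.pytorch.override { cudaSupport = true; }
{
  allowUnfree = true;  # the CUDA toolkit and cuDNN are unfree packages
  cudaSupport = true;  # assumed global flag picked up by pytorch.nix
}
```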
```nix
] ++ lib.optionals cudaSupport [ cudatoolkit_joined cudnn ]
  ++ lib.optionals stdenv.isLinux [ numactl ];

propagatedBuildInputs = [
  cffi
  numpy.blas
```
This is unlikely to be correct
Same as above
```diff
@@ -79,20 +79,19 @@ in buildPythonPackage rec {

 nativeBuildInputs = [
   cmake
   numpy.blas
```
this probably needs to be in both nativeBuildInputs and buildInputs
To be honest, I don’t really know which packages need to be in `buildInputs`, `nativeBuildInputs`, or `propagatedBuildInputs`. This is just something that builds and solves my immediate need. I’d be happy to modify it as needed, but given that each build of pytorch takes a couple of hours, I don’t really have the time for a lot of trial and error.
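As a rough guide (a sketch of the general Nixpkgs convention, not the actual pytorch.nix): `nativeBuildInputs` holds tools that run on the build machine during the build, `buildInputs` holds libraries the package links against, and `propagatedBuildInputs` holds dependencies that consumers of the package also need at runtime. Under that convention the split might look like:

```nix
# Hypothetical fragment illustrating the three categories; the right
# placement of numpy.blas for pytorch is exactly what this thread debates.
{
  nativeBuildInputs = [
    cmake              # build tool, executed during the build
  ];
  buildInputs = [
    numpy.blas         # library linked into the resulting binaries
  ];
  propagatedBuildInputs = [
    cffi               # needed at runtime by users of the Python package
    numpy
  ];
}
```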
Thanks @dnaq! Here is what I have tested:
Could you please add this as a comment on the top of the package?
I tried a `nix-shell` on 7d0db6a. The build on this PR fails for me at some other dependency.
I confirmed that moving
Possibly other changes may be desirable for cross-compilation; all I know is that this one is necessary for the normal case. Submitted as #60002. As for @smatting’s comment:
I didn’t need any such configuration. I simply configured
Seems like #60002 solves this issue in a more idiomatic way.
Motivation for this change
CUDA support didn't work in the previous version.
Things done
- [ ] Tested using sandboxing (`sandbox` in `nix.conf` on non-NixOS)
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Determined the impact on package closure size (by running `nix path-info -S` before and after)