pythonPackages.tensorflow: repair cuda-dependent variant #30058
Conversation
Merge the outputs of cudatoolkit locally in the tensorflow derivation, using symlinkJoin. Fixes NixOS#29798
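For context, a merge like this might look as follows in the derivation. This is a sketch, not the exact code from the PR; the attribute name `cudatoolkit_joined` and the choice of outputs are assumptions:

```nix
# Sketch only: merge the split outputs of cudatoolkit into one tree
# so the pre-built TensorFlow wheel can find headers and libraries
# under a single prefix.
cudatoolkit_joined = symlinkJoin {
  name = "${cudatoolkit.name}-unsplit";
  paths = [ cudatoolkit.out cudatoolkit.lib ];  # assumed output names
};
```

The joined path can then be used wherever the wheel expects a monolithic CUDA installation.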
Not tested, but looks good.
@abbradar what do you think? |
I don't have access to my trusty nvidia yet, but the problem is that it cannot find the library at runtime, correct? If so, instead of using |
@abbradar adding both to the rpath actually did not work. Apparently this is due to a quirk in the pre-built tensorflow wheel. The |
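The failure mode discussed here (a shared library the dynamic loader cannot resolve at runtime) can be illustrated independently of CUDA. A minimal sketch, using a deliberately nonexistent library name as a stand-in:

```python
import ctypes

def can_load(soname: str) -> bool:
    """Return True if the dynamic loader can resolve soname at runtime."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

# A name the loader cannot resolve fails with OSError, just as
# libcudart.so does when it is missing from the wheel's search path.
print(can_load("libdefinitely_not_here_example.so"))  # → False
```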
Ah, no problem then. Looks good to me! |
Cheers, thanks for the feedback!
|
@FRidh Any chance to merge this? Or is someone else in charge? |
Tested that it imports, but did not run it (I trust you have tested this with CUDA). Thank you! |
Similar to NixOS#30058 for TensorFlow. Signed-off-by: Anders Kaseorg <andersk@mit.edu>
* pytorch-0.3 with optional cuda and cudnn
* pytorch tests reenabled if compiling without cuda
* pytorch: Conditionalize cudnn dependency on cudaSupport
  Signed-off-by: Anders Kaseorg <andersk@mit.edu>
* pytorch: Compile with the same GCC version used by CUDA if cudaSupport
  Fixes this error:
    In file included from /nix/store/gv7w3c71jg627cpcff04yi6kwzpzjyap-cudatoolkit-9.1.85.1/include/host_config.h:50:0,
                     from /nix/store/gv7w3c71jg627cpcff04yi6kwzpzjyap-cudatoolkit-9.1.85.1/include/cuda_runtime.h:78,
                     from <command-line>:0:
    /nix/store/gv7w3c71jg627cpcff04yi6kwzpzjyap-cudatoolkit-9.1.85.1/include/crt/host_config.h:121:2: error: #error -- unsupported GNU version! gcc versions later than 6 are not supported!
  Signed-off-by: Anders Kaseorg <andersk@mit.edu>
* pytorch: Build with joined cudatoolkit
  Similar to #30058 for TensorFlow.
  Signed-off-by: Anders Kaseorg <andersk@mit.edu>
* pytorch: 0.3.0 -> 0.3.1
  Signed-off-by: Anders Kaseorg <andersk@mit.edu>
* pytorch: Patch for “refcounted file mapping not supported” failure
  Signed-off-by: Anders Kaseorg <andersk@mit.edu>
* pytorch: Skip distributed tests
  Signed-off-by: Anders Kaseorg <andersk@mit.edu>
* pytorch: Use the stub libcuda.so from cudatoolkit for running tests
  Signed-off-by: Anders Kaseorg <andersk@mit.edu>
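The GCC-version fix above amounts to building with the compiler that nvcc accepts. A hypothetical sketch of how such a pin could be expressed; `overrideCC`, `gcc6`, and the attribute names are assumptions about nixpkgs of that era, not copied from the commit:

```nix
# Hypothetical: force the derivation to use GCC 6, since this
# CUDA release rejects gcc versions later than 6.
pytorchWithCuda = pytorch.override {
  stdenv = overrideCC stdenv gcc6;
};
```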
merge the outputs of cudatoolkit locally in the tensorflow
derivation, using symlinkJoin
Fixes #29798
Motivation for this change
See #29798
Things done
- Tested using sandboxing (build-use-sandbox in nix.conf on non-NixOS)
- Tested compilation of all pkgs that depend on this change using nix-shell -p nox --run "nox-review wip"
- Tested execution of all binary files (usually in ./result/bin/)