nvidia: Preliminary nVidia/AMD PRIME and dynamic power management support #100519
Conversation
@@ -63,6 +63,15 @@ in
    '';
  };

  hardware.nvidia.powerManagement.finegrained = mkOption {
RTD3 power management is experimental, so I would probably not include it. It's also not a pain to set up by hand, since it's probably just a matter of editing the Xorg configuration, so it might be better to exclude it.
I've seen several people ask about it, it's supposed to work with Intel iGPUs, and it's also supposed to work for APUs "shortly". I thought I'd get a head start on supporting it.
It's still marked as experimental in the documentation, but the code here shouldn't need any change as it stabilizes, except perhaps to remove the udev exclusions.
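For anyone who wants to try it despite the experimental flag, a minimal sketch of enabling the new option (offload mode is assumed, since the assertion later in this module ties power management to offload):

{
  hardware.nvidia.prime.offload.enable = true;
  # Experimental RTD3 power management from this PR: lets the dGPU
  # power down fully while unused.
  hardware.nvidia.powerManagement.finegrained = true;
}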
display = offloadCfg.enable;
modules = optional (igpuDriver == "amdgpu") [ pkgs.xorg.xf86videoamdgpu ];
Why is this needed? If you have an AMD GPU, this should be a prerequisite. I'm not familiar with this, but isn't there an open-source and a closed-source driver? Is it compatible with both, and/or are conflicts settled if both are included?
The proprietary driver is amdgpu-pro. It doesn't work basically at all, for anyone -- perhaps a slight exaggeration, but I'd be astonished to see it in use.
The modesetting driver is bundled with xorg; amdgpu isn't. The usual user interface for fixing that is adding it to videoDrivers, but as I've explained, doing so will break PRIME. It has to be added to modules, or the AMD GPU won't work at all.
Adding driver modules this way does nothing by itself, so a user who explicitly wants to install amdgpu-pro should be able to do so. Though I don't think that would work in a PRIME configuration, and this explicitly selects the amdgpu driver elsewhere. Note that the module doesn't let Intel users choose the intel driver instead of the modesetting one.
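To make the distinction concrete, a simplified sketch of the driver entry this diff builds (shape assumed from the hunk above; igpuDriver is the module's internal name for the iGPU driver):

{
  name = igpuDriver;            # "amdgpu" on AMD systems
  display = offloadCfg.enable;
  # Load xf86-video-amdgpu so the X server can find the driver, without
  # adding a second videoDrivers entry (which would create its own
  # Device section and break PRIME):
  modules = optional (igpuDriver == "amdgpu") pkgs.xorg.xf86videoamdgpu;
}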
Oh, alright I guess, but that doesn't stop anyone from adding "amdgpu" to videoDrivers, which would still end up breaking things based on what I've understood, and that's probably where most people would start in order to get the display working first. If I've understood correctly, an assert should be added to make sure amdgpu isn't in videoDrivers when someone uses PRIME.

It's not that Intel users can't choose the intel xf86video driver, but the docs explicitly state that it's for modesetting, so it probably wouldn't work anyway if they chose the old driver. Though the Arch wiki states AMD GPUs are supported based on the docs, the link doesn't seem to show any proof of that (unless amdgpu implements modesetting as its own driver, in which case it would be implied).
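A sketch of what that assert could look like, in the style of the module's existing assertions (primeEnabled is a hypothetical stand-in for however the module detects a PRIME setup):

{
  assertion = primeEnabled -> !(lib.elem "amdgpu" config.services.xserver.videoDrivers);
  message = "With PRIME enabled, do not add \"amdgpu\" to services.xserver.videoDrivers; the module loads the driver itself.";
}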
amdgpu implements modesetting, yes.
'' + optionalString offloadCfg.enable ''
  Option "AllowNVIDIAGPUScreens"
'';

services.xserver.displayManager.setupCommands = optionalString syncCfg.enable ''
  # Added by nvidia configuration module for Optimus/PRIME.
-  ${pkgs.xorg.xrandr}/bin/xrandr --setprovideroutputsource modesetting NVIDIA-0
+  ${pkgs.xorg.xrandr}/bin/xrandr --setprovideroutputsource ${igpuDriver} NVIDIA-0
The configuration looks to be basically the same for Intel and AMD, so I'm not sure why you're adding logic here?
The AMD config doesn't use the modesetting driver, and calling it that would be misleading.
Granted, Sync doesn't work on my hardware at all. I have no way of testing this, so if you believe it should unconditionally be "modesetting" I'll remove the change.
The Arch wiki uses radeon instead of amdgpu in its example, though I'm not sure which it is for AMD. This should just be a projection between the providers listed by xrandr --listproviders, i.e., Provider 0 -> 1:

Provider 0: id: 0x46 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 4 associated providers: 0 name:modesetting
Provider 1: id: 0x258 cap: 0x0 crtcs: 0 outputs: 0 associated providers: 0 name:NVIDIA-G0

I'm not sure about the naming; if they both use modesetting internally then I would leave it as modesetting, otherwise it's fine as is.
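For anyone checking the names on their own hardware, a small sketch of a debugging aid (xrandr --listproviders is a standard invocation; wiring it into setupCommands is just illustrative):

services.xserver.displayManager.setupCommands = ''
  # Log the provider names so the correct source for
  # --setprovideroutputsource shows up in the display manager log:
  ${pkgs.xorg.xrandr}/bin/xrandr --listproviders
'';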
Firstly, thank you @Baughn for putting this together. I'm using this PR to get NVIDIA PRIME working on my HP Pavilion Gaming 15-ec1047ax. I have an AMD Renoir-based iGPU and an NVIDIA GeForce GTX 1650 dGPU. The internal display is connected to the iGPU and the HDMI port is connected to the dGPU. Here's my annotated configuration:
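A minimal sketch of what such a configuration looks like, assuming this PR's option names; the bus IDs below are hypothetical and machine-specific (lspci shows the real ones):

{
  services.xserver.videoDrivers = [ "nvidia" ];
  hardware.nvidia.prime = {
    offload.enable = true;
    # Hypothetical bus IDs; read yours from lspci.
    amdgpuBusId = "PCI:4:0:0";   # AMD Renoir iGPU
    nvidiaBusId = "PCI:1:0:0";   # GeForce GTX 1650 dGPU
  };
}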
In summary, I'd like for this to be merged ;)
Looks fine, untested since I don't have AMD.
  message = "Sync precludes powering down the NVIDIA GPU.";
}
{
  assertion = cfg.powerManagement.enable -> offloadCfg.enable;
Should this be cfg.powerManagement.finegrained instead?
@@ -205,6 +245,7 @@ in
  ''
    BusID "${pCfg.nvidiaBusId}"
    ${optionalString syncCfg.allowExternalGpu "Option \"AllowExternalGpus\""}
+   ${optionalString cfg.powerManagement.finegrained "Option \"NVreg_DynamicPowerManagement=0x02\""}
Per the docs here, this is in the wrong spot: this option should be set on the nvidia kernel module during its initialization. It's not an Xorg option for initializing the device:

[this feature] can be enabled or disabled via the NVreg_DynamicPowerManagement nvidia.ko kernel module parameter.

This setting also defaults to on for Ampere and newer cards as of this driver version.
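In NixOS terms, the placement suggested here might look like the following sketch (boot.extraModprobeConfig is a standard NixOS option; this is not what the PR currently does):

# Set the parameter when the nvidia kernel module loads, instead of
# emitting it into xorg.conf:
boot.extraModprobeConfig = ''
  options nvidia NVreg_DynamicPowerManagement=0x02
'';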
see #174057
A related PR: #174058. Reviews are welcome.
Motivation for this change
This permits nVidia PRIME to be used in AMD/nVidia configurations, such as on the Zephyrus G14. It also adds a flag for dynamic power management, which should (when fully baked) allow the nVidia dGPU to be fully powered off while unused.
Caveats:
Most (non-Intel?) machine configurations are not supported by the Dynamic Power Management code yet. This includes the G14. Enabling it in fact increases power draw slightly. (The same configuration is hilariously buggy on Windows. nVidia appears to have made the safe choice in not permitting it on Linux.)
Including "nvidia" in xserver.videoDrivers is required, and should probably be done by default. This PR is only concerned with supporting the G14's hardware, but I think that would be a good idea. However:
videoDrivers isn't deduplicated, and each entry creates a separate Device section in xorg.conf. This means including "amdgpu" or "modesetting" in videoDrivers breaks PRIME in an unobvious fashion, permitting X11/Wayland to appear to work, but causing any invocation of the dGPU to throw GLX errors.
Some people may genuinely have multiple GPUs of the same brand, so unconditionally deduplicating them is no good either.
This is all equally true for Intel-based PRIME, and has no bearing on the PR. Just food for thought.
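To make that videoDrivers caveat concrete, a short sketch of the two configurations (option names are the standard NixOS ones):

# Works: the module wires up the iGPU driver internally.
services.xserver.videoDrivers = [ "nvidia" ];

# Breaks PRIME unobviously: each entry gets its own Device section, so X
# appears to start fine but dGPU invocations throw GLX errors.
# services.xserver.videoDrivers = [ "amdgpu" "nvidia" ];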
Tested on a Zephyrus G14.
This roughly matches Windows. (Insert rant on malfunctioning dynamic power management here. I'm sure it'll all be in order in a couple of months.)
Not tested, because my hardware doesn't support it:
Things done

- Tested using sandboxing (sandbox in nix.conf on non-NixOS Linux)
- Tested compilation of all pkgs that depend on this change using nix-shell -p nixpkgs-review --run "nixpkgs-review wip"
- Tested execution of all binary files (usually in ./result/bin/)
- Determined the impact on package closure size (by running nix path-info -S before and after)