
virtualbox: 5.2.28 -> 6.0.6 #60943

Merged
merged 1 commit into from May 10, 2019

Conversation

@ambrop72 (Contributor) commented May 4, 2019

Quite some fixing was needed to get this to work. See the discussion and the comments on the individual commits.

Things done

Tested host and guest (including desktop) on 19.03, no breakage observed.

  • Tested using sandboxing (nix.useSandbox on NixOS, or option sandbox in nix.conf on non-NixOS)
  • Built on platform(s)
    • NixOS
    • macOS
    • other Linux distributions
  • Tested via one or more NixOS test(s) if existing and applicable for the change (look inside nixos/tests)
  • Tested compilation of all pkgs that depend on this change using nix-shell -p nix-review --run "nix-review wip"
  • Tested execution of all binary files (usually in ./result/bin/)
  • Determined the impact on package closure size (by running nix path-info -S before and after)
  • Assured whether relevant documentation is up to date
  • Fits CONTRIBUTING.md.

@flokli (Contributor) commented May 5, 2019

@ambrop72 thanks a lot for your work, especially for upstreaming the patches!

However, there's already a WIP PR at #53120 - which is blocked on getting the virtualbox VM tests to work again.

I tried running the VM tests based on your PR; however, they fail with a different error than the one there:

machine: must succeed: su - alice -c 'VBoxManage setextradata test2 VBoxInternal/PDM/HaltOnReset 1'                                                                                  
machine# VBoxManage: error: Could not launch a process for the machine 'simple' (VERR_FILE_NOT_FOUND)                                                                                
machine# VBoxManage: error: Details: code VBOX_E_IPRT_ERROR (0x80bb0005), component MachineWrap, interface IMachine, callee nsISupports                                              
machine# VBoxManage: error: Context: "LaunchVMProcess(a->session, sessionType.raw(), Bstr(strEnv).raw(), progress.asOutParam())" at line 726 of file VBoxManageMisc.cpp        

@ambrop72 (Contributor, Author) commented May 5, 2019

Don't know why I didn't find that, I remember searching. Comparison:

  • I know about VirtualBox -> VirtualBoxVM setuid, was planning to send that separately as I assumed it's for 5.2 too (not sure though), I will add it here.
  • fix_kbuild.patch here is the same thing as fix-kernel-module-include.patch in #53120.
  • fix_module_makefile_sed.patch, which is not in #53120, is needed for the non-hardening setup to work; otherwise runtime errors result when starting a VM.
  • fix_printk_test.patch here, which is not in #53120, is just cosmetic (prevents interesting kernel messages).
  • #53120 removed the i686-linux platform, not sure why, maybe just because nobody tests it?
  • #53120 doesn't have the guest-additions zlib fix; this problem can be seen when using it with nixops (missing library error).
  • #53120 removed fix_kernincl.patch in the guest additions, I suppose it's unneeded, I will try removing it too.
  • I think my adjustment of qtx11extras.patch is better (will break less in the future and avoids unneeded linking to Qt5X11Extras).

How does one run the VM test?

@flokli (Contributor) commented May 5, 2019

No worries! Thanks a lot for taking the time! I hope not too much effort was duplicated…

Given the patches used here were already sent upstream, it might make sense to continue in this PR.

Could you incorporate the other changes from #53120 in here, and copy over the PR checkboxes?


#53120 removed i686-linux platform, not sure why, maybe just because nobody tests it?

Oracle mentions 32-bit support as being discontinued:
from https://www.virtualbox.org/wiki/Downloads:

If you're looking for the latest VirtualBox 5.2 packages, see VirtualBox 5.2 builds. Please also use version 5.2 if you still need support for 32-bit hosts, as this has been discontinued in 6.0. Version 5.2 will remain supported until July 2020.

I'm not sure if they actively dropped any code, but given even they don't test it, it probably makes sense to drop support on our side, too.

How does one run the VM test?

NixOS VM tests can be run via nix-build -A nixosTests.virtualbox (or nix-build nixos/tests/virtualbox.nix --arg enableUnfree true -I nixpkgs=$PWD if you want to run the unfree tests, too).

As explained in #53120, I guess the problem is somehow related to the tests being run inside a qemu vm, but we can't really run tests elsewhere currently (see #5241 for that discussion)…

@ambrop72 (Contributor, Author) commented May 5, 2019

I've added all relevant changes from #53120 to this PR.

I don't get the same error with the NixOS tests, but a guru meditation from VirtualBox in the kvm guest:

machine# [   32.548367] GUEST-headless[1188]: [    6.559399] io scheduler noop registered
machine# [   32.558857] GUEST-headless[1188]: [    6.569958] io scheduler deadline registered
machine# [   32.571355] GUEST-headless[1188]: [    6.581486] io scheduler cfq registered (default)
machine# [   32.588894] GUEST-headless[1188]: [    6.595775] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
machine# [   32.606390] GUEST-headless[1188]: [    6.611486] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
machine# [   32.625210] GUEST-headless[1188]: [    6.631288] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
machine# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
machine# !!
machine# !!         VCPU0: Guru Meditation -2403 (VERR_TRPM_DONT_PANIC)
machine# !!
machine# !! TRAP=0e ERRCD=0000000000000010 CR2=00000000a082bf60 EIP=a082bf60 Type=0 cbInstr=ff
machine# !! EIP is not in any code known to VMM!
machine# !!
machine# !!
machine# !!

which is then followed by tons of hex values. This error is the same as encountered in #53120. I have since found this forum thread, which I believe is about the same bug in VirtualBox: https://forums.virtualbox.org/viewtopic.php?f=6&t=91974. In that case they got an almost identical guru meditation with VirtualBox using SW virtualization (as these NixOS tests do; that's why they must run a 32-bit guest), triggered when a program opens a serial port (though with a Windows host and guest). That the last kernel message before our crash is about initializing the serial port suggests this is the same problem.

I have then tried enabling nested virtualization in KVM:

  • On my NixOS system: boot.extraModprobeConfig = "options kvm-intel nested=Y";
  • In nixpkgs/nixos/tests/virtualbox.nix: added virtualisation.qemu.options = ["-cpu" "host"]; just below virtualisation.memorySize = 2048;.
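The host-side part of this amounts to a NixOS configuration fragment like the following sketch (kvm-intel shown; kvm-amd would be the equivalent on AMD hosts):

```nix
# Host configuration.nix sketch: enable nested virtualization so
# VirtualBox inside the qemu test VM can itself use hardware
# virtualization. Use "kvm-amd" instead of "kvm-intel" on AMD CPUs.
{
  boot.extraModprobeConfig = "options kvm-intel nested=Y";
}
```

Note that a module parameter like this only takes effect after the kvm-intel module is reloaded (or after a reboot).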

That actually went surprisingly well, it does seem like VirtualBox is successfully using hardware virtualization within KVM (but I would have to confirm by looking at the VirtualBox machine log). But there is an error with mounting vboxsf in the guest initrd:

machine# [   57.708212] GUEST-test1[1388]: [   30.960936] ata29: SATA link down (SStatus 0 SControl 300)
machine# [   58.070847] GUEST-test1[1388]: [   31.328555] ata30: SATA link down (SStatus 0 SControl 300)
machine# [   58.136735] GUEST-test1[1388]: [   31.395590] sd 0:0:0:0: [sda] 204800 512-byte logical blocks: (105 MB/100 MiB)
machine# [   58.182420] GUEST-test1[1388]: [   31.451239] sd 0:0:0:0: [sda] Write Protect is off
machine# [   58.221397] GUEST-test1[1388]: [   31.486002] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
machine# [   58.273606] GUEST-test1[1388]: [   31.523178] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
machine# [   58.315176] GUEST-test1[1388]: [   31.590331]  sda: sda1
machine# [   58.349428] GUEST-test1[1388]: [   31.615236] sd 0:0:0:0: [sda] Attached SCSI disk
machine# [   58.531087] GUEST-test1[1388]: + kbd_mode -u -C /dev/console[   31.796150] stage-1-init: + kbd_mode -u -C /dev/console
machine# [   58.551821] GUEST-test1[1388]:
machine# [   58.575680] GUEST-test1[1388]: %Gkbd_mode: KDSKBMODE: Inappropriate ioctl for device
machine# [   58.611712] GUEST-test1[1388]: + printf '\033%%G'[   31.872347] random: lvm: uninitialized urandom read (4 bytes read)
machine# [   58.630747] GUEST-test1[1388]:
machine# [   58.660727] GUEST-test1[1388]: [   31.918082] stage-1-init: kbd_mode: KDSKBMODE: Inappropriate ioctl for device
machine# [   58.683391] GUEST-test1[1388]: + loadkmap
machine# [   58.701572] GUEST-test1[1388]: + echo 'starting device mapper and LVM...'
machine# [   58.713662] GUEST-test1[1388]: starting device mapper and LVM...
machine# [   58.731231] GUEST-test1[1388]: + lvm vgchange -ay
machine# [   58.751559] GUEST-test1[1388]: [   32.020318] stage-1-init: + printf '\033%%G'
machine# [   58.783469] GUEST-test1[1388]: [   32.053469] stage-1-init: + loadkmap
machine# [   58.825016] GUEST-test1[1388]: [   32.084754] stage-1-init: + echo 'starting device mapper and LVM...'
machine# [   58.867346] GUEST-test1[1388]: [   32.128324] stage-1-init: starting device mapper and LVM...
machine# [   58.903963] GUEST-test1[1388]: [   32.170289] stage-1-init: + lvm vgchange -ay
machine# [   58.939453] GUEST-test1[1388]: + test -n [   32.211719] stage-1-init: + test -n
machine# [   58.958317] GUEST-test1[1388]:
machine# [   58.977025] GUEST-test1[1388]: + test -e /sys/power/resume -a -e /sys/power/disk
machine# [   58.987728] GUEST-test1[1388]: + test -n
machine# [   58.994578] GUEST-test1[1388]: + test -n
machine# [   59.027920] GUEST-test1[1388]: [   32.281629] stage-1-init: + test -e /sys/power/resume -a -e /sys/power/disk
machine# [   59.082373] GUEST-test1[1388]: + mkdir -p /mnt-root[   32.340608] vboxsf: No mount data. Is mount.vboxsf installed (typically in /sbin)?
machine# [   59.126313] GUEST-test1[1388]: [   32.395357] vbsf_read_super_aux err=-22
machine# [   59.144763] GUEST-test1[1388]:
machine# [   59.150752] GUEST-test1[1388]: + exec
machine# [   59.159521] GUEST-test1[1388]: + read -u 3 mountPoint
machine# [   59.168249] GUEST-test1[1388]: + read -u 3 device
machine# [   59.175655] GUEST-test1[1388]: + read -u 3 fsType
machine# [   59.184951] GUEST-test1[1388]: + read -u 3 options
machine# [   59.199342] GUEST-test1[1388]: [   32.472041] stage-1-init: + test -n
machine# [   59.220147] GUEST-test1[1388]: + pseudoDevice=
machine# [   59.228767] GUEST-test1[1388]: + pseudoDevice=1
machine# [   59.235017] GUEST-test1[1388]: + test -z 1
machine# [   59.253576] GUEST-test1[1388]: + udevadm settle[   32.526592] stage-1-init: + test -n
machine# [   59.269318] GUEST-test1[1388]:
machine# [   59.274870] GUEST-test1[1388]: + '[' -n  ]
machine# [   59.316404] GUEST-test1[1388]: + mountFS vboxshare / defaults vboxsf[   32.574887] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100
machine# [   59.323288] GUEST-test1[1388]: [   32.574887]
machine# [   59.384964] GUEST-test1[1388]: [   32.643752] CPU: 0 PID: 1 Comm: init Tainted: G           O    4.9.171 #1-NixOS
machine# [   59.652395] GUEST-test1[1388]: [   32.902316] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
machine# [   59.748346] GUEST-test1[1388]: [   32.966723]  edcc3f04 de8c85cc edcd8000 decb4e60 edcc3f1c de7599c5 decb4e60 edcd8000
machine# [   59.837885] GUEST-test1[1388]: [   33.048874]  decb4e60 edd03ce4 edcc3f60 de667e68 dec013c8 00000100 00000000 00000000
machine# [   59.914608] GUEST-test1[1388]: [   33.136403]  00000249 edcd843c 01000000 edfcc040 de66822f 00000003 edcc3f4c edcc3f4c
machine# [   59.939345] GUEST-test1[1388]: [   33.215372] Call Trace:
machine# [   59.971934] GUEST-test1[1388]: [   33.241521]  [<de8c85cc>] dump_stack+0x58/0x7c
machine# [   60.005227] GUEST-test1[1388]: [   33.275323]  [<de7599c5>] panic+0x94/0x1d1
machine# [   60.040485] GUEST-test1[1388]: [   33.307803]  [<de667e68>] do_exit+0xa38/0xa40
machine# [   60.075254] GUEST-test1[1388]: [   33.341565]  [<de66822f>] ? SyS_waitpid+0x6f/0xe0
machine# [   60.110461] GUEST-test1[1388]: [   33.376187]  [<de667ee7>] do_group_exit+0x37/0x90
machine# [   60.152886] GUEST-test1[1388]: [   33.416403]  [<de665df0>] ? task_stopped_code+0x60/0x60
machine# [   60.196942] GUEST-test1[1388]: [   33.458489]  [<de667f56>] SyS_exit_group+0x16/0x20
machine# [   60.235829] GUEST-test1[1388]: [   33.500336]  [<de6037a9>] do_fast_syscall_32+0x99/0x170
machine# [   60.273837] GUEST-test1[1388]: [   33.539722]  [<deb13902>] sysenter_past_esp+0x47/0x75
machine# [   60.322656] GUEST-test1[1388]: [   33.576092] Kernel Offset: 0x1d600000 from 0xc1000000 (relocation range: 0xc0000000-0xf07effff)
machine# [   60.363095] GUEST-test1[1388]: [   33.634888] Rebooting in 1 seconds..
machine# [   61.413719] GUEST-test1[1388]: [   34.685497] ACPI MEMORY or I/O RESET_REG.

This needs to be debugged, I think it's just a regression due to the VirtualBox update. If this works out then at least the tests will work with some manual intervention (enabling nested KVM on Hydra builders is probably not a smart idea because the feature is experimental).

@ambrop72 force-pushed the virtualbox-6 branch 2 times, most recently from ea7fa57 to c12e12a on May 5, 2019 23:30
@ambrop72 (Contributor, Author) commented May 5, 2019

With nested KVM I have gotten several tests (probably all others) to work except simple-gui. It is timing out waiting for the VM to start. Maybe sending just an enter press no longer works to start the VM. The output says screenshots were made, but since the build fails I think they are lost. Build output.

@ambrop72 force-pushed the virtualbox-6 branch 2 times, most recently from 0c68d98 to c682682 on May 6, 2019 00:10
@ambrop72 (Contributor, Author) commented May 6, 2019

With the current state of this PR, the tests pass for me. The precondition is enabling nested KVM on the build machine, by adding options kvm-intel nested=Y (or kvm-amd) to boot.extraModprobeConfig.

Needed changes were:

  • The vboxsf mount error (and resulting kernel panic) was fixed by ensuring that mount.vboxsf has glibc in RUNPATH. The actual errors from mount were somehow lost in buffers once the kernel panicked from init exiting. Along the way I refactored the additions package, with a side effect that RUNPATH stripping is now done as it should be.
  • Two issues due to dlopen() of libdbus failing (in VBoxService and systemd-detect-virt) were fixed by adding libdbus to LD_LIBRARY_PATH.
  • The GUI test was fixed by updating the key sequence sent to the VirtualBox Manager to start the VM.
  • Added the needed cpu option for qemu-kvm to enable nested virtualization, and comments to the top of the file explaining the requirement for the build machine.
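The first fix above boils down to making sure the dynamic linker can still find glibc once the binary is copied into the initrd. A rough, purely illustrative sketch (the attribute names and patchelf invocation are hypothetical, not the exact code from this PR):

```nix
# Illustrative sketch only: give mount.vboxsf an explicit RUNPATH
# entry for glibc (and zlib), so it still finds its libraries after
# being copied into the initrd by copy_bin_and_libs.
postFixup = ''
  patchelf --set-rpath "${lib.makeLibraryPath [ stdenv.cc.libc zlib ]}" \
    $out/bin/mount.vboxsf
'';
```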

@flokli (Contributor) left a review comment: some nitpicks

sha256 = "0hjx99pg40wqyggnrpylrp5zngva4xrnk7r90i0ynrqc7n84g9pn";
})
# https://www.virtualbox.org/ticket/18620
./fix_kbuild.patch
Contributor:

instead of shipping patches via nixpkgs, can we use fetchpatch here and fetch from their issue tracker?

Contributor (Author):

They're very small and we have other patches right here in nixpkgs, so I think it doesn't make sense.

Member:

Nixpkgs policy is to not include patches when possible. Many of the patches in Nixpkgs are added under the same "well there are other patches in here...", but it is generally wrong. When possible (like in most of these cases) the patches should be included via fetchpatch.

Member:

To clarify, I'm grateful for your work here -- this PR is in great shape, despite these patches being checked in. In the future, please fetch patches (and @flokli please be a bit more insistent :) )

Member:

And, since I managed to not say it in the first two messages (d'oh):

Thank you so much for your work in Nixpkgs, patching, and working with upstream to get this going. I really appreciate it, it makes a big difference to NixOS!


# Build kernel modules
export INSTALL_MOD_PATH=$out
configurePhase = "true";
Contributor:

doConfigure = "false";

@flokli (Contributor) commented May 8, 2019

Awesome work!

I'm not sure if we can get tests requiring nested kvm support enabled on hydra soon (might need a new feature flag), but this at least makes it possible for maintainers to run the tests manually.

Can we switch between "hardware virtualization, 64 bit guest" and "software virtualization, 32 bit guest" via some argument, similar to enableUnfree?

Did you open an issue in upstream's issue tracker about the broken software virtualization support?

@@ -29,6 +45,10 @@ let
"${pkgs.dbus.daemon}/bin/dbus-daemon" --fork \
--config-file="${pkgs.dbus.daemon}/share/dbus-1/system.conf"

# Some programs (e.g. VBoxService, systemd-detect-virt) load libdbus
# using dlopen(), make sure that works.
export LD_LIBRARY_PATH=${pkgs.dbus.lib}/lib
@flokli (Contributor), May 8, 2019:

I think this needs to be debugged further, whether it's broken on NixOS too (and whether we need to add dbus to rpath)

Contributor (Author):

I figured that when I refactored the additions package to not skip the fixup phase which includes RUNPATH stripping, the dbus path got stripped out since there was no direct dependency on dbus. I will fix this in a new commit by adding the dbus path in postFixup (and removing dbus from the first round of patchelf because none of the binaries directly depend on dbus). This leaves VBoxClient and VBoxService with dbus in RUNPATH (as found by grepping binaries for libdbus-1.so). So I think that explains why VBoxService broke in the initrd.

As far as systemd-detect-virt is concerned, I suspect what happened is that because mount.vboxsf previously ended up with a bunch of unnecessary RUNPATH entries, copy_bin_and_libs "${guestAdditions}/bin/mount.vboxsf" resulted in libdbus being copied to /lib (or something equivalent, not sure what exactly it does) and found by systemd-detect-virt. What's interesting is that just running systemd-detect-virt in NixOS does not result in a libdbus error and strace shows that it does not load libdbus (but I see no point in investigating why).

So I will remove the LD_LIBRARY_PATH setting in the tests from its current place and add it just for the systemd-detect-virt call. It would not work without this now because mount.vboxsf no longer has dbus in RUNPATH (it doesn't need it) and we don't copy_bin_and_libs anything that does.

@ambrop72 (Contributor, Author) commented May 8, 2019

I added another commit with the latest fixes. Tests still pass. Is there anything else to do before it can be merged?

@ambrop72 (Contributor, Author) commented May 8, 2019

I was mistaken about systemd-detect-virt needing libdbus, that is not the case. I must have confused the error messages (the test output is quite messy). So I updated the latest commit to completely remove the LD_LIBRARY_PATH in the tests, which still pass.

@ambrop72 (Contributor, Author) commented May 8, 2019

Should I merge these commits all into one, so they can easily be cherry-picked by anyone who wants it on release versions?

@ambrop72 (Contributor, Author) commented May 8, 2019

Can we switch between "hardware virtualization, 64 bit guest" and "software virtualization, 32 bit guest" via some argument, similar to enableUnfree?

I will add a flag to control whether HW- or SW-virt is used, and what bitness guest is used, independently, with an assert that SW-virt requires 32-bit guest.
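Sketched as a Nix test-argument interface, that could look roughly like the following (the flag names here are hypothetical, not necessarily the ones eventually merged):

```nix
# Hypothetical sketch of the proposed flags. Software virtualization
# cannot run 64-bit guests, so a 64-bit guest implies nested KVM
# (hardware virtualization).
{ enableKvmNestedVirt ? false
, use64bitGuest ? false
}:

assert use64bitGuest -> enableKvmNestedVirt;

{
  guestSystem = if use64bitGuest then "x86_64-linux" else "i686-linux";
}
```

Note that `->` is Nix's logical-implication operator, which makes the constraint a one-line assert.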

@ambrop72 (Contributor, Author) commented May 8, 2019

I will try to reproduce the SW virt problem, but I am not sure I will be successful.

@flokli (Contributor) commented May 8, 2019

Should I merge these commits all into one, so they can easily be cherry-picked by anyone who wants it on release versions?

I'm a bit confused by the GitHub temporary anomaly, but yes please - we probably want to cherry-pick this to 19.03 too, or should at least have the option to.

@flokli mentioned this pull request May 8, 2019
@devhell (Contributor) commented May 9, 2019

Amazing work @ambrop72, thanks for digging into this further. 🍺

@flokli (Contributor) commented May 9, 2019 via email:

Quite some fixing was needed to get this to work.

Changes in VirtualBox and additions:

- VirtualBox is no longer officially supported on 32-bit hosts so i686-linux is removed from platforms
  for VirtualBox and the extension pack. 32-bit additions still work.

- There was a refactoring of kernel module makefiles and two resulting bugs affected us which had to be patched.
  These bugs were reported to the bug tracker (see comments near patches).

- The Qt5X11Extras makefile patch broke. Fixed it to apply again, making the libraries logic simpler
  and more correct (it just uses a different base path instead of always linking to Qt5X11Extras).

- Added a patch to remove "test1" and "test2" kernel messages due to forgotten debugging code.

- virtualbox-host NixOS module: the VirtualBoxVM executable should be setuid not VirtualBox.
  This matches how the official installer sets it up.

- Additions: replaced a for loop for installing kernel modules with just a "make install",
  which seems to work without any of the things done in the previous code.

- Additions: The package defined buildCommand which resulted in phases not running, including RUNPATH
  stripping in fixupPhase, and installPhase was defined which was not even run. Fixed this by
  refactoring using phases. Had to set dontStrip otherwise binaries were broken by stripping.
  The libdbus path had to be added later in fixupPhase because it is used via dlopen not directly linked.

- Additions: Added zlib and libc to patchelf, otherwise runtime library errors result from some binaries.
  For some reason the missing libc only manifested itself for mount.vboxsf when included in the initrd.

Changes in nixos/tests/virtualbox:

- Update the simple-gui test to send the right keys to start the VM. With VirtualBox 5
  it was enough to just send "return", but with 6 the Tools thing may be selected by
  default. Send "home" to reliably select Tools, "down" to move to the VM and "return"
  to start it.

- Disable the VirtualBox UART by default because it causes a crash due to a regression
  in VirtualBox (specific to software virtualization and serial port usage). It can
  still be enabled using an option but there is an assert that KVM nested virtualization
  is enabled, which works around the problem (see below).

- Add an option to enable nested KVM virtualization, allowing VirtualBox to use hardware
  virtualization. This works around the UART problem and also allows using 64-bit
  guests, but requires a kernel module parameter.

- Add an option to run 64-bit guests. Tested that the tests pass with that. As mentioned
  this requires KVM nested virtualization.
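The simple-gui key-sequence change described in the commit message corresponds, in the (then Perl-based) NixOS test driver, to something like the following sketch; only the key presses are shown, and the surrounding test code is omitted:

```nix
# Sketch of the relevant part of the test script: start the VM from
# the VirtualBox Manager. "ret" is the test driver's name for Return.
testScript = ''
  $machine->sendKeys("home");  # reliably select the Tools entry
  $machine->sendKeys("down");  # move down to the VM entry
  $machine->sendKeys("ret");   # start it
'';
```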
@ambrop72 (Contributor, Author) commented May 9, 2019

Filed the VirtualBox bug for the guru meditation: https://www.virtualbox.org/ticket/18632
Basically, it is an issue specific to using software virtualization AND enabling a serial port. Reproducing it was tricky because on common configurations (i.e. modern 32-bit kernels with modern CPUs and VBox software virtualization), guests kernel-panic before the point where the guru meditation occurs. That can be worked around by setting the --cpu-profile option on the VM (see ticket), which is unnecessary for NixOS tests because the KVM VM already presents an older CPU.

I pushed all the changes as a single commit. The tests now run with SW virtualization as before but disable the virtual UART to work around the guru meditation (this suppresses all console output from guests). There are now flags to enable nested KVM, to turn the UART back on and to run 64-bit guests (the latter two require nested KVM). 64-bit guests in fact work (all that is different is the machine system and the virtualbox ostype option).

Let's get this merged now.

@flokli (Contributor) commented May 10, 2019

Manually ran the unfree tests too, and spun up a real VM, looks good :-)

Let's merge this 🚀!

Again, thanks for all the great work done here!

@flokli flokli merged commit b93d347 into NixOS:master May 10, 2019
@ambrop72 (Contributor, Author) commented May 22, 2019

The VirtualBox serial port bug will be fixed in the next maintenance release! https://www.virtualbox.org/ticket/18632#comment:1

@flokli (Contributor) commented May 22, 2019

awesome! I also see some of your patches being mainlined: flokli/virtualbox@e9a91b9 :-)

@flokli mentioned this pull request Aug 10, 2019