Earlier this week I found out that it is possible to passthrough the Integrated GPU (iGPU) of a standard Intel NUC, which was motivated by a question I had seen on the VMware Subreddit. I have written about iGPU passthrough for Intel NUCs before, but only for the higher-end models, such as the Hades Canyon NUC at the time.
Neat! Just found out you can actually passthrough the iGPU of standard Intel NUC. The trick looks to be enabling passthrough using ESXi Embedded Host Client UI & then you can assign it using vSphere UI #Homelab pic.twitter.com/NwuxbXwUMj
— William Lam (@lamw) June 15, 2020
To be honest, I never thought about trying this out with a standard NUC, as I figured the iGPU might not be powerful enough or warrant any interest. After sharing the news on Twitter, I came to learn from the community that not only is this desirable for various use cases, but some folks have also been doing this for some time now and have shared some of the benefits it brings for certain types of workloads.
Can’t take credit. It was one of our colleagues that pointed me to it. HW transcoding went up by a factor of almost 20x. So for specific workloads the NUC is suddenly a lot more capable than before.
— Robert Jensen (@rhjensen) June 15, 2020
I’ve been doing this forever, when I need to crack passwords but don’t need the full 7 gpu rig - all Supermicro and 1080ti GPUs these days https://t.co/GJGRV5eu8f
— Rob VandenBrink (@rvandenbrink) June 15, 2020
seems like this would be great for ESXi + Plex hardware transcoding
— Will Beers (@willbeers) June 15, 2020
Below are the instructions I used to enable iGPU passthrough on an Intel NUC 10 (Frost Canyon) with vSphere 7.0. These instructions should also be applicable to other NUC models and earlier versions of vSphere, including details around passthrough configuration persistence, an issue I know some folks have run into and which I was able to figure out as part of this experiment.
Step 1 - Enable passthrough of the iGPU. When I initially attempted this using the vSphere UI within vCenter Server, I was not able to toggle it; I had to log in to the ESXi Embedded Host Client instead.
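If you prefer the command line over the Embedded Host Client UI, ESXi 7.0 also exposes passthrough toggling via esxcli. This is a sketch, not the exact method used above; the address 0000:00:02.0 is the typical PCI location of the Intel iGPU, so confirm yours with the list command first:

```shell
# List PCI devices and their current passthrough state,
# then note the address of the Intel integrated GPU
esxcli hardware pci pcipassthru list

# Enable passthrough for the iGPU (0000:00:02.0 is the usual
# address for the Intel iGPU; verify against the list above)
esxcli hardware pci pcipassthru set -d 0000:00:02.0 -e true
```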
Step 2 - After it was enabled, I then logged into vCenter Server to enable the iGPU for passthrough, as the changes were not picked up automatically. If you are using vSphere 7, you can now take advantage of the new "Hardware Label" feature, which is part of the new Assignable Hardware capability.
Step 3 - Create a new VM; I used Windows 10 64-bit as my OS. Ensure that the VM is configured with vSphere 7 Compatibility (aka vHW 17), which is required to use the new Dynamic DirectPath I/O feature. If you are using an older version of vSphere or an earlier VM Compatibility level, the legacy DirectPath I/O should still work.
In addition, I also added hypervisor.cpuid.v0=FALSE to the VM Advanced Settings. I noticed this is generally recommended when using NVIDIA GPUs, and while I was not 100% sure whether it is needed in this case, it did not seem to hurt to add it.
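For reference, this advanced setting ends up in the VM's .vmx file as a simple key/value pair (you can add it via Edit Settings > VM Options > Advanced > Configuration Parameters, or directly in the .vmx while the VM is powered off):

```
hypervisor.cpuid.v0 = "FALSE"
```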
After Windows was set up, I noticed that it detected the iGPU and automatically installed the drivers along with the Intel Graphics Command Center tool, which was pretty useful.
One issue that I noticed while looking into iGPU passthrough was that the ESXi passthrough configuration would not persist across a reboot, and folks have simply been dealing with it over the years by manually re-toggling passthrough. I ran into this behavior too; it certainly was not ideal, and I wanted to dig deeper and at least file a bug internally.
After a bit of debugging with one of the engineers, we found the real root cause, and interestingly, it had nothing to do with persistence; the configuration was being saved properly. The issue is that by default the VMkernel will automatically claim the VGA driver, and this becomes a problem because the passthrough configuration is processed much later in the boot process, causing the behavior that has been observed.
The good news is that there is an easy workaround: we can tell the VMkernel not to claim the VGA driver via an ESXi kernel setting. One side effect I do want to mention is that you will no longer be able to access the DCUI if you use a monitor connected to your NUC. Once the VMkernel starts, you will see a screen like the following, as the VGA driver is no longer being claimed.
esxcli system settings kernel set -s vga -v FALSE
You can always re-enable this as long as you have access to the ESXi host. At this point, you do not have to reboot the ESXi host, but the next time it goes through a reboot, the iGPU passthrough settings will persist.
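To check the current value of the setting, or to undo the workaround later, the same esxcli namespace can be used (a sketch based on the command above):

```shell
# Show the current value of the vga kernel setting
esxcli system settings kernel list -o vga

# Re-enable the VGA driver claim, restoring the DCUI
# on the physical console after the next reboot
esxcli system settings kernel set -s vga -v TRUE
```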
Lastly, I do want to mention that it is still possible to access the DCUI over SSH, which may not be a very well-known capability. Simply SSH to your ESXi host and run the following two commands, which will launch a fully functional DCUI:
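Assuming the standard ESXi trick is meant here, the two commands set the terminal type and then launch the dcui binary:

```shell
# Tell the shell which terminal type to emulate, then launch DCUI
TERM=xterm
dcui
```

Press Ctrl+C to exit the DCUI and return to your SSH session.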
Skull Canyon NUC tested and approved! After the iGPU is assigned to a VM via vSphere UI it seems reboot capable
— Thomas D. (@Oeppelman) June 16, 2020