Earlier this week I found out that the new Intel NUC "Skull Canyon" (NUC6i7KYK) has been released and has been shipping for a couple of weeks now. Although this platform is mainly targeted at gaming enthusiasts, there has also been a lot of anticipation from the VMware community about leveraging the NUC for a vSphere-based home lab. Similar to the 6th Gen Intel NUC, which is a great platform to run vSphere as well as VSAN, the new NUC includes several enhancements beyond the new aesthetics. In addition to the Core i7 CPU, it also includes dual M.2 slots (no SATA support), a Thunderbolt 3 controller and, most importantly, an Intel Iris Pro GPU. I will get to why this is important ...
UPDATE (05/26/16) - With some further investigation from folks like Erik and Florian, it turns out the *only* device that needs to be disabled for ESXi to successfully boot and install is the Thunderbolt Controller. Once ESXi has been installed, you can re-enable the Thunderbolt Controller. Florian has also written a nice blog post here with instructions as well as screenshots for those not familiar with the Intel NUC BIOS.

UPDATE (05/23/16) - Shortly after sharing this article internally, Jason Joy, a VMware employee, shared the great news that he has figured out how to get ESXi to properly boot and install. Jason found that disabling unnecessary hardware devices like the Consumer IR in the BIOS allowed the ESXi installer to properly boot up. Jason was going to dig a bit further to see if he could identify the minimal list of devices that need to be disabled to boot ESXi. In the meantime, community blogger Erik Bussink has shared the list of settings he applied to his Skull Canyon to successfully boot and install the latest ESXi 6.0 Update 2, based on the feedback from Jason. Huge thanks to Jason for quickly identifying the workaround and sharing it with the VMware community, and thanks to Erik for publishing his list. For all those that were considering the new Intel NUC Skull Canyon for a vSphere-based home lab, you can now get your ordering on! 😀

Below is an excerpt from his blog post Intel NUC Skull Canyon (NUC6I7KYK) and ESXi 6.0 on the settings he has disabled:

BIOS\Devices\USB

  • disabled - USB Legacy (Default: On)
  • disabled - Portable Device Charging Mode (Default: Charging Only)
  • no change - USB Ports (Port 01-08 enabled)

BIOS\Devices\SATA

  • disabled - Chipset SATA (Default AHCI & SMART Enabled)
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000
  • disabled - HDD Activity LED (Default: On)
  • disabled - M.2 PCIe SSD LED (Default: On)

BIOS\Devices\Video

  • IGD Minimum Memory - 64MB (Default)
  • IGD Aperture Size - 256 (Default)
  • IGD Primary Video Port - Auto (Default)

BIOS\Devices\Onboard Devices

  • disabled - Audio (Default: On)
  • LAN (Default)
  • disabled - Thunderbolt Controller (Default is Enabled)
  • disabled - WLAN (Default: On)
  • disabled - Bluetooth (Default: On)
  • Near Field Communication - Disabled (Default is Disabled)
  • SD Card - Read/Write (Default was Read)
  • Legacy Device Configuration
  • disabled - Enhanced Consumer IR (Default: On)
  • disabled - High Precision Event Timers (Default: On)
  • disabled - Num Lock (Default: On)

BIOS\PCI

  • M.2 Slot 1 - Enabled
  • M.2 Slot 2 - Enabled
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000

Cooling

  • CPU Fan Header
  • Fan Control Mode : Cool (I toyed with Full fan, but it does make a lot of noise)

Performance\Processor

  • disabled - Real-Time Performance Tuning (Default: On)

Power

  • Select Max Performance Enabled (Default: Balanced Enabled)
  • Secondary Power Settings
  • disabled - Intel Ready Mode Technology (Default: On)
  • disabled - Power Sense (Default: On)
  • After Power Failure: Power On (Default: Stay Off)

Over the weekend, I received several emails from folks including Olli from nucblog.net (highly recommend a follow if you do not already) and Florian from virten.net (another awesome blog which I follow & recommend), along with a few others who have gotten their hands on the "Skull Canyon" system. They had all tried to install the latest release of ESXi 6.0 Update 2, as well as earlier versions, but all ran into a problem while booting up the ESXi installer.

The following error message was encountered:

Error loading /tools.t00
Compressed MD5: 39916ab4eb3b835daec309b235fcbc3b
Decompressed MD5: 000000000000000000000000000000
Fatal error: 10 (Out of resources)

Raymond Huh was the first individual to reach out to me regarding this issue, and shortly after, I started to get the same confirmations from others as well. Raymond's suspicion was that this was related to the amount of Memory-Mapped I/O resources being consumed by the Intel Iris Pro GPU, which does not leave enough resources for the ESXi installer to boot up. Even a quick Google search on this particular error message leads to several solutions here and here where the recommendation was to either disable or reduce the amount of memory for MMIO within the system BIOS.

Unfortunately, after Raymond had looked through the available options, including tweaking some of the video settings, it does not look like the Intel NUC BIOS provides any way of disabling or modifying the MMIO settings. He currently has a support case filed with Intel to see if there is another option. In the meantime, I had also reached out to some folks internally to see if they had any thoughts, and they too came to the same conclusion: without being able to modify or disable MMIO, there is not much more that can be done. There is a chance I might be able to get access to a unit from another VMware employee, and perhaps we can see if there is any workaround from our side, but there are no guarantees, especially as this is not an officially supported platform for ESXi. I want to thank Raymond, Olli & Florian for going through the early testing and sharing their findings thus far. I know many folks are anxiously waiting and I know they really appreciate it!

For now, if you are considering purchasing or have purchased the latest Intel NUC Skull Canyon with the intention of running ESXi, I would recommend holding off or not opening up the system. I will provide any new updates as they become available. I am still hopeful that we will find a solution for the VMware community, so I am crossing my fingers.

43 thoughts on “ESXi on the new Intel NUC Skull Canyon”

  1. Would it be possible to disable the tools VIB on the installer? I’ve always wondered why it loads, as it shouldn’t be needed, right? Granted, even if you could disable it, I’m guessing it would fail on something else.

  2. I’m having problems with trying to set the virtual switch to MTU 9000 with the physical on-board NIC using the e1000e driver. I tried to do this from EHC and got an error message. I then tried to do the same from esxcli, and got this error:

    esxcli network vswitch standard set -m 9000 -v vSwitch0
    Unable to set MTU to 9000 the following uplinks refused the MTU setting: vmnic0

    • I was able to set Jumbo Frames (MTU 9000) on vmnic1 (Apple Thunderbolt to GigabitEthernet adapter) seen by ESXi 6.0 U2 running on the Skull Canyon NUC as “Broadcom Corporation NetXtreme BCM57762 Gigabit Ethernet”. Because I will use this NIC for iSCSI traffic to the iSCSI SAN, my iSCSI traffic will use Jumbo Frames. I still cannot enable Jumbo Frames (MTU 9000) on the built-in Intel I219-LM adapter, but as long as I can use the Apple adapter for iSCSI traffic, I don’t need Jumbo Frames on the Intel I219-LM adapter.
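      As a rough sketch, the esxcli sequence for enabling Jumbo Frames on a dedicated iSCSI uplink looks like this. Note that vSwitch1 and vmk1 are placeholder names for the vSwitch and VMkernel port carrying the iSCSI traffic (substitute your own), and the commands must be run on the ESXi host:

      ```shell
      # Hedged illustration -- vSwitch1/vmk1 are example names, not taken
      # from the setup described above.

      # Raise the MTU on the vSwitch carrying iSCSI traffic
      esxcli network vswitch standard set -m 9000 -v vSwitch1

      # Raise the MTU on the VMkernel interface used for iSCSI
      esxcli network ip interface set -m 9000 -i vmk1

      # Verify: the MTU column should now show 9000 for the uplink
      esxcli network nic list
      ```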

  3. I found a place to configure the iSCSI initiator in Skull Canyon BIOS version 0034. It’s located in Devices > Add-In Config > iSCSI Configuration. I configured the iSCSI settings, but the NUC is not connecting to the iSCSI target upon boot, and the ESXi installer is not seeing the iSCSI target. Instead of trying to boot with iSCSI, it appears the NUC is trying PXE. There’s no way to disable the PXE boot; instead, in the BIOS, the Network Boot menu shows “UEFI PXE & iSCSI” (Boot > Boot Configuration > Network Boot). Do I have to use the Intel Ethernet Flash Firmware Utility to flash the iSCSI boot option to the Ethernet controller? If someone has any idea how to proceed from here, I would greatly appreciate any help. I would prefer to boot my ESXi hosts from the iSCSI target instead of booting them from USB flash drives.

    Thank you.

    • I got iSCSI Boot working with the Skull Canyon NUC for ESXi 5.5 U3b and ESXi 6.0 U2. It turned out that not only is the use of the Intel Ethernet Flash Firmware Utility (BootUtil) not required for ESXi boot on the Skull Canyon NUC, but this utility also cannot flash the Skull Canyon NUC’s on-board NIC (I219-LM). The only thing this utility is capable of doing vis-a-vis the Intel I219-LM adapter is to interrogate it and report that no flash ROM is installed on this adapter.

      After I spent some quality time with the Intel Ethernet Flash Firmware Utility documentation, I found out that flashing the ROM of certain Intel NICs to enable network boot (PXE, iSCSI, etc.) is only needed for the Legacy BIOS but not for the UEFI BIOS.

      Side Note: The information published by Intel on using network boot with their NICs is so obscure that it took me a while to realize that there is no need to flash the ROM of a NIC to enable network boot on it if a UEFI BIOS is used. Flashing a NIC’s ROM is only required when using Legacy Boot. Intel seems to have something they call “Monolithic Option ROM” and something they call “Combo Option ROM”. With a Monolithic Option ROM, only one type of network boot can be performed, and the respective Option ROM image must be flashed to the NIC, whereas with a Combo Option ROM, one can choose from several network boot options (PXE, iSCSI, etc.). Intel provides a version of the Combo Option ROM with its Ethernet Flash Firmware Utility, but again, the Intel I219-LM NIC cannot be flashed with this utility, and neither does it need to be, since the Skull Canyon NUC uses a UEFI BIOS (not a Legacy BIOS).

      Now on to how I got the iSCSI boot to work:

      1. Power up the Intel Skull Canyon NUC and during POST, press F2 to enter BIOS.

      2. In BIOS, navigate to Devices > Add-In Config > iSCSI Configuration and configure the iSCSI initiator name as well as an iSCSI Attempt for the iSCSI LUN dedicated to the ESXi image on your iSCSI SAN.
      Note: You must have created an iSCSI LUN for the ESXi image on your iSCSI SAN prior to this step.

      3. Move the cursor down to the Save Changes option of the iSCSI Configuration screen because the F10 option doesn’t seem to save any changes in that screen even though F10 is listed at the bottom of the screen.

      4. Exit back to BIOS and navigate to Advanced > Boot > Boot Configuration. Enable “Internal UEFI Shell” and press F10 to Save and Exit. The NUC will reboot.

      5. Press F2 during POST to enter BIOS again.

      6. In BIOS, navigate to Advanced > Boot > Boot Priority. Drag “UEFI:Built-in EFI Shell” to the very top of the list of boot devices in order to make it the primary boot option.

      7. In BIOS, navigate to Advanced > Boot > Boot Configuration and clear the checkbox next to “Boot USB Devices First”.

      8. In BIOS, navigate to Advanced > Onboard Devices and disable Thunderbolt Controller.
      Note: This is a temporary measure to allow the ESXi installer to succeed. The Thunderbolt Controller can be re-enabled in BIOS once ESXi is installed.

      9. Insert your USB flash drive with the ESXi installer into one of the Skull Canyon NUC’s USB ports.
      Note: You can use the Unetbootin application (available for Windows, OS X, and Linux) or the Rufus application (Windows only, but much faster than Unetbootin) to create a bootable ESXi installer USB flash drive.

      10. Press F10 to Save and Exit. The NUC will reboot.

      11. After POST is completed, the Skull Canyon NUC should boot into EFI Shell. At the top of the EFI Shell screen, you will see all the storage devices enumerated by EFI Shell as follows:
      a. The USB flash drive will be enumerated as fs0 and blk0 – those are two aliases of the same USB flash drive.
      b. iSCSI LUN will be enumerated as blk1

      12. Mount the iSCSI LUN, using the following command:
      mount blk1

      13. Mount the USB flash drive that contains the ESXi installer, using one of the two following commands:
      mount fs0
      mount blk0

      14. Manually launch the ESXi installer from the USB flash drive:
      a. Switch to USB flash drive using the following command:
      fs0:
      b. Change directory to /efi/boot by using the following command:
      cd /efi/boot
      c. Execute the ESXi installer script by using the following command:
      bootx64.efi

      15. Install ESXi
      a. Let ESXi installer load its Welcome to the VMware ESXi screen, where you are prompted to either Continue or Cancel. Press Enter to continue.
      b. On the End User License Agreement screen, press F11 to Accept and Continue. The ESXi installer will scan available devices and will display
      — The USB flash drive that you used to boot the ESXi installer (this drive can be used for installing the ESXi image because by this point, the ESXi installer has fully loaded to the NUC’s RAM).
      — The network boot drive whose UDisk type is listed as “iSCSI Storage”. This is the iSCSI LUN that you mounted from EFI Shell.
      c. Use the down arrow key on the keyboard to select the iSCSI LUN where you want to install the ESXi image.
      d. Complete the ESXi installation as usual.

      16. At the end of ESXi installation, you will be prompted to reboot. Once you confirm the reboot, the NUC will reboot.

      17. As the NUC reboots, press F2 during POST to enter BIOS and navigate to Advanced > Boot > Boot Configuration > Boot Priority. Make sure that the Boot Drive Order section lists the following devices in this order:
      UEFI : LAN : SCSI Disk Device : Part 0 : OS Bootloader
      UEFI : LAN : SCSI Disk Device : Part 1 : OS Bootloader
      UEFI : LAN : SCSI Disk Device : Part 4 : OS Bootloader
      UEFI : LAN : SCSI Disk Device : Part 5 : OS Bootloader
      UEFI : LAN : SCSI Disk Device : Part 7 : OS Bootloader
      UEFI : Built-in EFI Shell
      UEFI : LAN : IP4 Intel(R) Ethernet Connection (H) I219-LM (this is PXE boot via IPv4)
      UEFI : LAN : IP6 Intel(R) Ethernet Connection (H) I219-LM (this is PXE boot via IPv6)
      UEFI: USB:
      Note: If you want to make a bootable USB device to have priority over iSCSI Boot, navigate to Advanced > Boot > Boot Configuration and select the checkbox next to “Boot USB Devices First”.

      18. Navigate to Advanced > Onboard Devices and re-enable the Thunderbolt Controller.

      19. Press F10 to Save and Exit. The NUC will reboot.

      20. As NUC is rebooting, remove the USB drive that you used for installing ESXi.

      21. Once NUC completes POST, it should start booting ESXi from the iSCSI LUN.

      22. Proceed to configure ESXi as you normally would.

      23. If you want to be able to boot another version of ESXi from an iSCSI LUN, in BIOS, navigate to Devices > Add-In Config > iSCSI Configuration and configure another iSCSI Attempt for the iSCSI LUN dedicated to another ESXi image on your iSCSI SAN.
      Note: Each iSCSI LUN can hold only one ESXi image. You can create multiple iSCSI LUNs to hold different versions of ESXi images or ESXi images for different ESXi hosts under the same iSCSI target created specifically for booting OSes from iSCSI LUNs. You can give your iSCSI Boot LUNs descriptive names that reflect the image and the host for which this LUN will be used.

      24. Move the cursor down to the Save Changes option of the iSCSI Configuration screen to save the newly created iSCSI Attempt. This will be Attempt #2.

      25. In the same iSCSI Configuration Utility screen, highlight iSCSI Attempt #1, disable it, and move the cursor down to the Save Changes option.

      26. Repeat the above procedure starting with Step 4 and ending with Step 22.

      27. In order to switch between ESXi installations, enter BIOS and navigate to Devices > Add-In Config > iSCSI Configuration.
      a. Highlight the iSCSI Attempt that points to the LUN containing the ESXi installation you want the Skull Canyon NUC to boot from and enable this iSCSI Attempt.
      b. Highlight every other iSCSI Attempt and disable each of them one by one.
      c. Move the cursor down to the Save Changes option of the iSCSI Configuration screen to commit the changes.
      d. Exit to BIOS, then press F10 to Save and Reboot.
      e. Skull Canyon NUC will boot to the selected iSCSI Attempt which contains the ESXi installation you want to boot from.

      28. You can utilize your iSCSI SAN’s snapshots capability for LUNs to make a snapshot of an ESXi image before making any significant changes to the configuration, installing or uninstalling VIBs, or upgrading ESXi to another version. This snapshot will help you roll back the changes with ease if something goes wrong.
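      To recap steps 11–14 above in one place, the EFI Shell portion boils down to the following commands. The device names fs0/blk0/blk1 reflect the enumeration described in step 11 and may differ on your system:

      ```shell
      # EFI Shell commands (not a POSIX shell) -- device names are examples
      # and depend on how your firmware enumerates storage.
      mount blk1        # mount the iSCSI LUN
      mount fs0         # mount the USB flash drive with the ESXi installer
      fs0:              # switch to the USB flash drive
      cd /efi/boot      # change to the directory holding the boot loader
      bootx64.efi       # launch the ESXi installer
      ```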

      • One more thing: If you have a secondary NIC connected to the Skull Canyon NUC – be it a USB3-based NIC or a Thunderbolt-based NIC – you can select the MAC address of the secondary NIC in the iSCSI Configuration utility screen and perform the configuration of the initiator and the iSCSI Attempt via the secondary NIC. I’m still debating whether I should boot ESXi from an iSCSI target using the on-board Intel I219-LM NIC or using the secondary (Apple Thunderbolt to GigabitEthernet Adapter) NIC. Even though I want to contain my iSCSI storage traffic to the secondary NIC, if I boot ESXi via the secondary NIC, I would have to reconfigure the iSCSI initiator in the BIOS iSCSI Configuration utility in order to boot ESXi whenever the Skull Canyon NUC fails to detect the Apple Thunderbolt to GigabitEthernet Adapter during a reboot. Therefore, I think I will keep booting ESXi with iSCSI boot using the Skull Canyon NUC’s on-board Intel I219-LM NIC.

      • I’ve discovered some caveats with the Skull Canyon NUC’s iSCSI boot. If you are planning to configure the iSCSI boot to use the primary on-board Intel NIC (I219-LM), and you are also using this NIC in ESXi for the Guest VM traffic with different VMs using different VLAN tags, the switch port will have to be configured with the 802.1Q VLAN tagging (which Cisco calls a trunk port). With Cisco switches, the 802.1Q trunk port must be configured with the following command: “spanning-tree portfast edge trunk”.

        Additionally, the iSCSI attempts in the iSCSI Configuration utility in the Skull Canyon NUC’s BIOS must be configured with the following settings:
        — Connection Retry Count [1] (default 0)
        — Connection Establishing Timeout 10000 (default 1000)

        In my testing, if I do not change the “Connection Establishing Timeout” from the default of 1 second to 10 seconds, and also if I do not have the “spanning-tree portfast edge trunk” configured for faster convergence on the STP “forwarding” state, the Skull Canyon NUC fails to connect to the iSCSI target upon the attempted iSCSI boot, and the iSCSI boot fails.
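        For reference, a hedged sketch of the Cisco switch-port configuration described above might look like this. The interface name and the description are placeholders, not taken from my actual setup:

        ```
        ! Illustrative only -- substitute the port the NUC is patched into.
        interface GigabitEthernet1/0/10
         description Skull Canyon NUC uplink (iSCSI boot + tagged VM traffic)
         switchport mode trunk
         spanning-tree portfast edge trunk
        ```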

        —–
        One more caveat: If a previously configured iSCSI boot attempt fails at a later time (for whatever reason – lack of network connectivity, wrong switch port configuration, etc.), the iSCSI boot will continue to fail even once the issue has been corrected, because after an unsuccessful iSCSI boot attempt, the Skull Canyon NUC’s BIOS (at least version 0035) changes the boot order, with the iSCSI LUNs dropping to the bottom of the boot order list. Therefore, once you remediate the iSCSI boot issue (e.g., network connectivity), you must then enter the BIOS and change the boot order so that the iSCSI LUNs are at the top of the boot order list.

  4. Has anyone tried leveraging the Thunderbolt (USB-C) port on the Skull Canyon to get more than one (say four) Network card(s)?

    Would be interesting to know if:

    1. It works
    2. What the performance is like (as I think the CPU needs to crunch this and it might affect VM’s)

    • I’ve just received a newly released Startech Thunderbolt 3 to Thunderbolt adapter and tried to use it with the Apple Thunderbolt to GigabitEthernet adapter to get a secondary NIC on the Skull Canyon NUC. I was not able to get the NUC to detect the secondary NIC.

      This Apple Thunderbolt to Ethernet adapter works fine with the late 2012 Mac Mini under ESXi 5.x, including 5.5 with William’s vghetto-apple-thunderbolder-ethernet.vib driver. According to William, ESXi 6.0 should work with this adapter without any need to install this driver.

      The Thunderbolt controller is enabled in the Skull Canyon NUC’s BIOS. I rebooted the NUC but still the Apple Thunderbolt to GigabitEthernet adapter was not detected by ESXi 6.0 U2. Then, I installed William’s vghetto-apple-thunderbolder-ethernet.vib in ESXi 6.0 U2 and rebooted the NUC again, but still, the secondary NIC provided by the Apple GigabitEthernet adapter was not detected.

      Does anyone have any ideas why this is not working? I was hoping that the Startech Thunderbolt 3 to Thunderbolt adapter would be transparent, and the Apple Thunderbolt to GigabitEthernet adapter would be visible to ESXi, but apparently this is not the case.

      Thank you.

      • Sorry for posting again, but it wouldn’t even need to be the USB-C adapter… You could simply use the USB 3.0 Ethernet adapters that cost as low as £15/$15 each, and leave the TB port free for something else.

        • USB Gigabit Ethernet adapters require a hack that William and his colleagues created – there are two articles that William published on this. The downside is that the upstream throughput is pretty erratic and bottlenecks at a fraction of 1 Gbps.

          The Apple Thunderbolt to Ethernet adapter has been a go-to secondary NIC for Mac Minis for 4 years now. This adapter provides full bandwidth and uses a stock ESXi network driver.

          I need a secondary NIC to be used for iSCSI traffic, so the USB 3 network adapter is not a good option for me.

          What’s unclear is why the Apple adapter is not recognized by ESXi 6.0 U2 with the Startech Thunderbolt 3 to Thunderbolt adapter.

      • It is not quite right that the USB adapters have lower throughput. I know about William’s articles related to the ASIX drivers, and I have also managed to compile drivers for Realtek-based adapters. The adapters work pretty well and have comparable speed to that of the built-in Intel adapters on my NUC.

        I have also written a post describing how the adapters work, with iperf figures. I am not going to spam William’s blog with a link, but if you want to have a look at the article you can contact me on gomesjj at devtty.co.uk — or just hover over my name to see the URL. 😉

        • Jose, I’ve read the posts on your blog. I’ve also read reviews for all of the USB3 to GigabitEthernet adapters that you listed as compatible with the two drivers that you compiled for ESXi 5.1, 5.5, and 6.0.

          However, a percentage of reviews for each of these USB3 to GigabitEthernet adapters mention random NIC disconnects, which require a hot unplug and hot re-plug of the adapter. I am planning to use a secondary NIC on the Skull Canyon NUC for iSCSI traffic, and I cannot afford random disconnects on that NIC. I’ve used the Apple Thunderbolt to GigabitEthernet adapter with Mac Minis for almost 4 years now, so I know those are rock-solid NICs. I am trying to get to the bottom of why ESXi cannot detect the Apple Thunderbolt to GigabitEthernet adapter when it’s plugged into a Thunderbolt 3 port (via the Startech Thunderbolt 3 to Thunderbolt Adapter). I mentioned in another post here the success I had with Windows 10 running on the Skull Canyon NUC detecting the Apple Thunderbolt to GigabitEthernet adapter (with an Intel Thunderbolt driver for Windows). If you have any ideas as to how to get ESXi to detect the Apple Thunderbolt to GigabitEthernet adapter connected to the Skull Canyon NUC via the Startech Thunderbolt 3 to Thunderbolt adapter, please respond here.

          Thank you.

          • That is fair enough and I am happy you resolved the issue (seen your other posts). In my experience the USB adapters have been rock solid — have been using them for ~3 months now without an issue.

            By the way, I also have a LandingZone Thunderbolt dock that I use with my Macbook Pro. This dock has a USB ethernet adapter based on the ASIX chipset, which again, has been extremely reliable over the last two years.

            Anyway, like I said before, I am glad you got the Thunderbolt adapter working.

      • I’ve had partial success with being able to detect and bring up ***In Windows 10*** the Apple Thunderbolt to Gigabit Ethernet adapter connected to the Skull Canyon NUC via the Startech Thunderbolt 3 to Thunderbolt Adapter. I had to download and install the Intel Thunderbolt driver for Windows (7, 8, 8.1, and 10) that Intel released specifically for the Skull Canyon NUC. It took a couple of reboots and a hot-plug for Windows 10 to detect the Apple Thunderbolt to Gigabit Ethernet Adapter. Eventually, Windows 10 detected (with a hot plug) that a Thunderbolt device was plugged in, properly identified the name of this Apple Thunderbolt to Gigabit Ethernet adapter, informed me that it was not supported in Windows, and once I acknowledged that and permitted this device to be connected, Windows 10 happily and automatically installed the driver for the Broadcom chip used in this Apple adapter. The Apple Thunderbolt to GigabitEthernet adapter is now operational in Windows. Before I installed the Intel Thunderbolt driver, Windows was not able to detect the Apple Thunderbolt to GigabitEthernet adapter no matter what I tried. Now, the Apple Thunderbolt to GigabitEthernet adapter shows up in the Windows Device Manager, and survives reboots as well as cold boots.

        I was hoping that the Intel Thunderbolt driver for Windows perhaps flashed the Thunderbolt controller in the Skull Canyon NUC with some firmware update that allowed the NUC to detect Thunderbolt 1 devices plugged into the Startech Thunderbolt 3 to Thunderbolt adapter. However, when I booted the NUC with ESXi 6.0 U2, I could not get the Apple Thunderbolt to GigabitEthernet adapter to appear using the “esxcli network nic list” command or the lspci command. So, not only does ESXi fail to enable this device, but it also doesn’t seem to be able to enumerate it. This is exactly the same issue that I experienced with Ubuntu 16.04 running on the Skull Canyon NUC (with Ubuntu not being able to enumerate the Apple Thunderbolt to GigabitEthernet adapter). This is also the same exact issue I had in Windows 10 ***before*** I installed the Intel Thunderbolt drivers in Windows.

        It’s unclear to me why a driver is needed for Thunderbolt 3 to function in any OS, because my understanding so far has been (at least with Thunderbolt and Thunderbolt 2) that Thunderbolt is just an external PCIe bus. So, the purpose of the Intel Thunderbolt driver for Windows escapes me, but it appears that such a driver may be required by an OS. Perhaps the Thunderbolt driver allows the NUC to distinguish between a USB 3.1 device and a Thunderbolt device plugged into the same USB-C port – I don’t know.

        If someone has any ideas, please let me know by replying here. At least I’ve proven now that Windows 10 running on the Skull Canyon NUC can detect and enable the Apple Thunderbolt to GigabitEthernet adapter via the Startech Thunderbolt 3 to Thunderbolt adapter. The challenge now is to get this to work under ESXi.

        Thank you.

        • Success!!!

          esxcli network nic list
          Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description
          —— ———— —— ———— ———– —– —— —————– —- ——————————————————–
          vmnic0 0000:00:1f.6 e1000e Up Up 1000 Full 00:1f:c6:XX:XX:XX 1500 Intel Corporation Ethernet Connection (2) I219-LM
          vmnic1 0000:09:00.0 tg3 Up Up 1000 Full 98:5a:eb:XX:XX:XX 1500 Broadcom Corporation NetXtreme BCM57762 Gigabit Ethernet

          ——————–

          lspci
          0000:00:00.0 Bridge: Intel Corporation Sky Lake Host Bridge/DRAM Registers
          0000:00:02.0 Display controller: Intel Corporation Sky Lake Integrated Graphics
          0000:00:08.0 Generic system peripheral: Intel Corporation Sky Lake Gaussian Mixture Model
          0000:00:14.0 Serial bus controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller
          0000:00:14.2 Signal processing controller: Intel Corporation Sunrise Point-H Thermal subsystem
          0000:00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1
          0000:00:1c.0 Bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #1 [PCIe RP[0000:00:1c.0]]
          0000:00:1c.1 Bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #2 [PCIe RP[0000:00:1c.1]]
          0000:00:1c.2 Bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #3 [PCIe RP[0000:00:1c.2]]
          0000:00:1c.4 Bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #5 [PCIe RP[0000:00:1c.4]]
          0000:00:1f.0 Bridge: Intel Corporation Sunrise Point-H LPC Controller
          0000:00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC
          0000:00:1f.3 Multimedia controller: Intel Corporation Sunrise Point-H HD Audio
          0000:00:1f.4 Serial bus controller: Intel Corporation Sunrise Point-H SMBus
          0000:00:1f.6 Network controller: Intel Corporation Ethernet Connection (2) I219-LM [vmnic0]
          0000:02:00.0 Generic system peripheral:
          0000:03:00.0 Network controller: Intel Corporation Wireless 8260
          0000:04:00.0 Bridge:
          0000:05:00.0 Bridge:
          0000:05:01.0 Bridge:
          0000:05:02.0 Bridge:
          0000:06:00.0 Generic system peripheral:
          0000:07:00.0 Bridge: Intel Corporation DSL3510 Thunderbolt Controller [Cactus Ridge]
          0000:08:00.0 Bridge: Intel Corporation DSL3510 Thunderbolt Controller [Cactus Ridge]
          0000:09:00.0 Network controller: Broadcom Corporation NetXtreme BCM57762 Gigabit Ethernet [vmnic1]

          ——————-
          It appears that in order to get ESXi 6.0 U2 to detect the Apple Thunderbolt to GigabitEthernet Adapter (seen by ESXi as “Broadcom Corporation NetXtreme BCM57762 Gigabit Ethernet”), at least for the first time, a hot plug of the Apple Thunderbolt to GigabitEthernet adapter into the Startech Thunderbolt 3 to Thunderbolt adapter is required once ESXi is fully loaded, followed by a reboot. It doesn’t seem like a cold plug followed by a cold boot results in ESXi being able to enumerate this Apple Thunderbolt to GigabitEthernet adapter plugged (via the Startech Thunderbolt 3 to Thunderbolt adapter) into the Skull Canyon NUC’s Thunderbolt 3 port.
          ——————

          One of the signs that the adapter will be enumerated and seen by the OS (both with Windows 10 and with ESXi 6.0 U2) is that there’s a message in the upper right corner of the POST screen (during the boot) that the Broadcom adapter is detected via PCI-E. When the OS can’t enumerate this adapter, this message is absent from the upper right corner of the POST screen during the boot.

          ——————
          The detection of the Apple Thunderbolt to Gigabit Ethernet adapter by ESXi survived a warm reboot (using either the EHC GUI or the “reboot” command in ESXi CLI). However, it didn’t survive the shutdown of ESXi (either using the EHC GUI or the “halt” command in ESXi CLI). Neither did it survive a complete power disconnect of the power adapter from the NUC. Once I shut down ESXi (or later disconnected the power adapter from the NUC), and then powered the NUC back up, the message about the Broadcom adapter being detected as PCI-E was absent from the upper left corner of the POST screen, and once ESXi loaded, there was only one NIC present (the built-in I219-LM Intel network adapter).

          Interestingly enough, a simple reboot of ESXi (from the ESXi shell) resulted in the Apple Thunderbolt to Gigabit Ethernet adapter being detected during the NUC’s POST, and the Broadcom Corporation NetXtreme BCM57762 Gigabit Ethernet showed up in ESXi.

          ——————
          It appears that the Apple Thunderbolt to Gigabit Ethernet adapter is not detected after a *cold* boot ONLY if I power down the NUC using the “halt” ESXi CLI command or shut down ESXi from EHC without first putting ESXi in Maintenance Mode. If I put ESXi in Maintenance Mode and then shut it down from EHC, the Apple Thunderbolt to Gigabit Ethernet adapter is detected upon a *cold* boot.
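          For anyone scripting this, the clean power-down described above can be expressed from the ESXi Shell roughly as follows (a sketch of the sequence, not a verified script; the --reason text is arbitrary, and the second command is the CLI equivalent of a clean power-off):

          ```shell
          # Enter Maintenance Mode first, so the Thunderbolt adapter is
          # detected on the next cold boot (per the behavior described above)
          esxcli system maintenanceMode set --enable true

          # Power the host off cleanly, with a logged reason
          esxcli system shutdown poweroff --reason "planned lab power-down"

          # After the next boot, exit Maintenance Mode again:
          # esxcli system maintenanceMode set --enable false
          ```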

          • Very interesting findings Telecastle! Really appreciate you sharing with everyone and some interesting behaviors indeed. At least we now know it officially works 🙂

        • Here’s the latest on the detection of the Apple Thunderbolt to GigabitEthernet adapter by Skull Canyon NUC. In fact, this may be the case with other Thunderbolt devices connected to Skull Canyon NUC, but I don’t have any other Thunderbolt devices to test besides the Apple Thunderbolt to GigabitEthernet adapters:

          1. Cold Boot (power disconnected from the NUC):
          When power is applied to the NUC, the NUC automatically powers on but fails to discover the connected Thunderbolt adapter. It appears that in order to get the NUC to detect the connected Thunderbolt to Gigabit Ethernet adapter from a cold boot, you have to immediately turn the NUC off with the power button. Wait for about 5 seconds (make it 10 seconds to be safe) and then use the power button to turn on the NUC again. The NUC will now detect the Thunderbolt to Gigabit adapter, and you will see a message to that effect displayed in the upper left corner of the NUC’s POST screen. This seems to be caused by the failure of the NUC to supply power to the Thunderbolt adapter after a cold boot. Once the power button is pressed to turn the NUC off, the NUC seems to apply power to the Thunderbolt adapter within a few seconds, which is evident from the link LED on the switch port turning on exactly at this point.

          2. Hot Unplug: If the Apple Thunderbolt to Gigabit Ethernet adapter is unplugged while the NUC is running, plugging it back in doesn’t result in the Thunderbolt to Gigabit Ethernet adapter being detected. For the Thunderbolt adapter to be detected again, a warm-boot scenario must be invoked. In ESXi this can be achieved by rebooting the NUC. Alternatively, you can power off the NUC with the power button and power it back on.

          3. Hot Plug: If the Apple Thunderbolt to GigabitEthernet adapter is plugged in while the NUC is running, the Thunderbolt adapter will not be detected. For the Thunderbolt adapter to be detected, a warm-boot scenario must be invoked. In ESXi this can be achieved by rebooting the NUC. Alternatively, you can power off the NUC with the power button and power it back on.

          4. The following are BIOS Power settings that affect the detection of the Apple Thunderbolt to GigabitEthernet adapter in Skull Canyon NUC:

          In Skull Canyon NUC BIOS, Navigate to Advanced > Power

          Deep S4/S5: With this setting enabled, when the NUC is shut down from ESXi, it loses the Thunderbolt adapter upon the subsequent boot. Therefore, this setting should be disabled if you want to be able to shut down ESXi and then power up the NUC without losing the Apple Thunderbolt to Gigabit Ethernet Adapter. If you keep the Deep S4/S5 setting enabled, you will have to follow the same procedure to get the NUC to detect the Thunderbolt adapter as in the cold-boot scenario (described above).

          Native ACPI OS PCIe Support: This setting can be enabled. When ESXi is shut down with this setting enabled, the NUC powers off and the fan no longer runs. The NUC can be turned on with the power button, and the Thunderbolt adapter is detected properly.

          PCIe ASPM Support: should be disabled. When this mode is enabled, shutting down the NUC from ESXi is inconsistent. Sometimes it works fine, and other times the NUC’s power button LED turns off but the NUC continues to spin its fan, and the connected display continues to show the ESXi screen image.

  5. Look at this Gigabyte Intel 10Gb to TB3 adapter… this looks promising!

    GP-T3X550-AT2
    http://www.hardwareluxx.com/index.php/news/hardware/vgacards/39341-gigabyte-shows-external-gpu-solution-with-thunderbolt-3.html
    “A smaller box is Gigabyte’s GP-T3X550-AT2, which comes with a 10 Gbit/s network card installed with two RJ-45 connectors. An Intel X550-AT2 is used as a controller. As a USB 3.1 Gen2 hub Gigabyte showed the GP T3U3.1×8, which offers a total of eight very fast USB 3.1 Gen2 ports, four Type-A and four Type-C.”

    I got my Skull box yesterday and I am testing it with ESXi right now… with the above-mentioned adapter I can get 10Gb speeds 🙂

    • This is very cool to hear, thanks for sharing! Do you have a link to where this unit can be purchased and the price? I think other readers would definitely be interested in this. Any chance you can post screenshots of the ESXi Embedded Host Client w/10GbE speeds? 🙂

    • Since this device has just been announced at Computex, I don’t believe it is available to buy anywhere yet. Did you mean to say that you “could” get 10Gb speeds?

      • Hi! Sorry for replying this late… I was busy 😉 Yes, it has just been announced, so it is sadly not available yet.

  6. The follow-up on my fan noise comment above:

    I’ve modified default settings in the Skull Canyon NUC’s BIOS (Cooling tab) as follows:
    Minimum Duty Cycle: 15%
    Minimum Temperature: 76C
    Duty Cycle Increment: 5%

    I’ve also lowered the PL1/PL2 levels by setting the “Processor Power Efficiency Policy” in the Power screen of the Intel Visual BIOS to “Low Power”. I believe this lowers the PL1/PL2 levels to 35 W/45 W.

    The above settings resulted in the Skull Canyon NUC being able to run 12 VMs (Cisco UC server VMs) with minimal fan noise. The CPU utilization stays around 30%, and even though I can detect the fan noise from about 1 foot away, I cannot hear it from 2 feet away. The NUC has been running 12 VMs for a few days now, and the fan noise is at a level that no longer bothers me.

    • Andy, I read that post on your blog, but I didn’t quite understand how you set an MTU higher than 1500 on the Skull Canyon NUC’s on-board NIC (Intel I219-LM). It appears from your post that you were able to set the vmk’s MTU to 1600 and then to 9000, which is the MTU used by the VMware virtual NIC. However, without being able to set the vSwitch’s MTU to the same (or a higher) value than the vmk’s MTU, the VMware vSwitch most likely fragments the Jumbo Frames that the vmk generates to fit them into the vSwitch’s default MTU of 1500.

      Please clarify if I’m mistaken here.

        • Just installed and booted up and have been messing around with this on my new NUC Skull Canyon, and the answer appears to be that jumbo frames are OK; they just can’t go up to 9000. More specifically, an MTU of 8996 is the limit.

        Playing around with the USB to Ethernet adapter (Orico UTR-U3), the limit seemed to be an MTU of 4088.

        [root@killjoy3:~] esxcli network vswitch standard set -m 8996 -v vSwitch0
        [root@killjoy3:~] esxcli network vswitch standard set -m 9000 -v vSwitch0
        Unable to set MTU to 9000 the following uplinks refused the MTU setting: vmnic0
        [root@killjoy3:~] esxcli network vswitch standard set -m 8997 -v vSwitch0
        Unable to set MTU to 8997 the following uplinks refused the MTU setting: vmnic0
        [root@killjoy3:~] esxcli network vswitch standard set -m 8996 -v vSwitch0
        [root@killjoy3:~] esxcli network vswitch standard list
        vSwitch0
        Name: vSwitch0
        Class: etherswitch
        Num Ports: 1792
        Used Ports: 4
        Configured Ports: 128
        MTU: 8996
        CDP Status: listen
        Beacon Enabled: false
        Beacon Interval: 1
        Beacon Threshold: 3
        Beacon Required By:
        Uplinks: vmnic0
        Portgroups: VM Network, Management Network

        vSwitch1
        Name: vSwitch1
        Class: etherswitch
        Num Ports: 1792
        Used Ports: 2
        Configured Ports: 1024
        MTU: 4088
        CDP Status: listen
        Beacon Enabled: false
        Beacon Interval: 1
        Beacon Threshold: 3
        Beacon Required By:
        Uplinks: vusb0
        Portgroups:
        [root@killjoy3:~] lsusb
        Bus 001 Device 004: ID 8087:0a2b Intel Corp.
        Bus 001 Device 003: ID 413c:2003 Dell Computer Corp. Keyboard
        Bus 001 Device 002: ID 18a5:0302 Verbatim, Ltd Flash Drive
        Bus 002 Device 002: ID 0b95:1790 ASIX Electronics Corp. AX88179 Gigabit Ethernet
        Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        [root@killjoy3:~] esxcli network vswitch standard set -m 4088 -v vSwitch1
        [root@killjoy3:~] esxcli network vswitch standard set -m 4089 -v vSwitch1
        Unable to set MTU to 4089 the following uplinks refused the MTU setting: vusb0
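        The by-hand probing above can be automated with a simple binary search. A sketch for the ESXi Shell (the function names are mine, not part of ESXi; probe_cmd wraps the same esxcli call shown above, and the search assumes the lower bound is always accepted):

        ```shell
        # Returns success if the uplink accepts the given MTU
        # Usage: probe_cmd <vswitch> <mtu>
        probe_cmd() {
            esxcli network vswitch standard set -m "$2" -v "$1" >/dev/null 2>&1
        }

        # Binary-search the largest MTU in [lo, hi] that the uplink accepts
        # Usage: max_mtu <vswitch> <lo> <hi>   e.g. max_mtu vSwitch0 1500 9000
        max_mtu() {
            vswitch=$1; lo=$2; hi=$3
            while [ "$lo" -lt "$hi" ]; do
                mid=$(( (lo + hi + 1) / 2 ))
                if probe_cmd "$vswitch" "$mid"; then
                    lo=$mid
                else
                    hi=$(( mid - 1 ))
                fi
            done
            echo "$lo"
        }
        ```

        Against the on-board I219-LM this should converge on 8996, and on 4088 for the AX88179 USB adapter, matching the by-hand results above.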

  7. This is a summary of what works and what doesn’t work in ESXi with the Skull Canyon NUC:
    1. ESXi version: standard images of both ESXi 5.5 U3b and ESXi 6.0 U2 work without any additional VIBs required. Wi-Fi (and probably Bluetooth) do not work in ESXi but could be used for DirectPath I/O pass-through to the VMs that support them.

    2. Primary on-board NIC (Intel I219-LM): I have not been able to configure Jumbo Frames on the vSwitch connected to this NIC, either via the GUI (EHC and vSphere Client) or via ESXCLI. My lab setup is such that I use the primary NIC for the VM traffic and the secondary NIC for the iSCSI storage traffic to the iSCSI SAN; therefore, Jumbo Frames on the primary on-board NIC are not a requirement in my setup.

    3. Secondary NIC: I’ve had success using the Apple Thunderbolt to Gigabit Ethernet adapter, connected to the Skull Canyon NUC’s Thunderbolt 3 interface via the Startech Thunderbolt 3 to Thunderbolt adapter, as a secondary NIC. There is a caveat that this Thunderbolt to Gigabit adapter is not detected upon a cold boot, but it’s easy to remediate this issue (see my post above). There are also a few Power settings in the Skull Canyon NUC’s BIOS that need to be modified for this secondary NIC to work properly in various reboot and power-off scenarios. William and others have had success compiling drivers for USB 3.0 to Gigabit Ethernet adapters, but I decided to stay away from that route and continue to use the Apple Thunderbolt to Gigabit Ethernet adapter, which has been very solid and dependable in my lab environment with 2012 Mac Minis. In the future, I’m hoping that Apple and other companies will release Thunderbolt 3 to Gigabit Ethernet adapters, so the Startech Thunderbolt 3 to Thunderbolt adapter will no longer be needed.

    4. CPU fan speed: With the default Cooling settings in the Skull Canyon NUC BIOS, the CPU fan is too loud for my taste. However, with a simple modification of the Cooling settings, the Skull Canyon NUC runs 12 VMs in my lab environment with CPU utilization under 30% and the CPU fan not loud enough to be heard from 2 feet away. One caveat here is that when VMs power up, the Skull Canyon NUC’s CPU utilization gets pretty high, so it’s important to space out the startup of each subsequent VM. In my testing with Cisco Unified Communications servers running as VMs (these are Linux-based appliances), it appears that the safe value for “start delay” is around 5 minutes. The same is true if automatic start is configured. For powering down, it appears that the “stop delay” can be kept at the default of 2 minutes. Therefore, if the NUC needs to be power-cycled (for whatever reason), it would take the NUC in my environment about 25 minutes to power down and a little over an hour to power up and start all 12 VMs. This is the price to pay for using a small-form-factor consumer-grade computer in a lab environment to run 12 VMs. However, if the NUC is connected to a UPS, and if you are not going to upgrade ESXi on it too often, you will only experience this inconvenience a few times per year. In fact, my Mac Minis have been on for over a year now without a reboot.
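    For reference, the staggered autostart described above can also be configured from the ESXi Shell with vim-cmd rather than the GUI (a sketch; VM IDs come from vmsvc/getallvms, and the argument order, as I understand it, is VMID, StartAction, StartDelay, StartOrder, StopAction, StopDelay, WaitForHeartbeat):

    ```shell
    # Enable the autostart manager on the host
    vim-cmd hostsvc/autostartmanager/enable_autostart true

    # Find the VM IDs
    vim-cmd vmsvc/getallvms

    # Example: VM ID 9 with a 5-minute (300 s) start delay and a clean
    # 2-minute (120 s) guest shutdown on host power-off
    vim-cmd hostsvc/autostartmanager/update_autostartentry 9 "PowerOn" 300 1 "guestShutdown" 120 "systemDefault"
    ```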

    5. iSCSI Boot works well with the Skull Canyon NUC, as long as you follow the directions I posted above. To be sure, configuring iSCSI boot for ESXi is a finicky process, and it requires patience, accuracy, and deliberation. It may also be necessary to change the switch port configuration (especially with 802.1Q VLAN-tagging switch ports). Please see my post above for details. The benefit of using iSCSI Boot for ESXi is that you can purchase a Skull Canyon NUC without SSDs, and you do not need a USB flash drive plugged into the Skull Canyon NUC. This is the ultimate solution for deploying thin ESXi hosts in your lab. I utilize a QNAP TS-563 NAS to host iSCSI LUNs for VMs running on my ESXi hosts, and now that I have the Skull Canyon NUC (which supports iSCSI boot), I have also created specific LUNs on a QNAP iSCSI target to host different ESXi images. I can now easily switch between different ESXi versions on the Skull Canyon NUC without having to keep a drawer full of USB flash drives for each image.

    Note: In case you are considering an iSCSI NAS vs. local SSDs (or spinning drives) in your ESXi hosts, make sure you get a NAS that supports read/write SSD caching, and get two SSDs in the NAS to be used as a read/write SSD cache for all of your iSCSI traffic. In my lab, I use two Samsung 850 Pro 250 GB SSDs in RAID 1 to cache all iSCSI traffic from/to my ESXi hosts (read/write cache). Without SSD caching enabled for iSCSI traffic, the NAS will quickly become a bottleneck, unable to support an iSCSI-based datastore for as many VMs as your ESXi hosts can run (due to the very low IOPS yield of the spinning drives in the NAS).
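    To put rough numbers on that bottleneck, a back-of-envelope sketch (the per-device figures below are generic assumptions, roughly 100–150 IOPS for a 7.2K SATA spinning drive and tens of thousands for a SATA SSD, not measurements from this lab):

    ```shell
    # Assumed, generic figures -- not measurements
    SPIN_IOPS=120      # a typical 7.2K SATA spinning drive
    SSD_IOPS=40000     # a typical SATA SSD such as an 850 Pro
    PER_VM_IOPS=80     # a modest steady-state demand per VM

    # How many VMs a datastore backend can serve at the assumed per-VM demand
    # Usage: vms_supported <backend_iops>
    vms_supported() {
        echo $(( $1 / PER_VM_IOPS ))
    }

    # 5 spinning drives (optimistically ignoring RAID write penalties)
    vms_supported $(( 5 * SPIN_IOPS ))   # only a handful of VMs
    # The same NAS front-ended by an SSD read/write cache
    vms_supported "$SSD_IOPS"            # hundreds of VMs
    ```

    The exact numbers matter less than the two orders of magnitude between the spinning-drive and SSD-cached cases.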

    6. After two weeks of testing, I’ve arrived at the decision that the Skull Canyon NUC is fit to replace the 2012 quad-core i7 Mac Minis in my lab. In fact, I’ve just decommissioned one 2012 Mac Mini and replaced it with the 2016 Skull Canyon NUC, which can run twice as many VMs as the 2012 Mac Mini can. Unfortunately, the build and the feel of the 2016 Skull Canyon NUC are much inferior to the Mac Mini’s, so it has been quite hard for me to decide to decommission my Mac Minis. On the other hand, I’m very pleased with the reliability of my lab and the responsiveness of all 12 VMs running on the Skull Canyon NUC. Over the years, I’ve experienced all sorts of issues with VMs crashing in my lab due to low IOPS yields provided either by internal-to-the-host HDDs (including hardware RAID) or by several different NAS models that I owned and used as iSCSI SANs for VM datastores. Additionally, with the Mac Mini’s RAM limitation of 16 GB, I’ve experienced slow response from VMs as well as VM crashes when I tried to run more than 6 VMs per Mac Mini. With the Skull Canyon NUC and the QNAP TS-563 NAS (with SSD caching for iSCSI traffic), I’ve finally built a lab that is robust and very responsive. The VMs in my lab behave the same way they behave in production environments, where they run on much more powerful hardware than the Skull Canyon NUC. In the future, I plan to purchase a few more Skull Canyon NUCs for my lab to be able to run significantly more VMs concurrently.

  8. William, first of all, thank you very much for sharing your experience and knowledge with us.
    I would like to know if someone tested the Plugable Flagship Thunderbolt 3 Dock with Thunderbolt 3 port?

  9. This sounds great. I’m wondering if there is any hope for virtualized graphics on this platform. I know the only approved integrated GPU thus far is a Xeon E3 part with Iris Pro graphics. Wondering if this would be close enough to enable…

  10. With a Thunderbolt dock, like the ones from Startech or Gigabyte, will I be able to plug a GPU card into the external box and do a passthrough into ESXi? Like a passive transparent link. Or would a PCIe extender to an external case be better (like Sonnet Technology or the Netstor NA255A)?

  11. Hi

    Did anybody try running labs under Windows 7/10 with Workstation hosting ESXi + vCenter? Wouldn’t this setup bypass all the boot-time hiccups? Please do share if someone has this running.

    Thanks

  12. After seeing all the comments about GPU and 10GbE, I think a warning would be in place. All I/O on Skull Canyon goes over the DMI link to the PCH, which equals PCIe 3.0 x4. So the PCIe links from the CPU are not used, and there is 3940 MB/s to share between the two M.2 slots, TB, USB, SATA, LAN, and SD. A single SM961 NVMe SSD alone goes up to 3200 MB/s, and dual 10GbE on TB goes up to 2200 MB/s.

    Don’t get me wrong, 3940 MB/s is plenty for a home lab, but GPU + dual NVMe + 10GbE is probably a bad idea; you might want to look at a Xeon-D for more I/O.
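    A quick worked check of the math above, using the figures quoted in the comment (DMI 3.0 being equivalent to PCIe 3.0 x4, roughly 3940 MB/s usable):

    ```shell
    DMI_MBS=3940       # usable DMI 3.0 bandwidth (~PCIe 3.0 x4), as quoted above
    NVME_MBS=3200      # peak sequential throughput of a single SM961
    TENGBE_MBS=2200    # dual 10GbE over Thunderbolt, as quoted above

    # Aggregate demand from just one NVMe drive plus dual 10GbE
    DEMAND=$(( NVME_MBS + TENGBE_MBS ))
    if [ "$DEMAND" -gt "$DMI_MBS" ]; then
        echo "oversubscribed by $(( DEMAND - DMI_MBS )) MB/s"
    fi
    ```

    So a single NVMe drive plus dual 10GbE already exceeds the DMI budget, before a GPU or a second NVMe drive is even considered.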

  13. I’m curious whether anyone has been successful in passing the Iris Pro Graphics 580 through to a VM. I’ve installed ESXi 6.0u2. The card is being passed through to a Win10 x64 VM, but Windows reports the following about this display adapter: “Windows has stopped this device because it has reported problems. (Code 43)”.

    I heard that GPU passthrough is working fine on XenServer, but I’d like to use ESXi 😉
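    One hedged suggestion for the Code 43 error: a commonly tried .vmx tweak, best known from NVIDIA passthrough and not verified on the Iris Pro 580, is to hide the hypervisor CPUID bit from the guest. With the VM powered off, add to its .vmx file:

    ```
    hypervisor.cpuid.v0 = "FALSE"
    ```

    Treat this as a sketch to experiment with, not a confirmed fix for this GPU.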

    • I also have a problem with a shutdown from ESXi. The NUC doesn’t power off after a shutdown. Known issue?

      • After flashing the BIOS to the latest available release (37) and disabling the GPU passthrough from ESXi, the machine is being powered off correctly.
        I think that enabling GPU passthrough leads to problems with a correct shutdown.

  14. Hi, are you installing this as a type 2 hypervisor or a type 1? I’m trying to install ESXi as a type 1 hypervisor and wanted to know if there was any difference in the implementation?

Thanks for the comment!