
virtuallyGhetto


vSphere 7.0

ESXi 7.0 Update 2 Upgrade Issue – Failed to load crypto64.efi

03/10/2021 by William Lam 25 Comments

I started to notice yesterday that a few folks in the community were running into the following error after upgrading their ESXi hosts to the latest 7.0 Update 2 release:

Failed to load crypto64.efi

Fatal error: 15 (Not Found)

Upgrading my #VMware #homelab to #vSphere7Update2 is not going so well. 🙁 #vExpert pic.twitter.com/pGOlCGJIOF

— Tim Carman (@tpcarman) March 10, 2021

UPDATE (03/13/2021) - It looks like VMware has just pulled the ESXi online/offline depot and has updated KB 83063 to recommend that customers NOT upgrade to ESXi 7.0 Update 2. A new patch is actively being developed, and customers should hold off upgrading until it is made available.

UPDATE (03/10/2021) - VMware has just published KB 83063 which includes official guidance relating to the issue mentioned in this blog post.

Issue

It was not immediately clear to me how folks were reaching this state, so I reached out to a few people in the community to better understand their workflow. It turns out that the upgrade was being initiated from vCenter Server using vSphere Update Manager (VUM), applying a custom ESXi 7.x Patch baseline to remediate. Upon reboot, the ESXi host would then hit the error shown above.


Interestingly, I personally have only used Patch baselines for applying ESXi patches (e.g. 6.7p03, 7.0p01) and never for major ESXi upgrades; I normally import the ESXi ISO and create an Upgrade baseline. At least for the couple of folks I spoke with, using a Patch baseline is something they have done for some time, and it had never given them issues, whether for a patch or a major upgrade release.

Workaround

I also had some folks internally reach out to me regarding this issue and provide a workaround. At the time, I did not have a good grasp of what was going on. It turns out the community also figured out the same workaround, including how to recover an ESXi host that hits this error, since you cannot simply go through the normal recovery workflow.

For those hitting the error above, you just need to create a bootable USB key with the ESXi 7.0 Update 2 ISO using Rufus or UNetbootin. Boot the ESXi 7.0 Update 2 installer and select the upgrade option, which will fix the host.
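As an alternative to remediating through VUM at all, the upgrade can also be applied directly from the ESXi Shell using the full offline depot bundle. A minimal sketch, with the depot path as a placeholder and the profile name taken from your own depot:

  # List the image profiles contained in the offline depot (path is a placeholder)
  esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2-depot.zip

  # Apply the "standard" profile from the depot, then reboot the host
  # (profile name is an example -- use the exact name from the list output above)
  esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0U2-depot.zip -p ESXi-7.0U2-17630552-standard
  reboot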

To prevent this from happening, instead of creating or using a Patch baseline, create an Upgrade baseline using the ESXi 7.0 Update 2 ISO. You will first need to go to the Lifecycle Manager management interface in vCenter Server and, under "Imported ISOs", import your image.


Then create an ESXi Upgrade baseline, select the desired ESXi ISO image and use this baseline for your upgrade.


I am not 100% sure, but I believe the reason for this change in behavior is mentioned in the ESXi 7.0 Update 2 release notes under the "Patches contained in this Release" section, which someone pointed me to. In any case, for major upgrades I would certainly recommend using an Upgrade baseline, as that is what I have always used, even when I was a customer back in the day.


Filed Under: ESXi, vSphere 7.0 Tagged With: vSphere 7.0 Update 2

VCSA 7.0 Update 2 Upgrade Issue – Exception occurred in install precheck phase

03/09/2021 by William Lam 16 Comments

Like most folks, I was excited about the release of vSphere 7.0 Update 2 and was ready to upgrade my personal homelab, which was running on vSphere 7.0 Update 1c. However, after I started my VCSA upgrade in the VAMI UI, it quickly failed with the following error message: Exception occurred in install precheck phase

Joy … I just attempted to upgrade my VCSA (7.0u1c) in my personal homelab to #vSphere70Update2 and ran into “Exception occurred in install precheck phase” … pic.twitter.com/4mkvxHxdRl

— William Lam (@lamw) March 9, 2021

Given that the release had GA'ed less than an hour earlier and everyone was probably hammering the site, I figured I would wait and then try again.

[Read more...] about VCSA 7.0 Update 2 Upgrade Issue – Exception occurred in install precheck phase


Filed Under: VCSA, vSphere 7.0 Tagged With: vcsa, vSphere 7.0 Update 2

Apple NVMe driver for ESXi using the new Community NVMe Driver for ESXi Fling

02/23/2021 by William Lam 41 Comments

VMware has been making steady progress on enabling both the Apple 2018 Mac Mini 8,1 and the Apple 2019 Mac Pro 7,1 for our customers over the past couple of years. These enablement efforts have had their challenges, including the lack of direct hardware access for our developers and supporting teams due to the global pandemic; the lack of participation from Apple has certainly not made things easier, either.

Today, I am happy to share that we have made some progress on enabling ESXi to see and consume the local Apple NVMe storage device found in the recent Apple T2-based Mac systems such as the 2018 Mac Mini and 2019 Mac Pro. There were a number of technical challenges the team had to overcome, especially since the Apple NVMe is not just a consumer-grade device; it also does not follow the standard NVMe specification that you would normally see in most typical NVMe devices.

This meant there was a lot of poking and prodding to reverse engineer the behavior of the Apple NVMe to better understand how the device works, which often led to sudden reboots or PSODs. Because the Apple NVMe is a consumer device, the team also had to come up with a number of workarounds to enable ESXi to consume it. The implementation is not perfect; for example, we do not have native 4Kn support for SSD devices within ESXi, and we had to fake/emulate a non-SSD flag to work around some of the issues. From our limited testing, we have not observed any significant impact to workloads when using this driver, and several internal VMware teams have already been using it for a couple of months now without reporting any issues.

A huge thanks goes out to Wenchao and Yibo from the VMkernel I/O team who developed the initial prototype which has now been incorporated into the new Community NVMe Driver for ESXi Fling.

Caveats

Before folks rush out to grab and install the driver, it is important to be aware of a couple of constraints that we have not been able to work around yet.

  1. ESXi versions newer than ESXi 6.7 Patch 03 (Build 16713306) are currently NOT supported and will cause ESXi to PSOD during boot up.
  2. The onboard Thunderbolt 3 ports do NOT function when using the Community NVMe driver and can cause ESXi to PSOD if activated.

Note: For detailed ESXi version and build numbers, please refer to VMware KB 2143832

VMware Engineering has not been able to pinpoint why the ESXi PSOD is happening. For now, this is a constraint to be aware of, which may impact anyone who requires the Thunderbolt 3 ports for additional networking or storage connectivity.
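Before installing the driver, you can confirm the exact ESXi version and build you are running directly from the ESXi Shell:

  # Report the running ESXi version and build number
  vmware -vl

  # esxcli reports the same information in a structured form
  esxcli system version get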

With that out of the way, customers can either incorporate the Community NVMe Driver for ESXi offline bundle into a new ESXi Image Profile (using the vSphere Image Builder UI/CLI), export the image as an ISO and install it on either a Mac Mini or Mac Pro, or manually install the offline bundle after ESXi has been installed over USB. Upon reboot, the local Apple NVMe will then be visible for VMFS formatting.
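For the manual route, a minimal sketch of installing the offline bundle from the ESXi Shell is shown below; the datastore path and bundle filename are placeholders for wherever you upload the Fling download:

  # Community drivers are typically CommunitySupported, so the host
  # acceptance level may need to be lowered first
  esxcli software acceptance set --level=CommunitySupported

  # Install the offline bundle (use an absolute path to the uploaded zip)
  esxcli software vib install -d /vmfs/volumes/datastore1/nvme-community-driver-offline-bundle.zip

  # Reboot for the driver to load; the Apple NVMe should then be visible
  reboot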

Here is a screenshot of ESXi 6.7 Patch 03 installed on my 2018 Mac Mini with the Apple NVMe formatted with VMFS and running a macOS VM.


Filed Under: Apple, ESXi, vSphere 6.7, vSphere 7.0 Tagged With: apple, mac mini, mac pro, NVMe

New Community Networking Driver for ESXi Fling

02/17/2021 by William Lam 13 Comments

I am super excited to announce the release of a new Community Networking Driver for ESXi Fling! The idea behind this project started about a year ago when we released an enhancement to the ne1000 driver as a community update, which enabled ESXi to recognize the onboard network adapter for the Intel 10th Gen (Frost Canyon) NUC. Although the Intel NUC is not an officially supported VMware platform, it is extremely popular amongst the VMware community. Working with the awesome Songtao, we were able to release this driver early last year for customers to take advantage of the latest Intel NUC release.

At the time, I knew that this would not be the last time we dealt with driver compatibility. We definitely wanted an easier way to distribute the various community networking drivers, packaged into a single deliverable that customers can easily consume, and hence this project was born. In fact, it was quite timely, as I had just received engineering samples of the new Intel NUC 11 Pro and Performance (Panther Canyon and Tiger Canyon) at the end of 2020, and work was needed before we could enable the onboard 2.5GbE (multi-gigabit) network adapter, which is a default component of the new Intel Tiger Lake architecture.

As reported back in early January, Songtao and his colleague Shu were successful in getting ESXi to recognize the new 2.5GbE network adapter, and that work has also been incorporated into this new Fling. In addition, we started to receive reports from customers that after upgrading to newer ESXi 7.0 releases, the onboard network adapters for the Intel 8th Gen NUC were no longer functioning. In an effort to help customers with this older platform, we have also updated the original community ne1000e driver to include the relevant PCI IDs within this Fling.


The new Community Networking Driver for ESXi is for PCIe-based network adapters and currently contains the following two driver modules:

  • igc-community - which adds support for Intel 11th Gen NUCs and any other hardware platform that uses the same 2.5GbE devices
  • e1000-community - which adds support for the Intel 8th Gen NUC and any other hardware platform that uses the same 1GbE devices

For a complete list of supported devices (VendorID/ProductID), please take a look at the Requirements tab on the Fling website. As with any Fling, this is being developed and supported in our spare time. In the future, we may consider adding other types of devices based on feedback from the broader community. I know Realtek-based PCIe NICs are something many have been asking about, and as mentioned back in this blog post, I have been engaged with the Realtek team; hopefully in the near future we may see an ESXi driver that can support some of the more popular devices in the community. If there are other PCIe-based networking adapters that could fit the Fling model, feel free to leave a comment on the Fling website and we can evaluate them as time permits.
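To check whether your hardware matches one of the supported VendorID/DeviceID pairs, you can list the PCI devices directly on the host, and after installing the Fling, confirm which driver has claimed each NIC:

  # List all PCI devices along with their Vendor ID / Device ID values
  esxcli hardware pci list

  # Show physical NICs and the driver currently bound to each one
  esxcfg-nics -l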


Filed Under: ESXi, Home Lab, vSphere 7.0 Tagged With: igc, Intel NUC, ne1000e

GPU passthrough with ESXi on the Apple 2019 Mac Pro 7,1

12/23/2020 by William Lam 12 Comments

The expandability of the Apple 2019 Mac Pro (7,1) has been the primary reason VMware customers have been so excited for this new platform for virtualizing macOS on ESXi. The most common request that I hear from customers is for GPU passthrough.

Although VMware does not officially support GPU passthrough, even for the existing Apple hardware systems on the VMware HCL, this has been a topic I have been keeping an eye on, especially in terms of what the VMware community is doing in this space.

My intention for this blog post is to provide a resource for the community, capturing the successes and failures when attempting GPU passthrough on a 2019 Mac Pro. For those who are interested and have capable hardware, you may want to start with the VMware HCL for GPU passthrough devices listed under Virtual Dedicated Graphics Acceleration (vDGA). These may be your best chance to successfully pass through a GPU that will be recognized by either a macOS or Linux/Windows guest operating system.
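For reference, passthrough for a PCI device can be toggled from the ESXi Shell as well as from the vSphere UI. A sketch assuming the esxcli pcipassthru namespace available in ESXi 7.0, with the PCI address below as a placeholder for your GPU:

  # List PCI devices and whether passthrough is currently enabled
  esxcli hardware pci pcipassthru list

  # Enable passthrough for the GPU by its PCI address (address is a placeholder)
  esxcli hardware pci pcipassthru set -d 0000:03:00.0 -e true

A reboot of the host is typically required before the device can be assigned to a VM.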

If you would like to share your experiences, feel free to leave a comment or reach out by filling out the contact form.

Disclaimer: Although ESXi installs and runs on the Apple 2019 Mac Pro 7,1, it is currently not certified on the VMware HCL. There is no timeline for certification due to challenges with COVID-19.

[Read more...] about GPU passthrough with ESXi on the Apple 2019 Mac Pro 7,1


Filed Under: Apple, vSphere 7.0 Tagged With: apple, GPU, mac pro

History of Cross vCenter Workload Migration Utility and its productization in vSphere 7.0 Update 1c (p02)

12/17/2020 by William Lam 17 Comments

I am super excited to share that the popular Cross vCenter Workload Migration Utility Fling has been officially productized and is now available with the release of vSphere 7.0 Update 1c (Patch 02)! The official name for this capability is Advanced Cross vCenter vMotion; would that mean the shorthand is Ax-vMotion? 🤔 In any case, this has literally been five years in the making: going from an idea I shared back in 2015 to a fully integrated, native vSphere feature in 2020 is pretty wild!

While reflecting back and writing this blog post, I came across this tweet from our CEO, Pat Gelsinger, which I thought was quite fitting:

I love this. Thanks for sharing. To me, execution is everything. It's much easier to have a good idea than it is to actually get it done. https://t.co/DAPdip6A8e

— Pat Gelsinger (@PGelsinger) November 24, 2020

I have learned over the years that simply having a good idea is not enough. It takes hard work, time and perseverance.

It has been very humbling to work with so many customers of all shapes and sizes, enabling them to take advantage of vMotion in a new way that allows them to solve some of their unique business needs. vMotion is still as magical in 2020 as it was when VMware first introduced it and transformed the IT industry.

🤯 WOW 🤯

~400TB migrated using the Cross vCenter Workload Migration @vmwflings 🔥

You win @vRobDowling 👏👏👏

I want to say the largest VM migration that I heard of with this tool was ~15K https://t.co/gfjGHQcJaE

— William Lam (@lamw) December 18, 2020

Of course, this would not have been possible without the support of so many amazing VMware engineers who contributed to the Fling, including the original developer, Vishal Gupta, whom I had worked with as part of the VMware Cloud Foundation (VCF) team. After Vishal left VMware, I recruited a few more folks to help with the project, including Vladimir Velikov, Vikas Shitole, Rajmani Patel, Plamen Semerdzhiev and Denis Chorbadjiyski. Lastly, I also want to thank Vishwa Srikaanth and Abhijith Prabhudev from the vSphere Product Management team, who have been supportive of the Fling since day one and have been advocating with me on behalf of our customers.

[Read more...] about History of Cross vCenter Workload Migration Utility and its productization in vSphere 7.0 Update 1c (p02)


Filed Under: Automation, vSphere 7.0 Tagged With: ExVC-vMotion, vmotion
