
virtuallyGhetto



Quick Tip – Using ESXi to send Wake-on-Lan (WoL) packet

03/05/2021 by William Lam 1 Comment

The ability to power on a system over the network using Wake-on-Lan (WoL) can be extremely useful, especially if you are not physically near the system or need to bring it back up after a power outage. I personally have been using the wakeonlan utility on my macOS system for several years now.

The syntax is super easy; you just provide the MAC Address of your system:

wakeonlan 54:b2:03:9e:70:fc
Sending magic packet to 255.255.255.255:9 with 54:b2:03:9e:70:fc

I recently came to learn that ESXi itself has the ability to send a WoL packet from the ESXi Shell! This can be handy when you do not have a WoL client installed, as long as you have access to an ESXi host.

vsish -e set /net/tcpip/instances/defaultTcpipStack/sendWOL 192.168.30.255 9 54:b2:03:9e:70:fc vmk0

This uses the unsupported vsish CLI to send the WoL packet. The first argument is the network broadcast address, so if you have a network of 192.168.30.0/24, the address would be 192.168.30.255. The second argument is the UDP port, 9, which is the standard port for WoL magic packets; you can see the same value in the output of the wakeonlan utility above. The third argument is the MAC Address of the system, and the fourth and final argument is the ESXi VMkernel interface to send the packet out of.
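
For those curious about what actually goes over the wire, a magic packet is simply 6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent over UDP (typically to port 9) to the broadcast address. Here is a minimal Python sketch of what both utilities above are doing, reusing the example MAC and broadcast address from this post:

import socket

def send_wol(mac, broadcast="255.255.255.255", port=9):
    # A magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    # WoL packets are sent as UDP broadcast, so enable SO_BROADCAST on the socket
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Same values as the wakeonlan and vsish examples above
send_wol("54:b2:03:9e:70:fc", broadcast="192.168.30.255")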


Filed Under: Automation, ESXi Tagged With: vsish, wake on lan, WOL

Decoding Services Roles/Permissions from a VMware Cloud Services Platform (CSP) Token

03/04/2021 by William Lam Leave a Comment

To programmatically access the various VMware Cloud Services (CSP), such as VMware Cloud on AWS, a user must first generate a CSP Refresh Token using the CSP Console.


When creating a new CSP Refresh Token, you have the option to scope access to a specific set of organization roles and service roles, which enables you to limit the permissions of the token to specific CSP Services. In the example below, I have created a new token scoped to the Organization Owner role along with two VMware Cloud on AWS Service Roles, Administrator (Delete Restricted) and NSX Cloud Admin, to grant access to a VMware Cloud on AWS SDDC.


One common issue that I see folks run into when working with some of the CSP Services, including VMware Cloud on AWS, from a programmatic standpoint is a token that was not created with the correct permissions, which usually leads to some type of invalid request error.

For popular services like VMware Cloud on AWS, it is usually pretty easy to track down, especially if the user who is using the CSP Refresh Token is the same person who created it. However, if you are not the person who created the original token, if you have forgotten which scopes you selected, or if you have access to multiple tokens, it can be a little more difficult to troubleshoot.

The good news, and a probably lesser-known detail about how CSP Refresh Tokens work, is that you can actually decode these tokens to understand which specific scopes were used to create the initial token. Below are two methods to decode both CSP Refresh Tokens (generated from the CSP UI) and CSP Access Tokens, which are returned when you exchange your CSP Refresh Token for access.
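
As the JWT tag on this post suggests, a CSP Access Token follows the JSON Web Token format, so its payload is just base64url-encoded JSON that can be inspected without any special tooling. As a quick illustration of the general approach, here is a minimal Python sketch; it only decodes the claims (it does not validate the signature), and the dummy token and perms claim below are illustrative placeholders rather than the exact CSP payload layout:

import base64
import json

def decode_jwt_payload(token):
    # A JWT is three base64url-encoded segments: header.payload.signature
    payload = token.split(".")[1]
    # base64url encoding strips the '=' padding, so restore it before decoding
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Dummy token for illustration; replace with your own CSP Access Token
dummy = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("="),
    base64.urlsafe_b64encode(b'{"sub":"user@example.com","perms":["example:role"]}').decode().rstrip("="),
    "signature",
])
print(json.dumps(decode_jwt_payload(dummy), indent=2))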

[Read more...] about Decoding Services Roles/Permissions from a VMware Cloud Services Platform (CSP) Token


Filed Under: Automation, VMware Cloud, VMware Cloud on AWS Tagged With: Access Token, JWT, Refresh Token, VMware Cloud, VMware Cloud on AWS

Easily create custom ESXi Images from patch releases using vSphere Image Builder UI

03/01/2021 by William Lam 5 Comments

Creating a custom ESXi Image Profile that incorporates additional ESXi drivers, such as the recently released Community Networking Driver for ESXi Fling or Community NVMe Driver for ESXi Fling, is a pretty common workflow. Because this activity is infrequent, many new and existing users sometimes struggle with the process of quickly constructing a new custom ESXi Image Profile. I personally prefer to use the Image Builder UI that is built right into the vSphere UI as part of vCenter Server.

There are a couple of ways to create a new custom ESXi Image Profile using the Image Builder UI, but the easiest method is to use the Clone workflow, which is especially helpful when you are selecting an ESXi patch release as your base image.

With a regular major release, you only have to deal with two image profiles: standard (includes VMware Tools) and no-tools (does not include VMware Tools).

With an ESXi patch release, you actually have four image profiles:

  • standard - includes VMware Tools + all bug/security fixes
  • security standard - includes VMware Tools + security fixes only
  • security no-tools - does not include VMware Tools + security fixes only
  • no-tools - does not include VMware Tools + all bug fixes

If you start with an empty custom image profile and then select your ESXi base image, you will notice there are multiple versions of each VIB package to select from, since the patch release you imported earlier actually contains four different ESXi image profiles. Below are step-by-step instructions on using the cloning workflow, since this is a question I often get from users who run into package conflicts after unknowingly selecting the same package multiple times.

[Read more...] about Easily create custom ESXi Images from patch releases using vSphere Image Builder UI


Filed Under: ESXi, Home Lab, vSphere Tagged With: image builder, image profile

Apple NVMe driver for ESXi using new Community NVMe Driver for ESXi Fling 

02/23/2021 by William Lam 40 Comments

VMware has been making steady progress on enabling both the Apple 2018 Mac Mini 8,1 and the Apple 2019 Mac Pro 7,1 for our customers over the past couple of years. These enablement efforts have had their challenges, including the lack of direct hardware access for our developers and supporting teams due to the global pandemic, and the lack of participation from Apple certainly has not made this any easier.

Today, I am happy to share that we have made some progress on enabling ESXi to see and consume the local Apple NVMe storage device found in the recent Apple T2-based Mac systems such as the 2018 Mac Mini and 2019 Mac Pro. There were a number of technical challenges the team had to overcome, especially since the Apple NVMe is not just a consumer-grade device, but one that also does not follow the standard NVMe specification that you would normally see in typical NVMe devices.

This meant there was a lot of poking and prodding to reverse engineer the behavior of the Apple NVMe to better understand how the device works, which often led to sudden reboots or PSODs. With the Apple NVMe being a consumer device, there were also a number of workarounds the team had to come up with to enable ESXi to consume it. The implementation is not perfect; for example, we do not have native 4Kn support for SSD devices within ESXi, and we had to fake/emulate a non-SSD flag to work around some of the issues. From our limited testing, we have not observed any significant impact to workloads when utilizing this driver, and several internal VMware teams have already been using it for a couple of months now without reporting any issues.

A huge thanks goes out to Wenchao and Yibo from the VMkernel I/O team who developed the initial prototype which has now been incorporated into the new Community NVMe Driver for ESXi Fling.

Caveats

Before folks rush out to grab and install the driver, it is important to be aware of a couple of constraints that we have not been able to work around yet.

  1. ESXi versions newer than ESXi 6.7 Patch 03 (Build 16713306) are currently NOT supported and will cause ESXi to PSOD during boot up.
  2. The onboard Thunderbolt 3 ports do NOT function when using the Community NVMe driver and can cause ESXi to PSOD if activated.

Note: For detailed ESXi version and build numbers, please refer to VMware KB 2143832

VMware Engineering has not been able to pinpoint why this ESXi PSOD is happening. For now, this is a constraint to be aware of, which may impact anyone who requires the Thunderbolt 3 ports for additional networking or storage connectivity.

With that out of the way, customers can either incorporate the Community NVMe Driver for ESXi offline bundle into a new ESXi Image Profile (using the vSphere Image Builder UI/CLI), export the image as an ISO and install it on either a Mac Mini or Mac Pro, or manually install the offline bundle after ESXi has been installed over USB. Upon reboot, the local Apple NVMe will then be visible for VMFS formatting.

Here is a screenshot of ESXi 6.7 Patch 03 installed on my 2018 Mac Mini with the Apple NVMe formatted with VMFS and running a macOS VM.


Filed Under: Apple, ESXi, vSphere 6.7, vSphere 7.0 Tagged With: apple, mac mini, mac pro, NVMe

VMware customer production use cases for Intel NUC 

02/19/2021 by William Lam 3 Comments

The Intel NUC, also known as the Next Unit of Computing, is a very popular platform for running VMware-based homelabs. I have been working with Intel NUCs since 2016, when I rebuilt my personal home lab on their 6th Generation model. Since then, I have continued my efforts to ensure that vSphere runs extremely well on this amazing little platform even though it is not officially supported by VMware, which now also includes the latest 11th Generation (Tiger and Panther Canyon) NUCs.

At the end of last year, I came across this fascinating Intel NUC documentary put together by Robtech, which I highly recommend watching.

While listening to some of the use cases that SimplyNUC has observed over the years, which have spanned land ⛰️, air 🛫, sea 🛳️ and space 🚀, it got me thinking about some of the use cases I had come across while talking to our VMware customers.

Disclaimer: The Intel NUC is not officially supported by VMware and therefore they are not listed on the VMware HCL

A common misconception is that Intel NUCs are only useful for homelab purposes and have no place running production workloads, which is simply not true. Here are some of the common use cases that I have seen over the years, most of which are deployed at the Edge/ROBO:

  • vSphere Development/Testing, Education and Training
  • Retail, Grocery, Industrial Factories and Ships
  • Build Automation (CI/CD)
  • Telco/NFV (e.g. Network/Hardware monitoring)
  • Virtual Desktop Infrastructure (VDI)

I also wanted to take this opportunity to share some of the stories of how our customers have taken advantage of this platform, even though it is not officially supported by VMware, along with some of the underlying business drivers. Hopefully these stories will educate, resonate and perhaps even inspire other customers to explore different computing platforms, especially at the Edge, where constraints and requirements differ quite significantly from a typical Enterprise Datacenter.

If you would like to share your story of how you are using Intel NUC and VMware for production, feel free to reach out using the contact page.

[Read more...] about VMware customer production use cases for Intel NUC 


Filed Under: vSphere Tagged With: Edge, esxi, Intel NUC, ROBO

New Community Networking Driver for ESXi Fling

02/17/2021 by William Lam 13 Comments

I am super excited to announce the release of a new Community Networking Driver for ESXi Fling! The idea behind this project started about a year ago when we released an enhancement to the ne1000 driver as a community update, which enabled ESXi to recognize the onboard network adapter of the Intel 10th Gen (Frost Canyon) NUC. Although the Intel NUC is not an officially supported VMware platform, it is extremely popular amongst the VMware Community. Working with the awesome Songtao, we were able to release this driver early last year for customers to take advantage of the latest Intel NUC release.

At the time, I knew that this would not be the last time we dealt with driver compatibility. We definitely wanted an easier way to distribute the various community networking drivers, packaged into a single deliverable for customers to easily consume, and hence this project was born. In fact, it was quite timely, as I had just received engineering samples of the new Intel NUC 11 Pro and Performance (Panther Canyon and Tiger Canyon) at the end of 2020, and work needed to be done before we could enable the onboard 2.5GbE (multi-gigabit) network adapter, which is a default component of the new Intel Tiger Lake architecture. As reported back in early January, Songtao and his colleague Shu were successful in getting ESXi to recognize the new 2.5GbE network adapter, and that work has also been incorporated into this new Fling. In addition, we started to receive reports from customers that after upgrading to a newer ESXi 7.0 release, the onboard network adapters of the Intel 8th Gen NUC were no longer functioning. In an effort to help customers on this older platform, we have also updated the original community ne1000e driver to include the relevant PCI IDs within this Fling.


The new Community Networking Driver for ESXi is for PCIe-based network adapters and currently contains the following two driver modules:

  • igc-community - which adds support for Intel 11th Gen NUCs and any other hardware platform that uses the same 2.5GbE devices
  • e1000-community - which adds support for the Intel 8th Gen NUC and any other hardware platform that uses the same 1GbE devices

For a complete list of supported devices (VendorID/ProductID), please take a look at the Requirements tab on the Fling website. As with any Fling, this is being developed and supported in our spare time. In the future, we may consider adding other types of devices based on feedback from the broader community. I know Realtek-based PCIe NICs are something that many have been asking about and, as mentioned back in this blog post, I have been engaged with the Realtek team, so hopefully in the near future we may see an ESXi driver that can support some of the more popular devices in the community. If there are other PCIe-based networking adapters that could fit the Fling model, feel free to leave a comment on the Fling website and we can evaluate them as time permits.


Filed Under: ESXi, Home Lab, vSphere 7.0 Tagged With: igc, Intel NUC, ne1000e

