
Workaround for ESXi-Arm in vSphere 7.0 Update 1

10/12/2020 by William Lam 4 Comments

In vSphere 7.0 Update 1, a new capability was introduced called vSphere Cluster Services (vCLS), which provides a new framework for decoupling and managing distributed control plane services for vSphere. To learn more, I highly recommend the detailed blog post linked above by Niels. In addition, Duncan also has a great blog post about common questions/answers and considerations for vCLS, which is definitely worth a read as well.

vSphere DRS is one of the vSphere features that relies on the new vCLS service, and this is made possible by the vCLS VMs, which are deployed automatically when vCenter detects ESXi hosts within a vSphere Cluster (regardless of whether vSphere DRS is enabled). Customers using the ESXi-Arm Fling with a vSphere 7.0 Update 1 environment may have noticed continuous "Delete File" tasks within vCenter that seem to loop forever.

This occurs because the vCLS service first tests whether it can upload a file to the datastore and, once it can, deletes it. The issue is that the vCLS VMs are x86 and cannot be deployed to an ESXi-Arm Cluster, as the CPU architecture is not supported. There is a workaround to disable vCLS for the ESXi-Arm Cluster, which I will go into shortly. However, because vCLS cannot deploy properly, vSphere DRS capabilities will not be available when using vSphere 7.0 Update 1 with ESXi-Arm hosts. If you wish to use vSphere DRS, it is recommended to use either vSphere 7.0c or vSphere 7.0d.
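
For reference, the workaround leverages what is commonly referred to as vCLS "Retreat Mode", which is driven by a vCenter Server advanced setting keyed off the cluster's managed object ID. The PowerCLI sketch below illustrates the idea; the vCenter hostname and cluster name are placeholders, and you should verify the exact setting name and procedure against the official vCLS documentation before using it.

  # Sketch: enable vCLS "Retreat Mode" for a specific cluster by setting the
  # vCenter advanced setting config.vcls.clusters.<domain-id>.enabled to false.
  # vcenter.example.com and "ESXi-Arm-Cluster" are placeholders.
  Connect-VIServer -Server vcenter.example.com

  $cluster  = Get-Cluster -Name "ESXi-Arm-Cluster"
  $domainId = $cluster.ExtensionData.MoRef.Value          # e.g. "domain-c8"
  $setting  = "config.vcls.clusters.$domainId.enabled"

  $vc = $global:DefaultVIServer
  $existing = Get-AdvancedSetting -Entity $vc -Name $setting -ErrorAction SilentlyContinue
  if ($existing) {
      Set-AdvancedSetting -AdvancedSetting $existing -Value "false" -Confirm:$false
  } else {
      New-AdvancedSetting -Entity $vc -Name $setting -Value "false" -Confirm:$false
  }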

Note: vSAN does not rely on vCLS to function, but to use vSAN you must place your ESXi-Arm hosts into a vSphere Cluster, so applying this workaround is desirable for that use case.

[Read more...] about Workaround for ESXi-Arm in vSphere 7.0 Update 1


Filed Under: ESXi-Arm, vSphere 7.0 Tagged With: Arm, esxi, vCenter Clustering Services, vCLS, vSphere 7.0 Update 1

Using ESXi-Arm Fling as a lightweight vSphere Automation environment for PowerCLI and Terraform

10/09/2020 by William Lam 1 Comment

A set of use cases that I was really excited about when I first heard about ESXi-Arm a few years ago was around the topic of vSphere Automation and Development. I speak with many customers who are just starting out on their Automation journey, whether that is using PowerCLI, one of our many vSphere Automation SDKs, or even going directly to the new vCenter REST API, through which all new features are being exposed these days.
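
To make that last option a bit more concrete, here is a small PowerShell sketch of what going directly to the vCenter REST API can look like, using the session and VM endpoints available in vCenter Server 7.0; the hostname and credentials are placeholders, and the certificate check is skipped only because this assumes a homelab with self-signed certificates.

  # Sketch: authenticate against the vCenter REST API and list VMs.
  # Requires PowerShell 7 for -SkipCertificateCheck; vcenter.example.com is a placeholder.
  $vc   = "vcenter.example.com"
  $cred = Get-Credential    # prompts for vCenter username/password

  $pair  = "{0}:{1}" -f $cred.UserName, $cred.GetNetworkCredential().Password
  $basic = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($pair))

  # Create an API session; the response contains the session token
  $session = Invoke-RestMethod -Method Post -Uri "https://$vc/rest/com/vmware/cis/session" `
      -Headers @{ Authorization = "Basic $basic" } -SkipCertificateCheck

  # Use the session token to retrieve the VM inventory
  $vms = Invoke-RestMethod -Method Get -Uri "https://$vc/rest/vcenter/vm" `
      -Headers @{ 'vmware-api-session-id' = $session.value } -SkipCertificateCheck

  $vms.value | Select-Object name, power_state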

One of the biggest challenges for newcomers is simply getting access to hardware that they can start playing around with, and although there is a plethora of vSphere Homelab choices, it does require some amount of investment, which is definitely worth it in the long run. However, if you are just getting started and want something a bit more lightweight, there are not too many options outside of an Intel NUC. I know many consultants actually carry around an Intel NUC that contains several VM images they use with their clients, including for demos.

With the small form factor, low cost and reduced power consumption of the Raspberry Pi, I think this really opens up the door for some interesting creative solutions:

  • Basic vSphere footprint that can be used for work or learning purposes
  • Easy way to learn and explore the vSphere API with an actual host and enabling real VM deployments
  • Trying out Infrastructure-as-Code (IaC) tools such as Terraform and Ansible
  • Quick way to run through basic demos in front of customers
  • On-demand and self-contained lab environment for a small Hackathon at your local VMUG or even at VMworld

Something I was really interested in early on was being able to use ESXi-Arm with the Raspberry Pi to not only have a basic ESXi environment but also have a PowerCLI environment up and running in an Arm VM. My first thought was to get this set up using Photon OS, which not only has an Arm distribution but also has support for PowerShell and PowerCLI. I was hoping that with some tinkering I could easily get PowerShell for Arm to run on Photon OS (which it did), but I then ran into issues installing PowerCLI itself.

I decided to set that aside for now and take a look at Ubuntu, which also supports PowerShell for Arm, but the Microsoft documentation only listed instructions for 32-bit and ESXi-Arm requires 64-bit. Taking a look at the PowerShell release files, I noticed there was a 64-bit (arm64) package, and with a few minor adjustments to the commands, I got PowerCLI installed on my rPI VM and connected back to my x86 vCenter Server!
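
For anyone wanting to reproduce this, the core of the setup is simply installing the PowerCLI module from the PowerShell Gallery once a 64-bit (arm64) build of PowerShell 7 is running inside the guest. A minimal sketch is below; the vCenter hostname is a placeholder and the certificate setting assumes a homelab with self-signed certificates.

  # Sketch: inside an Arm64 Ubuntu VM with PowerShell 7 already installed,
  # install PowerCLI from the PowerShell Gallery and connect to vCenter Server.
  Install-Module -Name VMware.PowerCLI -Scope CurrentUser -Force

  # Lab-only: ignore self-signed certificate warnings
  Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false

  # vcenter.example.com is a placeholder; you will be prompted for the password
  Connect-VIServer -Server vcenter.example.com -User 'administrator@vsphere.local'

  # Quick sanity check against the ESXi-Arm host and its VMs
  Get-VMHost | Select-Object Name, Version, ProcessorType
  Get-VM | Select-Object Name, PowerState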

[Read more...] about Using ESXi-Arm Fling as a lightweight vSphere Automation environment for PowerCLI and Terraform


Filed Under: Automation, ESXi-Arm, PowerCLI, vSphere Tagged With: Arm, esxi, PowerCLI, Terraform

vSAN Witness using Raspberry Pi 4 & ESXi-Arm Fling

10/08/2020 by William Lam 32 Comments

As hinted in my earlier blog post, you can indeed set up a vSAN Witness using the ESXi-Arm Fling running on a Raspberry Pi (rPI) 4b (8GB) model. In fact, you can even set up a standard 2-Node or 3-Node vSAN Cluster using the exact same technique. For those familiar with vSAN and the vSAN Witness, we will need at least two storage devices for the caching and capacity tiers.

For the rPI, this means we are limited to using USB storage devices, and luckily, vSAN can actually claim and consume USB storage devices. For a basic homelab this is probably okay, but if you want something a bit more reliable, you can look into using a USB 3.0 to M.2 NVMe enclosure. The ability to use an M.2 NVMe device should definitely provide more resiliency compared to a typical USB stick you might have lying around. From a capacity point of view, I ended up using two 32GB USB keys, which should be plenty for a small setup, but you can always look at purchasing larger-capacity devices given how cheap USB storage is.
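
Once the two USB devices are visible to ESXi and eligible for vSAN (the detailed instructions below cover how to get to that point), creating the disk group on the witness host can be done with PowerCLI. The sketch below is illustrative only; the host name and device canonical names are placeholders you would replace with values from your own environment.

  # Sketch: create a vSAN disk group on the rPI witness host using the two USB
  # devices, one for the cache tier and one for the capacity tier.
  Connect-VIServer -Server vcenter.example.com

  $witness = Get-VMHost -Name "rpi-witness.example.com"

  # List the disks the host sees to identify the USB device canonical names
  Get-ScsiLun -VmHost $witness | Select-Object CanonicalName, CapacityGB

  # Placeholders below: first device becomes the cache tier, second the capacity tier
  New-VsanDiskGroup -VMHost $witness `
      -SsdCanonicalName      "mpx.vmhba32:C0:T0:L0" `
      -DataDiskCanonicalName "mpx.vmhba33:C0:T0:L0"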

Disclaimer: ESXi-Arm is a VMware Fling which means it is not a product and therefore it is not officially supported. Please do not use it in Production.

With the disclaimer out of the way, I think this is a fantastic use case for an inexpensive vSAN Witness, which could be running at a ROBO/Edge location or simply supporting your homelab. The possibilities are certainly endless, and the ESXi-Arm team would love to hear whether this is something customers would be interested in, so please share your feedback to help with priorities for both the ESXi-Arm and vSAN teams.

In my setup, I have two Intel NUC 9 Pro systems which make up my 2-Node vSAN Cluster and an rPI as my vSAN Witness. Detailed instructions can be found below, including a video for those wanting to see the vSAN Witness in action by powering on an actual workload 😀

[Read more...] about vSAN Witness using Raspberry Pi 4 & ESXi-Arm Fling


Filed Under: ESXi-Arm, VSAN, vSphere Tagged With: Arm, esxi, Raspberry Pi, witness

My Raspberry Pi 4 BOM for ESXi-Arm Fling

10/07/2020 by William Lam 12 Comments

With the release of the highly anticipated ESXi-Arm Fling, I thought it would be useful to share the hardware bill of materials (BOM) that I am currently using with the ESXi-Arm Fling, especially as folks have been asking about what is possible and what to be aware of before purchasing a Raspberry Pi.

I do want to stress that the components listed below are just one of many options; it is highly recommended that folks carefully review the ESXi-Arm Fling documentation to understand which accessories are supported, along with some of their constraints, prior to making a purchase.


[Photo: the devices in this build, shown from left to right]

[Read more...] about My Raspberry Pi 4 BOM for ESXi-Arm Fling


Filed Under: ESXi-Arm, vSphere Tagged With: Arm, esxi, Raspberry Pi

ESXi on Arm Fling is LIVE!

10/06/2020 by William Lam 17 Comments


The highly anticipated ESXi on Arm Fling has just been announced and is NOW generally available as a new VMware Fling! Head over to https://flings.vmware.com/esxi-arm-edition and be sure to carefully read through the Requirements and documentation before trying out the bits.

History

Although ESXi-Arm was publicly demo'ed at VMworld Europe 2018 during the closing keynote by Ray O'Farrell (former CTO of VMware), the reality was there was a ton more to do before ESXi-Arm could become a reality for VMware customers. The newly formed ESXi-Arm team at VMware has been hard at work these last couple of years working with both Arm and its ecosystem on extending hardware standards, firmware standards (open contribution to UEFI), and certification beyond the existing Arm server ecosystem, which enabled us to support platforms like SmartNICs and the ubiquitous Raspberry Pi. This is just a glimpse into what it took to get to where we are today.

I am also excited to share that the Virtually Speaking Podcast crew has invited us back for an exclusive episode featuring both Andrei Warkentin and myself to dive deeper into the development of the ESXi-Arm project at VMware. This is an episode you will not want to miss!

Hardware

The ESXi-Arm Fling supports a number of different Arm platforms, ranging from traditional Datacenter form factors to both Near and Far Edge systems, including the highly requested Raspberry Pi (rPI)! For the rPI, only the 4b model is supported, and although both the 4GB and 8GB memory models work with ESXi-Arm, we highly recommend folks invest in the 8GB model to take advantage of more vSphere features and run more workloads.


For a complete list of supported Arm hardware platforms, please refer to the Requirements section of the Fling website. If there are other platforms you would like to see get added, do not hesitate to either leave a comment here and/or post directly on the ESXi-Arm Fling page.

vCenter Support

For customers with an existing x86 vCenter Server or those that would like to deploy a new vCenter Server, you will be able to attach and manage ESXi-Arm hosts just like you normally would as long as you are using vCenter Server 7.0 or greater.


We expect the majority of vSphere platform features, like vMotion, to "just work", but there may be some features that do not work or have additional requirements.


For example, to enable vSphere HA and/or vSphere FT, the Fault Domain Manager (FDM) Client VIB must be installed on an ESXi-Arm host. Today, this VIB is distributed as part of vCenter Server, and only an x86 version of the client is available. We do provide FDM Client VIBs for ESXi-Arm as part of the ESXi-Arm Fling, but support is limited to vCenter Server 7.0c and 7.0d. For detailed instructions, please refer to the ESXi-Arm documentation.
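
As a rough illustration of what installing a VIB on an ESXi-Arm host can look like from PowerCLI (rather than SSH-ing to the host), here is a hedged sketch using the esxcli v2 interface; the host name and VIB URL are placeholders, and the official ESXi-Arm documentation remains the authoritative procedure for the FDM Client VIB.

  # Sketch: install a VIB (such as the FDM Client VIB) on an ESXi-Arm host via
  # PowerCLI's esxcli v2 interface. The URL below is a placeholder.
  Connect-VIServer -Server vcenter.example.com

  $vmhost = Get-VMHost -Name "rpi-esxi-01.example.com"
  $esxcli = Get-EsxCli -VMHost $vmhost -V2

  $vibArgs = $esxcli.software.vib.install.CreateArgs()
  $vibArgs.viburl     = @("http://example.com/vmware-fdm-arm64.vib")   # placeholder URL
  $vibArgs.nosigcheck = $true   # assumption: may be needed for Fling-provided VIBs

  $esxcli.software.vib.install.Invoke($vibArgs)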

VMware Tools

VMware Tools for Arm guest operating systems is not bundled as part of the ESXi-Arm Fling, but it can be installed. To do so, you will need to compile open-vm-tools for your respective GuestOS. Instructions can be found in the ESXi-Arm Fling documentation, and below you can see a screenshot of VMware Tools for Arm successfully running on an Ubuntu 20.04 GuestOS on ESXi-Arm on the rPI 4.

vSAN Witness

Lastly, a popular use case that was brought up when ESXi-Arm was initially demo'ed was the use of the rPI as an inexpensive vSAN Witness, which is a fantastic fit for ROBO & Edge locations. I am very happy to share that using an rPI 8GB as a vSAN Witness works! As you can see from the screenshot below, I have two physical Intel NUC 9 Pro systems configured in a 2-Node vSAN Cluster and I am using the rPI as the vSAN Witness 😀


In case this was not clear, this is NOT officially supported, but it does demonstrate the viability of the concept, and feedback from our users would help drive the priority and potential support for such a configuration. More details will be shared in a future blog post outlining the instructions for using the rPI as a vSAN Witness. Stay tuned!

As you can see, this is just a small taste of what can be done with the ESXi-Arm Fling, and the possibilities are truly endless! The ESXi-Arm team is very excited to see what the community will do with the ESXi-Arm Fling, what types of use cases you are solving, and what workloads you are running. Below are a few ways you can engage with the ESXi-Arm team and community.

ESXi-Arm Engagement

  • For general questions/issues, please leave a Comment and/or file Bug on the ESXi-Arm Fling site
  • Follow ESXi-Arm on Twitter: @esxi_arm
  • Follow the official ESXi-Arm Blog: https://blogs.vmware.com/arm
  • Chat with the ESXi-Arm team and community on Slack: #esxi-arm-fling on VMware {code}
  • For other inquiries or engaging with ESXi on ARM Product Team, you can send an email to esxionarm [at] vmware [dot] com

Filed Under: ESXi-Arm, VSAN, vSphere Tagged With: Arm, esxi, Raspberry Pi, witness

USB Network Native Driver Fling for ESXi v1.6

08/26/2020 by William Lam 10 Comments

The popular USB Network Native Driver Fling for ESXi has just been updated to version 1.6, and this is one of our larger releases.

Here are some of the key new features; for the complete list, please refer to the Changelog tab on the Fling site.

  • Support for 4 additional USB NICs, including the highly requested RTL8156, a 2.5GbE USB NIC that can be found on Amazon for as low as $25 USD. For more details, please refer to the Requirements tab on the Fling site.
  • Support for persisting VMkernel to USB NIC MAC Address mappings, which was an issue when using multiple USB NICs; upon reboot, ESXi may randomize the mappings, which can cause issues. For more details on this feature, please refer to the Instructions tab on the Fling site.
  • Simplified method for persisting USB NIC bindings. For more details, please refer to the Instructions tab on the Fling site.

Filed Under: ESXi, Home Lab Tagged With: esxi, fling, usb network adapter

