
virtuallyGhetto


vSphere 7.0

GPU passthrough with ESXi on the Apple 2019 Mac Pro 7,1

12/23/2020 by William Lam 11 Comments

The expandability of the Apple 2019 Mac Pro (7,1) is the primary reason VMware customers have been so excited about this new platform for virtualizing macOS on ESXi. The most common request I hear from customers is for GPU passthrough.

Although VMware does not officially support GPU passthrough, even for the existing Apple hardware systems on the VMware HCL, this is a topic I have been keeping an eye on, especially what the VMware community is doing in this space.

My intention for this blog post is to provide a community resource that captures the successes and failures when attempting GPU passthrough on a 2019 Mac Pro. For those who are interested and have capable hardware, you may want to start with the VMware HCL for GPU passthrough devices listed under Virtual Dedicated Graphics Acceleration (vDGA). These give you the best chance of successfully passing through a GPU that will be recognized by either a macOS or Linux/Windows guest operating system.
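Before toggling anything, it can be worth confirming that ESXi even enumerates the GPU on the PCI bus. A minimal sketch from the ESXi shell (assuming SSH/ESXi Shell access is enabled; the grep patterns are just examples):

# Quick check that the GPU shows up as a PCI display device
lspci | grep -i -E 'display|vga'

# Full PCI details, including vendor/device IDs useful when searching the vDGA HCL
esxcli hardware pci list | grep -i -B 2 -A 15 'display'

Passthrough itself is then toggled per device in the vSphere Client under the host's Configure > Hardware > PCI Devices view, typically followed by a host reboot.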

If you would like to share your experiences, feel free to leave a comment or reach out by filling out the contact form.

Disclaimer: Although ESXi installs and runs on the Apple 2019 Mac Pro 7,1, it is currently not certified on the VMware HCL. There is no timeline for certification due to challenges with COVID-19.



Filed Under: Apple, vSphere 7.0 Tagged With: apple, GPU, mac pro

History of Cross vCenter Workload Migration Utility and its productization in vSphere 7.0 Update 1c (p02)

12/17/2020 by William Lam 11 Comments

I am super excited to share that the popular Cross vCenter Workload Migration Utility Fling has been officially productized and is now available with the release of vSphere 7.0 Update 1c (Patch 02)! The official name for this capability is Advanced Cross vCenter vMotion; would that make the shorthand Ax-vMotion? 🤔 In any case, this has been five years in the making, and going from an idea I shared back in 2015 to a fully integrated, native vSphere feature in 2020 is pretty wild!

While reflecting back and writing this blog post, I came across this tweet from our CEO, Pat Gelsinger, which I thought was quite fitting:

I love this. Thanks for sharing. To me, execution is everything. It's much easier to have a good idea than it is to actually get it done. https://t.co/DAPdip6A8e

— Pat Gelsinger (@PGelsinger) November 24, 2020

I have learned over the years that simply having a good idea is not enough. It takes hard work, time and perseverance.

It has been very humbling to work with so many customers of all shapes and sizes and to enable them to take advantage of vMotion in a new way that allows them to solve some of their unique business needs. vMotion is still as magical in 2020 as it was when it was first introduced and VMware transformed the IT industry.

🤯 WOW 🤯

~400TB migrated using the Cross vCenter Workload Migration @vmwflings 🔥

You win @vRobDowling 👏👏👏

I want to say the largest VM migration that I heard of with this tool was ~15K https://t.co/gfjGHQcJaE

— William Lam (@lamw) December 18, 2020

Of course, this would not have been possible without the support of so many amazing VMware engineers who contributed to the Fling, including the original developer, Vishal Gupta, whom I worked with as part of the VMware Cloud Foundation (VCF) team. After Vishal left VMware, I recruited a few more folks to help with the project, including Vladimir Velikov, Vikas Shitole, Rajmani Patel, Plamen Semerdzhiev and Denis Chorbadjiyski. Lastly, I also want to thank Vishwa Srikaanth and Abhijith Prabhudev from the vSphere Product Management team, who have been supportive of the Fling since day 1 and have been advocating with me on behalf of our customers.



Filed Under: Automation, vSphere 7.0 Tagged With: ExVC-vMotion, vmotion

Quick Tip – Easily identify source DHCP server using ESXi DCUI

11/20/2020 by William Lam Leave a Comment

While installing ESXi 7.0 Update 1 on one of my physical systems, I happened to be in the "Configure Management Network" section of the ESXi Direct Console UI (DCUI) and noticed something I had never seen before. As shown in the screenshot, it now displays the IP address of the DHCP server from which ESXi received its DHCP lease.


I had not noticed this before, and after asking on Twitter, it looks like this is definitely a new enhancement that was added fairly recently. I did not see it in one of my ESXi 6.7 Update 3 deployments; it may have come in a later 6.7 patch, but it is definitely present in ESXi 7.0 or later. Not only is this a quick and easy way to identify the DHCP server being used, but it will certainly come in handy if you need to track down an unexpected rogue DHCP server, as pointed out by John.

Trying to get rogue DHCP servers under control?
Remember kids, DHCP Snooping saves lives! https://t.co/FKPgKzI9In

— John Nicholson (@Lost_Signal) November 20, 2020
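If you are running an older build that does not show this in the DCUI, one rough alternative is to check from the ESXi shell. A hedged sketch below; note that the exact log file and message wording vary between ESXi releases:

# Confirm which VMkernel interfaces are actually using DHCP
esxcli network ip interface ipv4 get

# Look for the DHCP client's lease negotiation messages in syslog
grep -i dhcp /var/log/syslog.log | tail -20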


Filed Under: ESXi, vSphere 6.7, vSphere 7.0 Tagged With: dcui, dhcp

Using Terraform to deploy a Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu 

11/10/2020 by William Lam 1 Comment

A few months back, I saw that HashiCorp had released a new Kubernetes (K8s) provider for Terraform, currently in an Alpha state, which enables users to deploy K8s resources using the popular Infrastructure-as-Code (IaC) tool. I thought this would be pretty cool if it worked with our vSphere with Tanzu solution, since the Tanzu Kubernetes Grid (TKG) Service uses ClusterAPI via a custom VM Operator to deploy TKG Guest Clusters, which is just a fancy way of saying it uses the K8s API to deploy more K8s 🙂

Setting up the new K8s provider was pretty straightforward, and after spending a few minutes figuring out how to convert my existing TKG YAML to the HCL format that Terraform understands, I was able to run a terraform "plan" but quickly ran into the following error:

failed: admission webhook "default.mutating.tanzukubernetescluster.run.tanzu.vmware.com" does not support dry run
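For reference, nothing exotic is needed to hit this error; a minimal sketch of the workflow, assuming the converted HCL files live in the current directory:

# Download the (alpha) Kubernetes provider declared in the configuration
terraform init

# plan asks the K8s API server for a server-side dry run of the manifest, which is where the webhook rejects the request
terraform plan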

It looks like our tanzukubernetescluster admission webhook does not currently support dry-run operations, which are common when using Terraform. I figured this was the end of that idea and ended up filing a feature enhancement internally to add this support in the future, as I can see it being quite useful for our customers.
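The same limitation can also be reproduced outside of Terraform with kubectl; a hypothetical example, assuming a TanzuKubernetesCluster manifest saved as tkg-cluster.yaml and a kubeconfig pointed at the Supervisor Cluster:

# A server-side dry run exercises the same mutating admission webhook that Terraform's plan relies on
kubectl apply --dry-run=server -f tkg-cluster.yaml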

After finishing up a recent pet project of getting a fully functional vSphere with Tanzu setup on a homelab budget using just 32GB of memory, I decided to take another look at this and discovered that the required tweak to get it working was super trivial, literally a single-line change.

Disclaimer: This is not officially supported by VMware, use at your own risk.



Filed Under: Automation, Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: Kubernetes, Tanzu Kubernetes Grid, Terraform, vSphere with Tanzu

Complete vSphere with Tanzu homelab with just 32GB of memory!

11/09/2020 by William Lam 26 Comments

Since the release of vSphere 7.0 Update 1, the demand and interest from the community in getting hands-on with vSphere with Tanzu and the new simplified networking solution has been non-stop. Most folks are either upgrading their existing homelab or looking to purchase new hardware that can better support the new features of the vSphere 7.0 release.

Although vSphere with Tanzu now has a flavor that does not require NSX-T, which helps reduce the barrier to getting started, it still has some networking requirements that may not be easily met in all lab environments. In fact, this was the primary reason I started to look into this: my personal homelab network is very basic and I do not have, nor want, a switch that can support multiple VLANs, which is one of the requirements for vSphere with Tanzu.

While investigating a potential solution, which included way too MANY hours of debugging and troubleshooting, I also thought about the absolute minimum amount of resources I could get away with after putting everything together. To be clear, my homelab consists of a single Supermicro E200-8D with 128GB of memory, which has served me well over the years, and I highly recommend it for anyone who can fit it into their budget. With that said, I set out with a pretty aggressive goal of using something that is pretty common in VMware homelabs: an Intel NUC with just 32GB of memory.

Here is the hardware BOM (similar hardware should also work):

  • Intel NUC 10i7FNH
  • 32GB memory
  • Single 250GB M.2 NVMe SSD
The NUC can support two SSDs (M.2 + SATA), and you can always go larger

Here is the software BOM:

  • vCenter Server Appliance 7.0 Update 1 Build 16860138
  • ESXi 7.0 Update 1 Build 16850804
  • HAProxy v0.1.8 OVA
  • Photon OS 3.0 OVA

Note: The Intel NUCs (Gen 6 to 10) can all support up to 64GB of memory, and this is one of the best upgrades you can give yourself, but if you only have 32GB of memory, this will also work.

The final solution comprises the following:

  • 1 x vCenter Server Appliance (VCSA) running on the Intel NUC self-managing the ESXi host
  • VMFS storage will be used instead of vSAN to reduce memory footprint (If you have 64GB of memory, recommend using vSAN)
  • Onboard NIC will be used for all traffic and will be attached to a Distributed Virtual Switch (VDS)
  • 3 x Distributed Portgroups will be configured on top of your existing LAN network; the latter two will be routed through our Photon OS Router VM
      • Management - Existing LAN network
      • Frontend - 10.10.0.0/24
      • Workload - 10.20.0.0/24
  • 1 x vSphere with Tanzu Cluster enabled with Workload Management
  • 1 x HAProxy VM deployed using 3-NIC configuration
  • 1 x Photon OS Linux VM used as a router for IP forwarding (see the sketch after this list) and, optionally, as a DNS server if you do not already have one
  • 9 x IP Addresses in total will be required from your local LAN network
    • 4 x IP Addresses which should map to the following hostnames or similar
      • esxi-01.tanzu.local
      • vcsa.tanzu.local
      • router.tanzu.local
      • haproxy.tanzu.local
    • 5 x IP Addresses in a consecutive block (e.g. 192.168.30.20-192.168.30.24) will be needed for the Supervisor Control Plane VMs
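The Photon OS router VM mentioned above only needs basic IPv4 forwarding to route the Frontend and Workload networks behind your existing LAN. A minimal sketch of what that looks like on Photon OS (the scripts referenced below may handle this differently):

# Enable IPv4 forwarding for the current boot
sysctl -w net.ipv4.ip_forward=1

# Persist the setting across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf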


As part of this solution, I have automated as many of the tasks as possible, and all scripts used for this solution can be found at https://github.com/lamw/vsphere-with-tanzu-homelab-scripts, which I will be referencing throughout the instructions. There are also a number of techniques and tricks I am using to reduce the overall memory footprint for setting up vSphere with Tanzu; obviously, these should not be used in a production-grade environment.
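To follow along, cloning the repo locally is all that is needed:

# Grab the automation scripts referenced throughout the instructions
git clone https://github.com/lamw/vsphere-with-tanzu-homelab-scripts.git
cd vsphere-with-tanzu-homelab-scripts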

I also want to give a huge thanks to Timo Sugliani for all of his help with the networking questions and challenges, and to Mayank B. from the vSphere with Tanzu Engineering team, who helped with the debugging and ultimately made this solution possible.


Filed Under: Home Lab, Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: HAProxy, Intel NUC, Kubernetes, vSphere with Tanzu

Tanzu Kubernetes Grid (TKG) Demo Appliance 1.2.0

10/28/2020 by William Lam Leave a Comment

Happy to share that the Tanzu Kubernetes Grid (TKG) Demo Appliance Fling has been updated to support the latest TKG 1.2.0 release, which came out just a couple of weeks ago. The TKG Workshop Guide has been updated to reflect all of the TKG 1.2 changes, along with an updated vSphere Content Library containing all the OVAs required to get started. As mentioned in the workshop guide, you can use either a VMware Cloud on AWS SDDC (1-Node) or a vSphere 6.7 Update 3 / vSphere 7.0+ environment.

The most notable change in this version is actually within TKG itself, which now uses kube-vip to replace the functionality that the HAProxy VM used to provide. What this means when deploying either a TKG Management or Workload Cluster is that you will need to specify an IP address that will be used as the Virtual IP endpoint of the K8s cluster, as shown in the example below.

tkg init -i vsphere -p dev --name tkg-mgmt --vsphere-controlplane-endpoint-ip 192.168.2.10


Using the TKG Demo Appliance, you can deploy both v1.19.1 and v1.18.8 K8s Clusters. To exercise a TKG Cluster upgrade workflow, you just have to run these three simple commands:

export VSPHERE_TEMPLATE=photon-3-kube-v1.18.8_vmware.1
tkg create cluster tkg-cluster-01 --plan=dev --kubernetes-version=v1.18.8+vmware.1 --vsphere-controlplane-endpoint-ip 192.168.2.11
tkg upgrade cluster tkg-cluster-01


There has been a lot of demand for TKG on VMware Cloud on AWS, so that is where I have spent the bulk of my testing, not to mention it is where the appliance was originally developed. You can also deploy the TKG Demo Appliance in an on-premises vSphere environment running 6.7 Update 3 or newer.



Filed Under: Kubernetes, VMware Cloud on AWS, VMware Tanzu, vSphere 6.7, vSphere 7.0 Tagged With: Tanzu Kubernetes Grid, VMware Cloud on AWS, vSphere 6.7, vSphere 7.0



Author

William Lam is a Senior Staff Solution Architect working in the VMware Cloud team within the Cloud Services Business Unit (CSBU) at VMware. He focuses on Automation, Integration and Operations for VMware Cloud Software-Defined Datacenters (SDDC).

