virtuallyGhetto


vSphere 7

Troubleshooting tips for configuring vSphere with Kubernetes

05/05/2020 by William Lam 9 Comments

With more and more folks trying out the new vSphere with Kubernetes capability, I have seen an uptick in questions, both internally and externally, about the initial setup of the required infrastructure as well as the configuration of a vSphere Cluster for Workload Management.

One of the most common questions is why no vSphere Clusters are listed, or why a specific vSphere Cluster shows up as Incompatible. There are a number of reasons this can occur, including vCenter Server not being able to communicate with NSX-T Manager to retrieve the list of NSX pre-checks, which can cause the list to be empty or the cluster to be flagged as incompatible. A lack of proper time sync between vCenter Server and NSX-T can manifest in similar behavior, among other infrastructure issues.


Having run into some of these issues myself while developing my automation script, I figured it might be useful to share some of the troubleshooting tips I have used to work out what is going on, whether during the initial setup or when actually deploying workloads using vSphere with Kubernetes.
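
As a quick sanity check on the time sync angle (my own suggestion rather than something quoted from the full post), PowerCLI can ask vCenter Server for its current time straight from the vSphere API, which you can then compare against the clock on the NSX-T Manager. The hostname and credentials below are placeholders:

    # Connect to vCenter Server (replace with your own FQDN and credentials)
    Connect-VIServer -Server vcsa.example.com -User administrator@vsphere.local

    # Ask the vSphere API for vCenter Server's current time (UTC)
    $si = Get-View ServiceInstance
    $si.CurrentTime()

    # Compare the value above with the time reported by NSX-T Manager and
    # confirm both appliances are pointed at the same NTP source

If the two appliances are more than a few seconds apart, fixing NTP is a good first step before digging any deeper.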

[Read more...] about Troubleshooting tips for configuring vSphere with Kubernetes


Filed Under: Kubernetes, vSphere 7.0 Tagged With: Kubernetes, vSphere 7, vSphere with Kubernetes

Changing the default size of the ESX-OSData volume in ESXi 7.0

05/02/2020 by William Lam 12 Comments

ESXi 7.0 introduces a new partition scheme, which also brings a new set of storage requirements. These changes are explained in the official documentation here, and VMware KB 77009 contains some additional helpful information. Storage changes are never easy, but this one was necessary not only to better support current capabilities but, more importantly, to set up the foundation for future ESXi capabilities.

The biggest change to the partition layout is the consolidation of the VMware Tools Locker, Core Dump and Scratch partitions into a new ESX-OSData volume (based on VMFS-L). This new volume can vary in size (up to 138GB) depending on a number of factors, including the type of ESXi boot media (USB, SD card, local disk) and the size of the device itself, as explained in the official documentation.

From comments on Twitter and Reddit, and from direct inquiries I have received, this new behavior seems to have the biggest impact on smaller homelabs where a fresh install of ESXi 7.0 has been performed. Folks have shared that their ESX-OSData volume takes up 120GB, which can be quite significant on the smaller disks that are common in homelabs. I normally install ESXi on a USB device and also use vSAN, which behaves differently, and I have not yet upgraded my physical ESXi host (E200-8D) to 7.0.

I performed a fresh installation of ESXi 7.0 (running as a Nested ESXi VM) configured with 1TB of storage, and here is what the filesystem layout now looks like:


We can see that the ESX-OSData volume takes up ~119.75GB, which is not too bad for a 1TB volume, but I can understand this may not be ideal if you have something smaller, such as a 250GB to 512GB disk. Due to the size of the local device, the boot options mentioned in the KB would not help, and I was curious whether the ESX-OSData volume size could be made configurable. After some research, it looks like the size of ESX-OSData can be specified using an ESXi boot option (entered via SHIFT+O during the initial boot) called autoPartitionOSDataSize.
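
For example, my understanding from the full write-up is that the value is supplied in MB (treat that as an assumption to verify against your build), so pressing SHIFT+O during boot and appending something along the lines of autoPartitionOSDataSize=8192 to the existing boot command would cap the ESX-OSData volume at roughly 8GB.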

UPDATE (12/17/20) - Official support for specifying the size of ESX-OSData was added in ESXi 7.0 Update 1c with a new ESXi kernel boot option called systemMediaSize, which takes one of four values:

  • min = 25GB
  • small = 55GB
  • default = 138GB (default behavior)
  • max = Consumes all available space

If you do not need or do not have 138GB for ESX-OSData, you can override the default behavior by appending this option with the desired value (e.g. systemMediaSize=min). It is worth noting that with this setting, the smallest ESX-OSData volume you can configure is 25GB. For homelabs or environments that require less than this, you would have to fall back to the autoPartitionOSDataSize parameter, which is not officially supported, as mentioned in the disclaimer below.

Disclaimer: This may not be officially supported by VMware as it deviates from the system defaults and can have other unintended behaviors. Use at your own risk.

[Read more...] about Changing the default size of the ESX-OSData volume in ESXi 7.0


Filed Under: ESXi, Home Lab, vSphere 7.0 Tagged With: ESX-OSData, ESXi 7.0, vSphere 7

Deploying a minimal vSphere with Kubernetes environment

04/29/2020 by William Lam 9 Comments

A very useful property of automation is the ability to experiment. After creating my vSphere 7 with Kubernetes Automation Lab Deployment Script, I wanted to see what the minimal footprint would be, in terms of both the physical resources and the underlying components, that would still give me a fully functional vSphere with Kubernetes environment.

Before diving in, let me give you the usual disclaimer 😉

Disclaimer: This is not officially supported by VMware and you can potentially run into issues if you deviate from the official requirements which the default deployment script adheres to out of the box.

In terms of the physical resources, you will need a system that can provision up to 8 vCPU (this can be further reduced, see Additional Resource Reduction section below), 92GB memory and 1TB of storage (thin provisioned).


which translates to the following configuration within the script:

  • 1 x Nested ESXi VM with 4 vCPU and 36GB memory
  • 1 x VCSA with 2 vCPU and 12GB memory
  • 1 x NSX-T Unified Appliance with 4 vCPU and 12GB memory
  • 1 x NSX-T Edge with 8 vCPU and 12GB memory

Note: You can probably reduce the memory footprint of the ESXi VM further depending on your usage, and since the VCSA uses the default values for "Tiny", you can likely trim its memory down a bit more as well.

Another benefit of this approach is that reducing the number of ESXi VMs required also speeds up the deployment: in just 35 minutes, you can have the complete infrastructure fully stood up and configured to try out vSphere with Kubernetes!


The other trick I leveraged to reduce the amount of resources is changing the default number of Supervisor Control Plane VMs required for enabling vSphere with Kubernetes. By default, three of these VMs are deployed as part of setting up the Supervisor Cluster; however, I found a way to tell the Workload Control Plane (WCP) to deploy only two 🙂


This minimal deployment of vSphere with Kubernetes has already been incorporated into my vSphere with Kubernetes deployment script, but it does require altering several specific settings. You can find the instructions below.

[Read more...] about Deploying a minimal vSphere with Kubernetes environment


Filed Under: Automation, Kubernetes, Not Supported, VMware Tanzu, vSphere 7.0 Tagged With: vSphere 7, vSphere with Kubernetes

Heads Up – Nested ESXi crashes in ESXi 7.0 running on older CPUs

04/17/2020 by William Lam 27 Comments

Thanks to Patrik Kernstock, who works in our Technical Support organization at VMware, for making me aware of an issue related to Nested ESXi running on an ESXi host that has been upgraded to ESXi 7.0. Several folks in the community have noticed that after upgrading their 7th Gen Intel NUC, deploying a Nested ESXi VM and powering on an inner guest OS causes the Nested ESXi VM to crash.

Upon further investigation, it looks like this is not specific to the Intel NUC platform but rather to a specific generation of Intel Sky Lake-based CPUs, which is why some customers are noticing this effect on their 7th Gen NUCs.

UPDATE (06/23/20) - ESXi 7.0b has just been released and contains the fix for the Nested ESXi VM crash. If you are using an Intel NUC 10, do not simply apply the patch, as the updated ne1000 VIB within the patch will override the existing Intel NIC driver, causing the network adapter to no longer function. It is recommended that you download the patch and use Image Builder to replace the default ne1000 VIB with the working Intel NIC driver before applying the update. To download the patch, please visit the VMware Patch Portal.
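
For those who have not used Image Builder before, a minimal PowerCLI sketch of the general workflow is shown below. Every depot file name, image profile name and VIB name here is a placeholder, and the exact driver to swap in depends on your NUC model, so treat this as an outline rather than the exact commands from the post:

    # File, profile and package names below are placeholders - adjust for your environment
    Add-EsxSoftwareDepot .\ESXi-7.0b-patch-depot.zip            # the ESXi 7.0b patch offline bundle
    Add-EsxSoftwareDepot .\intel-nic-driver-offline-bundle.zip  # offline bundle with the working Intel NIC driver

    # List the image profiles available in the depots, then clone the standard 7.0b profile
    Get-EsxImageProfile
    New-EsxImageProfile -CloneProfile "ESXi-7.0b-standard" -Name "ESXi-7.0b-nuc" -Vendor "homelab"

    # Swap the bundled ne1000 VIB for the working Intel NIC driver
    Remove-EsxSoftwarePackage -ImageProfile "ESXi-7.0b-nuc" -SoftwarePackage "ne1000"
    Add-EsxSoftwarePackage -ImageProfile "ESXi-7.0b-nuc" -SoftwarePackage "intel-nic-driver-vib-name"

    # Export the customized profile as an offline bundle that can then be used to patch the host
    Export-EsxImageProfile -ImageProfile "ESXi-7.0b-nuc" -ExportToBundle -FilePath .\ESXi-7.0b-nuc.zip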

The good news is that this issue has already been reported and we should have a fix in a future update of ESXi. In the meantime, you can still run Nested ESXi and Nested Virtualization on these affected CPUs; you just will not be able to power on inner-guest VMs. Big thanks to Patrik for helping out with the testing and triaging this internally.


Filed Under: Nested Virtualization, Not Supported, vSphere 7.0 Tagged With: ESXi 7.0, Kaby Lake, Nested ESXi, Sky Lake, vSphere 7

Automated vSphere 7 and vSphere with Kubernetes Lab Deployment Script

04/13/2020 by William Lam 91 Comments

I know many of you have been asking me about my vSphere with Kubernetes automation script, which I had been sharing snippets of on Twitter. For the past couple of weeks, I have been hard at work making the required changes between the vSphere 7 Beta and GA workflows, doing some additional testing and, of course, writing documentation. Hopefully the wait was worth it (I think it is), and if you enjoy the script or have benefited from it, please consider adding a 🌟 to the Github repo to show your support! Thanks and enjoy.

Had to make some updates to one of my vGhetto Automated Lab Deployment Scripts

💥44min to automate all required #vSphere7 infrastructure! 🤛🎤🥳

1 x VCSA 7.0
3 x ESXi + vSAN 7.0
1 x NSX-T 3.0 UA
1 x NSX-T Edge

Need to clean up #ProjectPacific wording but its working great! pic.twitter.com/ZInPgVgbGS

— William Lam (@lamw) April 4, 2020

The Github repository:

  • https://github.com/lamw/vghetto-vsphere-with-kubernetes-external-nsxt-automated-lab-deployment

Before getting started, please carefully read through the requirements section along with the complete sample end-to-end execution if you are new to vSphere with Kubernetes. You will need a VMware Cloud Foundation (VCF) 4.0 license before you can get started, and specifically an NSX-T Advanced license, which is one of the required parameters within the script. If you do not have access to a VCF 4 license, I strongly recommend taking part in the recent VMUG Advantage Homelab Group Buy effort, which I started as an easy way to get access to the latest VMware releases along with a nice 15% discount!

The script supports deploying either a standard vSphere 7 environment with just VCSA, ESXi and vSAN, or the complete solution including NSX-T to support vSphere with Kubernetes. For more details, please refer to the FAQ.


Filed Under: Automation, Kubernetes, Nested Virtualization, NSX, VMware Tanzu, VSAN, vSphere, vSphere 7.0 Tagged With: Kubernetes, NSX-T, Project Pacific, VMware Cloud Foundation, vSphere 7, vSphere with Kubernetes

New vCenter events for vSphere 7, VMware Cloud on AWS 1.10 and vSphere with Kubernetes

04/09/2020 by William Lam Leave a Comment

Last year I published a Github repo which lists all the vCenter Server Events for a default installation of both vSphere 6.7 Update 3 and VMware Cloud on AWS 1.9. Since every vSphere environment is unique, with various 2nd and 3rd party solutions, I also included a small PowerCLI script in that blog post that you can use to generate the list of events for your own deployment.
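
If you just want to pull the list for your own environment, a minimal PowerCLI sketch is shown below; this is not necessarily the same script referenced in the repo, just one way to dump the data out of the EventManager. The hostname is a placeholder:

    # Connect to vCenter Server (replace with your own FQDN and credentials)
    Connect-VIServer -Server vcsa.example.com -User administrator@vsphere.local

    # The EventManager description contains every event type registered with this vCenter Server
    $si = Get-View ServiceInstance
    $eventMgr = Get-View $si.Content.EventManager

    # Total number of registered vCenter event types
    $eventMgr.Description.EventInfo.Count

    # Export the event IDs and descriptions, e.g. for diffing across releases
    $eventMgr.Description.EventInfo | Select-Object Key, Description | Sort-Object Key | Export-Csv -Path vcenter-events.csv -NoTypeInformation

Note that the count will also include event types registered by any 2nd and 3rd party solutions in your environment, which is exactly why generating the list per deployment is useful.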

With the release of vSphere 7 and VMware Cloud on AWS 1.10, I thought it was time to update the repo and see what's new, which can be useful in a number of scenarios, including using these events with the popular vCenter Event Broker Appliance (VEBA) Fling.

  • vSphere 7 has a total of 1,778 vCenter events
  • VMware Cloud on AWS 1.10 has a total of 1,775 vCenter events

One thing worth pointing out with the introduction of vSphere with Kubernetes in vSphere 7 is that there are also 23 vCenter events specific to it, and I am sure more will come in the future. Below is a quick summary, which is also included in the Github repo.

[Read more...] about New vCenter events for vSphere 7, VMware Cloud on AWS 1.10 and vSphere with Kubernetes


Filed Under: Automation, VMware Cloud on AWS, VMware Tanzu, vSphere 7.0 Tagged With: event, Kubernetes, VMware Cloud on AWS, vSphere 7

