virtuallyGhetto

Search Results for: vSphere with Kubernetes

Is vSphere with Kubernetes available for evaluation? 

07/14/2020 by William Lam Leave a Comment

Yes. Given how frequently this question has come up, I thought it would be useful to share some more details on how you can start playing with the new vSphere with Kubernetes (K8s) capability, which was introduced as part of the vSphere 7.0 release. vSphere w/K8s requires NSX-T, and although vSphere (ESXi and the vCenter Server Appliance) has long supported a 60-day evaluation period, NSX-T historically did not offer any self-service evaluation. In addition, there has also been some confusion about how vSphere w/K8s is packaged today, since it is offered as part of the VMware Cloud Foundation (VCF) 4.0 SKU.

Putting aside the pricing and packaging aspects, customers can indeed evaluate vSphere w/K8s using one of the options below:

Option 1: 60 Day Eval

Sign up for the vSphere 7.0 (ESXi & VCSA) evaluation (https://my.vmware.com/en/web/vmware/evalcenter?p=vsphere-eval-7) and the NSX-T 3.0 evaluation (https://my.vmware.com/web/vmware/evalcenter?p=nsx-t-eval). After signing up, you will receive evaluation keys that can be used when setting up vSphere w/K8s. If you want to quickly go from 0 to Kubernetes, be sure to check out my vSphere with K8s Automation Lab Deployment, which can give you a running environment in under 30 minutes!

Option 2: 365 Day Eval

Sign up for VMUG Advantage, which includes VMUGEval and provides licenses for vSphere 7.0, NSX-T 3.0, VCF 4.0 and many other VMware products for an entire year of non-production usage. After signing up, you will receive license keys valid for one year which can then be used when setting up vSphere w/K8s. With VMUG Advantage, you can consume vSphere w/K8s using the "manual" method, using my vSphere with K8s Automation Lab Deployment, or using SDDC Manager (part of VCF 4.0) to automatically deploy the required SDDC infrastructure so that you can then enable vSphere w/K8s.

Here is a screenshot of my vSphere w/K8s environment, which was deployed using my vSphere with K8s Automation Lab Deployment script and the evaluation keys I had just signed up for!

Option 3: Infinite Day Eval

VMware Hands-on Labs is another great option: it is completely free and you only need a web browser! You can check out HOL-2113-01-SDC for more details.

Filed Under: Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: vSphere 7, vSphere with Kubernetes

Admin account for embedded Harbor registry in vSphere with Kubernetes

06/09/2020 by William Lam 2 Comments

After setting up a vSphere with Kubernetes Cluster, customers have the option of enabling a built-in private container registry that can be used with the Supervisor Cluster. This private container registry uses the popular open source Harbor solution, which is also a Cloud Native Computing Foundation (CNCF) project.


Although this is a convenient capability, one thing to be aware of is that the embedded Harbor registry is limited in functionality compared to a standalone Harbor deployment, and this is by design. When logging into Harbor with your vCenter SSO user, you will be able to perform basic operations such as pushing and pulling images from this registry. For customers that require additional functionality from Harbor, it is recommended that you set up an external Harbor instance, which can also serve as a common registry for both the Supervisor Cluster and any Tanzu Kubernetes Grid (TKG) Clusters that you may provision.
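For a rough idea of what those basic push/pull operations look like from a client machine, here is a minimal sketch run from a PowerShell prompt. The registry address and project name are placeholders; in the embedded registry, projects correspond to your vSphere Namespaces.

# Placeholder values -- substitute the FQDN/IP of your embedded Harbor registry
# and a project (vSphere Namespace) that your SSO user can publish to.
$registry = "registry.example.com"
$project  = "demo-namespace"

docker login $registry                                     # authenticate with your vCenter SSO credentials
docker tag nginx:latest "$registry/$project/nginx:latest"  # tag a local image for the registry
docker push "$registry/$project/nginx:latest"              # push the image
docker pull "$registry/$project/nginx:latest"              # pull it back down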

With that said, I have heard from a few folks who were interested in accessing the Harbor UI using the "admin" account, mostly from an exploration standpoint. The admin credentials for Harbor are dynamically generated each time the service is enabled and are stored as a K8s secret within the Supervisor Cluster. This means the admin password is unique for each environment, and the instructions below will show you how to obtain the credentials.

UPDATE (12/16/20) - I was informed by Engineering that the ability to read K8s secrets was actually a bug, and this has since been fixed in the latest release of vSphere with Tanzu. If you need the Harbor credentials, you will need to log in directly to the Supervisor Cluster from the VCSA (instructions have been updated below) to retrieve this information.
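For illustration only, once you are logged in to the Supervisor Cluster control plane, the general pattern for reading a Kubernetes secret and decoding its values from PowerShell looks like the sketch below. The namespace and secret names are placeholders; the actual names are environment-specific and are covered in the full instructions.

# Placeholder names -- the real namespace and secret for the embedded Harbor
# registry vary by environment.
$ns     = "vmware-system-registry-XXXXXXX"
$secret = "harbor-credentials"

# Dump the secret and base64-decode each data field
$data = (kubectl -n $ns get secret $secret -o json | Out-String | ConvertFrom-Json).data
$data.PSObject.Properties | ForEach-Object {
    "{0} = {1}" -f $_.Name, [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($_.Value))
}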

Disclaimer: This is not officially supported by VMware and the behaviors described below could change in the future without notice.

[Read more...] about Admin account for embedded Harbor registry in vSphere with Kubernetes

Filed Under: Cloud Native, VMware Tanzu, vSphere 7.0 Tagged With: Harbor, vSphere with Kubernetes

Setup custom login banner when logging into a vSphere with Kubernetes Cluster

05/20/2020 by William Lam Leave a Comment

While working on my PowerCLI module for enabling workload management for a vSphere with Kubernetes (K8s) Cluster, I came to discover a pretty cool feature that is only available when using the vSphere with K8s API to enable Workload Management on a vSphere Cluster.

As part of the enablement spec, there is a new property called login_banner. Taking a closer look, this property allows you to specify a custom message that would be displayed as part of the initial login to your vSphere with K8s Cluster using the vSphere kubectl plugin. This is similar to an SSH login banner which can be used to provide internal disclaimers and/or additional instructions for your end users.

Here is an example of what the login banner can look like. Yup, vSphere with K8s supports emojis, or rather, the terminal you are using to log in can potentially render emojis 😀


The good news is that I have already added this feature to the new New-WorkloadManagement function, and you can specify a message by adding the -LoginBanner parameter.

For those interested in rendering emojis within their banner, take a look at the following example; you can find the complete list of emoji Unicode code points here.

$LoginBanner = "
 
" + [char]::ConvertFromUtf32(0x1F973) + "vSphere with Kubernetes Cluster enabled by virtuallyGhetto " + [char]::ConvertFromUtf32(0x1F973) + "
 
"

Filed Under: Automation, Cloud Native, Kubernetes, vSphere 7.0 Tagged With: kubectl, Kubernetes, vSphere 7, vSphere with Kubernetes

Workload Management PowerCLI Module for automating vSphere with Kubernetes

05/19/2020 by William Lam 2 Comments

One of the last things on my to-do list after creating my Automated vSphere 7 and vSphere with Kubernetes Lab Deployment Script (which is still the quickest and most reliable way to get a fully deployed and configured environment for trying out vSphere with Kubernetes using Nested ESXi) was to also automate the enablement of Workload Management for a given vSphere Cluster.

There are two new vCenter Server REST APIs to be aware of as it pertains to vSphere with Kubernetes:

  • namespaces = Manages the lifecycle and access control to a vSphere Namespace
  • namespace-management = Despite the name, this refers to the lifecycle and management of a Workload Management Cluster (see the PowerCLI sketch right after this list)
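As a rough sketch of how the namespace-management endpoint can be reached from PowerCLI, the snippet below assumes an existing Connect-CisServer session (see Step 2 further down) and that the CIS service path mirrors the REST endpoint naming; treat the service path as an assumption rather than a documented name.

# Assumes you are already connected via Connect-CisServer (see Step 2 below).
# The service path below is an assumption based on the REST endpoint name.
$nsMgmtClusters = Get-CisService -Name "com.vmware.vcenter.namespace_management.clusters"

# List vSphere Clusters along with their Workload Management status
$nsMgmtClusters.list()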

I also have to mention that Vikas Shitole, who works on vCenter Server, has a fantastic blog series covering various parts of the new vSphere with Kubernetes API, along with Python examples, if you want to dive further. Since Vikas has done a great job covering Python, I figured I would demonstrate how to consume these new vSphere with Kubernetes APIs using PowerCLI, which many of our customers use for automation.

I have created a new WorkloadManagement.psm1 PowerCLI module which includes the following functions:

  • Get-WorkloadManagement
  • New-WorkloadManagement
  • Remove-WorkloadManagement

Below are the two steps required to get started with the Workload Management PowerCLI Module.

Step 1 - Install the WorkloadManagement PowerCLI Module by running the following command:

Install-Module VMware.WorkloadManagement

Step 2 - A connection to the vCenter REST API endpoint using the Connect-CisServer cmdlet is required for enabling and disabling a Workload Management Cluster:

Connect-CisServer -Server pacific-vcsa-2.cpbu.corp -User *protected email* -Password VMware1!

A connection to vCenter Server using the Connect-VIServer cmdlet is only required if you wish to retrieve information about an existing Workload Management Cluster:

Connect-VIServer -Server pacific-vcsa-2.cpbu.corp -User *protected email* -Password VMware1!
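Putting the two steps together, a minimal end-to-end sketch might look like the following. The server name comes from the examples above; the credential values are placeholders for your own vCenter SSO account.

# Placeholder credentials -- substitute your own vCenter SSO account
$vc   = "pacific-vcsa-2.cpbu.corp"
$user = "administrator@vsphere.local"   # placeholder SSO user
$pass = "VMware1!"

Connect-CisServer -Server $vc -User $user -Password $pass   # REST API endpoint (enable/disable)
Connect-VIServer  -Server $vc -User $user -Password $pass   # vCenter endpoint (retrieval)

# List any existing Workload Management Clusters
Get-WorkloadManagement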

[Read more...] about Workload Management PowerCLI Module for automating vSphere with Kubernetes

Filed Under: Automation, PowerCLI, VMware Tanzu, vSphere 7.0 Tagged With: vSphere 7, vSphere with Kubernetes, Workload Management

Troubleshooting tips for configuring vSphere with Kubernetes

05/05/2020 by William Lam 9 Comments

With more and more folks trying out the new vSphere with Kubernetes capability, I have seen an uptick in questions, both internally and externally, around the initial setup of the infrastructure required for vSphere with Kubernetes as well as the configuration of a vSphere Cluster for Workload Management.

One of the most common questions is why no vSphere Clusters are listed, or why a specific vSphere Cluster shows up as Incompatible. There are a number of reasons this can occur, including vCenter Server not being able to communicate with NSX-T Manager to retrieve the list of NSX pre-checks, which would cause the list to either be empty or show clusters as Incompatible. Not having proper time sync between vCenter Server and NSX-T can also manifest in similar behavior, among other infrastructure issues.
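As one quick, rough sanity check on the time-sync angle, the PowerCLI sketch below compares each ESXi host's clock against the machine running the script (it assumes an existing Connect-VIServer session). This only looks at the ESXi hosts; verifying NTP on the VCSA and NSX-T Manager appliances themselves is just as important.

# Compare each ESXi host's clock against the machine running PowerCLI.
# Large offsets are a hint that NTP is not configured consistently.
foreach ($vmhost in Get-VMHost) {
    $dts      = Get-View $vmhost.ExtensionData.ConfigManager.DateTimeSystem
    $hostTime = $dts.QueryDateTime()   # host time; ToUniversalTime() below normalizes local vs UTC handling
    $skew     = ($hostTime.ToUniversalTime() - (Get-Date).ToUniversalTime()).TotalSeconds
    "{0}: {1:N1} seconds of skew" -f $vmhost.Name, [math]::Abs($skew)
}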


Having run into some of these issues myself while developing my automation script, I figured it might be useful to share some of the troubleshooting tips I have used to figure out what is going on, whether during the initial setup or while actually deploying workloads using vSphere with Kubernetes.

[Read more...] about Troubleshooting tips for configuring vSphere with Kubernetes

Filed Under: Kubernetes, vSphere 7.0 Tagged With: Kubernetes, vSphere 7, vSphere with Kubernetes

Deploying a minimal vSphere with Kubernetes environment

04/29/2020 by William Lam 9 Comments

A very useful property of automation is the ability to experiment. After creating my vSphere 7 with Kubernetes Automation Lab Deployment Script, I wanted to see what the minimal footprint would be, both in terms of physical resources and the underlying components, while still giving me a fully functional vSphere with Kubernetes environment.

Before diving in, let me give you the usual disclaimer 😉

Disclaimer: This is not officially supported by VMware and you can potentially run into issues if you deviate from the official requirements which the default deployment script adheres to out of the box.

In terms of the physical resources, you will need a system that can provision up to 8 vCPU (this can be further reduced, see Additional Resource Reduction section below), 92GB memory and 1TB of storage (thin provisioned).


This translates to the following configuration within the script:

  • 1 x Nested ESXi VM with 4 vCPU and 36GB memory
  • 1 x VCSA with 2 vCPU and 12GB memory
  • 1 x NSX-T Unified Appliance with 4 vCPU and 12GB memory
  • 1 x NSX-T Edge with 8 vCPU and 12GB memory

Note: You can probably reduce the memory footprint of the ESXi VM further depending on your usage, and the VCSA is using the default values for "Tiny", so you can probably trim its memory down a bit more. A hypothetical mapping of this sizing to script variables is sketched below.
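Purely as an illustration, the sizing above might map onto deployment-script settings along these lines; the variable names are hypothetical, so match them to the actual variables used in the script.

# Hypothetical variable names for illustration only
$NestedESXiCount    = 1
$NestedESXivCPU     = 4
$NestedESXivMEM     = 36       # GB
$VCSADeploymentSize = "tiny"   # 2 vCPU / 12GB memory
$NSXTMgrvCPU        = 4
$NSXTMgrvMEM        = 12       # GB
$NSXTEdgevCPU       = 8
$NSXTEdgevMEM       = 12       # GB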

Another benefit of this approach is that reducing the number of ESXi VMs required also speeds up the deployment: in just 35 minutes, you can have the complete infrastructure fully stood up and configured to try out vSphere with Kubernetes!


The other trick I leveraged to reduce the amount of resources is changing the default number of Supervisor Control Plane VMs required for enabling vSphere with Kubernetes. By default, three of these VMs are deployed as part of setting up the Supervisor Cluster; however, I found a way to tell the Workload Control Plane (WCP) to deploy only two 🙂


This minimal deployment of vSphere with Kubernetes has already been incorporated into my vSphere with Kubernetes deployment script, but it does require altering several specific settings. You can find the instructions below.

[Read more...] about Deploying a minimal vSphere with Kubernetes environment

Filed Under: Automation, Kubernetes, Not Supported, VMware Tanzu, vSphere 7.0 Tagged With: vSphere 7, vSphere with Kubernetes
