
virtuallyGhetto


VMware Tanzu

Automating kubectl-vsphere login for vSphere with Tanzu

11/12/2020 by William Lam 2 Comments

Before you can start deploying workloads to your vSphere with Tanzu Cluster, you first need to download the vSphere Plugin for Kubectl and then use it to log in to your Supervisor Cluster, which generates a Kubernetes (K8s) context file stored in .kube/config

Here is an example of using the vSphere Plugin for Kubectl:

./kubectl-vsphere login --server=10.10.0.64 -u *protected email* --insecure-skip-tls-verify


For interactive sessions this is fine: after successfully entering your password when prompted, you can switch to the correct K8s context and begin your workload deployment. For folks interested in automation, the one downside today is that the plugin does not provide a way to specify your password, either as a command-line argument or by reading from a configuration file.

I have seen this topic come up a few times, both internally and externally, from folks wanting to automate the end-to-end deployment of a Tanzu Kubernetes Grid (TKG) Cluster who have gotten stuck trying to figure out a way around this required manual step.
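One common way to work around this today (and the reason this post is tagged with expect) is to wrap the interactive prompt with the expect utility. Below is a minimal sketch; the server address, username, and the exact password prompt string are assumptions, and the password is passed in as the first argument to the script, so adjust for your own environment:

#!/usr/bin/expect -f
# Minimal sketch: automate the interactive kubectl-vsphere login prompt
# Usage: ./kubectl-vsphere-login.exp 'YourPassword'
set password [lindex $argv 0]
set timeout 60
spawn ./kubectl-vsphere login --server=10.10.0.64 -u administrator@vsphere.local --insecure-skip-tls-verify
expect "*assword:*"
send "$password\r"
expect eof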

[Read more...] about Automating kubectl-vsphere login for vSphere with Tanzu


Filed Under: Automation, Kubernetes, VMware Tanzu Tagged With: expect, kubectl, vSphere with Tanzu

Using Terraform to deploy a Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu 

11/10/2020 by William Lam 2 Comments

A few months back I saw that HashiCorp had released a new Kubernetes (K8s) Provider for Terraform, currently in Alpha state, which enables users to deploy K8s resources using the popular Infrastructure-as-Code (IaC) tool. I thought it would be pretty cool if this worked with our vSphere with Tanzu solution, since the Tanzu Kubernetes Grid (TKG) Service uses ClusterAPI via a custom VM Operator to deploy TKG Guest Clusters, which is just a fancy way of saying it uses the K8s API to deploy more K8s 🙂
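At a high level, the workflow I had in mind looks something like the sketch below: log in to the Supervisor Cluster with the vSphere Plugin for Kubectl, switch to the vSphere Namespace context that Terraform's Kubernetes provider will use, and then run the usual Terraform cycle. The server address, username, and namespace name (primp-industries) are placeholder values:

# Log in to the Supervisor Cluster and switch to the vSphere Namespace context
./kubectl-vsphere login --server=10.10.0.64 -u administrator@vsphere.local --insecure-skip-tls-verify
kubectl config use-context primp-industries

# Initialize the provider, preview, and apply the TKG cluster defined in HCL
terraform init
terraform plan -out=tkg.plan
terraform apply tkg.plan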

Setting up the new K8s provider was pretty straightforward, and after spending a few minutes figuring out how to convert my existing TKG YAML into the HCL format that Terraform requires, I was able to run a terraform "plan" but quickly ran into the following error:

failed: admission webhook "default.mutating.tanzukubernetescluster.run.tanzu.vmware.com" does not support dry run

It looks like our tanzukubernetescluster admission webhook does not currently support dry run operations, which are quite common when using Terraform. I figured this was the end of that idea and ended up just filing a feature enhancement internally to add this support in the future, as I can see it being quite useful for our customers.

After finishing up my recent pet project of getting a fully functional vSphere with Tanzu setup on a homelab budget with just 32GB of memory, I decided to take another look at this and discovered that the tweak required to get this working was super trivial, literally a single-line change.

Disclaimer: This is not officially supported by VMware, use at your own risk.

[Read more...] about Using Terraform to deploy a Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu 


Filed Under: Automation, Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: Kubernetes, Tanzu Kubernetes Grid, Terraform, vSphere with Tanzu

Complete vSphere with Tanzu homelab with just 32GB of memory!

11/09/2020 by William Lam 30 Comments

Since the release of vSphere 7.0 Update 1, the demand and interest from the community in getting hands-on with vSphere with Tanzu and the new simplified networking solution has been non-stop. Most folks are either upgrading their existing homelab or looking to purchase new hardware that can better support the new features of the vSphere 7.0 release.

Although vSphere with Tanzu now has a flavor that does not require NSX-T, which helps reduce the barrier to getting started, it still has some networking requirements that may not be easily met in all lab environments. In fact, this was the primary reason I started looking into this: my personal homelab network is very basic, and I do not have, nor want, a switch that supports multiple VLANs, which is one of the requirements for vSphere with Tanzu.

While investigating a potential solution, which included way too MANY hours of debugging and troubleshooting, I also thought about the absolute minimum amount of resources I could get away with after putting everything together. To be clear, my homelab consists of a single Supermicro E200-8D with 128GB of memory; it has served me well over the years and I highly recommend it for anyone who can fit it into their budget. With that said, I set out with a pretty aggressive goal of using something that is common in VMware homelabs: an Intel NUC with just 32GB of memory.

Here is the hardware BOM (similar hardware should also work):

  • Intel NUC 10i7FNH
  • 32GB memory
  • Single 250GB M.2 NVMe SSD
The NUC can support two SSDs (M.2 + SATA), so you can always go larger

Here is the software BOM:

  • vCenter Server Appliance 7.0 Update 1 Build 16860138
  • ESXi 7.0 Update 1 Build 16850804
  • HAProxy v0.1.8 OVA
  • Photon OS 3.0 OVA

Note: The Intel NUCs (Gen 6 to 10) can all support up to 64GB of memory and this is one of the best upgrades you can give yourself, but if you only have 32GB of memory, this will also work.

The final solution comprises the following:

  • 1 x vCenter Server Appliance (VCSA) running on the Intel NUC self-managing the ESXi host
  • VMFS storage will be used instead of vSAN to reduce memory footprint (If you have 64GB of memory, recommend using vSAN)
  • Onboard NIC will be used for all traffic and will be attached to a Distributed Virtual Switch (VDS)
3 x Distributed Portgroups will be configured on top of your existing LAN network; the latter two will be routed through our Photon OS Router VM
      • Management - Existing LAN network
Frontend - 10.10.0.0/24
      • Workload - 10.20.0.0/24
  • 1 x vSphere with Tanzu Cluster enabled with Workload Management
  • 1 x HAProxy VM deployed using 3-NIC configuration
1 x Photon OS Linux VM used as a Router for IP forwarding and, optionally, a DNS server if you do not already have one (see the router sketch after this list)
  • 9 x IP Addresses in total will be required from your local LAN network
4 x IP Addresses which should map to the following hostnames or similar
      • esxi-01.tanzu.local
      • vcsa.tanzu.local
      • router.tanzu.local
      • haproxy.tanzu.local
5 x IP Addresses in a consecutive block (e.g. 192.168.30.20-192.168.30.24) will be needed for the Supervisor Control Plane VMs
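For the Photon OS Router VM, the core of the configuration is simply enabling IP forwarding so that traffic from the Frontend (10.10.0.0/24) and Workload (10.20.0.0/24) portgroups can be routed to and from your existing LAN. A minimal sketch is shown below; the interface names and addresses are assumptions, and the full automation lives in the GitHub repository referenced further down:

# Enable IPv4 forwarding now and persist it across reboots (Photon OS 3.0)
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/50-ip-forward.conf

# Assumed NIC layout: eth0 = Management (LAN), eth1 = Frontend, eth2 = Workload
# Photon OS uses systemd-networkd, so each additional NIC gets a .network file
cat > /etc/systemd/network/10-eth1.network << 'EOF'
[Match]
Name=eth1

[Network]
Address=10.10.0.1/24
EOF

systemctl restart systemd-networkd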


As part of this solution, I have automated as many of the tasks as possible, and all scripts used for this solution can be found at https://github.com/lamw/vsphere-with-tanzu-homelab-scripts, which I will be referencing throughout the instructions. There are also a number of techniques and tricks I am using to reduce the overall memory footprint for setting up vSphere with Tanzu; obviously these should not be used in a Production-grade environment.

I also want to give a huge thanks to Timo Sugliani for all of his help with the networking questions/challenges, and to Mayank B. from the vSphere with Tanzu Engineering team, who helped with the debugging and ultimately made this solution possible.
[Read more...] about Complete vSphere with Tanzu homelab with just 32GB of memory!


Filed Under: Home Lab, Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: HAProxy, Intel NUC, Kubernetes, vSphere with Tanzu

Quick Tip – Rebooting VCSA causes vSphere with Tanzu to show ESXi hosts not licensed for Workload Management

11/08/2020 by William Lam 5 Comments

I had just set up a new vSphere with Tanzu environment running on my Intel NUC for an upcoming blog post, and after rebooting the vCenter Server Appliance (VCSA), I noticed the Workload Management UI threw the following licensing error:

None of the hosts connected to this vCenter are licensed for Workload Management.

This was quite strange since both the ESXi host and the VCSA had been installed less than a day ago, and I was using the default 60-day evaluation that is automatically built in.

The even weirder thing was that I was still able to perform operations using the Workload Management APIs, so I figured this must be a vSphere UI bug, but I could not find a way to get the UI to display correctly. After reaching out to some folks internally, a suggestion was made to use either incognito mode or another browser, and to my surprise, that fixed the problem! I suspect a cookie set during the initial Workload Management enablement, when going through the evaluation workflow, is what causes this unexpected early licensing check.

I have already filed an internal bug, but if you do hit this problem, simply clear your cookies for the VCSA and the Workload Management UI will display properly again.


Filed Under: VMware Tanzu Tagged With: vSphere with Tanzu

Configure network proxy using YTT with Tanzu Kubernetes Grid (TKG)

11/04/2020 by William Lam 1 Comment

I was doing some work with Tanzu Kubernetes Grid (TKG) 1.2 using my TKG Demo Appliance Fling, and the environment I was working in did not have direct internet access, which is usually the case for most Production environments. I needed outbound connectivity from the TKG Worker Nodes so that they could pull down a set of containers as part of attaching to our Tanzu Mission Control (TMC) service.

Luckily, there was an HTTP proxy server I could use for this connectivity, and we just need to update our TKG templates so the TKG Worker Nodes will have the proxy settings. In the past, applying customizations such as adding a network proxy to TKG meant manually editing the TKG Dev/Prod YAML files. As previously shared, Tanzu Kubernetes Grid (TKG) 1.2 now uses the YAML Templating Tool (YTT) for customizing TKG plans.

Although the TKG documentation provides a YTT template example, it did not actually cover the TKG Worker Nodes, which is what I needed, and I also needed to add a command to the postKubeadmCommands section for the network proxy to be activated. The issue is that this section no longer exists in the base template like it did in previous versions of TKG, so some additional YTT annotation is required to get this working.

Here is the complete working ~/.tkg/providers/infrastructure-vsphere/ytt/proxy_nameserver.yaml template that adds the respective HTTP(S) proxy server and No Proxy settings.

#@ load("@ytt:overlay", "overlay")
 
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add HTTP_PROXY to containerd configuration file
    #@overlay/append
    - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/match missing_ok=True
    postKubeadmCommands:
    #@overlay/append
    - systemctl restart containerd
 
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      #! Add HTTP_PROXY to containerd configuration file
      #@overlay/append
      - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/match missing_ok=True
      postKubeadmCommands:
      #@overlay/append
      - systemctl restart containerd
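Once a TKG cluster has been deployed with this overlay in place, a quick way to confirm the proxy settings actually landed is to SSH into one of the nodes and check the containerd drop-in file that the overlay creates:

cat /etc/systemd/system/containerd.service.d/http-proxy.conf
systemctl show containerd --property=Environment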


Filed Under: Kubernetes, VMware Tanzu Tagged With: http proxy, proxy, Tanzu Kubernetes Grid

Automating HAProxy VM deployment with 3-NIC configuration using PowerCLI

11/02/2020 by William Lam Leave a Comment

When deploying the HAProxy VM as part of vSphere with Tanzu, customers have the option of using either a 2-NIC or 3-NIC configuration. The default OVF Deployment Option is the 2-NIC design, called "Default", and the 3-NIC design is called "Frontend".

From an Automation point of view, you can use either OVFTool or PowerCLI to automate the deployment. For a 2-NIC example, you can refer to my Automated vSphere with Tanzu Lab Deployment Script. However, for the 3-NIC example, a few folks were running into some issues when using PowerCLI for the automation.

The main issue is that because the default OVF Deployment Option is the 2-NIC design (Default), the two additional OVF properties, frontend_ip and frontend_gateway, are basically hidden when processing the OVF properties with PowerCLI.

Note: You can view these optional properties by running the following OVFTool command: ovftool --X:enableHiddenProperties vmware-haproxy-v0.1.8.ova


Even if you specify the "Frontend" OVF Deployment Option, PowerCLI does not seem to have the logic to retrieve the other optional parameters, and hence they cannot be set as part of the initial deployment.
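Until PowerCLI can handle these hidden properties natively, one alternative is to let OVFTool inject them directly at deployment time. Here is a minimal sketch; the deployment option ID, network names, property values, and vi:// target are assumptions (the property keys follow the names mentioned above, and several other required HAProxy properties are omitted for brevity), so treat it as a starting point rather than a complete deployment command:

ovftool --X:enableHiddenProperties \
  --acceptAllEulas \
  --name=haproxy \
  --deploymentOption=frontend \
  --net:"Management"="Management" \
  --net:"Workload"="Workload" \
  --net:"Frontend"="Frontend" \
  --prop:frontend_ip="10.10.0.2/24" \
  --prop:frontend_gateway="10.10.0.1" \
  vmware-haproxy-v0.1.8.ova \
  'vi://administrator@vsphere.local@vcsa.tanzu.local/Datacenter/host/Cluster'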

[Read more...] about Automating HAProxy VM deployment with 3-NIC configuration using PowerCLI


Filed Under: Automation, PowerCLI, VMware Tanzu Tagged With: HAProxy, PowerCLI, vSphere with Tanzu

