virtuallyGhetto


Kubernetes

How to deploy Knative to a Tanzu Kubernetes Grid (TKG) Cluster on both vSphere with Tanzu and TKG Multi-Cloud?

11/23/2020 by William Lam Leave a Comment

This weekend I spent some time installing Knative, an open source framework that is built on top of Kubernetes. Knative is made up of two core components: Serving and Eventing. This quote from Ram Gopinathan, Principal Technology Architect at T-Mobile, sums up Knative quite nicely:

Knative helps our developers focus on building the business logic rather than worrying about building low-level platform capabilities such as build, deploy, autoscaling, monitoring, and observability.

There are a number of tutorials online for setting up Knative, most of which use Kubernetes in Docker (KinD) for easy local development. Since I have been spending quite a bit of time lately with both our vSphere with Tanzu and Tanzu Kubernetes Grid (TKG) Multi-Cloud solutions, which both support deploying conformant and production-grade Kubernetes (K8s) Clusters called TKG Guest Clusters, I figured I might as well learn how to install Knative on these infrastructures.

The instructions below focus on deploying the Knative Serving components. Once you have that set up, it is easy to deploy the Eventing components by following the official Knative documentation.
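
For reference, installing the Serving components onto a TKG cluster follows the upstream Knative instructions: apply the CRDs, the core components, and a networking layer. Here is a minimal sketch, assuming the v0.19.0 release and Contour as the networking layer (the version and ingress choice are assumptions, not necessarily what I used):

# Knative Serving CRDs and core components (release version is an assumption)
kubectl apply -f https://github.com/knative/serving/releases/download/v0.19.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.19.0/serving-core.yaml

# A networking layer is required; Contour is one option (the choice is an assumption)
kubectl apply -f https://github.com/knative/net-contour/releases/download/v0.19.0/contour.yaml
kubectl apply -f https://github.com/knative/net-contour/releases/download/v0.19.0/net-contour.yaml

# Point Knative Serving at the Contour ingress class
kubectl patch configmap/config-network -n knative-serving --type merge \
  -p '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}'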



Filed Under: Cloud Native, Kubernetes, VMware Tanzu Tagged With: Knative, Kubernetes, Tanzu Kubernetes Grid, vSphere with Tanzu

Automating kubectl-vsphere login for vSphere with Tanzu

11/12/2020 by William Lam 2 Comments

Before you can start deploying workloads to your vSphere with Tanzu Cluster, you need to first download the vSphere Plugin for Kubectl and then use it to log in to your Supervisor Cluster, which generates a Kubernetes (K8s) context that is stored in .kube/config.

Here is an example of using the vSphere Plugin for Kubectl:

./kubectl-vsphere login --server=10.10.0.64 -u *protected email* --insecure-skip-tls-verify


For interactive sessions this is fine: after successfully entering your password when prompted, you can switch to the correct K8s context and begin your workload deployment. For folks interested in automation, the one downside today is that the plugin does not provide a way to specify your password, either as a command-line argument or by reading it from a configuration file.

I have seen this topic come up a few times, both internally and externally, from folks wanting to automate the end-to-end deployment of a Tanzu Kubernetes Grid (TKG) Cluster who have gotten stuck trying to figure out a way around this required manual step.
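
Until the plugin offers a non-interactive option, one workaround (and the approach hinted at by this post's tags) is to wrap the login in an expect script that answers the password prompt. A minimal sketch, assuming the credentials are passed in via environment variables with hypothetical names and that the prompt text is "Password:":

#!/bin/bash
# Hedged sketch: drive the interactive kubectl-vsphere password prompt with expect.
# VSPHERE_SUPERVISOR, VSPHERE_USER and VSPHERE_PASS are hypothetical variable names.
expect <<'EOF'
  spawn ./kubectl-vsphere login --server=$env(VSPHERE_SUPERVISOR) -u $env(VSPHERE_USER) --insecure-skip-tls-verify
  expect "Password:"
  send "$env(VSPHERE_PASS)\r"
  expect eof
EOF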



Filed Under: Automation, Kubernetes, VMware Tanzu Tagged With: expect, kubectl, vSphere with Tanzu

Using Terraform to deploy a Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu 

11/10/2020 by William Lam 2 Comments

A few months back I saw that HashiCorp had released a new Kubernetes (K8s) Provider for Terraform, currently in an Alpha state, which enables users to deploy K8s resources using the popular Infrastructure-as-Code (IaC) tool. I thought it would be pretty cool if it worked with our vSphere with Tanzu solution, since the Tanzu Kubernetes Grid (TKG) Service uses Cluster API via a custom VM Operator to deploy TKG Guest Clusters, which is just a fancy way of saying it uses the K8s API to deploy more K8s 🙂

Setting up the new K8s provider was pretty straightforward, and after spending a few minutes figuring out how to convert my existing TKG YAML to the HCL format that Terraform understands, I was able to run a terraform "plan" but quickly ran into the following error:

failed: admission webhook "default.mutating.tanzukubernetescluster.run.tanzu.vmware.com" does not support dry run

It looks like our tanzukubernetescluster admission webhook does not currently support dry run operations, which are quite useful and common when using Terraform. I figured this was the end of that idea and ended up filing a feature enhancement internally to add this support in the future, as I can see it being quite useful for our customers.
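
As background, Kubernetes only forwards dry run requests to admission webhooks that declare sideEffects: None (or NoneOnDryRun); anything else is rejected, which is exactly the error above. A quick, hedged way to inspect what the TKG webhooks declare (the grep filter is an assumption about the object names):

# List each mutating webhook configuration along with its declared sideEffects
kubectl get mutatingwebhookconfigurations \
  -o custom-columns='NAME:.metadata.name,SIDE_EFFECTS:.webhooks[*].sideEffects' | grep -i tanzu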

After finishing up a recent pet project of getting a fully functional vSphere with Tanzu running on a homelab budget with just 32GB of memory, I decided to take another look at this and discovered that the tweak required to get it working was super trivial, literally a single-line change.

Disclaimer: This is not officially supported by VMware, use at your own risk.



Filed Under: Automation, Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: Kubernetes, Tanzu Kubernetes Grid, Terraform, vSphere with Tanzu

Complete vSphere with Tanzu homelab with just 32GB of memory!

11/09/2020 by William Lam 30 Comments

Since the release of vSphere 7.0 Update 1, the demand and interest from the community in getting hands-on with vSphere with Tanzu and the new simplified networking solution have been non-stop. Most folks are either upgrading their existing homelab or looking to purchase new hardware that can better support the new features of the vSphere 7.0 release.

Although vSphere with Tanzu now has a flavor that does not require NSX-T, which helps reduce the barrier to getting started, it still has some networking requirements that may not be easily met in all lab environments. In fact, this was the primary reason I started looking into this, since my personal homelab network is very basic and I do not have, nor want, a switch that supports multiple VLANs, which is one of the requirements for vSphere with Tanzu.

While investigating a potential solution, which included way too MANY hours of debugging and troubleshooting, I also thought about the absolute minimum amount of resources I could get away with after putting everything together. To be clear, my homelab consists of a single Supermicro E200-8D with 128GB of memory; it has served me well over the years and I highly recommend it for anyone who can fit it into their budget. With that said, I set out with a pretty aggressive goal of using something that is common in VMware homelabs: an Intel NUC with just 32GB of memory.

Here is the hardware BOM (similar hardware should also work):

  • Intel NUC 10i7FNH
  • 32GB memory
  • Single 250GB M.2 NVMe SSD
    • The NUC can support two SSDs (M.2 + SATA); you can always go larger

Here is the software BOM:

  • vCenter Server Appliance 7.0 Update 1 Build 16860138
  • ESXi 7.0 Update 1 Build 16850804
  • HAProxy v0.1.8 OVA
  • Photon OS 3.0 OVA

Note: The Intel NUCs (Gen 6 to 10) can all support up to 64GB of memory and this is one of the best upgrades you can give yourself, but if you only have 32GB of memory, this will also work.

The final solution comprises the following:

  • 1 x vCenter Server Appliance (VCSA) running on the Intel NUC self-managing the ESXi host
  • VMFS storage will be used instead of vSAN to reduce memory footprint (If you have 64GB of memory, recommend using vSAN)
  • Onboard NIC will be used for all traffic and will be attached to a Distributed Virtual Switch (VDS)
    • 3 x Distributed Portgroups will be configured on top of your existing LAN network; the latter two will be routed through the Photon OS Router VM
      • Management - Existing LAN network
      • Frontend - 10.10.0.0/24
      • Workload - 10.20.0.0/24
  • 1 x vSphere with Tanzu Cluster enabled with Workload Management
  • 1 x HAProxy VM deployed using 3-NIC configuration
  • 1 x Photon OS Linux VM used as a Router for IP forwarding and, optionally, a DNS server if you do not already have one (a minimal IP forwarding sketch follows this list)
  • 9 x IP Addresses in total will be required from your local LAN network
    • 4 x IP Addresses, which should map to the following hostnames or similar
      • esxi-01.tanzu.local
      • vcsa.tanzu.local
      • router.tanzu.local
      • haproxy.tanzu.local
    • 5 x IP Addresses in a consecutive block (e.g. 192.168.30.20-192.168.30.24) will be needed for the Supervisor Control Plane VMs
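
Since the Photon OS VM simply routes between your existing LAN and the Frontend/Workload portgroups, its configuration is quite small. Here is a minimal sketch of the IP forwarding piece, assuming the router VM's LAN address is 192.168.30.2 (the address is an assumption for illustration):

# On the Photon OS Router VM: enable IPv4 forwarding now and persist it across reboots
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/50-ip-forward.conf

# On your LAN clients (or LAN gateway): add static routes so the Frontend and
# Workload networks are reachable via the router VM's LAN address
ip route add 10.10.0.0/24 via 192.168.30.2
ip route add 10.20.0.0/24 via 192.168.30.2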


As part of this solution, I have automated as many of the tasks as possible, and all scripts used for this solution can be found at https://github.com/lamw/vsphere-with-tanzu-homelab-scripts, which I will be referencing throughout the instructions. There are also a number of techniques and tricks I am using to reduce the overall memory footprint for setting up vSphere with Tanzu; obviously, these should not be used in a production-grade environment.

I also want to give a huge thanks to Timo Sugliani for all of his help with the networking questions/challenges, and to Mayank B. from the vSphere with Tanzu Engineering team, who helped with the debugging and ultimately made this solution possible.


Filed Under: Home Lab, Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: HAProxy, Intel NUC, Kubernetes, vSphere with Tanzu

Configure network proxy using YTT with Tanzu Kubernetes Grid (TKG)

11/04/2020 by William Lam 1 Comment

I was doing some work with Tanzu Kubernetes Grid (TKG) 1.2 using my TKG Demo Appliance Fling, and the environment I was working in did not have direct internet access, which is usually the case for most production environments. I needed outbound connectivity from the TKG Worker Nodes so that they could pull down a set of containers as part of attaching to our Tanzu Mission Control (TMC) service.

Luckily, there was an HTTP proxy server that I could use for this connectivity; I just needed to update the TKG templates so the TKG Worker Nodes would have the proxy settings. In the past, applying customizations such as adding a network proxy to TKG meant manually editing the TKG Dev/Prod YAML files. As previously shared, Tanzu Kubernetes Grid (TKG) 1.2 now uses the YAML Templating Tool (ytt) for customizing TKG plans.

Although the TKG documentation provides a YTT template example, it did not cover the TKG Worker Nodes, which is what I needed, and I also needed to add a command to postKubeadmCommands for the network proxy to be activated. The issue is that this section no longer exists in the base template like it did in previous versions of TKG, so some additional YTT annotations were required to get this working.

Here is the complete working ~/.tkg/providers/infrastructure-vsphere/ytt/proxy_nameserver.yaml template that adds the respective HTTP(S) proxy server and No Proxy settings.

#@ load("@ytt:overlay", "overlay")
 
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add HTTP_PROXY to containerd configuration file
    #@overlay/append
    - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/match missing_ok=True
    postKubeadmCommands:
    #@overlay/append
    - systemctl restart containerd
 
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      #! Add HTTP_PROXY to containerd configuration file
      #@overlay/append
      - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/match missing_ok=True
      postKubeadmCommands:
      #@overlay/append
      - systemctl restart containerd
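
If you want to sanity-check the overlay before creating a cluster, the ytt CLI can render it against a stand-in plan file. Here is a minimal sketch using a toy base file with just enough structure for the overlay to match against (the sample content and paths are assumptions, not the real TKG plan templates):

# Toy base file containing one KubeadmControlPlane and one KubeadmConfigTemplate document
cat > /tmp/tkg-plan-sample.yaml <<'EOF'
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    - echo existing-control-plane-command
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      - echo existing-worker-command
EOF

# Render the overlay on top of the sample and confirm the proxy commands and the
# new postKubeadmCommands sections show up in the output
ytt -f /tmp/tkg-plan-sample.yaml -f ~/.tkg/providers/infrastructure-vsphere/ytt/proxy_nameserver.yaml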


Filed Under: Kubernetes, VMware Tanzu Tagged With: http proxy, proxy, Tanzu Kubernetes Grid

Custom Virtual Machine Class Types with vSphere with Tanzu

10/30/2020 by William Lam 1 Comment

When you deploy a Tanzu Kubernetes Grid (TKG) Cluster using the integrated TKG Service in vSphere with Tanzu, you can specify a Virtual Machine Class Type, which determines the amount of CPU and Memory resources allocated to the Control Plane and/or Worker Node VMs of your TKG Cluster.

Here is a sample YAML specification that uses the best-effort-xsmall VM class type for both the Control Plane and Worker Nodes, but you can certainly override this and choose different classes based on your requirements.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: william-tkc-01
  namespace: primp-industries
spec:
  distribution:
    version: v1.17.8+vmware.1-tkg.1.5417466
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 193.0.2.0/16
      serviceDomain: managedcluster.local
      services:
        cidrBlocks:
        - 195.51.100.0/12
  topology:
    controlPlane:
      class: best-effort-xsmall
      count: 1
      storageClass: vsan-default-storage-policy
    workers:
      class: best-effort-xsmall
      count: 3
      storageClass: vsan-default-storage-policy
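
Assuming you have already logged in to the Supervisor Cluster with the vSphere Plugin for Kubectl and switched to the vSphere Namespace context, applying the specification is a standard kubectl workflow (the context and file names below are assumptions):

# Switch to the vSphere Namespace context created by the kubectl-vsphere login
kubectl config use-context primp-industries

# Create the TKG Cluster and watch the Control Plane / Worker Node VMs roll out
kubectl apply -f william-tkc-01.yaml
kubectl get tanzukubernetescluster william-tkc-01 -w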

Today, there are a total of 16 VM Class types that you can select from; however, these are not customizable, which is something that has been coming up more recently. The vSphere with Tanzu team is aware of this request and is working on a solution that not only makes customizing CPU and Memory easier but also supports storage customization. As you can see from the table below, 16GB is the only supported configuration today.


In the meantime, if you need a supported path for customizing your TKG Guest Clusters, one option is to use the TKG Standalone / Multi-Cloud CLI, which can be used with a vSphere with Tanzu Cluster. You will need to deploy an additional TKG Management Cluster (basically a few VMs), but once you have that, you can override the CPU, Memory and Storage of both the Control Plane and Worker Nodes using the following environment variables (a short usage sketch follows the list):

  • VSPHERE_WORKER_NUM_CPUS
  • VSPHERE_WORKER_MEM_MIB
  • VSPHERE_WORKER_DISK_GIB
  • VSPHERE_CONTROL_PLANE_NUM_CPUS
  • VSPHERE_CONTROL_PLANE_MEM_MIB
  • VSPHERE_CONTROL_PLANE_DISK_GIB
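
For example, with the standalone TKG CLI these variables can simply be exported before creating the cluster; the sizing values and cluster name below are illustrative only:

# Worker Node sizing (values are just examples)
export VSPHERE_WORKER_NUM_CPUS=4
export VSPHERE_WORKER_MEM_MIB=8192
export VSPHERE_WORKER_DISK_GIB=80

# Control Plane sizing (values are just examples)
export VSPHERE_CONTROL_PLANE_NUM_CPUS=2
export VSPHERE_CONTROL_PLANE_MEM_MIB=4096
export VSPHERE_CONTROL_PLANE_DISK_GIB=40

# Deploy the TKG Guest Cluster using the dev plan
tkg create cluster tkg-cluster-01 --plan=dev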

If you are interested, the easiest way to get started is with my TKG Demo Appliance Fling, which was recently updated to the latest TKG 1.2 release and supports K8s v1.19, which is currently not available on vSphere with Tanzu.

Now, you might ask, would it be possible to create your own custom VM class types using vSphere with Tanzu? Well .... keep reading to find out 🙂

Disclaimer: This is not officially supported by VMware, use at your own risk. These custom changes can potentially impact upgrades or automatically be reverted upon the next update or upgrade. You have been warned.



Filed Under: Kubernetes, VMware Tanzu Tagged With: vSphere with Tanzu


Author

William Lam is a Senior Staff Solution Architect working in the VMware Cloud team within the Cloud Services Business Unit (CSBU) at VMware. He focuses on Automation, Integration and Operation of the VMware Cloud Software Defined Datacenters (SDDC).
