Kubernetes

Configure network proxy using YTT with Tanzu Kubernetes Grid (TKG)

11/04/2020 by William Lam 1 Comment

I was doing some work with Tanzu Kubernetes Grid (TKG) 1.2 using my TKG Demo Appliance Fling, and the environment I was working in did not have direct internet access, which is usually the case for most Production environments. I needed outbound connectivity from the TKG Worker Nodes so that they could pull down a set of containers as part of attaching to our Tanzu Mission Control (TMC) service.

Luckily, there was an HTTP proxy server I could use for this connectivity; we just needed to update our TKG templates so that the TKG Worker Nodes would pick up the proxy settings. In the past, applying customizations such as adding a network proxy to TKG meant manually editing the TKG Dev/Prod YAML files. As previously shared, Tanzu Kubernetes Grid (TKG) 1.2 now uses the YAML Templating Tool (YTT) for customizing TKG plans.

Although the TKG documentation provides a YTT template example, it does not cover the TKG Worker Nodes, which is what I needed, and I also had to add a command to the postKubeadmCommands section for the network proxy settings to take effect. The issue is that this section no longer exists in the base template as it did in previous versions of TKG, so some additional YTT annotations were required to get this working.

Here is the complete working ~/.tkg/providers/infrastructure-vsphere/ytt/proxy_nameserver.yaml template that adds the respective HTTP(S) proxy server and No Proxy settings.

#@ load("@ytt:overlay", "overlay")
 
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add HTTP_PROXY to containerd configuration file
    #@overlay/append
    - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/match missing_ok=True
    postKubeadmCommands:
    #@overlay/append
    - systemctl restart containerd
 
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      #! Add HTTP_PROXY to containerd configuration file
      #@overlay/append
      - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/match missing_ok=True
      postKubeadmCommands:
      #@overlay/append
      - systemctl restart containerd
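To sanity-check the result, here is a minimal verification sketch. It assumes you can SSH to a deployed node (the capv user is typical for TKG nodes on vSphere, and the node IP is a placeholder); the commands simply read back the drop-in file created by the preKubeadmCommands above and confirm containerd picked up the environment after the restart.

# SSH to a deployed node (node IP is a placeholder)
ssh capv@<worker-node-ip>

# Drop-in file written by the preKubeadmCommands in the overlay above
cat /etc/systemd/system/containerd.service.d/http-proxy.conf

# Confirm containerd picked up the proxy environment after the restart
systemctl show containerd --property=Environment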


Filed Under: Kubernetes, VMware Tanzu Tagged With: http proxy, proxy, Tanzu Kubernetes Grid

Custom Virtual Machine Class Types with vSphere with Tanzu

10/30/2020 by William Lam Leave a Comment

When you deploy a Tanzu Kubernetes Grid (TKG) Cluster using the integrated TKG Service in vSphere with Tanzu, you can specify a Virtual Machine Class Type, which determines the amount of CPU and Memory resources allocated to the Control Plane and/or Worker Node VMs of your TKG Cluster.

Here is a sample YAML specification that uses the best-effort-xsmall VM class type for both the Control Plane and Worker Nodes, but you can certainly override this and choose different classes based on your requirements.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: william-tkc-01
  namespace: primp-industries
spec:
  distribution:
    version: v1.17.8+vmware.1-tkg.1.5417466
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 193.0.2.0/16
      serviceDomain: managedcluster.local
      services:
        cidrBlocks:
        - 195.51.100.0/12
  topology:
    controlPlane:
      class: best-effort-xsmall
      count: 1
      storageClass: vsan-default-storage-policy
    workers:
      class: best-effort-xsmall
      count: 3
      storageClass: vsan-default-storage-policy

Today, there are a total of 16 VM Class types that you can select from; however, these are not customizable, which is something that has been coming up more recently. The vSphere with Tanzu team is aware of this request and is working on a solution that not only makes customizing CPU and Memory easier but also supports storage customization. As you can see from the table below, 16GB is the only supported configuration today.


In the meantime, if you need a supported path for customizing your TKG Guest Clusters, one option is to use the TKG Standalone / MultiCloud CLI, which can be used with a vSphere with Tanzu Cluster. You will need to deploy an additional TKG Management Cluster (basically a few VMs), but once you have that, you can override the CPU, Memory and Storage of both the Control Plane and Worker Nodes using the following environment variables:

  • VSPHERE_WORKER_NUM_CPUS
  • VSPHERE_WORKER_MEM_MIB
  • VSPHERE_WORKER_DISK_GIB
  • VSPHERE_CONTROL_PLANE_NUM_CPUS
  • VSPHERE_CONTROL_PLANE_MEM_MIB
  • VSPHERE_CONTROL_PLANE_DISK_GIB
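As a rough sketch of how these overrides fit together (the sizing values and endpoint IP below are illustrative assumptions, not values from this post), you would export them in the shell where the standalone TKG CLI runs before creating the cluster:

# Illustrative sizing values -- adjust to your own requirements
export VSPHERE_CONTROL_PLANE_NUM_CPUS=2
export VSPHERE_CONTROL_PLANE_MEM_MIB=4096
export VSPHERE_CONTROL_PLANE_DISK_GIB=40
export VSPHERE_WORKER_NUM_CPUS=4
export VSPHERE_WORKER_MEM_MIB=8192
export VSPHERE_WORKER_DISK_GIB=60

# Then create the workload cluster as usual (endpoint IP is a placeholder)
tkg create cluster tkg-cluster-custom --plan=dev --vsphere-controlplane-endpoint-ip 192.168.2.12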

If you are interested, the easiest way to get started is with my TKG Demo Appliance Fling, which was just recently updated to the latest TKG 1.2 release and supports K8s v1.19, a version that is currently not available on vSphere with Tanzu.

Now, you might ask, would it be possible to create your own custom VM class types using vSphere with Tanzu? Well .... keep reading to find out 🙂

Disclaimer: This is not officially supported by VMware, use at your own risk. These custom changes can potentially impact upgrades or automatically be reverted upon the next update or upgrade. You have been warned.

[Read more...] about Custom Virtual Machine Class Types with vSphere with Tanzu


Filed Under: Kubernetes, VMware Tanzu Tagged With: vSphere with Tanzu

Tanzu Kubernetes Grid (TKG) Demo Appliance 1.2.0

10/28/2020 by William Lam Leave a Comment

Happy to share that the Tanzu Kubernetes Grid (TKG) Demo Appliance Fling has been updated to support the latest TKG 1.2.0 release, which just came out a couple of weeks ago. The TKG Workshop Guide has been updated to reflect all of the new TKG 1.2 changes, along with an updated vSphere Content Library containing all of the OVAs required to get started. As mentioned in the workshop guide, you can use either a VMware Cloud on AWS SDDC (1-Node) or a vSphere 6.7 Update 3/vSphere 7.0+ environment.

The most notable change with this version is actually within TKG itself, which now uses kube-vip to replace the functionality that the HAProxy VM used to provide. What this means when deploying either a TKG Management or Workload Cluster is that you will need to specify an IP Address which will be used as the Virtual IP endpoint of the K8s Cluster, as shown in the example below.

tkg init -i vsphere -p dev --name tkg-mgmt --vsphere-controlplane-endpoint-ip 192.168.2.10


Using the TKG Demo Appliance, you can deploy both v1.19.1 and v1.18.8 K8s Clusters. To exercise a TKG Cluster upgrade workflow, you just have to run these three simple commands:

export VSPHERE_TEMPLATE=photon-3-kube-v1.18.8_vmware.1
tkg create cluster tkg-cluster-01 --plan=dev --kubernetes-version=v1.18.8+vmware.1 --vsphere-controlplane-endpoint-ip 192.168.2.11
tkg upgrade cluster tkg-cluster-01


There has been a lot of demand for TKG on VMware Cloud on AWS, so that is where I have spent the bulk of my testing, not to mention it is where the Fling was originally developed. You can also deploy the TKG Demo Appliance in an on-premises vSphere environment running 6.7 Update 3 or newer.

[Read more...] about Tanzu Kubernetes Grid (TKG) Demo Appliance 1.2.0


Filed Under: Kubernetes, VMware Cloud on AWS, VMware Tanzu, vSphere 6.7, vSphere 7.0 Tagged With: Tanzu Kubernetes Grid, VMware Cloud on AWS, vSphere 6.7, vSphere 7.0

Customizing Kubernetes cluster template (Dev/Prod) plans in Tanzu Kubernetes Grid 1.2

10/20/2020 by William Lam Leave a Comment

With previous releases of Tanzu Kubernetes Grid (TKG), if you needed to apply special OS customizations to the deployed Control Plane and Worker Node VMs, such as injecting commands to handle a network proxy or deal with an insecure container registry, your only option was to hand-edit the default TKG Dev/Prod YAML templates. Not only was this error prone, but because the templates can change from release to release, it was difficult to manage and test until you attempted a deployment.

One of the newest features in the TKG 1.2 release is official support for customizing the Kubernetes (K8s) Cluster Template Plans using YTT (YAML Templating Tool), which allows users to provide custom data that can then be patched/overlaid onto an existing YAML file. YTT itself is part of Carvel, a larger toolset for building, creating and configuring deployments for K8s. The Domain Specific Language (DSL) that YTT uses was not exactly intuitive, but since the official TKG documentation had an example to start with, I was able to mostly figure my way through, along with some tips from the #carvel Slack channel.

So what was I trying to do? I was working on updating my TKG Demo Appliance Fling to the latest 1.2 release, and part of the setup required adding an entry to the /etc/hosts file on all deployed TKG VMs. Instead of directly messing with the YAML templates, there is now a new "overlay" YAML file, ~/.tkg/providers/infrastructure-vsphere/ytt/vsphere-overlay.yaml, which can be used to make such changes.
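As a rough sketch of what such an overlay can look like (the hostname and IP are placeholders, and this mirrors the proxy overlay pattern shown earlier on this page rather than the exact file from the Fling), an /etc/hosts entry could be appended on the Control Plane nodes like this, with the same stanza repeated against KubeadmConfigTemplate for the Worker Nodes:

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Placeholder hostname/IP -- add a static name resolution entry on each node
    #@overlay/append
    - echo '10.2.224.4 registry.rainpole.io' >> /etc/hosts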

[Read more...] about Customizing Kubernetes cluster template (Dev/Prod) plans in Tanzu Kubernetes Grid 1.2


Filed Under: Automation, Kubernetes, VMware Tanzu Tagged With: Kubernetes, Tanzu Kubernetes Grid, TKG, ytt

Kubernetes on ESXi-Arm using k3s

10/16/2020 by William Lam 11 Comments

The tiny form factor of a Raspberry Pi (rPI) makes it a fantastic hardware platform to start playing with the ESXi-Arm Fling. You can already do a bunch of fun VMware things, from running a lightweight vSAN Witness Node, to setting up a basic automation environment for PowerCLI, Terraform and Packer, to running rPI OS as a VM. This enables some neat use cases, such as consolidating your physical rPI assets which might be running RetroPie and Pi-Hole, something many home labbers are doing.

In addition to VMware solutions, it is also a great platform to learn and tinker with new technologies like Kubernetes (K8s), which I am sure many of you have been hearing about 🙂 Although our vSphere with Tanzu and Tanzu Kubernetes Grid (TKG) offerings do not currently work with the ESXi-Arm Fling, I have been meaning to try out a super lightweight K8s distribution designed for IoT/Edge called k3s (pronounced k-3-s), which also recently joined the Cloud Native Computing Foundation (CNCF) at the Sandbox level.

k3s is supported on rPI, and you would normally need multiple rPI devices to provide the nodes; for example, a basic 3-Node cluster would require three physical rPI devices. With ESXi-Arm, you can now create these nodes as VMs using just a single rPI. This opens the door for all sorts of exploration: you can create an HA cluster or try out more advanced features that might be difficult if you needed several physical devices. If you mess up, you can simply re-deploy or clone the VM without much pain.

In my setup, I am using 3 x Photon OS VMs: one for the k3s primary node and two for k3s worker nodes. You can certainly install k3s on any other Arm-based OS, including rPI OS (which, as mentioned earlier, can now run as a VM).
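For orientation, here is a minimal sketch of the standard k3s quick-start install across those three VMs; the server IP address and token are placeholders, and the full walkthrough lives in the post behind the link below.

# On the primary (server) node -- installs and starts the k3s server
curl -sfL https://get.k3s.io | sh -

# Grab the join token from the primary node
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker node -- server IP and token are placeholders
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.30.10:6443 K3S_TOKEN=<token-from-primary> sh -

# Back on the primary, verify all three nodes have joined
sudo k3s kubectl get nodes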


[Read more...] about Kubernetes on ESXi-Arm using k3s


Filed Under: ESXi-Arm, Kubernetes Tagged With: Arm, esxi, k3s, Kubernetes

Automated vSphere with Tanzu Lab Deployment Script

10/13/2020 by William Lam 13 Comments

After sharing a sneak peek of my updated vSphere with Tanzu Automated Lab Deployment script on Twitter, I have been receiving non-stop requests on when the script will be available. It took a bit longer to finish off the documentation; creating the script was actually the easy part 😛

In any case, I am happy to finally share that the automated script for deploying the new vSphere with Tanzu "Basic" offering, which is included as part of vSphere 7.0 Update 1, is now available! You can find full details at the following GitHub repo: https://github.com/lamw/vsphere-with-tanzu-basic-automated-lab-deployment

In addition to the deployment instructions on the GitHub repo, I have also included a sample walkthrough that covers both deploying the vSphere with Tanzu environment and enabling Workload Management on the vSphere Cluster, which is not part of the automated deployment script.

I will also be updating my existing Workload Management PowerCLI Module to incorporate the new requirements for automating the enablement of Workload Management for a vSphere with Tanzu Basic Cluster. Together with this script, you will have the ability to deploy vSphere with Tanzu end-to-end in under an hour!

More details will be shared in a later blog post, and I hope folks enjoy the script; it was a ton of work!


Filed Under: Automation, Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: vSphere 7.0 Update 1, vSphere with Tanzu
