
VMware Tanzu

Configure network proxy using YTT with Tanzu Kubernetes Grid (TKG)

11/04/2020 by William Lam 1 Comment

I was doing some work with Tanzu Kubernetes Grid (TKG) 1.2 using my TKG Demo Appliance Fling, and the environment I was working in did not have direct internet access, which is the case for most Production environments. I needed outbound connectivity from the TKG Worker Nodes so that they could pull down a set of containers as part of attaching to our Tanzu Mission Control (TMC) service.

Luckily, there was an HTTP proxy server I could use for this connectivity; I just needed to update the TKG templates so that the TKG Worker Nodes would pick up the proxy settings. In the past, applying customizations such as a network proxy meant manually editing the TKG Dev/Prod YAML files. As previously shared, Tanzu Kubernetes Grid (TKG) 1.2 now uses the YAML Templating Tool (YTT) for customizing TKG plans.

Although the TKG documentation provides a YTT template example, it does not cover the TKG Worker Nodes, which is what I needed; I also had to add a command to the postKubeadmCommands section for the network proxy to be activated. The issue is that this section no longer exists in the base template as it did in previous versions of TKG, so some additional YTT annotation was required to get this working.

Here is the complete working ~/.tkg/providers/infrastructure-vsphere/ytt/proxy_nameserver.yaml template that adds the respective HTTP(S) proxy server and No Proxy settings.

#@ load("@ytt:overlay", "overlay")
 
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add HTTP_PROXY to containerd configuration file
    #@overlay/append
    - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/match missing_ok=True
    postKubeadmCommands:
    #@overlay/append
    - systemctl restart containerd
 
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      #! Add HTTP_PROXY to containerd configuration file
      #@overlay/append
      - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/match missing_ok=True
      postKubeadmCommands:
      #@overlay/append
      - systemctl restart containerd
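
Once this overlay is in place, any new cluster created with tkg create cluster will include the proxy drop-in on its nodes. As a quick sanity check (a minimal sketch; the node IP is an example, and capv is the default SSH user on TKG-provisioned nodes), you can SSH to a node and confirm that containerd picked up the settings:

ssh capv@192.168.2.20 'cat /etc/systemd/system/containerd.service.d/http-proxy.conf'
ssh capv@192.168.2.20 'systemctl show containerd --property=Environment'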


Filed Under: Kubernetes, VMware Tanzu Tagged With: http proxy, proxy, Tanzu Kubernetes Grid

Automating HAProxy VM deployment with 3-NIC configuration using PowerCLI

11/02/2020 by William Lam Leave a Comment

When deploying the HAProxy VM as part of vSphere with Tanzu, customers have the option of deploying the HAProxy VM using either a 2-NIC or 3-NIC configuration. The default OVF Deployment Option is the 2-NIC design called "Default" and the 3-NIC design is called "Frontend".

From an Automation point of view, you can use either OVFTool or PowerCLI to automate the deployment. For a 2-NIC example, you can refer to my Automated vSphere with Tanzu Lab Deployment Script. However, for the 3-NIC configuration, a few folks were running into issues when using PowerCLI for the automation.

The main issue is that because the default OVF Deployment Option is the 2-NIC design (Default), the two additional OVF properties, frontend_ip and frontend_gateway, are hidden when PowerCLI processes the OVF properties.

Note: You can view these optional properties by running the following OVFTool command: ovftool --X:enableHiddenProperties vmware-haproxy-v0.1.8.ova


Even if you specify the "Frontend" OVF Deployment Option, PowerCLI does not have the logic to retrieve these optional properties, and hence they cannot be set as part of the initial deployment.
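
You can observe this behavior with a short PowerCLI sketch (the OVA path is an example; the deployment option id is defined in the OVA):

$ovfConfig = Get-OvfConfiguration -Ovf ./vmware-haproxy-v0.1.8.ova
$ovfConfig.DeploymentOption.Value = "frontend"  # select the 3-NIC design
$ovfConfig.ToHashTable().Keys                   # frontend_ip / frontend_gateway are not listed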

[Read more...] about Automating HAProxy VM deployment with 3-NIC configuration using PowerCLI


Filed Under: Automation, PowerCLI, VMware Tanzu Tagged With: HAProxy, PowerCLI, vSphere with Tanzu

Custom Virtual Machine Class Types with vSphere with Tanzu

10/30/2020 by William Lam Leave a Comment

When you deploy a Tanzu Kubernetes Grid (TKG) Cluster using the integrated TKG Service in vSphere with Tanzu, you can specify a Virtual Machine Class Type, which determines the amount of CPU and Memory resources allocated to the Control Plane and Worker Node VMs of your TKG Cluster.

Here is a sample YAML specification that uses the best-effort-xsmall VM class type for both Control Plane and Worker Node, but you can certainly override and choose different classes based on your requirements.

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: william-tkc-01
  namespace: primp-industries
spec:
  distribution:
    version: v1.17.8+vmware.1-tkg.1.5417466
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 193.0.2.0/16
      serviceDomain: managedcluster.local
      services:
        cidrBlocks:
        - 195.51.100.0/12
  topology:
    controlPlane:
      class: best-effort-xsmall
      count: 1
      storageClass: vsan-default-storage-policy
    workers:
      class: best-effort-xsmall
      count: 3
      storageClass: vsan-default-storage-policy
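
To deploy the cluster, switch kubectl to your Supervisor Namespace context and apply the spec (a sketch; the context name matches the namespace above and the file name is an example):

kubectl config use-context primp-industries
kubectl apply -f william-tkc-01.yaml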

Today, there are a total of 16 VM Class types that you can select from; however, these are not customizable, which is something that has been coming up more recently. The vSphere with Tanzu team is aware of this request and is working on a solution that will not only make customizing CPU and Memory easier but also support storage customization. As you can see from the table below, 16GB is the only supported storage configuration today.


In the meantime, if you need a supported path for customizing your TKG Guest Clusters, one option is to use the TKG Standalone / MultiCloud CLI, which can be used with a vSphere with Tanzu Cluster. You will need to deploy an additional TKG Management Cluster (basically a few VMs), but once you have that, you can override the CPU, Memory and Storage of both the Control Plane and Worker Nodes using the following environment variables (see the sketch after this list):

  • VSPHERE_WORKER_NUM_CPUS
  • VSPHERE_WORKER_MEM_MIB
  • VSPHERE_WORKER_DISK_GIB
  • VSPHERE_CONTROL_PLANE_NUM_CPUS
  • VSPHERE_CONTROL_PLANE_MEM_MIB
  • VSPHERE_CONTROL_PLANE_DISK_GIB
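
For example, here is a minimal sketch (the values, cluster name and endpoint IP are examples) that overrides the Worker Node sizing before creating a cluster:

export VSPHERE_WORKER_NUM_CPUS=4
export VSPHERE_WORKER_MEM_MIB=8192
export VSPHERE_WORKER_DISK_GIB=60
tkg create cluster tkg-cluster-custom --plan=dev --vsphere-controlplane-endpoint-ip 192.168.2.12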

If you are interested, the easiest way to get started is with my TKG Demo Appliance Fling, which was recently updated to the latest TKG 1.2 release. TKG 1.2 supports K8s v1.19, which is currently not available on vSphere with Tanzu.

Now, you might ask, would it be possible to create your own custom VM class types using vSphere with Tanzu? Well .... keep reading to find out 🙂

Disclaimer: This is not officially supported by VMware, use at your own risk. These custom changes can potentially impact upgrades or automatically be reverted upon the next update or upgrade. You have been warned.

[Read more...] about Custom Virtual Machine Class Types with vSphere with Tanzu


Filed Under: Kubernetes, VMware Tanzu Tagged With: vSphere with Tanzu

Tanzu Kubernetes Grid (TKG) Demo Appliance 1.2.0

10/28/2020 by William Lam Leave a Comment

Happy to share that the Tanzu Kubernetes Grid (TKG) Demo Appliance Fling has been updated to support the latest TKG 1.2.0 release, which came out a couple of weeks ago. The TKG Workshop Guide has been updated to reflect all of the TKG 1.2 changes, along with an updated vSphere Content Library containing all of the OVAs required to get started. As mentioned in the workshop guide, you can use either a VMware Cloud on AWS SDDC (1-Node) or a vSphere 6.7 Update 3/vSphere 7.0+ environment.

The most notable change in this version is actually within TKG itself, which now uses kube-vip to replace the functionality that the HAProxy VM used to provide. What this means when deploying either a TKG Management or Workload Cluster is that you will need to specify an IP address to be used as the Virtual IP endpoint of the K8s Cluster, as shown in the command below.

tkg init -i vsphere -p dev --name tkg-mgmt --vsphere-controlplane-endpoint-ip 192.168.2.10


Using the TKG Demo Appliance, you can deploy both v1.19.1 and v1.18.8 K8s Clusters. To exercise a TKG Cluster upgrade workflow, you just have to run these three simple commands:

export VSPHERE_TEMPLATE=photon-3-kube-v1.18.8_vmware.1
tkg create cluster tkg-cluster-01 --plan=dev --kubernetes-version=v1.18.8+vmware.1 --vsphere-controlplane-endpoint-ip 192.168.2.11
tkg upgrade cluster tkg-cluster-01


There has been a lot of demand for TKG on VMware Cloud on AWS, so that is where I have spent the bulk of my testing, not to mention where the Fling was originally developed. You can also deploy the TKG Demo Appliance in an on-premises vSphere environment running 6.7 Update 3 or newer.

[Read more...] about Tanzu Kubernetes Grid (TKG) Demo Appliance 1.2.0


Filed Under: Kubernetes, VMware Cloud on AWS, VMware Tanzu, vSphere 6.7, vSphere 7.0 Tagged With: Tanzu Kubernetes Grid, VMware Cloud on AWS, vSphere 6.7, vSphere 7.0

Automating Workload Management on vSphere with Tanzu

10/20/2020 by William Lam 6 Comments

As promised, here is the complementary solution to my existing Automated vSphere with Tanzu Lab Deployment Script, which will automatically deploy and configure the required infrastructure (vCenter Server Appliance, ESXi, vSAN and HAProxy VMs) so that you can quickly jump to enabling Workload Management on your vSphere Cluster.

FYI: Ben Corrie, one of the engineers on the vSphere with Tanzu team, recently published a vSphere with Tanzu 4-Part Deep Dive video series where he walks you through deploying everything from scratch, along with the concepts that should help you better understand how vSphere with Tanzu works. He is actually doing this in his own personal homelab, and I thought this might be useful to share with others. Kudos to Ben, and I highly recommend checking out his videos if you are new to vSphere with Tanzu and Kubernetes.


Enabling Workload Management is a manual step after running the automated deployment script, and as you know, I prefer to automate as much as I can. I have updated my existing PowerCLI Workload Management Module to also support the new vSphere with Tanzu capability using HAProxy for networking instead of NSX-T. The module can be downloaded from the PowerShell Gallery by simply running the install command below.
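
A minimal sketch, assuming the module is published to the PowerShell Gallery under the name VMware.WorkloadManagement:

Install-Module -Name VMware.WorkloadManagement  # assumed module name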

[Read more...] about Automating Workload Management on vSphere with Tanzu


Filed Under: Automation, PowerCLI, VMware Tanzu Tagged With: PowerCLI, vSphere with Tanzu, Workload Management

Customizing Kubernetes cluster template (Dev/Prod) plans in Tanzu Kubernetes Grid 1.2

10/20/2020 by William Lam Leave a Comment

With previous releases of Tanzu Kubernetes Grid (TKG), if you needed to apply special OS customizations to the deployed Control Plane and Worker Node VMs, such as injecting commands to handle a network proxy or deal with an insecure container registry, your only option was to hand edit the default TKG Dev/Prod YAML templates. Not only was this error prone, but because the templates can change with each release, it was difficult to manage and test until you attempted a deployment.

One of the newest features in the TKG 1.2 release is official support for customizing the Kubernetes (K8s) Cluster Template Plans using YTT (YAML Templating Tool), which allows users to provide custom data that can then be patched/overlaid onto an existing YAML file. YTT itself is part of Carvel, a larger toolset for building, creating and configuring deployments for K8s. The Domain Specific Language (DSL) that YTT uses was not exactly intuitive, but since the official TKG documentation had an example to start with, I was able to mostly figure my way through, along with some tips from the #carvel Slack channel.

So what was I trying to do? I was working on updating my TKG Demo Appliance Fling to the latest 1.2 release, and part of the setup required adding an entry to the /etc/hosts file on all deployed TKG VMs. Instead of directly messing with the YAML templates, there is now a new "overlay" YAML file in ~/.tkg/providers/infrastructure-vsphere/ytt/vsphere-overlay.yaml which can be used to make such changes.
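
As an illustration, here is a minimal overlay sketch that follows the same pattern as the proxy template shown earlier (the IP address is a hypothetical placeholder):

#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Hypothetical /etc/hosts entry; adjust for your environment
    #@overlay/append
    - echo '192.168.1.10 registry.rainpole.io' >> /etc/hosts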

[Read more...] about Customizing Kubernetes cluster template (Dev/Prod) plans in Tanzu Kubernetes Grid 1.2


Filed Under: Automation, Kubernetes, VMware Tanzu Tagged With: Kubernetes, Tanzu Kubernetes Grid, TKG, ytt

