virtuallyGhetto


Tanzu Kubernetes Grid

Customizing Kubernetes cluster template (Dev/Prod) plans in Tanzu Kubernetes Grid 1.2

10/20/2020 by William Lam Leave a Comment

With previous releases of Tanzu Kubernetes Grid (TKG), if you needed to apply OS customizations to the deployed Control Plane and Worker Node VMs, such as injecting commands to handle a network proxy or an insecure container registry, your only option was to hand-edit the default TKG Dev/Prod YAML templates. Not only was this error prone, but because the templates can change with each release, the edits were difficult to manage and test until you attempted a deployment.

One of the newest features in the TKG 1.2 release is official support for customizing the Kubernetes (K8s) Cluster Template Plans using YTT (YAML Templating Tool), which allows users to provide custom data that is then patched or overlaid onto an existing YAML file. YTT itself is part of Carvel, a larger toolset for building, creating and configuring deployments for K8s. The Domain Specific Language (DSL) that YTT uses was not exactly intuitive, but since the official TKG documentation had an example to start with, I was able to mostly figure my way through, along with some tips from the #carvel Slack channel.

So what was I trying to do? I was working on updating my TKG Demo Appliance Fling to the latest 1.2 release, and part of the setup required adding an entry to the /etc/hosts file on all TKG VMs that are deployed. Instead of directly messing with the YAML templates, there is now a new "overlay" YAML file, ~/.tkg/providers/infrastructure-vsphere/ytt/vsphere-overlay.yaml, which can be used to make such changes.
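To give a feel for the DSL, here is a minimal sketch of such an overlay, written as a shell heredoc in keeping with the rest of this post. It appends an /etc/hosts entry to the Control Plane's kubeadm pre-commands; the matched kind, IP address and overall structure are my assumptions based on the TKG documentation example, so validate it against your own templates (and merge carefully if the file already has content, since this heredoc replaces it):

cat > ~/.tkg/providers/infrastructure-vsphere/ytt/vsphere-overlay.yaml << 'EOF'
#@ load("@ytt:overlay", "overlay")

#! Patch the Control Plane kubeadm configuration (kind matched is an assumption)
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Append a custom /etc/hosts entry on each node (IP and hostname are placeholders)
    #@overlay/append
    - echo '192.168.1.2 registry.rainpole.io' >> /etc/hosts
EOF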

[Read more...] about Customizing Kubernetes cluster template (Dev/Prod) plans in Tanzu Kubernetes Grid 1.2

Filed Under: Automation, Kubernetes, VMware Tanzu Tagged With: Kubernetes, Tanzu Kubernetes Grid, TKG, ytt

How to SSH to Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu?

10/10/2020 by William Lam 6 Comments

For troubleshooting your vSphere with Tanzu environment, you may need to SSH to the Control Plane of your Tanzu Kubernetes Grid (TKG) Cluster. This was something I had to do to verify some basic network connectivity. At a high level, we need to log in to our Supervisor Cluster and retrieve the SSH secret for our TKG Cluster; since this question recently came up, the instructions are below.


UPDATE (10/10/20) - It looks like it is also possible to retrieve the TKG Cluster credentials without needing to SSH directly to the Supervisor Control Plane VM; see Option 1 for the alternate solution.

Option 1:

Step 1 - Log in to the Supervisor Control Plane using the following command (substituting your own server address and vSphere username):

kubectl vsphere login --server=172.17.31.129 -u <vsphere-username> --insecure-skip-tls-verify

Step 2 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to get the plain-text value. You will need two pieces of information to substitute into the command below:

  • The name of your vSphere Namespace which was created in your vSphere with Tanzu environment, in my example it is called primp-industries
  • The name of your TKG Cluster, in my example it is called william-tkc-01 and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath='{.data.ssh-passwordkey}' | base64 -d

Step 3 - Finally, you can now SSH to the TKG Cluster from any system that has network connectivity, whether that is the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user, and the password is the value retrieved in the previous step.
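For example, assuming the Control Plane node of your TKG Cluster answers on 10.10.0.2 (a placeholder; substitute your own address):

ssh vmware-system-user@10.10.0.2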

Option 2:

Step 1 - SSH to the VCSA and then run the following script to retrieve the Supervisor Cluster Control Plane VM credentials:

/usr/lib/vmware-wcp/decryptK8Pwd.py

Step 2 - SSH to the IP Address using the root username and the password provided by the previous command.

Step 3 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to get the plain-text value. You will need two pieces of information to substitute into the command below:

  • The name of your vSphere Namespace which was created in your vSphere with Tanzu environment, in my example it is called primp-industries
  • The name of your TKG Cluster, in my example it is called william-tkc-01 and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath='{.data.ssh-passwordkey}' | base64 -d

Step 4 - Finally, SSH to the TKG Cluster as in Option 1: from any system with network connectivity, log in as vmware-system-user with the password retrieved in the previous step.

Filed Under: Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: Tanzu Kubernetes Grid, vmware-system-user, vSphere 7.0 Update 1, vSphere with Tanzu

Tanzu Kubernetes Grid (TKG) Demo Appliance 1.1.3

08/10/2020 by William Lam 1 Comment

It has been a while since I have updated my Tanzu Kubernetes Grid (TKG) Demo Appliance Fling, a virtual appliance that enables anyone to go from zero to Kubernetes in less than 30 minutes with just an SSH client and a web browser. For VMware Cloud on AWS customers interested in running TKG, this is a great way to quickly get started on a proof of concept, demo, or development and testing. One great benefit is that everything required for TKG is self-contained within the appliance, including an embedded Harbor registry and the respective TKG container images, which is great for air-gapped or non-internet-accessible environments.

Here is a summary of what is new:

Support for latest TKG 1.1.3

There have been several smaller TKG releases since 1.0.0, but due to their short lifecycles, I decided to hold off. Behind the scenes, I have been working closely with the TKG team on the latest TKG 1.1.3 release, which was just released last week. One really cool feature introduced in TKG 1.1.2 is the ability to upgrade an existing TKG Workload Cluster to a newer version of Kubernetes.

With TKG 1.1.3, support for Kubernetes v1.18.6 and v1.17.9 is now possible, and the latest version of the demo appliance also supports this workflow. In fact, I have also updated my TKG Workshop Guide to include all the new updates, including the upgrade workflow. To reduce the maintenance burden on myself, the TKG Demo Appliance 1.0.0 will be removed in the near future; for now it has been deprecated, but all existing content is still available. I highly recommend checking out the latest version, as you will get all the latest features of TKG.
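For reference, the upgrade itself is driven by the TKG CLI; a minimal sketch of the command, with a placeholder cluster name, looks like this:

tkg upgrade cluster my-workload-cluster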

[Read more...] about Tanzu Kubernetes Grid (TKG) Demo Appliance 1.1.3

Filed Under: Automation, Kubernetes, VMware Cloud on AWS, VMware Tanzu Tagged With: Kubernetes, Tanzu Kubernetes Grid, TKG, VMware Cloud on AWS, VMware Tanzu

How to configure network proxy with Tanzu Kubernetes Grid (TKG)?

05/18/2020 by William Lam 2 Comments

Network proxies are commonly used by customers to provide internal servers and services with access to external networks like the Internet in a controlled and secure manner. While working on a recent network proxy enhancement for our VMware Event Broker Appliance (VEBA) Fling, I had set up Squid, which is a popular network proxy solution.

I noticed a couple of folks asking about network proxy configuration for standalone Tanzu Kubernetes Grid (TKG) and figured this might be interesting to explore, especially for my recently released TKG Demo Appliance Fling, which enables folks to quickly go from zero to Kubernetes in just 30 minutes! I figured this would be another good opportunity to learn a bit more about TKG as well as Kubernetes (K8s), and I jokingly said to myself, how hard could this be!? 😉 Apparently it was not trivial, and it took a bit of trial and error to figure out the correct combination. Below is the procedure, which can be followed for both a standard deployment of TKG and the TKG Demo Appliance Fling.

Proxy Setting configurations for TKG CLI

The TKG CLI uses KinD (Kubernetes in Docker) under the hood to set up the initial K8s bootstrap cluster that deploys the TKG Management Cluster. If you have not already downloaded the KinD node image (registry.tkg.vmware.run/kind/node:v1.17.3_vmware.2), or if you need to go through a network proxy to do so, then the following instructions will make your Docker client aware of a network proxy.

Here is an example of the error you will see if the Docker client cannot download the image:

# docker pull registry.tkg.vmware.run/kind/node:v1.17.3_vmware.2
Error response from daemon: Get https://registry.tkg.vmware.run/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
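If your machine runs Docker under systemd, one common way to make the Docker daemon proxy-aware is a drop-in unit file; the proxy address below is the same placeholder value used later in this post:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf << EOF
[Service]
Environment="HTTP_PROXY=http://192.168.1.3:3128"
Environment="HTTPS_PROXY=http://192.168.1.3:3128"
EOF
systemctl daemon-reload
systemctl restart docker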

If you are not using a private container registry with TKG, then you also need to ensure that the KinD Cluster can connect to your network proxy when it pulls down the required containers from the internet. Luckily, KinD simply picks up the network proxy settings of your operating system. You can either set the proxy using the traditional environment variables (http_proxy, https_proxy and no_proxy) for the duration of your TKG CLI session, or you can set it globally so you do not forget.
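Setting the variables per-session might look like this, using the same placeholder proxy address and an example exclusion list:

export http_proxy=http://192.168.1.3:3128
export https_proxy=http://192.168.1.3:3128
export no_proxy=localhost,192.168.1.0/24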

In my setup, the TKG CLI runs in a Photon OS VM, and global proxy settings are configured in /etc/sysconfig/proxy. Proxy settings vary across operating systems, so check the vendor documentation for specific instructions. The following command sets both the HTTP and HTTPS proxy variables to use my proxy server; you will also want to make sure you whitelist all networks and addresses that should bypass the proxy.

cat > /etc/sysconfig/proxy << EOF
PROXY_ENABLED="yes"
HTTP_PROXY="http://192.168.1.3:3128"
HTTPS_PROXY="http://192.168.1.3:3128"
NO_PROXY="localhost,192.168.1.0/24,192.168.2.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"
EOF

Note: If you are using the TKG Demo Appliance, you only need to configure the Photon OS global proxy settings. In my example, I have whitelisted my local 192.168.* addresses; registry.rainpole.io, which is the embedded Harbor registry; 10.2.224.4, which is the internal IP Address of the VMC vCenter Server; .svc addresses, which cover all the internal K8s services; 100.64.0.0/13, which is the CIDR range used by TKG for the Service networks; and 100.96.0.0/11, which is the CIDR range used by TKG Cluster networks.

[Read more...] about How to configure network proxy with Tanzu Kubernetes Grid (TKG)?

Filed Under: Automation, Kubernetes, VMware Tanzu Tagged With: http proxy, proxy, Tanzu Kubernetes Grid

Tanzu Kubernetes Grid (TKG) Demo Appliance for VMC and vSphere

05/11/2020 by William Lam 8 Comments

As some of you can probably tell from my recent Twitter updates and blog posts (here and here), I have been spending some time lately with both vSphere with Kubernetes and Tanzu Kubernetes Grid (TKG). Like many of you in the community, I am still pretty new to Kubernetes (K8s), and I am still learning what it has to offer, both from an infrastructure standpoint and, more importantly, in how it can be used to deliver new and modern applications. I am also very lucky to be part of the VMware Event Broker Appliance Open Source Fling project, which builds and runs on top of K8s, and this project has allowed me to really get hands on, which is how I learn best.

A couple of months back I was asked to put together a workshop demonstrating how to deploy TKG Clusters running on VMware Cloud on AWS (VMC), and while developing the workshop, I thought it would be really cool if I could make it even easier for anyone brand new to K8s to quickly get started with TKG. I wanted a solution that could literally be dropped into any supported vSphere-based environment with basic networking to go from zero to Kubernetes in less than 30 minutes!

Enter the Demo Appliance for Tanzu Kubernetes Grid (TKG) Fling

A virtual appliance that pre-bundles all required dependencies to help customers learn and deploy standalone Tanzu Kubernetes Grid (TKG) clusters running on either a VMware Cloud on AWS or vSphere 6.7 Update 3 environment for proof-of-concept, demo and dev/test purposes. This appliance will enable you to quickly go from zero to Kubernetes in less than 30 minutes with just an SSH client and a web browser!


In addition to the appliance, I have also put together a step-by-step, workshop-style guide which not only walks you through deploying your first TKG Cluster but also provides some example demos and references you can explore further. Below are some of the highlights of the Demo Appliance for TKG:

[Read more...] about Tanzu Kubernetes Grid (TKG) Demo Appliance for VMC and vSphere

Filed Under: Automation, Kubernetes, VMware Cloud on AWS, VMware Tanzu Tagged With: Harbor, Kubernetes, Tanzu Kubernetes Grid, TKG, TKG CLI, VMware Cloud on AWS, vSphere 6.7 Update 3

Configure non-secure Harbor registry with Tanzu Kubernetes Grid (TKG)

05/09/2020 by William Lam 3 Comments

In an earlier blog post, I shared the steps to configure Harbor with a properly signed SSL certificate that would serve as a private container registry for the Tanzu Kubernetes Grid (TKG) CLI running in an air-gapped environment.

Although Harbor can easily be configured to support a custom CA-signed certificate, a self-signed certificate or even plain HTTP, several additional steps and dependencies are required if you wish to use a non-secure container registry with the TKG CLI. This definitely took a bunch of trial and error, and hopefully this can be made easier in the future so that non-secure registry support works with the TKG CLI out of the box for development and testing purposes.

I also want to give a huge thanks to Jun Wang from our Modern Application Platform Business Unit (MAPBU); he was instrumental in helping me out, and ultimately his tip on updating the containerd configuration was the last piece of the puzzle so that the deployed K8s nodes would pull container images from our insecure Harbor registry.
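The full walkthrough is in the post itself, but for context, the containerd piece boils down to registering the registry as an HTTP mirror on each node. A rough sketch of that change follows; the registry hostname matches the embedded Harbor example used elsewhere in this post, the exact plugin path depends on your containerd version, and appending blindly may conflict with existing sections, so treat this as illustrative only:

cat >> /etc/containerd/config.toml << 'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.rainpole.io"]
  endpoint = ["http://registry.rainpole.io"]
EOF
systemctl restart containerd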

[Read more...] about Configure non-secure Harbor registry with Tanzu Kubernetes Grid (TKG)

Filed Under: Docker, Kubernetes, VMware Tanzu, vSphere Tagged With: Harbor, Kubernetes, Tanzu Kubernetes Grid, TKG, TKG CLI, VMware Tanzu

