
virtuallyGhetto


Kubernetes

Customizing Kubernetes cluster template (Dev/Prod) plans in Tanzu Kubernetes Grid 1.2

10/20/2020 by William Lam

With previous releases of Tanzu Kubernetes Grid (TKG), if you needed to apply OS customizations to the deployed Control Plane and Worker Node VMs, such as injecting commands to handle a network proxy or an insecure container registry, your only option was to hand edit the default TKG Dev/Prod YAML templates. Not only was this error prone, but because the templates can change with each release, the edits were difficult to manage and could not really be tested until you attempted a deployment.

One of the newest features in the TKG 1.2 release is official support for customizing the Kubernetes (K8s) Cluster Template Plans using ytt (YAML Templating Tool), which allows users to provide custom data that is then patched/overlaid onto an existing YAML file. ytt itself is part of Carvel, a larger toolset for building, configuring and deploying applications to K8s. The Domain Specific Language (DSL) that ytt uses is not exactly intuitive, but since the official TKG documentation had an example to start from, I was able to mostly figure my way through, along with some tips from the #carvel Slack channel.

So what was I trying to do? I was working on updating my TKG Demo Appliance Fling to the latest 1.2 release, and part of the setup required adding an entry to the /etc/hosts file on all deployed TKG VMs. Instead of directly editing the YAML templates, there is now a new "overlay" YAML file at ~/.tkg/providers/infrastructure-vsphere/ytt/vsphere-overlay.yaml which can be used to make such changes.
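To give a concrete sense of what goes into that file, below is a minimal sketch of a ytt overlay that appends an /etc/hosts entry via preKubeadmCommands on both the Control Plane and Worker Node definitions. This is not the exact overlay used in the Fling; the IP address and hostname are placeholders, and it assumes the default plan templates define KubeadmControlPlane and KubeadmConfigTemplate resources.

YAML

#@ load("@ytt:overlay", "overlay")

#! Patch the Control Plane nodes (IP/hostname below are placeholders)
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
  kubeadmConfigSpec:
    #@overlay/match missing_ok=True
    preKubeadmCommands:
      #@overlay/append
      - echo "192.168.1.2 registry.corp.local" >> /etc/hosts

#! Patch the Worker nodes
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      preKubeadmCommands:
        #@overlay/append
        - echo "192.168.1.2 registry.corp.local" >> /etc/hosts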

[Read more...] about Customizing Kubernetes cluster template (Dev/Prod) plans in Tanzu Kubernetes Grid 1.2

Filed Under: Automation, Kubernetes, VMware Tanzu Tagged With: Kubernetes, Tanzu Kubernetes Grid, TKG, ytt

Kubernetes on ESXi-Arm using k3s

10/16/2020 by William Lam

The tiny form factor of a Raspberry Pi (rPI) makes it a fantastic hardware platform to start playing with the ESXi-Arm Fling. You can already do a bunch of fun VMware things, from running a lightweight vSAN Witness Node, to setting up a basic automation environment for PowerCLI, Terraform and Packer, to running rPI OS as a VM, which enables some neat use cases such as consolidating the physical rPI assets that might be running RetroPie or Pi-Hole, as many home labbers are doing.

In addition to VMware solutions, it is also a great platform to learn and tinker with new technologies like Kubernetes (K8s), which I am sure many of you have been hearing about 🙂 Although our vSphere with Tanzu and Tanzu Kubernetes Grid (TKG) do not currently work with the ESXi-Arm Fling, I have been meaning to try out a super lightweight K8s distribution designed for IoT/Edge called k3s (pronounced k-3-s), which also recently joined the Cloud Native Computing Foundation (CNCF) at the Sandbox level.

k3s is supported on rPI, and you would normally use multiple rPI devices to provide the nodes; for example, a basic 3-node cluster would require three physical rPI devices. With ESXi-Arm, you can now create these nodes as VMs using just a single rPI. This opens the door for all sorts of exploration: you can create an HA cluster or try out more advanced features that might be difficult if you needed several physical devices. If you mess up, you can simply re-deploy the VM without much pain, or simply clone it.

In my setup, I am using 3 x Photon OS VMs: one for the k3s primary (server) node and two for k3s worker nodes. You can certainly install k3s on any other Arm-based OS, including rPI OS (which, as mentioned earlier, can now run as a VM).
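If you have not stood up k3s before, the basic flow across the three VMs looks roughly like the following. This is the standard k3s quick-start pattern (one server, agents joining with the node token); the server IP and token below are placeholders, and the exact prerequisites may differ slightly on Photon OS.

Shell

# On the primary (server) node VM
curl -sfL https://get.k3s.io | sh -

# Retrieve the join token generated on the server
sudo cat /var/lib/rancher/k3s/server/node-token

# On each of the two worker (agent) VMs, join using the server IP and token (placeholders)
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<node-token> sh -

# Back on the server, confirm all three nodes have registered
sudo k3s kubectl get nodes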


[Read more...] about Kubernetes on ESXi-Arm using k3s

Filed Under: ESXi-Arm, Kubernetes Tagged With: Arm, esxi, k3s, Kubernetes

Tanzu Kubernetes Grid (TKG) Demo Appliance 1.1.3

08/10/2020 by William Lam

It has been a while since I last updated my Tanzu Kubernetes Grid (TKG) Demo Appliance Fling, which is a virtual appliance that enables anyone to go from zero to Kubernetes in less than 30 minutes with just an SSH client and a web browser. For VMware Cloud on AWS customers interested in running TKG, this is a great way to quickly get started on a proof of concept, demo, or development and testing. One great benefit is that everything required for TKG is self-contained within the appliance, including an embedded Harbor registry and the respective TKG container images, which is great for air-gapped or non-internet-accessible environments.

Here is a summary of what is new:

Support for latest TKG 1.1.3

There have been several smaller TKG releases since 1.0.0, but due to their short lifecycle, I decided to hold off. Behind the scenes, I have actually been working closely with the TKG team on the latest TKG 1.1.3 release, which was just released last week. One really cool feature that was introduced in TKG 1.1.2 is the ability to upgrade an existing TKG Workload Cluster to a newer version of Kubernetes.

TKG 1.1.3 adds support for Kubernetes v1.18.6 and v1.17.9, and the latest version of the demo appliance also supports this upgrade workflow. In fact, I have also updated my TKG Workshop Guide to include all of the new updates, including the upgrade workflow. To reduce the maintenance burden on myself, the TKG Demo Appliance 1.0.0 will be removed in the near future; for now it has been deprecated, but all existing content is still available. I highly recommend checking out the latest version as you will get all the latest features of TKG.
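For reference, the upgrade is driven entirely from the TKG CLI on the machine that manages your clusters. The commands below are only illustrative (the cluster name is a placeholder); check tkg upgrade cluster --help in your environment for the exact options available in your release.

Shell

# List the workload clusters known to the management cluster
tkg get cluster

# Upgrade a workload cluster to a newer Kubernetes version (cluster name is a placeholder)
tkg upgrade cluster tkg-cluster-01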

[Read more...] about Tanzu Kubernetes Grid (TKG) Demo Appliance 1.1.3

Filed Under: Automation, Kubernetes, VMware Cloud on AWS, VMware Tanzu Tagged With: Kubernetes, Tanzu Kubernetes Grid, TKG, VMware Cloud on AWS, VMware Tanzu

Using the new installation method for deploying OpenShift 4.5 on VMware Cloud on AWS

07/18/2020 by William Lam

I recently saw a tweet from Jason Shiplett, who works on the VMware Validated Design (VVD) team (also my former team before joining VMware Cloud) and who shared a new validated design for running Red Hat OpenShift 4.3 on VMware Cloud Foundation. Funny enough, a couple of days ago I had been researching how to deploy OpenShift on VMware Cloud on AWS in response to a customer inquiry.

The timing could not have been better, as Red Hat announced their OpenShift 4.5 release a few days ago, and one of the major updates is support for vSphere using their full stack automation, also known as the Installer Provisioned Infrastructure (IPI) option. Prior to this, customers who wanted to deploy OpenShift on vSphere had to use the User Provisioned Infrastructure (UPI) method, which the VVD design also uses, and which is much lengthier and more complex compared to the native IPI method.

For someone who has never worked with OpenShift before, this was great news, and I got to try out this new deployment method on VMware Cloud on AWS infrastructure 🙂

Pre-Requisites:

Step 1 - You will need a Linux system to perform the installation and it should have access to the vCenter Server running in VMware Cloud on AWS (VMC). In my example, I am using an Ubuntu Server 20.04 VM which is also running in the SDDC and has outbound internet connectivity.

Step 2 - Log in to the VMware Cloud on AWS console and create a new NSX-T network segment that is DHCP enabled. In my example, I named it openshift-network with a 192.168.3.0/24 configuration.


Step 3 - Navigate to Inventory->Groups and create the following groups, replacing the CIDR networks with those of your SDDC:

Group Name                          IP Address Members
Compute OpenShift Network           192.168.3.0/24
Compute SDDC Management Network     10.2.0.0/16
Management OpenShift Network        192.168.3.0/24
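At a high level, the IPI flow from the Linux system in Step 1 boils down to downloading the openshift-install binary and letting it drive the deployment. Below is a rough sketch; the download URL and directory name are illustrative, and the installer prompts interactively for the vCenter details and the network segment created in Step 2.

Shell

# Download and extract the OpenShift installer on the Linux system (URL is illustrative)
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux.tar.gz
tar -xzf openshift-install-linux.tar.gz

# Drive the full IPI deployment; the installer prompts for the vsphere platform,
# vCenter credentials, datastore, network and pull secret
./openshift-install create cluster --dir=ocp-vmc --log-level=info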

[Read more...] about Using the new installation method for deploying OpenShift 4.5 on VMware Cloud on AWS

Filed Under: Kubernetes, VMware Cloud on AWS Tagged With: Kubernetes, OpenShift, VMware Cloud on AWS

Interesting Kubernetes application demos

06/08/2020 by William Lam

I am always on the lookout for cool and interesting demos to deploy, especially with some of the work I have been doing lately with vSphere with Kubernetes (K8s) and Tanzu Kubernetes Grid (TKG). I am sure many of you have seen the basic WordPress demos, which seem to be the typical "Hello World" app for K8s; having something more compelling not only makes the demo more interesting, it can also help folks better understand how a modern application can be built, deployed and run.

Below is a list of the K8s demo applications that I have come across as part of my exploration, and by no means is this an exhaustive list. I have been able to successfully deploy these applications on the latest versions of K8s (1.17 and 1.18); I did come across other demos which did not work or which I had issues setting up. If there are other K8s demos that folks have used, feel free to leave a comment and I will update the blog post after doing some basic testing.

For those of you who may not have a K8s environment and are running either vSphere 6.7 Update 3 or have access to a VMware Cloud on AWS SDDC, you can easily set up a TKG Cluster in under 30 minutes leveraging my TKG Demo Appliance Fling.

[Read more...] about Interesting Kubernetes application demos

Filed Under: Cloud Native, Kubernetes, VMware Tanzu Tagged With: Kubernetes

Setup custom login banner when logging into a vSphere with Kubernetes Cluster

05/20/2020 by William Lam

While working on my PowerCLI module for enabling workload management on a vSphere with Kubernetes (K8s) Cluster, I discovered a pretty cool feature that is only available when using the vSphere with K8s API to enable Workload Management on a vSphere Cluster.

As part of the enablement spec, there is a new property called login_banner. Taking a closer look, this property allows you to specify a custom message that would be displayed as part of the initial login to your vSphere with K8s Cluster using the vSphere kubectl plugin. This is similar to an SSH login banner which can be used to provide internal disclaimers and/or additional instructions for your end users.

Here is an example of what the login banner can look like. Yup, vSphere with K8s supports emojis, or rather, the terminal you are using to log in can potentially render emojis 😀


The good news is that I have already added this feature to the new New-WorkloadManagement function, and you can specify a message by adding the -LoginBanner parameter.

For those interested in rendering emojis within their banner, you can take a look at the following example and you can find the complete list of emoji unicodes here.

PowerShell

# Multi-line banner string with a "partying face" emoji (U+1F973) on either side of the message
$LoginBanner = "
 
" + [char]::ConvertFromUtf32(0x1F973) + "vSphere with Kubernetes Cluster enabled by virtuallyGhetto " + [char]::ConvertFromUtf32(0x1F973) + "
 
"

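Putting it together, the banner string is simply passed in when enabling Workload Management. The call below is only a hypothetical shape; the -ClusterName parameter and its value are placeholders, and the remaining parameters the function requires are covered in the module documentation.

PowerShell

# Hypothetical invocation -- parameter names other than -LoginBanner are placeholders;
# see the module documentation for the full set of required parameters
New-WorkloadManagement -ClusterName "Workload-Cluster" -LoginBanner $LoginBanner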
Filed Under: Automation, Cloud Native, Kubernetes, vSphere 7.0 Tagged With: kubectl, Kubernetes, vSphere 7, vSphere with Kubernetes

