
virtuallyGhetto


Automation

Programmatically interact with the VMware Product Lifecycle Matrix

12/01/2020 by William Lam 1 Comment

I recently came across a really cool automation solution from Dale Coghlan, who built a PowerShell module to interact with the VMware Configuration Maximum (Config Max) Tool.

So, I published a thing today.... VMware.CfgMax - A PowerShell module to interact with https://t.co/NBrbCO3hcf https://t.co/RRQkh7ma1q

— Dale Coghlan (@DaleCoghlan) December 1, 2020

Although the Config Max tool does not currently provide an API, there is still a way to interact with it programmatically. Behind the scenes, the application uses JSON for its payload, which can be retrieved with a simple HTTP GET using PowerShell or any other language, for that matter. I also know the Config Max team quite well, as I had worked with them to incorporate the VMware Cloud on AWS configuration maximums, which also required a few enhancements to the tool. If you have any feedback, feel free to drop a comment and I will be happy to share it with them; in fact, one of my first asks when I met the team was to provide a public REST API 🙂
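
As a quick sketch of the pattern, here is how the payload could be fetched with PowerShell. The endpoint is a placeholder, not a documented API; you would capture the real URL the Config Max tool requests using Chrome Developer Tools.

# Hedged sketch: fetch the JSON payload the Config Max tool loads behind the
# scenes. The URL below is a placeholder -- capture the actual endpoint with
# Chrome Developer Tools, since there is no documented or supported API.
$uri = "https://configmax.vmware.com/REPLACE-WITH-CAPTURED-JSON-ENDPOINT"
$configMaxData = Invoke-RestMethod -Method Get -Uri $uri
$configMaxData | ConvertTo-Json -Depth 10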

After sharing Dale's tweet, I saw a question about doing something similar for VMware's Product Lifecycle Matrix, a website that helps customers understand the support lifecycle of a given VMware product or solution. The product lifecycle site has also recently been revamped and, although it also does not have a public API, a quick inspection with Chrome Developer Tools (a super useful tool) shows that you can programmatically grab the payload here too, which also happens to be JSON 🙂

Disclaimer: The VMware Product Lifecycle Matrix does not provide a public API, which also means there are no guarantees that the trick outlined below will continue to work going forward. This is exactly why you want a public, documented and supported API.

[Read more...] about Programmatically interact with the VMware Product Lifecycle Matrix


Filed Under: Automation Tagged With: curl, powershell, product lifecycle matrix

How to build a customizable Raspberry Pi OS Virtual Appliance (OVA)?

11/16/2020 by William Lam 5 Comments

After posting the instructions on how to install Raspberry Pi (rPI) OS into a Virtual Machine running on ESXi-Arm, I was already thinking about an easier consumption method that would benefit not only VMware customers interested in running rPI OS as a VM but also the larger rPI OS development community. Just imagine: you can now easily deploy, build and test multiple rPI OS instances/applications on a single physical rPI and get all the benefits of vSphere that many customers have enjoyed for the past two decades.

My goal was to build an rPI OS OVA that would enable some basic guest customization, such as networking and configuring the password for the default pi user. As you can see from the screenshot below, I was able to accomplish this with minimal trial and error, and it works fantastically!


I was initially planning to release the rPI OS OVA as a VMware Fling, which could then be made available to the community. However, due to challenges in the way rPI OS is distributed today via an image file, and the inclusion of packages that make it difficult to redistribute, I decided to forgo the VMware Fling route and simply publish the instructions with some supplemental scripts that can be used to produce the same rPI OS OVA that I have built for my own personal use.
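
The supplemental scripts are covered in the full post; purely as a hedged sketch of how such an OVA could then be consumed with PowerCLI, where the guestinfo.* property names are illustrative assumptions rather than the actual keys:

# Hedged sketch: deploying an OVA that exposes guest customization through
# OVF properties. All names, paths and guestinfo.* keys are example values.
Connect-VIServer -Server "vcsa.lab.local"
$ovfConfig = Get-OvfConfiguration -Ovf "C:\ova\rpios.ova"
$ovfConfig.Common.guestinfo.hostname.Value = "rpios-01"
$ovfConfig.Common.guestinfo.ipaddress.Value = "192.168.30.50"
$ovfConfig.Common.guestinfo.password.Value = "ExamplePassword1!"
Import-VApp -Source "C:\ova\rpios.ova" -OvfConfiguration $ovfConfig `
    -Name "rpios-01" -VMHost (Get-VMHost "esxi-arm-01.lab.local") `
    -Datastore (Get-Datastore "datastore1")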

It would have been great if this could be made more broadly available, and if anyone from the Raspberry Pi organization is reading this and is interested in hosting the download, I would be more than happy to provide you with the OVA file.

[Read more...] about How to build a customizable Raspberry Pi OS Virtual Appliance (OVA)?


Filed Under: Automation, ESXi-Arm Tagged With: Arm, ova, ovf, Raspberry Pi, Raspberry Pi OS

Automating kubectl-vsphere login for vSphere with Tanzu

11/12/2020 by William Lam 2 Comments

Before you can start deploying workloads to your vSphere with Tanzu Cluster, you need to first download the vSphere Plugin for Kubectl and then use it to log in to your Supervisor Cluster, which will generate a Kubernetes (K8s) context file that is stored in .kube/config.

Here is an example of using the vSphere Plugin for Kubectl:

./kubectl-vsphere login --server=10.10.0.64 -u <sso-username> --insecure-skip-tls-verify


For interactive sessions this is fine: upon successfully entering your password when prompted, you can switch to the correct K8s context and begin your workload deployment. For folks interested in automation, the one downside today is that the plugin does not provide a way to specify your password, either as a command-line argument or by reading it from a configuration file.

I have actually seen this topic come up a few times, both internally and externally, from those wanting to automate the end-to-end deployment of a Tanzu Kubernetes Grid (TKG) Cluster who have gotten stuck trying to figure out a way around this required manual step.
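
The workaround itself is in the full post (the tags hint at expect). As an alternative hedged sketch, assuming your build of the plugin honors the KUBECTL_VSPHERE_PASSWORD environment variable, a non-interactive login from PowerShell could look like this:

# Hedged sketch: non-interactive login, assuming the plugin reads the
# KUBECTL_VSPHERE_PASSWORD environment variable (verify on your version).
$env:KUBECTL_VSPHERE_PASSWORD = "ExamplePassword1!"   # example credential
./kubectl-vsphere login --server=10.10.0.64 -u "administrator@vsphere.local" --insecure-skip-tls-verify
kubectl config use-context "10.10.0.64"   # context name typically matches the server address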

[Read more...] about Automating kubectl-vsphere login for vSphere with Tanzu


Filed Under: Automation, Kubernetes, VMware Tanzu Tagged With: expect, kubectl, vSphere with Tanzu

Using Terraform to deploy a Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu 

11/10/2020 by William Lam 1 Comment

A few months back I saw that HashiCorp had released a new Kubernetes (K8s) Provider for Terraform, currently in Alpha state, which enables users to deploy K8s resources using the popular Infrastructure-as-Code (IaC) tool. I thought it would be pretty cool if this worked with our vSphere with Tanzu solution, since the Tanzu Kubernetes Grid (TKG) Service uses ClusterAPI via a custom VM Operator to deploy TKG Guest Clusters, which is just a fancy way of saying it uses the K8s API to deploy more K8s 🙂

Setting up the new K8s provider was pretty straightforward and, after spending a few minutes figuring out how to convert my existing TKG YAML into the HCL format that Terraform understands, I was able to run a terraform "plan" but quickly ran into the following error:

failed: admission webhook "default.mutating.tanzukubernetescluster.run.tanzu.vmware.com" does not support dry run

It looks like our tanzukubernetescluster admission webhook does not currently support dry-run operations, which are quite common (and useful) when using Terraform. I figured this was the end of that idea and ended up just filing a feature enhancement internally to add this support in the future, as I can see it being quite useful for our customers.

After finishing up a recent pet project of getting a fully functional vSphere with Tanzu setup running on a homelab budget with just 32GB of memory, I decided to take another look at this and discovered that the required tweak to get it working was super trivial, literally a single-line change.
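
The exact change is revealed in the full post, but for context: the K8s API server refuses to forward dry-run requests to an admission webhook unless the webhook declares itself side-effect free. A hedged sketch of that kind of single-line tweak from PowerShell, where the webhook configuration name and index are assumptions you would need to look up first:

# Hedged sketch: mark the webhook as safe for dry run so the API server
# stops rejecting dry-run requests. The configuration name and webhook
# index are assumptions -- confirm with:
#   kubectl get mutatingwebhookconfigurations -o yaml
$webhookConfig = "REPLACE-WITH-ACTUAL-CONFIG-NAME"
kubectl patch mutatingwebhookconfiguration $webhookConfig `
    --type=json `
    -p '[{"op":"replace","path":"/webhooks/0/sideEffects","value":"NoneOnDryRun"}]'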

Disclaimer: This is not officially supported by VMware, use at your own risk.

[Read more...] about Using Terraform to deploy a Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu 


Filed Under: Automation, Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: Kubernetes, Tanzu Kubernetes Grid, Terraform, vSphere with Tanzu

Mapping between vSphere Container Volume to Persistent Volume Claim (PVC) in vSphere 7.0 Update 1 using PowerCLI

11/04/2020 by William Lam Leave a Comment

With the introduction of the vSphere Container Storage Interface (CSI) 2.1, it looks like the previous method outlined by Cormac Hogan no longer applies when looking to map between a vSphere Container Volume (CV), which is a vSphere construct, and the underlying Persistent Volume Claim (PVC), which is a Kubernetes construct.


William Arroyo, a K8s Solution Engineer, recently noticed this behavior change and was asking if there was a way to use PowerCLI to still perform this lookup. Given that I had provided the original PowerCLI snippet on Cormac's blog, I was curious myself, and since I had just rebuilt my vSphere with Tanzu environment, I figured I would take a quick look to see where this information is now placed.

I also wanted to mention that you can easily find this information in the vSphere UI by simply clicking on the "Details" box next to the PVC.


With the latest PowerCLI 12.1 release, we have a number of Cloud Native Storage (CNS) cmdlets that we can leverage and, after a quick minute of poking around, I found that this information is exposed by the Get-CnsVolume cmdlet, with its ExtensionData property providing the more detailed properties.
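
The full walkthrough is behind the link; here is a minimal sketch of the idea, where the exact ExtensionData property path follows the CNS data model but should be treated as an assumption to verify in your own environment:

# Hedged sketch: pull the K8s PVC metadata out of a CNS volume's
# ExtensionData. Inspect $volume.ExtensionData yourself to confirm the
# property paths; server and datastore names are examples.
Connect-VIServer -Server "vcsa.lab.local"
$volume = Get-CnsVolume -Datastore (Get-Datastore "vsanDatastore") | Select-Object -First 1
$volume.ExtensionData.Metadata.EntityMetadata |
    Where-Object { $_.EntityType -eq "PERSISTENT_VOLUME_CLAIM" } |
    Select-Object EntityName, Namespace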

[Read more...] about Mapping between vSphere Container Volume to Persistent Volume Claim (PVC) in vSphere 7.0 Update 1 using PowerCLI


Filed Under: Automation, Cloud Native, PowerCLI Tagged With: CSI, Persistent Memory, vSphere Container Volume

Stateless ESXi-Arm with Raspberry Pi

11/03/2020 by William Lam 12 Comments

I am super excited to finally be able to share what I think is a really cool ESXi-Arm solution, which has been an evolution of this and this. This solution also incorporates a number of automation techniques I have shared over the years for ESXi scripted installations aka Kickstart, so it was really neat to see all those things get pulled into a single solution. Lastly, I also want to give a huge thanks to Cyprien Laplace, who threw the initial challenge my way after I had shared how to perform an ESXi-Arm scripted installation without using an SD Card.

ESXi-x86 can be deployed using either a stateful or stateless installation. In the latter case, ESXi is booted over the network using the vSphere Auto Deploy feature in vCenter Server, which does not require any local media for ESXi. Upon attaching itself to vCenter Server, Auto Deploy then leverages vSphere Host Profiles and its rules engine to determine which configurations or profiles should be applied to ensure the ESXi hosts are configured to their desired state. Here is a quick video overview of how Auto Deploy and Host Profiles work.

Fundamentally, vSphere Auto Deploy and Host Profiles could also work with ESXi-Arm, but today vCenter Server would require some code modifications for this to actually work.

OK, so am I teasing you with something that does not exist? Nope, but I just wanted to set the context 🙂

The solution that I have created boots ESXi-Arm over the network in a "stateless" manner, so there is no need for an SD Card or USB device plugged into the Raspberry Pi (rPI). In addition to the ESXi-Arm files, it also includes a custom payload which runs to retrieve additional configurations, automatically joining a desired vCenter Server and applying further customizations to the ESXi-Arm host. As you can see, this solution behaves similarly to vSphere Auto Deploy and Host Profiles but does not use either of those vSphere features, and it works with the ESXi-Arm Fling right now.
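
The payload itself is covered in the full post; purely as a hedged illustration of the "auto-join vCenter" step, here is the PowerCLI equivalent run from a management machine, with all names and credentials being example values:

# Hedged illustration (not the post's actual payload): joining a freshly
# booted stateless ESXi-Arm host to vCenter Server with PowerCLI.
Connect-VIServer -Server "vcsa.lab.local"
Add-VMHost -Name "esxi-arm-01.lab.local" `
    -Location (Get-Cluster "Arm-Cluster") `
    -User "root" -Password "ExamplePassword1!" -Force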

Technically speaking, these techniques could also be applied to ESXi-x86, but I will leave that to the reader for further exploration.

[Read more...] about Stateless ESXi-Arm with Raspberry Pi


Filed Under: Automation, ESXi-Arm Tagged With: Arm, esxi, Raspberry Pi, stateless

