
virtuallyGhetto


Quick Tip – Rebooting VCSA causes vSphere with Tanzu to show ESXi hosts not licensed for Workload Management

11/08/2020 by William Lam 3 Comments

I had just set up a new vSphere with Tanzu environment on my Intel NUC for an upcoming blog post, and after rebooting the vCenter Server Appliance (VCSA), I noticed the Workload Management UI threw the following licensing error:

None of the hosts connected to this vCenter are licensed for Workload Management.

This was quite strange since both the ESXi host and the VCSA had been installed less than a day earlier, and I was using the default 60-day evaluation license that is automatically built in.

The even weirder thing was that I was still able to perform operations using the Workload Management APIs, so I figured this must be a vSphere UI bug, but I could not find a way to get the UI to display. After reaching out to some folks internally, a suggestion was made to try either incognito mode or another browser, and to my surprise, that fixed the problem! I suspect a cookie set during the initial Workload Management enablement, while going through the evaluation workflow, is now causing this unexpected early licensing check.

I have already filed an internal bug, but if you do hit this problem, simply clear your cookies for the VCSA and the Workload Management UI will display properly again.

Filed Under: VMware Tanzu Tagged With: vSphere with Tanzu

TKG Demo Appliance on VMware Cloud on DellEMC

11/05/2020 by William Lam Leave a Comment

We have been getting interest from customers who want to run Tanzu Kubernetes Grid (TKG) on our VMware Cloud on Dell EMC (VMConDellEMC) offering, and I was asked to see if my Tanzu Kubernetes Grid (TKG) Demo Appliance would also work on this VMware Cloud solution, especially since it works great on both VMware Cloud on AWS and existing on-premises vSphere 6.7 Update 3 or later environments.

With the help of our VMConDellEMC team, I got access to an SDDC and was able to validate that everything works as outlined in my TKG workshop guide. I have also updated the pre-req documentation to include a specific section for setting up a VMConDellEMC SDDC, most of which is similar to the existing networking requirements. Once you have your customer uplink network configured for your VMConDellEMC SDDC, you will be able to reach the TKG Demo Appliance running on the NSX-T Segment. A nice thing about the setup is that the TKG Demo Appliance is built in an air-gapped fashion, so no internet access is required, which the TKG CLI otherwise assumes by default. This is a great way to quickly get started with TKG and play with Kubernetes!


This was actually my first time using VMConDellEMC, and I thought I would push the limits a bit and deploy a slightly larger TKG Workload Cluster than I normally would, especially since I got access to a 5-Node SDDC 😀

[Read more...] about TKG Demo Appliance on VMware Cloud on DellEMC

Filed Under: VMware Cloud Tagged With: VMware Cloud, VMware Cloud on Dell EMC

Mapping between vSphere Container Volume to Persistent Volume Claim (PVC) in vSphere 7.0 Update 1 using PowerCLI

11/04/2020 by William Lam Leave a Comment

With the introduction of the vSphere Container Storage Interface (CSI) 2.1, it looks like the previous method outlined by Cormac Hogan no longer applies when looking to map a vSphere Container Volume, which is a vSphere construct, to the underlying Persistent Volume Claim (PVC), which is a Kubernetes construct.


William Arroyo, a K8s Solution Engineer, recently noticed this behavior change and asked if there was a way to use PowerCLI to still perform this lookup. Given that I had provided the original PowerCLI snippet on Cormac's blog, I was curious myself, and since I had just rebuilt my vSphere with Tanzu environment, I figured I would take a quick look to see where this new information might now be placed.

I also wanted to mention that you can easily find this information in the vSphere UI by simply clicking on the "Details" box next to the PVC.


With the latest PowerCLI 12.1 release, we have a number of Cloud Native Storage (CNS) cmdlets that we can leverage, and after a quick minute of poking around, I found that this new information can be retrieved using the Get-CnsVolume cmdlet along with its ExtensionData property for more detailed properties.
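To give a rough idea of how that lookup can be performed, here is a minimal PowerCLI sketch, assuming PowerCLI 12.1 or later; the vCenter hostname, datastore name and volume name are placeholders, and the exact property path under ExtensionData may vary across CSI versions:

Connect-VIServer -Server vcsa.primp-industries.com   # placeholder VCSA

# Retrieve the vSphere Container Volume by datastore and volume name (placeholders)
$cnsVol = Get-CnsVolume -Datastore (Get-Datastore -Name "vsanDatastore") -Name "pvc-8a8b2f0c-1111-2222-3333-444455556666"

# ExtensionData exposes the raw CNS volume object; its Kubernetes metadata,
# registered by the CSI driver, carries the PVC/PV/Pod entity names
$cnsVol.ExtensionData.Metadata.EntityMetadata |
    Select-Object EntityType, EntityName, Namespace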

[Read more...] about Mapping between vSphere Container Volume to Persistent Volume Claim (PVC) in vSphere 7.0 Update 1 using PowerCLI

Filed Under: Automation, Cloud Native, PowerCLI Tagged With: CSI, Persistent Memory, vSphere Container Volume

Configure network proxy using YTT with Tanzu Kubernetes Grid (TKG)

11/04/2020 by William Lam 1 Comment

I was doing some work with Tanzu Kubernetes Grid (TKG) 1.2 using my TKG Demo Appliance Fling, and the environment I was working in did not have direct internet access, which is usually the case for most Production environments. I needed outbound connectivity from the TKG Worker Nodes so that they could pull down a set of containers as part of attaching to our Tanzu Mission Control (TMC) service.

Luckily, there was an HTTP proxy server that I could use for this connectivity, and we just need to update our TKG templates so the TKG worker nodes will have the proxy settings. In the past, applying customizations such as adding a network proxy to TKG meant manually editing the TKG Dev/Prod YAML files. As previously shared, Tanzu Kubernetes Grid (TKG) 1.2 now uses the YAML Templating Tool (YTT) for customizing TKG plans.

Although the TKG documentation provides a YTT template example, it does not actually cover the TKG Worker Nodes, which is what I needed; I also needed to add a command to postKubeadmCommands for the network proxy to be activated. The issue is that this section no longer exists in the base template as it did in previous versions of TKG, so some additional YTT annotation was required to get this working.

Here is the complete working ~/.tkg/providers/infrastructure-vsphere/ytt/proxy_nameserver.yaml template that adds the respective HTTP(S) proxy server and No Proxy settings.

#@ load("@ytt:overlay", "overlay")
 
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #! Add HTTP_PROXY to containerd configuration file
    #@overlay/append
    - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/append
    - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
    #@overlay/match missing_ok=True
    postKubeadmCommands:
    #@overlay/append
    - systemctl restart containerd
 
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
spec:
  template:
    spec:
      preKubeadmCommands:
      #! Add HTTP_PROXY to containerd configuration file
      #@overlay/append
      - echo $'[Service]\nEnvironment="HTTP_PROXY=http://1.2.3.4:3128/"' > /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="HTTPS_PROXY=http://1.2.3.4:3128"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/append
      - echo 'Environment="NO_PROXY=localhost,192.168.4.0/24,192.168.3.0/24,registry.rainpole.io,10.2.224.4,.svc,100.64.0.0/13,100.96.0.0/11"' >> /etc/systemd/system/containerd.service.d/http-proxy.conf
      #@overlay/match missing_ok=True
      postKubeadmCommands:
      #@overlay/append
      - systemctl restart containerd

Filed Under: Kubernetes, VMware Tanzu Tagged With: http proxy, proxy, Tanzu Kubernetes Grid

New SDDC Linking capability for VMware Cloud on AWS

11/03/2020 by William Lam Leave a Comment

Back in September, the VMware Transit Connect (vTGW) feature for VMware Cloud on AWS (VMConAWS) was released, providing users a simplified way of connecting AWS VPCs, AWS Direct Connect Gateways and customer on-premises datacenters from a networking connectivity standpoint. As part of this feature, a new logical construct called an SDDC Group was introduced, which allows customers to easily apply common networking connectivity policies across a number of SDDCs rather than having to manage them separately, which can quickly get complex from an operational point of view.

The SDDC Group not only simplifies the initial setup, it also simplifies Day 2 operations when new SDDCs are provisioned and added to the SDDC Group. The networking policies configured at the SDDC Group level automatically apply to all new SDDCs, which makes this a really slick solution. As SDDCs are removed from the SDDC Group, the related configurations are automatically un-provisioned and detached from the respective networking resources.


Simplified network connectivity using an SDDC Group was just the beginning! Today, the VMware Cloud team has released a new feature built on top of the SDDC Group construct called vCenter Linking for SDDC Groups. Just as the name implies, customers can now easily "link" multiple vCenter Servers within an SDDC Group, enabling a single view of all vCenter Servers from any one of the vSphere UIs within the SDDC. For those familiar with Enhanced Linked Mode (ELM), this is basically that, but for SDDCs running in the Cloud!

The workflow could not be simpler, and last week I got to try it out and was quite impressed! Under the hood, this leverages the vCenter Convergence capability, and when enabling vCenter Linking, the service automatically handles all of those details, including the necessary NSX-T firewall rules that need to be configured across ALL SDDCs to allow for secure connectivity. Just imagine having to do this each time an SDDC is added or removed: you would need to manually go to every SDDC and update or create new firewall rules! This is all hidden away from the user; by simply associating SDDCs in the SDDC Group, the configurations are applied automatically for you.

Just setup an upcoming feature which builds on top of VMware Transit Connect Gateway (vTGW) allowing #VMWonAWS customers to now “Link” multiple SDDCs together. Just 1-Click, you now can access all Cloud vCenter Servers using any one vSphere UI. ELM for Cloud!#VMwareCloud pic.twitter.com/dImg6Yloe3

— William Lam (@lamw) October 30, 2020

One question that I did have while trying out this new feature was how it works with existing features such as Hybrid Linked Mode (HLM) and ELM.

[Read more...] about New SDDC Linking capability for VMware Cloud on AWS

Filed Under: VMware Cloud, VMware Cloud on AWS Tagged With: ELM, Enhanced Linked Mode, HLM, Hybrid Linked Mode, SDDC Group, VMware Cloud, VMware Cloud on AWS

Stateless ESXi-Arm with Raspberry Pi

11/03/2020 by William Lam 12 Comments

I am super excited to finally be able to share what I think is a really cool ESXi-Arm solution, which has been an evolution of this and this. This solution also incorporates a number of automation techniques I have shared over the years for ESXi scripted installations, aka Kickstart, so it was really neat to see all of those things get pulled into a single solution. Lastly, I also want to give a huge thanks to Cyprien Laplace, who threw the initial challenge my way after I had shared how to perform an ESXi-Arm scripted installation without using an SD Card.

ESXi-x86 can be deployed using either a stateful or stateless installation. In the latter case, ESXi is booted over the network using the vSphere Auto Deploy feature in vCenter Server, which does not require any local media for ESXi. Upon attaching itself to vCenter Server, Auto Deploy then leverages vSphere Host Profiles and its rules engine to determine which configurations or profiles should be applied to ensure the ESXi hosts are configured per their desired state. Here is a quick video overview of how Auto Deploy and Host Profiles work.
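For context, here is a rough PowerCLI sketch of what that ESXi-x86 Auto Deploy workflow typically looks like; the vCenter hostname, depot path, image profile, host profile, cluster and IP range are all placeholders for illustration:

Connect-VIServer -Server vcsa.primp-industries.com   # placeholder VCSA

# Load an ESXi offline bundle and pick an image profile (placeholder path/name)
Add-EsxSoftwareDepot -DepotUrl "C:\Depot\VMware-ESXi-7.0U1-depot.zip"
$imageProfile = Get-EsxImageProfile -Name "ESXi-7.0U1*standard" | Select-Object -First 1

# Create and activate an Auto Deploy rule: hosts network booting from this IP range
# get this image profile, host profile and cluster placement (all placeholders)
$hostProfile = Get-VMHostProfile -Name "Stateless-Host-Profile"
New-DeployRule -Name "Stateless-ESXi" -Item $imageProfile, $hostProfile, (Get-Cluster -Name "Compute-Cluster") -Pattern "ipv4=192.168.30.50-192.168.30.100"
Add-DeployRule -DeployRule "Stateless-ESXi"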

Fundamentally, vSphere Auto Deploy and Host Profiles can also work with ESXi-Arm, but today vCenter Server would require some code modification for this to actually work.

OK, so am I teasing you with something that does not exist? Nope, but I just wanted to help set the context 🙂

The solution that I have created boots ESXi-Arm over the network in a "stateless" manner, so there is no need for an SD Card or USB device plugged into the Raspberry Pi (rPI). In addition to the ESXi-Arm files, it also includes a custom payload which runs to retrieve additional configurations that can automatically join a desired vCenter Server as well as apply further customizations to the ESXi-Arm host. As you can see, this solution behaves similarly to vSphere Auto Deploy and Host Profiles, but it does not use either of those vSphere features and works with the ESXi-Arm Fling right now.
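Purely for illustration, the vCenter-side equivalent of that auto-join step looks something like the following PowerCLI call; the host name, cluster and credentials are placeholders, and the solution handles this attachment automatically rather than requiring a manual call:

Connect-VIServer -Server vcsa.primp-industries.com   # placeholder VCSA

# Attach a freshly booted ESXi-Arm host to a cluster (placeholder host, cluster and credentials)
Add-VMHost -Name "esxi-arm-01.primp-industries.com" -Location (Get-Cluster -Name "Arm-Cluster") -User "root" -Password "VMware1!" -Force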

Technically speaking, these techniques can also be applied to ESXi-x86 but I will leave that to the reader for further exploration.

[Read more...] about Stateless ESXi-Arm with Raspberry Pi

Filed Under: Automation, ESXi-Arm Tagged With: Arm, esxi, Raspberry Pi, stateless

