virtuallyGhetto

PowerCLI

Which VM was this vSphere VM cloned from?

01/11/2021 by William Lam Leave a Comment

This was a question that I saw back in December on the VMware {code} Slack, which was quickly answered by the always awesome Luc Dekens. The solution is to look at vCenter Server Events, which are extremely rich in information and can be used for a number of things, including identifying the source VM that another VM was cloned from. When I was a customer, I used events all the time, both for auditing purposes and for identifying who performed a given operation, what it was and when it happened, including the source VMs for cloning operations.

Although this information may be known to some, there is still no elegant solution that helps someone quickly identify the source VM for a specific vSphere VM that was cloned. This topic also intrigued me, as I have seen the question come up in the past, so I figured I might as well add it to my random scripting backlog and take a look when I had some time.

Before taking a look at the solution, it is important to understand the different types of clones that exist in vSphere today, as well as the respective vCenter Server events that let us correlate a cloned VM back to both its source VM and the specific clone type (a sketch follows the list below).

Cloning Types

  • Full Clone - An independent copy of a virtual machine that shares nothing with the parent virtual machine after the cloning operation. Ongoing operation of a full clone is entirely separate from the parent virtual machine
  • Linked Clone - A copy of a virtual machine that shares virtual disks with the parent virtual machine in an ongoing manner. This conserves disk space, and allows multiple virtual machines to use the same software installation
  • Instant Clone - An independent copy of a virtual machine that starts executing from the exact running state of the source powered-on virtual machine. Instant Clone uses rapid in-memory cloning of a running parent virtual machine and copy-on-write, similar to Linked Cloning, to rapidly deploy virtual machines
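As a rough illustration of the event-based approach, here is a minimal sketch that pulls the clone event recorded for a given VM. The VM name and vCenter hostname are placeholders, and this only filters on the full-clone event type (VmClonedEvent); the full write-up covers how the other clone types can be told apart.

# Minimal sketch: find the clone event(s) recorded against a VM (names are placeholders)
Connect-VIServer -Server vcsa.vmware.corp

$vm = Get-VM -Name "DevVM-01"
Get-VIEvent -Entity $vm -MaxSamples ([int]::MaxValue) |
    Where-Object { $_ -is [VMware.Vim.VmClonedEvent] } |
    Select-Object CreatedTime,
        @{Name="SourceVM"; Expression={ $_.SourceVm.Name }},
        FullFormattedMessage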

[Read more...] about Which VM was this vSphere VM cloned from?

Filed Under: Automation, PowerCLI, vSphere Tagged With: clone, instant clone, linked clones, PowerCLI

Mapping between vSphere Container Volume to Persistent Volume Claim (PVC) in vSphere 7.0 Update 1 using PowerCLI

11/04/2020 by William Lam Leave a Comment

With the introduction of the vSphere Container Storage Interface (CSI) 2.1, it looks like the previous method outlined by Cormac Hogan no longer applies when looking to map between a vSphere Container Volume, which is a vSphere construct, and the underlying Persistent Volume Claim (PVC), which is a Kubernetes construct.


William Arroyo, a K8s Solution Engineer, recently noticed this behavior change and asked if there was a way to use PowerCLI to still perform this lookup. Given that I had provided the original PowerCLI snippet on Cormac's blog, I was curious myself, and since I had just rebuilt my vSphere with Tanzu environment, I figured I would take a quick look to see where this information is now placed.

I also want to mention that you can easily find this information in the vSphere UI by simply clicking on the "Details" box next to the PVC.


With the latest PowerCLI 12.1 release, we have a number of Cloud Native Storage (CNS) cmdlets that we can leverage, and after a quick minute of poking around, I found that this information is exposed through the Get-CnsVolume cmdlet, with the ExtensionData property providing the more detailed properties.
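As a rough sketch of what that lookup can look like, the snippet below lists each CNS volume on a datastore together with the Kubernetes objects recorded in its metadata. Connect-VIServer and Get-CnsVolume are standard PowerCLI 12.1 cmdlets, but the datastore name and the exact property path under ExtensionData (Metadata.EntityMetadata) are assumptions from my environment and may differ in yours.

# Minimal sketch: map vSphere Container Volumes to their Kubernetes objects (PVC, PV, Pod)
# Datastore name and the ExtensionData property path are assumptions -- verify in your setup
Connect-VIServer -Server vcsa.vmware.corp

Get-CnsVolume -Datastore (Get-Datastore -Name "vsanDatastore") | ForEach-Object {
    [PSCustomObject]@{
        CnsVolumeName      = $_.Name
        KubernetesEntities = ($_.ExtensionData.Metadata.EntityMetadata |
            ForEach-Object { "$($_.EntityType): $($_.EntityName)" }) -join ", "
    }
}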

[Read more...] about Mapping between vSphere Container Volume to Persistent Volume Claim (PVC) in vSphere 7.0 Update 1 using PowerCLI

Filed Under: Automation, Cloud Native, PowerCLI Tagged With: CSI, Persistent Memory, vSphere Container Volume

Automating HAProxy VM deployment with 3-NIC configuration using PowerCLI

11/02/2020 by William Lam Leave a Comment

When deploying the HAProxy VM as part of vSphere with Tanzu, customers have the option of deploying the HAProxy VM using either a 2-NIC or 3-NIC configuration. The default OVF Deployment Option is the 2-NIC design called "Default" and the 3-NIC design is called "Frontend".

From an Automation point of view, you can use either OVFTool or PowerCLI to automate the deployment. For a 2-NIC example, you can refer to my Automated vSphere with Tanzu Lab Deployment Script. However, for the 3-NIC example, a few folks were running into some issues when using PowerCLI for the automation.

The main issue is that, because the default OVF Deployment Option is the 2-NIC design (Default), the two additional OVF properties frontend_ip and frontend_gateway are basically hidden when PowerCLI processes the OVF properties.

Note: You can view these optional properties by running the following OVFTool command: ovftool --X:enableHiddenProperties vmware-haproxy-v0.1.8.ova


Even if you specify the "Frontend" OVF Deployment Option, PowerCLI does not seem to have the logic to retrieve the other optional parameters, and hence they cannot be set as part of the initial deployment.
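As a quick way to see the limitation for yourself, here is a minimal sketch that loads the OVA with Get-OvfConfiguration, selects the 3-NIC design and dumps every property PowerCLI parsed. The local OVA path is hypothetical and the "frontend" deployment option id is an assumption based on the OVA's naming.

# Minimal sketch: inspect which OVF properties PowerCLI exposes for the HAProxy OVA
$ovfPath   = "C:\Temp\vmware-haproxy-v0.1.8.ova"   # hypothetical local path
$ovfConfig = Get-OvfConfiguration -Ovf $ovfPath

# Select the 3-NIC design (deployment option id assumed to be "frontend")
$ovfConfig.DeploymentOption.Value = "frontend"

# frontend_ip and frontend_gateway will likely be missing from this list,
# which is the limitation described above
$ovfConfig.ToHashTable().GetEnumerator() | Sort-Object Name | Format-Table -AutoSize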

[Read more...] about Automating HAProxy VM deployment with 3-NIC configuration using PowerCLI

Filed Under: Automation, PowerCLI, VMware Tanzu Tagged With: HAProxy, PowerCLI, vSphere with Tanzu

Using PowerCLI and vSphere Tags to create/migrate HCX Mobility Groups to VMware Cloud SDDC

10/21/2020 by William Lam Leave a Comment

If using your voice to create an HCX Mobility Group and initiate a migration to a VMware Cloud SDDC is not your thing, here is a more practical example using PowerCLI, which includes the HCX cmdlets that were introduced a while back.


Here are the 12 configurable variables that you will need to update based on your own environment.

PowerShell
$VC_SERVER="vcsa.vmware.corp"
$VC_USERNAME="*protected email*"
$VC_PASSWORD="VMware1!"
$HCX_SERVER="hcx.vmware.corp"
 
$VSPHERE_TAG_CATEGORY="Cloud"
$VSPHERE_TAG_NAME="VMC"
 
# vMotion, Bulk, Cold, RAV, OsAssistedMigration
$MIGRATION_TYPE="RAV"
 
$TARGET_NETWORK_NAME="L2E_HOL-10-f58e483b"
$TARGET_DATASTORE_NAME="WorkloadDatastore"
$TARGET_RESOURCE_POOL_NAME="Compute-ResourcePool"
$TARGET_VM_FOLDER_NAME="Workloads"
 
$MOBILITY_GROUP_NAME="VMworld-2020-Demo"
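To give a feel for how these variables get used, here is a minimal sketch of the connection and tag lookup portion; building and starting the Mobility Group itself relies on the HCX cmdlets mentioned above and is covered in the full script.

# Minimal sketch: connect and gather the tagged VMs that will form the Mobility Group
Connect-VIServer -Server $VC_SERVER -User $VC_USERNAME -Password $VC_PASSWORD
Connect-HCXServer -Server $HCX_SERVER -Username $VC_USERNAME -Password $VC_PASSWORD

# Every VM carrying the vSphere Tag (e.g. Cloud/VMC) becomes a migration candidate
$tag = Get-Tag -Category $VSPHERE_TAG_CATEGORY -Name $VSPHERE_TAG_NAME
$vmsToMigrate = Get-VM -Tag $tag
$vmsToMigrate | Select-Object Name, PowerState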

[Read more...] about Using PowerCLI and vSphere Tags to create/migrate HCX Mobility Groups to VMware Cloud SDDC

Filed Under: Automation, Azure VMware Solution, Google Cloud VMware Engine, Oracle Cloud VMware Solution, PowerCLI, VMware Cloud, VMware Cloud on AWS Tagged With: HCX, Mobility Group, PowerCLI, tag, VMware Cloud, VMware Cloud on AWS

Automating Workload Management on vSphere with Tanzu

10/20/2020 by William Lam 6 Comments

As promised, here is the complementary solution to my existing Automated vSphere with Tanzu Lab Deployment Script, which will automatically deploy and configure the required infrastructure (vCenter Server Appliance, ESXi, vSAN and HAProxy VMs) so that you can quickly jump to enabling Workload Management on your vSphere Cluster.

FYI: Ben Corrie, one of the engineers on the vSphere with Tanzu team, recently published a vSphere with Tanzu 4-Part Deep Dive video series where he walks you through deploying everything from scratch, along with the concepts that should help you better understand how vSphere with Tanzu works. He is actually doing this in his own personal homelab, and I thought this might be useful to share with others. Kudos Ben, and I highly recommend folks check out his videos if you are new to vSphere with Tanzu and Kubernetes.


Enabling Workload Management is a manual step after the automated deployment script, and as you know, I prefer to automate as much as I can. I have updated my existing PowerCLI Workload Management Module to also support the new vSphere with Tanzu capability using HAProxy for networking instead of NSX-T. The module can be downloaded from the PowerShell Gallery by simply running the Install-Module command shown below.
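Assuming the module is published on the PowerShell Gallery under the name VMware.WorkloadManagement (the excerpt above does not spell the name out), the install would look something like this:

# Module name assumed to be VMware.WorkloadManagement -- adjust if it differs
Install-Module -Name VMware.WorkloadManagement -Scope CurrentUser
Get-Command -Module VMware.WorkloadManagement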

[Read more...] about Automating Workload Management on vSphere with Tanzu

Filed Under: Automation, PowerCLI, VMware Tanzu Tagged With: PowerCLI, vSphere with Tanzu, Workload Management

Using ESXi-Arm Fling as a lightweight vSphere Automation environment for PowerCLI and Terraform

10/09/2020 by William Lam 1 Comment

A set of use cases that I was really excited about when I first heard about ESXi-Arm a few years ago was around vSphere Automation and Development. I speak with many customers who are just starting out on their Automation journey, whether that is with PowerCLI, one of our many vSphere Automation SDKs or even the new vCenter REST API, through which all new features are being exposed these days.

One of the biggest challenges for newcomers is simply getting access to hardware that they can start playing around with, and although there is a plethora of vSphere homelab choices, it does require some amount of investment, which is definitely worth it in the long run. However, if you are just getting started and maybe want something a bit more lightweight, there are not too many options outside of an Intel NUC. I know many consultants actually carry around an Intel NUC that contains several VM images that they use with their clients, including for demos.

With the small form factor, low cost and reduced power consumption of the Raspberry Pi, I think this really opens up the door for some interesting creative solutions:

  • Basic vSphere footprint that can be used for work or learning purposes
  • Easy way to learn and explore the vSphere API with an actual host and enabling real VM deployments
  • Trying out Infrastructure-as-Code (IaC) tools such as Terraform and Ansible
  • Quick way to run through basic demos in front of customers
  • On-demand and self-contained lab environment for a small Hackathon at your local VMUG or even at VMworld

Something I was really interested in early on was being able to use ESXi-Arm on the Raspberry Pi to not only have a basic ESXi environment but also have a PowerCLI environment up and running in an Arm VM. My first thought was to set this up using Photon OS, which not only has an Arm distribution but also supports PowerShell and PowerCLI. I was hoping that with some tinkering I could easily get PowerShell for Arm to run on Photon OS (which it did), but I then ran into issues installing PowerCLI itself.

I decided to give up for now and take a look at Ubuntu, which also supports PowerShell for Arm, but the Microsoft documentation only listed instructions for 32-bit and ESXi-Arm requires 64-bit. Taking a look at the PowerShell release files, I noticed there was a 64-bit (arm64) package, and with a few minor adjustments to the commands, I got PowerCLI installed and connected back to my Raspberry Pi, which was attached to my x86 vCenter Server!
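Once the 64-bit (arm64) PowerShell package from the PowerShell GitHub releases is extracted and pwsh is running inside the Ubuntu VM, the PowerCLI side is just a normal module install. A minimal sketch, with the vCenter hostname as a placeholder:

# Run inside pwsh on the Arm VM; the vCenter hostname is a placeholder
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
Connect-VIServer -Server vcsa.vmware.corp
Get-VMHost | Select-Object Name, Version, ProcessorType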

[Read more...] about Using ESXi-Arm Fling as a lightweight vSphere Automation environment for PowerCLI and Terraform

Filed Under: Automation, ESXi-Arm, PowerCLI, vSphere Tagged With: Arm, esxi, PowerCLI, Terraform

Author

William Lam is a Senior Staff Solution Architect working in the VMware Cloud team within the Cloud Services Business Unit (CSBU) at VMware. He focuses on Automation, Integration and Operation for the VMware Cloud Software Defined Datacenters (SDDC).
