
virtuallyGhetto


vSphere 7.0

OVFTool 4.4.1 – Upload OVF/OVA from URL using upcoming “pull” mechanism

10/14/2020 by William Lam 2 Comments

I was helping a colleague yesterday with an OVA question and came to learn about an upcoming feature in the popular OVFTool utility that allows for a new method of uploading a remote OVF/OVA to either a vCenter Server and/or ESXi endpoint.

Historically, when you upload an OVF/OVA, whether it is stored locally or remotely at a URL, the data path actually flows through the system running OVFTool, sitting between the source and the destination, which is ultimately the ESXi host that performs the actual download. Although the OVF/OVA data is never stored on your local system, the traffic is proxied through it, which can add an unnecessary hop if the remote OVF/OVA URL can be accessed directly by the ESXi host.

A new --pullUploadMode flag has been introduced in the latest OVFTool 4.4.1 release, which allows the ESXi host to directly download (pull) the remote OVF/OVA from its URL, assuming the host has connectivity to it. In addition to the new version of OVFTool, you will also need either an ESXi 6.7 or 7.0 environment for this new feature to work.

Disclaimer: Although this feature is available in the latest OVFTool release, it is still under development and should be considered a Beta feature for folks who are interested in trying it out.

Since the ESXi host is directly downloading from the remote source, two additional security verifications have been implemented. The first is an additional vSphere privilege called "Pull from URL", which is found under the vApp section. Without this privilege, you will get a permission denied error.


Secondly, in addition to specifying the new CLI option, you will also need to provide another flag called --sourceSSLThumbprint, which should contain the SHA1 thumbprint of the endpoint hosting the OVF/OVA. This is an additional verification to ensure the validity of the server hosting the OVF/OVA.

Here is an example of deploying my latest ESXi 7.0 Update 1 Virtual Appliance OVA, which is remotely hosted. The quickest way to obtain the SHA1 thumbprint is to simply open a browser to the base URL, which is https://download3.vmware.com/, and inspect the certificate.


You will need to replace the spaces with ":" (colons), so the final string should look like BA:C6:4E:D9:AD:D4:53:B5:86:5A:5D:70:36:CF:89:93:D1:6C:F9:63
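
If you prefer the command line, the thumbprint can also be retrieved with openssl, which conveniently prints it already colon-delimited. A quick sketch, assuming the host is reachable on port 443:

echo | openssl s_client -connect download3.vmware.com:443 -servername download3.vmware.com 2>/dev/null | openssl x509 -noout -fingerprint -sha1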

Here is an example OVFTool command to deploy from the remote URL:

ovftool \
--X:logFile="ovftool.log" \
--acceptAllEulas \
--allowAllExtraConfig \
--allowExtraConfig \
--noSSLVerify \
--sourceSSLThumbprint="BA:C6:4E:D9:AD:D4:53:B5:86:5A:5D:70:36:CF:89:93:D1:6C:F9:63" \
--name="Nested-ESXi-7.0-Update-1-Appliance" \
--datastore=sm-vsanDatastore \
--net:"VM Network"="VM Network" \
--pullUploadMode \
https://download3.vmware.com/software/vmw-tools/nested-esxi/Nested_ESXi7.0u1_Appliance_Template_v1.ova \
'vi://USERNAME:PASSWORD@VCENTER-FQDN/Primp-Datacenter/host/Supermicro-Cluster'

If we switch over to the vSphere UI, we should see a new task called "Download remote files", which indicates the new pull method is being leveraged. One thing to note is that because ESXi is now performing the download directly, progress may not be visible to the OVFTool client, since it is no longer the source of the data transfer. Another thing to be aware of is that OVFTool itself has built-in retry logic to handle a slight interruption during the data transfer with the existing mechanisms. In the "pull" scenario, there is no retry on the ESXi side, so depending on connectivity, it is possible for deployments to fail, requiring a complete re-transfer.


Filed Under: Automation, OVFTool, vSphere 6.7, vSphere 7.0 Tagged With: ovftool, vSphere 6.7, vSphere 7.0

Automated vSphere with Tanzu Lab Deployment Script

10/13/2020 by William Lam 13 Comments

After sharing a sneak peek of my updated vSphere with Tanzu Automated Lab Deployment script on Twitter, I have been receiving non-stop requests about when the script will be available. It took a bit longer to finish off the documentation; creating the script was actually the easy part 😛

In any case, I am happy to finally share that the automated script for deploying the new vSphere with Tanzu "Basic", which is included as part of vSphere 7.0 Update 1, is now available! You can find full details at the following Github repo: https://github.com/lamw/vsphere-with-tanzu-basic-automated-lab-deployment

In addition to the deployment instructions on the Github repo, I have also included a sample walkthrough that covers both deploying the vSphere with Tanzu environment and enabling Workload Management on the vSphere Cluster, which is not part of the automated deployment script.

I will also be updating my existing Workload Management PowerCLI Module to incorporate the new requirements for automating the enablement of Workload Management on a vSphere with Tanzu Basic Cluster. Together with this script, you will now have the ability to deploy vSphere with Tanzu end-to-end in under an hour!

More details will be shared in a future blog post. I hope folks enjoy the script; it was a ton of work!


Filed Under: Automation, Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: vSphere 7.0 Update 1, vSphere with Tanzu

Quick Tip – vmware-iso builder for Packer now supported with ESXi 7.0

10/12/2020 by William Lam 2 Comments

When vSphere 7.0 GA'ed earlier this year, one of the changes I noticed while going through the release notes was the removal of the VNC server from ESXi. It was disabled by default, but users could enable it on a per-VM basis and connect to a specific VM using VNC. Not many customers used this feature, so its removal made sense.

However, one implication is that if you use HashiCorp Packer and the vmware-iso builder to create automated images with ESXi, it will no longer work after upgrading to ESXi 7.0, as Packer relies on this VNC interface to send automated keystrokes to a VM as part of its automation. After learning about this change in vSphere 7.0, I filed a Packer Github Enhancement to see if someone would be open to re-implementing the keystroke functionality by leveraging the vSphere HTML5 Console SDK, which would allow the use of VNC over websockets. The PR was closed about a month ago, and while recently working on the vCenter Event Broker Appliance (VEBA) project, I finally got a chance to verify the feature after upgrading my physical ESXi host to the latest 7.0 Update 1. I am happy to share that the vmware-iso builder now functions as before.

The following two lines should be added to your Packer template:

"vnc_over_websocket": true
"insecure_connection": true

For reference, you can also refer to the VEBA Packer template.
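
To put those two options in context, here is a minimal, hypothetical vmware-iso builder stanza; the host, credentials, and ISO values below are placeholders and not taken from the VEBA template:

{
  "builders": [
    {
      "type": "vmware-iso",
      "remote_type": "esx5",
      "remote_host": "esxi.example.com",
      "remote_username": "root",
      "remote_password": "MySecretPassword",
      "remote_datastore": "datastore1",
      "iso_url": "https://example.com/os-installer.iso",
      "iso_checksum": "sha256:<checksum-of-iso>",
      "vnc_over_websocket": true,
      "insecure_connection": true
    }
  ]
}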

An alternative workaround is to use the vsphere-iso builder, which leverages the vSphere USB scan codes API to send keystrokes to a VM without having to rely on the VNC interface. One downside is that you do need a vCenter Server, as the vsphere-iso builder interacts with the vSphere API on vCenter Server rather than going directly to ESXi; this also impacts anyone using Free ESXi to build their Packer images.

The primary reason I had not switched over to the vsphere-iso builder was that I had quite a few Packer templates using the vmware-iso builder, and the syntax is not portable between the two. For this reason alone, I decided to hold off upgrading my physical ESXi host to 7.0 until now.


Filed Under: Automation, vSphere 7.0 Tagged With: esxi, Packer, vnc, websocket

Workaround for ESXi-Arm in vSphere 7.0 Update 1

10/12/2020 by William Lam 4 Comments

In vSphere 7.0 Update 1, a new capability was introduced called vSphere Cluster Services (vCLS), which provides a new framework for decoupling and managing distributed control plane services for vSphere. To learn more, I highly recommend the detailed blog post linked above by Niels. In addition, Duncan also has a great blog post covering common questions, answers, and considerations for vCLS, which is definitely worth a read as well.

vSphere DRS is one of the vSphere features that relies on this new vCLS service, and this is made possible by the vCLS VMs, which are deployed automatically when ESXi hosts are detected within a vSphere Cluster (regardless of whether vSphere DRS is enabled). For customers who may be using the ESXi-Arm Fling with a vSphere 7.0 Update 1 environment, you may have noticed continuous "Delete File" tasks within vCenter that seem to loop forever.

This occurs because the vCLS service first tests whether it can upload a file to the datastore and, once it can, deletes the file. The issue is that the vCLS VMs are x86 and cannot be deployed to an ESXi-Arm Cluster, as the CPU architecture is not supported. There is a workaround to disable vCLS for the ESXi-Arm Cluster, which I will go into shortly. However, because vCLS cannot properly deploy, vSphere DRS capabilities will not be possible when using vSphere 7.0 Update 1 with ESXi-Arm hosts. If you wish to use vSphere DRS, it is recommended to use either vSphere 7.0c or vSphere 7.0d.

Note: vSAN does not rely on vCLS to function, but to use vSAN you must place your ESXi-Arm hosts into a vSphere Cluster, hence applying this workaround would be desirable for that use case as well.
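
For reference, while the full workaround is covered in the remainder of the post, the generally documented approach for disabling vCLS on a per-cluster basis (known as Retreat Mode) is a vCenter Server advanced setting keyed off the cluster's MoRef ID. A sketch, assuming a hypothetical cluster ID of domain-c8:

config.vcls.clusters.domain-c8.enabled = false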

[Read more...] about Workaround for ESXi-Arm in vSphere 7.0 Update 1


Filed Under: ESXi-Arm, vSphere 7.0 Tagged With: Arm, esxi, vCenter Clustering Services, vCLS, vSphere 7.0 Update 1

How to SSH to Tanzu Kubernetes Grid (TKG) Cluster in vSphere with Tanzu?

10/10/2020 by William Lam 6 Comments

For troubleshooting your vSphere with Tanzu environment, you may need to SSH to the Control Plane of your Tanzu Kubernetes Grid (TKG) Cluster. This was something I had to do to verify some basic network connectivity. At a high level, we need to log in to our Supervisor Cluster and retrieve the SSH secret for our TKG Cluster; since this question recently came up, below are the instructions.


UPDATE (10/10/20) - It looks like it is also possible to retrieve the TKG Cluster credentials without needing to SSH directly to the Supervisor Control Plane VM; see Option 1 for the alternate solution.

Option 1:

Step 1 - Log in to the Supervisor Control Plane using the following command:

kubectl vsphere login --server=172.17.31.129 -u administrator@vsphere.local --insecure-skip-tls-verify

Step 2 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to retrieve the plain-text value. You will need two pieces of information, which are then substituted into the command below:

  • The name of your vSphere Namespace which was created in your vSphere with Tanzu environment; in my example it is called primp-industries
  • The name of your TKG Cluster; in my example it is called william-tkc-01, and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath={.data.ssh-passwordkey} | base64 -d

Step 3 - Finally, you can now SSH to the TKG Cluster from a system which has network connectivity; this can be the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user; use the password retrieved in the previous step.
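
For example, assuming a hypothetical TKG Control Plane node IP of 10.10.0.10:

ssh vmware-system-user@10.10.0.10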

Option 2:

Step 1 - SSH to the VCSA and then run the following script to retrieve the Supervisor Cluster Control Plane VM credentials:

/usr/lib/vmware-wcp/decryptK8Pwd.py

Step 2 - SSH to the IP Address using the root username and the password provided by the previous command

Step 3 - Next, we need to retrieve the SSH password secret for our TKG Cluster and perform a base64 decode to retrieve the plain-text value. You will need two pieces of information, which are then substituted into the command below:

  • The name of your vSphere Namespace which was created in your vSphere with Tanzu environment; in my example it is called primp-industries
  • The name of your TKG Cluster; in my example it is called william-tkc-01, and the secret name will be [tkg-cluster-name]-ssh-password as shown in the example below

kubectl -n primp-industries get secrets william-tkc-01-ssh-password -o jsonpath={.data.ssh-passwordkey} | base64 -d

Step 4 - Finally, you can now SSH to the TKG Cluster from a system which has network connectivity; this can be the Supervisor Cluster Control Plane VM or another system. The SSH username for the TKG Cluster is vmware-system-user; use the password retrieved in the previous step, just as shown in the example under Option 1.


Filed Under: Kubernetes, VMware Tanzu, vSphere 7.0 Tagged With: Tanzu Kubernetes Grid, vmware-system-user, vSphere 7.0 Update 1, vSphere with Tanzu

ESXi 7.0 Update 1 now includes NIC driver for Intel NUC 10

09/21/2020 by William Lam 14 Comments

With the upcoming release of vSphere 7.0 Update 1, and specifically ESXi 7.0 Update 1, support for the onboard NIC of the Intel NUC 10 (Frost Canyon) is now included, and the community ne1000 VIB driver is no longer needed. If you had previously installed the community driver, you can uninstall the VIB after successfully upgrading to ESXi 7.0 Update 1.
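
Here is a sketch of what the cleanup could look like; verify the exact VIB name reported by the first command before removing anything:

esxcli software vib list | grep -i ne1000
esxcli software vib remove -n <vib-name-from-above>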


Filed Under: ESXi, Home Lab, vSphere 7.0 Tagged With: ESXi 7.0 Update 1, Intel NUC, vSphere 7.0 Update 1

