
virtuallyGhetto


ovftool

OVFTool 4.4.1 – Upload OVF/OVA from URL using upcoming “pull” mechanism

10/14/2020 by William Lam 2 Comments

I was helping a colleague yesterday with an OVA question and came to learn about an upcoming feature in the popular OVFTool utility that allows for a new method of uploading a remote OVF/OVA to either a vCenter Server or ESXi endpoint.

Historically, when you upload an OVF/OVA, whether it is stored locally or hosted remotely at a URL, the data path actually flows through the system running OVFTool, which sits between the source and the destination, ultimately the ESXi host that receives the upload. Although the OVF/OVA data is never stored on your local system, the traffic is proxied through it, which can add an unnecessary hop if the ESXi host can reach the remote OVF/OVA URL directly.

A new --pullUploadMode flag has been introduced in the latest OVFTool 4.4.1 release, which allows the ESXi host to directly download (pull) from the remote OVF/OVA URL, assuming it has connectivity. In addition to this version of OVFTool, you will also need an ESXi 6.7 or 7.0 environment for this new feature to work.

Disclaimer: Although this feature is available in the latest OVFTool release, it is still under development and should be considered a Beta feature in case folks are interested in trying it out.

Since the ESXi host is downloading directly from the remote source, two additional security verifications have been implemented. The first is an additional vSphere privilege called "Pull from URL", which is found under the vApp section. Without this privilege, you will get a permission denied error.


Secondly, in addition to specifying the new CLI option, you will also need to provide another flag called --sourceSSLThumbprint, which should contain the SHA1 thumbprint of the endpoint hosting the OVF/OVA. This is an additional verification to ensure the validity of the server hosting the OVF/OVA.

Here is an example of deploying my latest ESXi 7.0 Update 1 Virtual Appliance OVA, which is remotely hosted. The quickest way to obtain the SHA1 thumbprint is to simply open a browser to the base URL, which is https://download3.vmware.com/, and inspect the site's certificate.


You will need to replace the spaces with ":" (colons), so the final string should look like BA:C6:4E:D9:AD:D4:53:B5:86:5A:5D:70:36:CF:89:93:D1:6C:F9:63
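Alternatively, if you prefer the command line, an openssl one-liner along the following lines should return the same fingerprint; this is just a convenience sketch and not part of the original workflow, so adjust the hostname as needed:

echo | openssl s_client -connect download3.vmware.com:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1

Conveniently, openssl already prints the fingerprint in the colon-separated form that --sourceSSLThumbprint expects.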

Here is an example OVFTool command to deploy from the remote URL:

ovftool \
--X:logFile="ovftool.log" \
--acceptAllEulas \
--allowAllExtraConfig \
--allowExtraConfig \
--noSSLVerify \
--sourceSSLThumbprint="BA:C6:4E:D9:AD:D4:53:B5:86:5A:5D:70:36:CF:89:93:D1:6C:F9:63" \
--name="Nested-ESXi-7.0-Update-1-Appliance" \
--datastore=sm-vsanDatastore \
--net:"VM Network"="VM Network" \
--pullUploadMode \
https://download3.vmware.com/software/vmw-tools/nested-esxi/Nested_ESXi7.0u1_Appliance_Template_v1.ova \
'vi://*protected email*:[email protected]/Primp-Datacenter/host/Supermicro-Cluster'

If we switch over to the vSphere UI, we should see a new task called "Download remote files", which indicates the new pull method is being leveraged. One thing to note is that because ESXi is now performing the download directly, progress may not be reported by the OVFTool client, since it is no longer the source for the data transfer. Another thing to be aware of is that OVFTool itself has built-in retry logic in case there is a slight interruption during the data transfer with the current mechanisms. In the "pull" scenario, there is no retry by ESXi, so depending on connectivity, it is possible deployments can fail and a complete re-transfer would be required.


Filed Under: Automation, OVFTool, vSphere 6.7, vSphere 7.0 Tagged With: ovftool, vSphere 6.7, vSphere 7.0

NSX-T Edge OVF property to automatically join NSX-T Management Plane

04/20/2020 by William Lam 2 Comments

After publishing my vSphere 7 with Kubernetes automation lab deployment script, I was looking at my NSX-T Edge code, which leverages the vSphere VM Keystroke API to automate joining the NSX-T Edge to the NSX-T Management Plane. This technique avoids the need for SSH access to both the NSX-T Edge and Manager, which the official VMware method for configuring the Edge, as outlined in the documentation, requires.

This is certainly unfortunate as most customers normally disable SSH by default and only enable it for troubleshooting/debugging purposes. As far as I know, there are no remote NSX-T APIs for configuring an NSX-T Edge that has been deployed outside of NSX-T Manager, which has its own implications.

I recently had a chance to revisit some research I had made a note of when I first started working with NSX-T. While inspecting the NSX-T Edge OVA, I found several OVF properties that begin with mp, which per the description refer to the NSX-T Manager. At the time, I was not able to figure out the required combination of keys and values. Taking a closer look and poking around the appliance and its logs, I was finally able to figure out the correct combination, which turned out to be easy once you knew what it was expecting.

To help demonstrate this functionality, I have created a basic PowerCLI script, edge-auto-join-nsxt-management-plane.ps1, which uses information from your already deployed NSX-T Manager to deploy the desired number of NSX-T Edges, which will automatically join the NSX-T Management Plane upon initial setup.


The way this works is that the following four OVF properties must be filled in as part of the NSX-T Edge deployment (a generic sketch of passing OVF properties with OVFTool follows the link below):

[Read more...] about NSX-T Edge OVF property to automatically join NSX-T Management Plane
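While the actual property names are detailed in the full post, the underlying mechanism is simply OVF property injection at deployment time, which OVFTool exposes through its --prop: option. The snippet below is only a minimal sketch of that pattern; the property names, the OVA filename, the network mapping and all values shown are placeholders for illustration, not the real mp* keys from the Edge OVA:

ovftool \
--acceptAllEulas \
--allowExtraConfig \
--name="NSX-T-Edge-01" \
--datastore=sm-vsanDatastore \
--net:"Network 0"="VM Network" \
--prop:mpPlaceholderAddress=nsxt-manager.example.com \
--prop:mpPlaceholderToken=some-registration-token \
nsx-edge-placeholder.ova \
'vi://administrator%40vsphere.local@vcenter.example.com/Datacenter/host/Cluster'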


Filed Under: Automation, NSX, OVFTool, PowerCLI Tagged With: NSX Edge, NSX-T, ovftool

Really cool updates with OVFTool 4.4 and support for vSphere 7

04/02/2020 by William Lam 5 Comments

vSphere 7 officially GA'ed this morning and, with folks starting to download ESXi and the vCenter Server Appliance, do not forget about all the supporting tools such as the latest PowerCLI 12.0 release, which includes a number of enhancements, as well as the various vSphere Management and Automation SDKs.

🚀 #vSphere7 is now GA 🚀

Start your downloads (RN’s still staging) & make sure to tune in to launch later this morning!

🔸VCSA RN:https://t.co/d6hr8ndAiG
🔹ESXi RN: https://t.co/d6hr8ndAiG

🔸VCSA Download: https://t.co/FbYluRI9te
🔹ESXi Download: https://t.co/bfHRAzzS43

— William Lam (@lamw) April 2, 2020

One of my most frequently used tools on a daily basis, sometimes even more than PowerCLI, is OVFTool, which is now at version 4.4. It officially supports vSphere 7, but it also includes a number of really awesome enhancements and bug fixes.

  • OVFTool 4.4 Release Notes
  • OVFTool 4.4 Download

While looking over the OVFTool release notes, I noticed a few interesting tidbits that I thought were worth calling out:

OVF Tool now can upload disk files to the host in parallel, and download disk files from the host in parallel. OVA is unsupported. Parallelism is limited by the number of CPUs. See the --parallelThreads=N option in the OVF Tool User's Guide for details.

This is a most welcome feature for customers with extremely large VMs, where uploads and/or downloads can take a considerable amount of time when only a single CPU thread is used. With this feature, you can now enable multiple CPU threads with the --parallelThreads parameter, which should really help with performance! Even for smaller VMs, you can still benefit if you have additional CPU resources to allocate, and it is something I will be using going forward!
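As a quick illustration of what that might look like, here is a hedged sketch of an OVF export using the new option; the thread count, credentials, inventory path and output location are all made-up values:

ovftool \
--parallelThreads=4 \
'vi://administrator%40vsphere.local@vcenter.example.com/Datacenter/vm/Large-VM' \
/tmp/Large-VM/Large-VM.ovf

Since the release notes call out that OVA is unsupported for parallel transfers, the sketch targets an OVF rather than an OVA.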

For multi-disk virtual machines, OVF Tool now includes the --multiDatastore flag to specify datastore per disk. See the OVF Tool User's Guide for details.

This is another welcome feature for customers who might have an OVA that contains multiple VMDKs and want to explicitly place them on specific datastores.

The ARM64 architecture on Linux is now supported.

Finally, I thought it was very interesting to see that OVFTool has been ported over to ARM64 for Linux, which means we can now run OVFTool on a Raspberry Pi or even an Amazon A1 EC2 instance! This might come in handy in the future, and I wonder if OVFTool for ESXi would be the next logical step? 🙂

I highly recommend you check out the rest of the release notes as they contain many more enhancements and fixes, many of which I have reported on behalf of the community and/or our customers. I think this is certainly one of the tools you can upgrade immediately, as it has great backwards compatibility with older vSphere releases, while also letting you take advantage of all the new features mentioned above right away. If there are other OVFTool improvements or enhancements you would really like to see, feel free to leave a comment along with the use case and I will pass that on to Engineering.


Filed Under: Automation, ESXi, OVFTool, vSphere 7.0 Tagged With: ESXi 7.0, ovftool, vSphere 7

Quick Tip – Import OVF/OVA as VM Template using OVFTool 4.3 Update 1

01/29/2019 by William Lam 5 Comments

OVFTool is an extremely versatile command-line utility for importing and exporting Virtual Machines to and from the OVF/OVA format and it supports a number of VMware platforms including VMware Cloud on AWS (VMC), vSphere (vCenter Server and ESXi), Fusion, Workstation, Player and even vCloud Director (vCD).

An infrequent ask that I have seen from customers is the ability to deploy an OVF/OVA as a VM Template rather than just a Virtual Machine in a vSphere-based environment. OVFTool has had the ability to deploy to a vAppTemplate for vCD-based environments, so it would make sense to also support vCenter Server VM Templates. Today, the workflow is a two-step process: deploy the OVF/OVA and then use the vSphere API to convert the VM to a VM Template.

With the latest OVFTool 4.3 Update 1, which was a minor release last year, we now have a new parameter called --importAsTemplate which allows customers to easily import an OVF/OVA directly as a VM Template. Below is a quick sample using this new option, where I am deploying to a VMC-based environment (see this article for requirements when using OVFTool with VMC):

ovftool.exe `
--acceptAllEulas `
--allowAllExtraConfig `
--name=PhotonOS-Template `
--datastore=WorkloadDatastore `
--net:None=sddc-cgw-network-1 `
--vmFolder=Templates `
--importAsTemplate `
C:\Users\william\Desktop\photon-hw13_uefi-3.0-49fd219.ova `
'vi://*protected email*@vcenter.sddc-a-b-c-d.vmwarevmc.com/SDDC-Datacenter/host/Cluster-1/Resources/Compute-ResourcePool/'

Once the upload has completed, we can take a look at the vSphere UI and see that our imported OVA has automatically been converted to a VM Template!


Filed Under: Automation, OVFTool, VMware Cloud on AWS, vSphere Tagged With: ova, ovf, ovftool, VM Template

OVFTool and VMware Cloud on AWS

06/18/2018 by William Lam 1 Comment

Recently, I had noticed a number of questions come up regarding the use of OVFTool with the VMware Cloud on AWS (VMC) service. I had a chance to take a look at this last Friday and I can confirm that customers can indeed use this tool to import/export VMs into VMC, whether they come from a vSphere/vCloud Director-based environment or are simply OVF/OVAs you have on your desktop. Outlined below are the requirements and steps that you must have set up before you can use OVFTool with VMC. In addition, I have also included an OVFTool command snippet which you can use and adapt in your own environment.

Requirements:

  1. You must set up a VPN connection between your onPrem environment and the Management Gateway on VMC (direct internet access to ESXi is not supported)
  2. Configure the VMC Firewall to allow access between your onPrem and VMC's ESXi host on port 443 (data transfer occurs at ESXi host level)
  3. Specify the Workload VM Folder as a target
  4. Specify the Compute-ResourcePool Resource Pool as a target
  5. Specify the WorkloadDatastore Datastore as a target

Instructions:

Step 1 - Create a Management VPN connection; please see the official documentation here for more details.

Step 2 - Create two new Firewall Rules that allow traffic from your onPrem environment to both the vCenter Server and the ESXi hosts on port 443. vCenter Server will obviously be used for UI/API access, and ESXi is where the data transfer will take place.


Step 3 - Construct your OVFTool command-line arguments and ensure you are using the VM Folder "Workloads", the Resource Pool "Compute-ResourcePool" and the Datastore "WorkloadDatastore" as your target destination, since the CloudAdmin user has restricted privileges within VMC.

Here is an example command to upload an OVA from my desktop to the VMC vCenter Server:

ovftool.exe `
--acceptAllEulas `
--name=William-To-The-Cloud `
--datastore=WorkloadDatastore `
--net:None=sddc-cgw-network-1 `
--vmFolder=Workloads `
C:\Users\primp\desktop\William.ova `
'vi://*protected email*:*protected email*/SDDC-Datacenter/host/Cluster-1/Resources/Compute-ResourcePool/'

Note: OVFTool also supports specifying a VM that resides in your vSphere environment as the source, so you do not have to export it locally to your desktop first; you can transfer it directly (with your client desktop acting as a proxy) to VMC. A sketch of what such a command might look like is included at the end of this post.

Here is the output from running the above command:


Once the upload has completed, you should see your new VM appear in your vSphere Inventory.
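Building on the note above about using an existing vSphere VM as the source, a vCenter-to-VMC transfer might look something like the following sketch; the server names, credentials, inventory paths and the source network name are all placeholders rather than values from this walkthrough:

ovftool.exe `
--acceptAllEulas `
--name=William-To-The-Cloud `
--datastore=WorkloadDatastore `
--net:"VM Network"=sddc-cgw-network-1 `
--vmFolder=Workloads `
'vi://administrator%40vsphere.local@onprem-vcenter.example.com/Datacenter/vm/William' `
'vi://cloudadmin%40vmc.local@vcenter.sddc-a-b-c-d.vmwarevmc.com/SDDC-Datacenter/host/Cluster-1/Resources/Compute-ResourcePool/'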

 


Filed Under: Automation, ESXi, OVFTool, VMware Cloud on AWS, vSphere Tagged With: ovftool, VMC, VMware Cloud on AWS

Quick Tip – OVFTool 4.3 now supports vCPU & memory customization during deployment

05/29/2018 by William Lam 3 Comments

In addition to adding vSphere 6.7 support and a few security enhancements (more details in the release notes), the latest OVFTool 4.3 release has also been enhanced to support customizing vCPU and/or Memory from the default configuration when deploying an OVF/OVA.

Historically, it was only possible to modify these values if you were deploying to a vCloud Director endpoint, using either --numberOfCpus or --memorySize. When deploying to a vSphere endpoint, these settings were not applicable, and users would need to perform an additional operation, calling into the vSphere API with whatever automation tool they prefer, to reconfigure the VM after deployment. It was not the end of the world, but it was also not ideal if you simply wanted to make a minor modification to the default OVF/OVA you were deploying. I definitely ran into this a few times where having this functionality would have been very useful, and I know a number of customers have also shared similar feedback in the past.

I had asked whether it was possible to support this use case and it looks like we already had an internal feature request added to the OVFTool backlog and with some additional customer feedback, we were also able to get this enhancement added to the latest release.

The existing --numberOfCpus and --memorySize options accept a VM identifier (usually the name) followed by the value, for example:

--numberOfCpus:Foo=4

The VM identifier is meant to help with vApp deployments, where you may have an OVF/OVA composed of multiple VMs, each of which you would like to customize with different values. To ensure backwards compatibility is not broken, this pattern has also been extended to deployments against a vSphere endpoint. Having said that, most customers I have talked to who use OVFTool generally deploy an OVF/OVA comprised of a single VM. In this case, rather than specifying the name of the VM again, which is derived from the --name property, you can simply use the wildcard asterisk (*) to apply the setting to all VMs within the OVF/OVA.

Here is an example of deploying a PhotonOS OVA, which is configured with a default of 1 vCPU and 2GB of memory; as part of the deployment using OVFTool, we will increase the vCPU count to 2 and the memory to 4GB:

ovftool \
--acceptAllEulas \
--name=Foo \
--numberOfCpus:'*'=2 \
--memorySize:'*'=4096 \
photon-custom-hw11-2.0-304b817.ova \
'vi://*protected email*@192.168.30.200/VSAN-Datacenter/host/VSAN-Cluster'


Filed Under: Automation, OVFTool Tagged With: memorySize, numberOfCpus, ovftool

