Recently, a number of questions have come up regarding the use of OVFTool with the VMware Cloud on AWS (VMC) service. I had a chance to look into this last Friday and can confirm that customers can indeed use this tool to import/export VMs to and from VMC, whether they come from a vSphere/vCloud Director-based environment or are simply OVF/OVAs on your desktop. Outlined below are the requirements and steps you must have in place before you can use OVFTool with VMC. I have also included an OVFTool command snippet which you can adapt for your own environment.
You must set up a VPN connection between your on-premises environment and the Management Gateway on VMC (direct internet access to ESXi is not supported)
Configure the VMC firewall to allow access from your on-premises environment to VMC's ESXi hosts on port 443 (the data transfer occurs at the ESXi host level)
Specify the "Workloads" VM Folder as the target
Specify the "Compute-ResourcePool" Resource Pool as the target
Step 1 - Create a Management VPN connection. Please see the official documentation here for more details.
Step 2 - Create two new firewall rules that allow traffic from your on-premises environment to both the vCenter Server and the ESXi hosts on port 443. vCenter Server is used for UI/API access, while the actual data transfer takes place at the ESXi host level.
Step 3 - Construct your OVFTool command-line arguments and ensure you are using BOTH the "Workloads" VM Folder and the "Compute-ResourcePool" Resource Pool as your target destination, since the CloudAdmin user has restricted privileges within VMC.
Here is an example command to upload an OVA from my desktop to the VMC vCenter Server:
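A sketch of what such a command can look like follows. All values (SDDC hostname, credentials, VM name, and OVA file) are hypothetical placeholders; note the --vmFolder=Workloads flag and the Compute-ResourcePool path in the vi:// target locator, per Step 3:

```shell
# All values below are hypothetical -- substitute your own SDDC
# hostname, credentials, datastore, and OVA file.
VMC_VC="vcenter.sddc-12-34-56-78.vmwarevmc.com"
VMC_USER="cloudadmin@vmc.local"
OVA="photon-custom-hw11.ova"

# Per Step 3, the target must include both the "Workloads" VM folder
# (--vmFolder) and the "Compute-ResourcePool" resource pool (the
# /Resources/Compute-ResourcePool path in the vi:// locator).
OVFTOOL_CMD="ovftool --acceptAllEulas --name=photon-vm \
  --datastore=WorkloadDatastore --vmFolder=Workloads \
  ${OVA} \
  vi://${VMC_USER}@${VMC_VC}/SDDC-Datacenter/host/Cluster-1/Resources/Compute-ResourcePool/"

echo "${OVFTOOL_CMD}"   # review the command, then run it with: eval "${OVFTOOL_CMD}"
```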
Note: OVFTool also supports specifying a VM residing in your vSphere environment as the source, so you do not have to export it locally to your desktop first; you can transfer it directly to VMC, with your client desktop acting as a proxy.
Here is the output from running the above command:
Once the upload has completed, you should see your new VM appear in your vSphere Inventory.
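For the vSphere-as-source variant mentioned in the note above, the same pattern applies with a vi:// locator on both sides. Everything here (hostnames, credentials, inventory paths) is a hypothetical placeholder:

```shell
# Hypothetical endpoints and inventory paths -- substitute your own.
# The client running ovftool proxies the data from on-prem to VMC,
# so no local export of the VM is needed.
SRC="vi://administrator@vsphere.local@onprem-vc.example.com/Datacenter/vm/MyVM"
DST="vi://cloudadmin@vmc.local@vcenter.sddc-12-34-56-78.vmwarevmc.com/SDDC-Datacenter/host/Cluster-1/Resources/Compute-ResourcePool/"

OVFTOOL_V2V="ovftool --acceptAllEulas --vmFolder=Workloads ${SRC} ${DST}"
echo "${OVFTOOL_V2V}"   # review the command, then run it with: eval "${OVFTOOL_V2V}"
```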
It is hard to believe this Fall will be my 7th year at VMware! Looking back, it has absolutely been an amazing ride.
For the past six years, I have been very fortunate to be part of an amazing team of solutions architects working within R&D as part of the Integrated Systems Business Unit (ISBU) at VMware. In the early days, we were known as the Integration Engineering team, best known for designing, operating and running the original VMware Hands-on Lab at VMworld, which used to also include on-premises hardware! This team also served as an internal customer for a number of VMware products. In addition, the team ran the on-site customer Alphas and Betas for vSphere. I still remember building the very first vPod for what eventually became vSphere 6.0 🙂
Over the years, the team built up a wealth of knowledge in how to build, run and operate the VMware SDDC at scale. A large part of the team had come from either the field or from customers, with past alumni including Duncan Epping, Cormac Hogan & Paudie O'Riordan, to name a few. We wanted to bring these learnings and best practices to our customers, and the VMware Validated Design (VVD) was born. What customers most appreciate about the VVDs is not just the Day 0 guidance, but also the prescriptive Day 2 operational guidance (patching/upgrading, maintenance window scheduling, monitoring, disaster recovery, etc.), which is not something VMware had historically provided. Customers can consume the VVD in several ways: build it yourself (DIY), a PSO engagement including Automation, or VMware Cloud Foundation (VCF), which codifies the VVD into an integrated hardware/software offering. I am very proud of what the team has built over the years. It was not an easy road, and not compromising on our design principles has paid dividends, as we continue to see VVD adoption accelerating in our customers' environments as the fastest way to deliver a VMware SDDC.
For the last couple of years, I have also been driving an internal project within ISBU called the Enterprise Readiness Initiative (ERi). This effort is focused on ensuring that we have a consistent set of capabilities across Lifecycle, Certificate & Configuration Management for the VMware SDDC. These capabilities must also be exposed programmatically for our customers and partners to consume. One example is the recent Install/Upgrade vCenter REST APIs that were made available as part of the vSphere 6.7 release. There is still plenty more work to be done, including other ERi workstreams, but the team has made great progress and hopefully you will see more of the results in the near future.
As you can see, there is no shortage of opportunities at VMware, and being able to work with so many talented and passionate colleagues to help solve our customers' challenges is what I wake up every day for. I want to take a moment to thank one of the best managers I have had the pleasure of reporting to, Phil Weiss. Not only has he been very supportive of my career development, he has also been a mentor to me over the years, and I have learned a tremendous amount from him and about myself. Phil is also occasionally involved when I get called into the lawyers' office 😉 I also want to extend my thanks to both John Gilmartin (ISBU GM) and Jayanta Dey (ISBU VP of Engineering), who were both extremely supportive of my decision to move on.
A question came up last week asking for a programmatic method to identify whether NSX-V or NSX-T is deployed in an environment. With NSX-V, vCenter Server is a requirement, but for NSX-T, vCenter Server is not required, especially for multi-hypervisor support. In this post, I will assume that for NSX-T deployments, you have configured a vCenter Server as a Compute Manager.
Both NSX-V and NSX-T use the ExtensionManager API to register themselves with vCenter Server, and we can leverage this interface to easily tell whether either solution is installed. NSX-V identifies itself with the com.vmware.vShieldManager extension key, and NSX-T with the com.vmware.nsx.management.nsxt key.
Here is a quick PowerCLI snippet that demonstrates the use of the vSphere API to check whether NSX-V or NSX-T is installed and provides the version shown in the registration:
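The core of that check is a simple match on the extension key. Here is a minimal sketch of just that matching logic (the function name is my own; retrieving the actual keys requires a live vCenter session, e.g. via (Get-View ExtensionManager).ExtensionList in PowerCLI, which is out of scope here):

```shell
# Sketch only: map a vCenter extension key to the NSX product it
# identifies. NSX-V registers as com.vmware.vShieldManager and
# NSX-T as com.vmware.nsx.management.nsxt; anything else is neither.
nsx_type_from_key() {
  case "$1" in
    com.vmware.vShieldManager)      echo "NSX-V" ;;
    com.vmware.nsx.management.nsxt) echo "NSX-T" ;;
    *)                              echo "none"  ;;
  esac
}

nsx_type_from_key "com.vmware.vShieldManager"   # prints NSX-V
```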
While working on my Getting started with VMware Pivotal Container Service (PKS) blog series a while back, I was also building some automation to help stand up the required infrastructure: NSX-T (Manager, Controller & Edge), Nested ESXi hosts configured with VSAN for the Compute vSphere Cluster, and Pivotal Ops Manager. This was not only useful for my own learning; it also meant I could easily rebuild my lab if I messed something up, allowing me to focus on the PKS solution rather than standing up the infrastructure itself.
To be honest, I had about 95% of the script done, but I was not able to figure out one of the NSX-T APIs, got busy, and left the script on the back burner. This past weekend, while cleaning out some of my PKS research documents, I came across the script and, funny enough, in about 30 minutes I was able to solve the problem I had been stuck on for weeks. I just finished putting the final touches on the script along with adding some documentation. Similar to my other vGhetto Lab Automation scripts, I have created a Github repo: vGhetto Automated PKS Lab Deployment.
UPDATE (06/19/18) - I have just updated the script to also include the deployment and configuration of the PKS components (Ops Manager, BOSH Director, Harbor & Stemcell). By default, the script will now configure everything end-to-end, and you will have a fully functional PKS environment that you can start playing around with. For complete details, please see the Github repo, which has the updated requirements and documentation. Below is a screenshot of the PKS deployment and configuration, which requires the use of the Ops Manager CLI (OM).
The script will deploy the following components, which will be placed inside a vApp as shown in the screenshot below:
NSX-T Controller x 3 (though you technically only need one for lab/poc purposes)
Nested ESXi VMs x 3 (VSAN will be configured)
The script follows my PKS blog series and automates Part 3 (NSX-T) and the start of Part 4 (Ops Manager deployment); please refer to those individual blog posts for more information. The goal of the script is to let folks jump right into the PKS configuration workflows without having to worry about setting up the underlying infrastructure that PKS requires. Once the script has finished, you can jump right into Ops Manager and start your PKS journey.
Here is a sample execution of the script which took ~29 minutes to complete.
The full requirements for using the script can be found on the Github repo, and below are the software versions that I used to deploy and configure PKS:
Today, the vCenter REST (vSphere Automation) APIs do not support the use of VM Storage Policies when relocating (vMotion, Cross-Datacenter vMotion & Storage vMotion) or cloning an existing Virtual Machine. Customers have provided feedback that they would like to see this added to the current REST APIs, and while this is being looked at, Engineering had a couple of open questions.
The following 2-question survey is to help us understand what the "default" behavior should be when a Virtual Machine is relocated or cloned within a vCenter Server and a VM Storage Policy is NOT specified in the API call. Our existing relocate and clone APIs are very flexible, and not everything needs to be specified as part of the relocate or clone specification. Due to this flexibility, however, you may observe different behaviors, and we would like to understand what the default behavior should be when some of these parameters are not specified. If you want to be explicit, you can always specify the VM Storage Policy; the survey is about what should happen when it is not.