Using the vSphere API to remotely collect ESXi configuration file (esx.conf)

Last week we took a look at two new automated solutions here and here that allow us to leverage vCenter Server and the vSphere APIs to remotely extract information that historically required logging in directly to an ESXi host. While working on the two scripts, I was reminded of another use case that could also be really useful, which builds on top of some information I had shared back in 2012. ESXi provides a very basic file manipulation capability that is exposed as a simple HTTPS-based interface.

Here is a quick recap of the three URLs which can be accessed by opening a browser and logging into the ESXi host:


For the purpose of this article, we will be focusing on the first URL endpoint, /host, and below is an example screenshot of some of the configuration files (46 in total) that you can access using this interface.

One of the ESXi configuration files you can access directly is esx.conf, and it might be useful to periodically capture the state of this file for auditing or troubleshooting purposes.

Note: Although esx.conf does contain some of the ESXi configuration, it does not represent the full state of the ESXi host. If you wish to perform periodic full backups of your ESXi host (which include esx.conf by default, among other files), there is a vSphere API for this: the HostFirmwareSystem managed object and its BackupFirmwareConfiguration() method.
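To make that note concrete, here is a minimal pyvmomi-style sketch of the backup API. The property path (configManager.firmwareSystem) follows the vSphere API, but the download handling is a simplified assumption, and in practice you would obtain the HostSystem object from an established vCenter connection:

```python
import urllib.request

def backup_host_config(host_system):
    """Request a full configuration backup for an ESXi host.

    `host_system` is expected to be a pyvmomi HostSystem object.
    BackupFirmwareConfiguration() returns a URL from which the backup
    bundle (a .tgz that includes esx.conf) can be downloaded; note the
    returned URL contains a '*' placeholder that must be replaced with
    the host's name or IP before downloading.
    """
    firmware = host_system.configManager.firmwareSystem
    return firmware.BackupFirmwareConfiguration()

def download_backup(url, local_path):
    """Download the generated backup bundle to a local file
    (certificate handling omitted in this sketch)."""
    with urllib.request.urlopen(url) as resp, open(local_path, "wb") as f:
        f.write(resp.read())
```

You would call backup_host_config() once per host, fix up the returned URL, and then pass it to download_backup().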

Applying the same technique as I have described here, we can easily retrieve esx.conf for a specific ESXi host being managed by vCenter Server without needing to log in directly to the ESXi host or, worse, connecting via SSH. I have created a PowerCLI script called Get-Esxconf.ps1 which simply accepts a VMHost object.

Here is an example of how you would use the function, with a screenshot of the output below:

$esxConf = Get-VMHost -Name "esxi-1" | Get-Esxconf

If you are interested in a specific key within the esx.conf configuration file, we can further process the output. The following snippet searches for the key /system/uuid and returns its value as it iterates through the esx.conf output.
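As a rough sketch of that processing step (shown in Python rather than PowerCLI; the sample esx.conf content below is illustrative, not from a real host):

```python
def parse_esxconf(text):
    """Parse esx.conf style 'key = "value"' lines into a dict."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        # Split on the first '=' only; values are quoted strings
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

# Illustrative sample; real content would come from Get-Esxconf
sample = '''
/system/uuid = "5caa8e0d-4f32-b1e2-d7d8a5e8c123"
/adv/Misc/HostName = "esxi-1"
'''

conf = parse_esxconf(sample)
print(conf["/system/uuid"])
```

The same dictionary lookup works for any other key in the file, such as /adv/Misc/HostName.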

Hopefully this gave you an idea of just one of the many use cases that can now be enabled through the vSphere API and this ESXi interface. Here are a few other use cases, off the top of my head, that could come in handy:

  • Managing ESXi SSH public/private keys: we have mostly been using an HTTP GET, but you can also use an HTTP PUT to upload these files without needing to go to each and every ESXi host
  • Replacing custom SSL certificates if you are not using VMCA: you can also use an HTTP PUT request to upload these files (you will need to restart hostd or reboot the host for the new SSL certificates to take effect)
  • Quickly accessing the vpxa.cfg (vCenter Server agent) configuration file for troubleshooting purposes
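For the HTTP PUT use cases above, the request shape looks roughly like this in Python; the host name and the ssh_root_authorized_keys file name are illustrative assumptions (verify the exact file name in your browser first), and authentication plus certificate handling are omitted:

```python
import urllib.request

def build_put_request(esxi_host, remote_file, data):
    """Build an HTTP PUT request against the ESXi /host file interface.

    The file name passed in (e.g. ssh_root_authorized_keys) must match
    one of the entries the interface exposes on your host.
    """
    url = "https://{}/host/{}".format(esxi_host, remote_file)
    return urllib.request.Request(url, data=data, method="PUT")

req = build_put_request("esxi-1.example.com", "ssh_root_authorized_keys",
                        b"ssh-rsa AAAA... root@mgmt")
# urllib.request.urlopen(req) would perform the upload; credentials and
# an SSL context would be required in a real environment
```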

Test driving ContainerX on VMware vSphere

Over the weekend I was catching up on some of my internet reading, one item of which was Timo Sugliani's excellent weekly Tech Links (highly recommend a follow). In one of his non-VMware related links (which, funny enough, is related to VMware), I noticed that the container startup ContainerX has just made a free version of their software available for non-production use. Given that part of the company's DNA includes VMware, I was curious to learn more about their solution and how it works, especially as it relates to VMware vSphere, which is one of the platforms it supports.

For those not familiar with ContainerX, it is described as the following:

ContainerX offers a single pane of glass for all your containers. Whether you are running on Bare Metal or VM, Linux or Windows, Private or Public cloud, you can view your entire infrastructure in one simple management console.

In this article, I will walk you through how to deploy, configure and start using ContainerX in a vSphere environment. Although there is an installation guide included with the installer, I personally found the document a little difficult to follow, especially for someone who is only interested in a pure vSphere environment. The mention of bare metal at the beginning was confusing, as I was not sure what the actual requirements were, and I think it would have been nice to have a section that covered each platform from start to end.

In any case, here are the high-level steps required to set up ContainerX for your vSphere environment:

  1. Deploy an Ubuntu (14.01/14.04) VM and install the CX Management Host software
  2. Deploy the CX Ubuntu OVA Template into the vSphere environment that will be used by the CX Management Host
  3. Configure a vSphere Elastic Cluster using the CX Management Host UI
  4. Deploy your Container/Application to your vSphere Elastic Cluster


Prerequisites:

  • Sign up for the free ContainerX offering here (the email will contain a download link to the CX Management Host Installer)
  • Access to a vSphere environment with vCenter Server
  • An already-deployed Ubuntu 14.01 or 14.04 VM (4 vCPU, 8GB vMEM & 40GB vDISK) that will be used for the CX Management Host

CX Management Host Deployment:

Step 1 - Download the CX Management Host installer for your desktop OS platform of choice. If you are using the Mac OS X installer, you will find that the installer fails to launch, as it is not signed by an identified developer. You will need to change your security settings to allow applications downloaded from "anywhere" to be opened, which is a shame.

Step 2 - Accept the EULA and then select the "On Preconfigured Host" option, which expects you to have a pre-installed Ubuntu VM on which to install the CX Management Host software. If you have not pre-deployed the Ubuntu VM, stop here, go perform that step, and then come back.

Step 3 - Next, provide the IP Address/hostname and credentials to the Ubuntu VM that you have already pre-installed. You can use the "Test" option to verify that either the SSH password or private key that you have provided is functional before proceeding further in the installer.

Step 4 - After you click "Continue", the installer will remotely connect to your Ubuntu VM and start the installation of the CX Management Host software. This takes a few minutes with progress being displayed at the bottom of the screen. If the install is successful, you should see the "Install FINISHED" message.


Step 5 - Once the installer completes, it will also automatically open a browser and take you to the login screen of the CX Management Host UI (https://IP:8085). The default credentials are admin/admin.

At this point, you have successfully deployed the CX Management Host. The next section will walk you through setting up the CX Ubuntu Template, which will be used by the CX Management Host to deploy your Containers and Applications.

Preparing the CX Ubuntu Template Deployment:

Before we can create a vSphere Elastic Cluster (EC), you will need to deploy the CX Ubuntu OVA Template, which will then be used by the CX Management Host to deploy CX Docker Hosts to run your Containers/Applications. When I originally went through the documentation, there was a reference to the CX Ubuntu OVA, but I was not able to find a download URL anywhere, including on ContainerX's website. I reached out to the ContainerX folks and they updated KB article 20960087 to provide a download link; I appreciate the assistance over the weekend. However, it looks like their installation documentation is still missing the URL reference. In any case, you can find the download URL below for your convenience.

Step 1 - Download the CX Ubuntu OVA Template and deploy it (but do NOT power it on) using the vSphere Web/C# Client to the vCenter Server environment that ContainerX will be consuming.

Note: I left the default VM name, which is cx-ubuntu, as I am not sure if changing it would mess up the initial vSphere environment discovery later in the process. It would be good to know if you could change the name.

Step 2 - Take a VM snapshot of the powered off CX Ubuntu VM before powering it on.


Creating a vSphere Elastic Cluster (EC) in ContainerX:

Step 1 - Click on the "Quick Wizard" button at the top and select the "vSphere Cluster" start button. Nice touch on the old school VMware logo 🙂

Step 2 - Enter your vCenter Server credentials and then click on the "Login to VC" button to continue.

Step 3 - Here you will specify the number of CX Docker Hosts and the compute, storage, and networking resources that they will consume. The CX Docker Hosts will be provisioned as VMware Linked Clones based off of the CX Ubuntu VM Template that we uploaded earlier. If you skipped that step, you will find that there is no entry in the drop-down box, and you will need to perform the step before you can proceed further.


Note: It would have been nice if, when the CX Ubuntu VM was not detected, the wizard automatically prompted you to deploy it without having to go back. I did not even realize this particular template was required, since I was not able to find the original download link in any of the instructions.

Step 4 - An optional step, but you also have the option to create what are known as Container Pools, which allow you to set both CPU and Memory limits (over-commitment is supported) within your EC. It is not exactly clear how Container Pools work, but it sounds like these limits are applied within the CX Docker Host VMs.

Step 5 - Once you have confirmed the settings to be used for your vSphere EC, you can click Next to begin the creation. This process should not take too long, and once everything has been successfully deployed, you should see a success message and a "Done" button which you can click to close the wizard.

Step 6 - If we go back to the CX Management UI home page, we should now see our new vSphere EC, which in my example is called "vSphere-VSAN-Cluster". There is some basic summary information about the EC, including the number of Container Pools, Hosts and their utilization. You may have also noticed that there are 12 Containers displayed in the UI, which I found a bit strange given that I had not deployed anything yet. I later realized that these are actually CX Docker Containers running within the CX Docker Hosts, which I am assuming provide communication back to the CX Management Host. I think it would be nice to separate these numbers to reflect "Management" and actual "Application" Containers; the same goes for the resource utilization information.


Deploying a Container on ContainerX:

Under the "Applications" tab of your vSphere EC, you can deploy either a standalone Docker Container or some of the pre-defined Applications that have been bundled as part of the CX Management Host.

We will start off by deploying a very simple Docker Container. In this example, I will select my first Container Pool, ContainerPool-1, and then select the "A Container" button. Since we do not have a repository from which to select a Container to deploy, click on the "Launch a Container" button towards the top.

Note: I think I may have found a UI bug in which the Container Pool that you select in the drop down is not properly displayed when you go to deploy the Container or Application. For example, if you pick Container Pool 1, it will say that you are about to deploy to Container Pool 2. I found that you had to re-select the same drop down a second time for it to display properly, and it is not clear whether this is merely a cosmetic bug or whether it actually uses the Container Pool that I did not specify.

Step 1 - Specify the Docker Image you wish to launch; if you do not have one on hand, you can use the PhotonOS Docker Container (vmware/photon). Then specify a Container name. You can also add additional options using the advanced settings button, such as environment variables, network ports, Docker Volumes, etc. For this example, we will keep it simple; go ahead and click on the "Launch App" button to deploy the Container.

Step 2 - You should see that our PhotonOS Docker Container started and then shortly after exited. Not a very interesting demo, but you get the idea.

Note: It would be really nice to be able to get the output from the Docker Container, even running a command like "uname -a" did not return any visible output that I could see from the UI.

Deploying an Application on ContainerX:

The other option is to deploy a sample application that is pre-bundled within the CX Management Host (I assume you can add your own applications, as they look to be just Docker Compose files). Select the Container Pool that you wish to deploy the application to from the drop down, and then click on the "An Application" button. In our example, we will deploy the WordPress application.

Step 1 - Select the application you wish to deploy by clicking on the "Power" icon.

Step 2 - Give the application a name and then click on the "Launch App" button to deploy the application.

Step 3 - The deployment of the application can take several minutes, but once completed, you should see a summary view like the one shown below. You can also find the details of how to reach the WordPress application that we just deployed by looking for the IP Address and the external port, as highlighted below.

Step 4 - To verify that our WordPress application is working, go ahead and open a new browser and specify the IP Address and the port shown in the previous step and you should be taken to the initial WordPress setup screen.

If you need to access the CX Docker Hosts, whether for publishing Containers/Applications by your end users or for troubleshooting purposes, you can easily access the environment information under the "Pools" tab. There is a "Download access credentials" option, which provides a zip file containing platform-specific snippets of the CX Docker Host connection information.

Since I use a Mac, I just need to run the script and then run my normal "docker" commands (this assumes you have the Docker Beta Client for Mac OS X; otherwise you will need a Docker Client). You can see from the screenshot below the three Docker Containers we deployed earlier.



Having only spent a short amount of time playing with ContainerX, I thought it was a neat solution. The installation of the CX Management Host was pretty quick and straightforward, and I was glad to see a multi-desktop OS installer. It did take me a bit of time to work out the actual requirements for a pure vSphere environment, as mentioned earlier; perhaps an end-to-end document for vSphere would have cleared all this up. The UI was pretty easy to use and intuitive for the most part. I did find not being able to edit any of the configurations a bit annoying, and I ended up deleting and re-creating some of them. I would have liked an easier way to map between the Container Pools (Pools tab) and their respective CX Docker Hosts without having to download the credentials or navigate to another tab. I also found in certain places that the selection or navigation of objects was not very clear, due to subtle transitions in the UI, which made me think there was a display bug.

I am still trying to wrap my head around the Container Pool concept. I am not sure I understand the benefits of it, or rather how the underlying resource management actually works. It seems that today it is only capable of setting CPU and Memory limits, which are applied within the CX Docker Host VMs. I am not sure if customers are supposed to create different-sized CX Docker Host VMs. I was pretty surprised that I did not see more use of the underlying vSphere Resource Management capabilities in this particular area.

The overall architecture of ContainerX for vSphere looks very similar to VMware's vSphere Integrated Containers (VIC) solution. Instead of a CX Docker Host VM, VIC has the concept of a Virtual Container Host (VCH), which is backed by a vSphere Resource Pool. VIC creates what is known as a Container VM that contains only the Container/Application, running as a VM rather than in a VM. These Container VMs are instantiated using vSphere's Instant Clone capability from a tiny PhotonOS Template. Perhaps I am a bit biased here, but in addition to providing an integrated and familiar interface to each of the respective consumers: vSphere Administrators (the familiar VM construct, leveraging the same set of tools with extended Docker Container info) and Developers (simply accessing the Docker endpoint with the tools they are already using), the other huge benefit of the VIC architecture is that it allows the Container VMs to benefit from all the underlying vSphere platform capabilities. vSphere Administrators can apply granular resource and policy-based management on a per Container/Application basis if needed, which is a pretty powerful capability if you ask me. It will be interesting to see if there will be deeper integration from a management and operational standpoint in the future for ContainerX.

All in all, very cool stuff from the ContainerX folks, looking forward to what comes next. DockerCon is also this week and if you happen to be at the event, be sure to drop by the VMware booth as I hear they will be showing off some pretty cool stuff. I believe the ContainerX folks will also be at DockerCon, so be sure to drop by their booth and say hello.

Using the vSphere API to remotely collect ESXi esxcfg-info

Using the same technique as I have described here, you can now also use the vSphere API to connect to vCenter Server and remotely collect esxcfg-info from ESXi hosts without having to SSH to each and every host. Historically, the esxcfg-* commands were only available in the classic ESX Service Console (COS) and the ESXi Shell. As part of the ESXi transition, VMware converted all of the commands over to the vSphere API, which means that you no longer need to run those local CLI commands to manage or configure your ESXi hosts like you used to with classic ESX.

The one exception that still exists today is the esxcfg-info command, which still contains a lot of useful information, some of which is not currently available in the vSphere API. Similar to the vm-support.cgi script, there is also an esxcfg-info.cgi script, which I blogged about here back in 2011. To output esxcfg-info, simply open a web browser and specify the following URL with the hostname/IP Address of your ESXi host:

Once you have authenticated with a valid user, you will see that the output matches what you would get by manually running the esxcfg-info command in the ESXi Shell.

Instead of the raw output that you are all probably familiar with, you can also format the output as XML simply by appending ?xml to the end of the URL:

With the second, formatted option, we can easily retrieve the result and store it in an XML object for processing in any of our favorite scripting/programming languages. In the previous article, I demonstrated the use of the vSphere API method AcquireGenericServiceTicket() using a pyvmomi (vSphere SDK for Python) script. In this example, I will demonstrate the exact same use of the vSphere API, now leveraging PowerCLI. I have created a script called Get-Esxcfginfo.ps1 which connects to a vCenter Server and requests a session ticket for a specific ESXi host's esxcfg-info.cgi URL; that ticket grants a one-time HTTP request to connect to the ESXi host and retrieve the requested information.

Here is an example of how to use the command, which will return the XML output that then requires further processing:

$xmlResult = Get-VMHost -Name "" | Get-Esxcfginfo

I have also included an example of how to parse the returned XML in the script itself. As you can see from the screenshot below, I am extracting the Device Name, Vendor Name & Vendor ID from the esxcfg-info output.
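For those who prefer Python, here is a hedged sketch of the same parsing idea using ElementTree. The XML fragment below is a simplified, illustrative stand-in for the real esxcfg-info output, whose element names and nesting differ, so adjust the lookups to your actual output:

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative stand-in for the ?xml output of esxcfg-info.cgi
sample_xml = """
<host>
  <pci-info>
    <device>
      <value name="Device Name">vmnic0</value>
      <value name="Vendor Name">Intel Corporation</value>
      <value name="Vendor Id">0x8086</value>
    </device>
  </pci-info>
</host>
"""

def extract_devices(xml_text):
    """Pull the per-device name/vendor fields out of an
    esxcfg-info style XML tree."""
    root = ET.fromstring(xml_text)
    devices = []
    for dev in root.iter("device"):
        # Map each <value name="..."> child to its text content
        fields = {v.get("name"): v.text for v in dev.findall("value")}
        devices.append(fields)
    return devices

for d in extract_devices(sample_xml):
    print(d["Device Name"], d["Vendor Name"], d["Vendor Id"])
```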


Pretty cool, huh? Stay tuned for one more blog post, in which I will show you another way to make use of this vSphere API!

Using the vSphere API to remotely generate ESXi performance support bundles

This is a follow-up to my previous article on how to run a script using a vCenter Alarm action in the vCenter Server Appliance (VCSA). What I demonstrated there was a pyvmomi (vSphere SDK for Python) script triggered automatically to generate a VMware support bundle for all ESXi hosts in a given vSphere Cluster. The one caveat I mentioned in that post was that the solution differed slightly from the original request, which was to create an ESXi performance support bundle. The vSphere API only supports the creation of a generic VMware support bundle, which may not be as useful if you are only interested in collecting more granular performance stats for troubleshooting purposes.

After publishing the article, I thought about the problem a bit more and realized there is still a way to solve the original request. Before going into the solution, I wanted to quickly cover how you can generate an ESXi performance support bundle, which can be done either directly in the ESXi Shell using something like the following:

vm-support -p -d 60 -i 5 -w /vmfs/volumes/datastore1

or you can use a neat little trick, which I blogged about here back in 2011, where you simply open a web browser and run the following:

Obviously, the first option is not ideal, as you would need to SSH (generally disabled per good security practice) to each and every ESXi host, manually run the command, and then copy the support bundle off of each system. The second option still requires going to each and every ESXi host, though it does not require ESXi Shell or SSH access. This is still not ideal from an automation standpoint, especially if these ESXi hosts are already being managed by vCenter Server.

However, the second option is what gave me the lightbulb idea! I recalled that a couple of years back I blogged about a way to efficiently transfer files to a vSphere Datastore using the vSphere API. The solution leveraged a neat little vSphere API method called AcquireGenericServiceTicket(), which is part of the vCenter Server SessionManager. Using this method, we can request a ticket for a specific file and make a one-time HTTP request directly to an ESXi host. This means I can connect to vCenter Server using the vSphere API, retrieve all ESXi hosts from a given vSphere Cluster, request a one-time ticket per host to remotely generate an ESXi performance support bundle, and then download it locally to the VCSA (or any other place where you can run the pyvmomi sample).
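To sketch the URL side of this flow in Python (the vm-support.cgi parameter names here mirror the browser trick above, but treat them as assumptions to verify against your ESXi build; the one-time ticket itself comes from AcquireGenericServiceTicket() and is not shown):

```python
from urllib.parse import urlencode

def perf_bundle_url(esxi_host, duration=60, interval=5):
    """Build the URL for generating an ESXi performance support bundle
    via vm-support.cgi (parameter names are assumptions to verify)."""
    params = urlencode({
        "performance": "true",
        "duration": duration,   # total capture time in seconds
        "interval": interval,   # sampling interval in seconds
    })
    return "https://{}/cgi-bin/vm-support.cgi?{}".format(esxi_host, params)

print(perf_bundle_url("esxi-1.example.com"))
```

In the full pyvmomi script, this URL is what you would request the one-time session ticket for before downloading the generated bundle.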

Download the pyvmomi script:

Here is the sample log output when triggering this script from vCenter Alarm in the VCSA:

2016-06-08 19:29:42;INFO;Cluster passed from VC Alarm: Non-VSAN-Cluster
2016-06-08 19:29:42;INFO;Creating directory /storage/log/esxi-support-logs to store support bundle
2016-06-08 19:29:42;INFO;Requesting Session Ticket for
2016-06-08 19:29:42;INFO;Waiting for Performance support bundle to be generated on to /storage/log/esxi-support-logs/vmsupport-
2016-06-08 19:33:19;INFO;Requesting Session Ticket for
2016-06-08 19:33:19;INFO;Waiting for Performance support bundle to be generated on to /storage/log/esxi-support-logs/vmsupport-

I have also created a couple more scripts exercising some additional use cases that I think customers may also find useful. Stay tuned for those additional articles later this week.

UPDATE (06/16/16) - There was another question internally asking whether other types of ESXi support bundles could also be generated using this method, and the answer is yes. You simply need to specify the types of manifests you would like to collect, such as HungVM, for example.

To list the available Manifests and their respective IDs, you can manually perform this operation once by opening a browser and specifying the following URL:

To list the available Groups and their respective IDs, you can specify the following URL:

Here is an example URL constructed using some of these params:

The following VMware KB 2005715 may also be useful, as it provides some additional examples of using these parameters.
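As a hedged illustration of assembling such URLs (the parameter names listmanifests and manifests are my reading of the KB and should be verified against your ESXi version):

```python
def vm_support_url(esxi_host, **params):
    """Assemble a vm-support.cgi URL from query parameters; the
    parameter names are assumptions to check against KB 2005715."""
    query = "&".join("{}={}".format(k, v) for k, v in params.items())
    return "https://{}/cgi-bin/vm-support.cgi{}".format(
        esxi_host, "?" + query if query else "")

# List the available manifests and their IDs
print(vm_support_url("esxi-1.example.com", listmanifests="true"))
# Collect a specific manifest type, e.g. HungVM
print(vm_support_url("esxi-1.example.com", manifests="HungVM"))
```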

Quick Tip - How to create a Windows 2016 ISO supporting EFI boot w/o prompting to press a key

We had a question this week from a customer who was looking to automate the installation of the latest Windows 2016 Tech Preview running on vSphere. They were booting the ISO using the EFI firmware and found that they would always be prompted with the "Press any key to boot from CD or DVD ..." message, as shown in the screenshot below.

This obviously was not ideal from an automation standpoint, and they were looking to see if there was a solution. Thanks to one of our EFI experts at VMware, Darius Davis, who noted that this has been a known Microsoft issue for some time now; there is even a TechNet article here describing the issue going back to Windows Server 2008. The only workaround is to re-master the Windows ISO to use the efisys_noprompt.bin file instead of the default efisys.bin.

Given this was not something I had done before, I was curious to see how difficult it really was. I did some digging around the Internet and, to my surprise, it was actually pretty hard to find a straightforward article that walks you through re-mastering a recent release of Windows, let alone enabling this EFI no-prompt option. I was about to give up when I stumbled onto an article by Johan Arwidmark outlining the steps to create a Windows 10 ISO from the Windows 10 ESD installer. Given the title of his article, I am guessing that others have also had a hard time finding the correct instructions and tools required to re-master an existing Windows ISO. Johan has a nice PowerShell script that converts the Windows 10 ESD installer to an ISO and then uses that image to re-master the final ISO.

Given that both Windows 2016 and Windows 10 are available as ISO downloads, we do not need the entire script; only the last couple of lines were what I was really interested in. Below is a modification of Johan's script, and you can see that instead of referencing the efisys.bin file, we reference efisys_noprompt.bin, which removes the prompt when booting the installer using the EFI firmware.

To use the script, you will need to download and install the Windows Automated Installation Kit (AIK), which provides the oscdimg.exe utility used to re-master a new ISO. You will also need a copy of Windows 2016, or any other Windows ISO that you plan to re-master to disable the EFI prompting. Lastly, there are three variables in the script that you will need to update:

  • ISOMediaFolder = the full path to the extracted or mounted Windows ISO
  • ISOFile = the path to the re-mastered ISO that will be created
  • PathToOscdimg = the path to the location of oscdimg.exe (you can use the default if you install the AIK under C:\)
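For illustration, here is the heart of the re-mastering step expressed as a Python sketch that builds the oscdimg.exe command line; the paths are placeholders, and the flag set mirrors the common oscdimg invocation for dual BIOS/EFI boot:

```python
import os
import subprocess

def build_oscdimg_command(iso_media_folder, iso_file, path_to_oscdimg):
    """Construct the oscdimg.exe command line for a BIOS+EFI bootable ISO.

    The -bootdata:2#... syntax defines two boot entries: etfsboot.com
    for BIOS and efisys_noprompt.bin for EFI (the no-prompt variant is
    the whole point of this exercise). All paths are placeholders.
    """
    boot_data = "2#p0,e,b{0}#pEF,e,b{1}".format(
        os.path.join(iso_media_folder, "boot", "etfsboot.com"),
        os.path.join(iso_media_folder, "efi", "microsoft", "boot",
                     "efisys_noprompt.bin"))
    return [
        os.path.join(path_to_oscdimg, "oscdimg.exe"),
        "-m",            # ignore the maximum image size limit
        "-o",            # optimize by encoding duplicate files once
        "-u2",           # produce a UDF file system
        "-udfver102",    # UDF revision 1.02
        "-bootdata:" + boot_data,
        iso_media_folder,
        iso_file,
    ]

cmd = build_oscdimg_command(r"C:\Win2016Media", r"C:\Win2016_noprompt.iso",
                            r"C:\oscdimg")
if __name__ == "__main__" and os.name == "nt":
    # Only meaningful on a Windows box with the AIK installed
    subprocess.run(cmd, check=True)
```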

The re-mastering process should not take too long. Once the re-mastered ISO has been created, you can boot up your VM and you should see that you are no longer prompted with the message, but instead boot straight into the installer, as seen in the screenshot below. Hopefully this will help others who might have a need for this in the future, as I know it was not as easy to find as I had initially hoped.