Automating post-configurations for both PSC & VCSA 6.0u1 using appliancesh

In vSphere 6.0, we introduced a new command-line option that allows you to automate both the deployment and upgrade of a vCenter Server Appliance (VCSA) and Platform Services Controller (PSC) using a simple JSON configuration file. This had been a very popular request from customers, and one that I had been asking about for some time, so I was glad to see it finally made available with the VCSA. One thing that was still missing from an automation standpoint was the ability to perform some basic post-configurations after the initial deployment. Common operations such as adding additional user accounts, configuring SNMP for monitoring, or adding a proxy server were available, but they had to be done interactively and manually.

In vSphere 6.0 Update 1, an enhancement was made to the appliancesh interface that now allows customers to automate the post-configuration of either a VCSA or PSC by simply redirecting a file containing a series of appliancesh commands over SSH. Although SSH may not be ideal for all customers, and a programmatic interface via an API is ultimately where we want to get to, this at least allows customers to automate the end-to-end deployment of both the VCSA and PSC, as well as cover any additional post-configurations that might be required to stand up a vSphere environment.

To make use of this feature, you simply create a file that contains the list of appliancesh commands that you wish to run on the VCSA and/or PSC. Here is an example configuration called psc.config (you can name it anything you want).
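As a rough illustration only (the account details, SNMP community, and proxy server below are placeholders, and the exact appliancesh sub-commands and parameters can vary between VCSA builds, so check the built-in help to confirm), such a file might look like this:

localaccounts.user.add --username jdoe --fullname "Jane Doe" --role operator --email jdoe@yourdomain.com --password "VMware1!"
snmp.set --communities public
snmp.enable
proxy.set --protocol http --server proxy.yourdomain.com

Each line is simply an appliancesh command, executed in order once the session is authenticated.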

Once you have saved the configuration file, you simply SSH to either your VCSA or PSC and redirect the configuration file into the session by running the following command:

ssh root@[VCSA-or-PSC-hostname] < psc.config

Once authenticated, the series of appliancesh commands will be executed and then you will be automatically logged off as seen in the screenshot below.
If you have any feedback in this particular area, please leave a comment, as I know both PM and Engineering are interested in hearing your thoughts on what you might want to see in the future in terms of post-configuration of the VCSA and PSC.

Migrating ESXi to a Distributed Virtual Switch with a single NIC running vCenter Server

Earlier this week I needed to test something that required a VMware Distributed Virtual Switch (VDS), and it had to be a physical setup, so Nested ESXi was out of the question. I could have used my remote lab, but given that what I was testing was a bit "experimental", I preferred to use my home lab in case I needed direct console access. At home, I run ESXi on a single Apple Mac Mini, and one of the challenges with this and similar platforms (e.g. the Intel NUC) is that they only have a single network interface. As you might have guessed, this is a problem when looking to migrate from a Virtual Standard Switch (VSS) to a VDS, as it requires at least two NICs.

Unfortunately, I had no other choice and needed to find a solution. After a couple of minutes of searching around the web, I stumbled across this serverfault thread here which provided a partial solution to my problem. In vSphere 5.1, we introduced a new feature that automatically rolls back a network configuration change if it negatively impacts network connectivity to your vCenter Server. This feature can be temporarily disabled by editing a vCenter Server Advanced Setting, which allows us to bypass the single-NIC restriction; however, this does not solve the problem entirely. What ends up happening is that the single pNIC is now associated with the VDS, but the VM portgroups are not migrated. This is problematic because the vCenter Server is also running on the very ESXi host it is managing and has now lost network connectivity :)

I lost access to my vCenter Server, and even though I could connect directly to the ESXi host, I was not able to change the VM Network to the Distributed Virtual Portgroup (DVPG). This is actually expected behavior, and there is an easy workaround; let me explain. When you create a DVPG, there are three different bindings that can be configured: Static, Dynamic, and Ephemeral, with Static being the default. Both Static and Dynamic DVPGs can only be managed through vCenter Server, which means you cannot change a VM's network to a non-Ephemeral DVPG when connected directly to the host; in fact, such a portgroup is not even listed when connecting with the vSphere C# Client. The simple workaround is to create a DVPG using the Ephemeral binding, which then allows you to change the VM network of your vCenter Server; this is the last piece to solving the puzzle.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Here are the exact steps to take if you wish to migrate an ESXi host with a single NIC from a VSS to a VDS while that host is also running vCenter Server:

Step 1 - Change the following vCenter Server Advanced Setting to false:
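The setting in question controls the network rollback feature mentioned above; assuming the standard advanced setting name used by vCenter Server 5.1 and later, it is:

config.vpxd.network.rollback = false

This can be changed from the vSphere Web Client under the vCenter Server's Advanced Settings (add the key if it is not already listed).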

Note: Remember to re-enable this feature once you have completed the migration

Step 2 - Create a new VDS and the associated Portgroups for both your VMkernel interfaces and VM Networks. For the DVPG which will be used for the vCenter Server's VM network, be sure to change the binding to Ephemeral before proceeding with the VDS migration.

Step 3 - Proceed with the normal VDS Migration wizard using the vSphere Web/C# Client and ensure that you perform the correct mappings. Once completed, you should now be able to connect directly to the ESXi host using either the vSphere C# Client or the ESXi Embedded Host Client to confirm that the VDS migration was successful, as seen in the screenshot below.

Note: If you forgot to perform Step 2 (which I initially did), you will need to log in to the DCUI of your ESXi host and restore the networking configuration.

Step 4 - The last and final step is to change the VM network for your vCenter Server. In my case, I am using the VCSA, and due to a bug I found in the Embedded Host Client, you will need to use the vSphere C# Client to perform this change if you are running VCSA 6.x. If you are running a Windows-based vCenter Server or VCSA 5.x, then you can use the Embedded Host Client to change the VM network to the new DVPG.

Once you have completed the VM reconfiguration, you should be able to log in to your vCenter Server, which is now connected to a DVPG on a VDS backed by a single NIC on your ESXi host 😀

There is probably no good use case for this outside of home labs, but I was happy that I found a solution and hopefully this might come in handy for others who might be in a similar situation and would like to use and learn more about VMware VDS.

ESXi 6.0 on Apple Xserve 3,1

A couple of months ago, I shared a guest blog post from one of my readers, John Clendenen, who was able to get ESXi 6.0 running on an Apple Xserve 2,1. At the end of that article, it was hinted that John was also looking into getting ESXi 6.0 running on an Apple Xserve 3,1, and after several months of investigation, you can find the details below.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

*** This is a guest blog post from John Clendenen ***

First an update on my Xserve 2,1’s. I had them running for over 100 days without any issue! However, now that I have the 3,1 working reliably, it is time that I part ways with my Xserve 2,1’s. I currently have them up on eBay. Here is the link:

Anyway, onto the Xserve 3,1.


I came across an Xserve 3,1 on eBay about a year ago. It was badly photographed, and the seller didn’t really know what he/she had. It wasn’t getting much attention, so I thought I might get it cheap. I ended up paying $500 for it which I felt ok about, but not great.

When it arrived, it had no processors, heatsinks or airflow duct. I immediately messaged the seller, and was able to get $350 refunded to me. I found the missing parts for under $100 over the next few weeks, and developed an intimate understanding of the Xserve 3,1 hardware.

At this point, I had no familiarity with vSphere at all. I was running OS X Server and virtualizing a few services in Fusion. It was only through researching the Xserve 3,1 to find the missing hardware that I discovered VMware had once supported it as an ESXi 5 host. This made me wonder if it might still be possible to run ESXi on it, despite it no longer being supported.

I have found, after a considerable time investment, that the Xserve 3,1 can run ESXi 6, just as I found the Xserve 2,1 can run ESXi 6. However, unlike the Xserve 2,1, the Xserve 3,1 took months of troubleshooting before I had it running as a reliable ESXi host.


As it turns out, despite how much time it took me to get it working, there are only 2 serious issues with the Xserve 3,1 running ESXi 6. The first is somewhat specific to my configuration, but the second will be relevant to all configurations.

The first issue concerns booting into ESXi on a headless Xserve 3,1. The issue is limited to configurations where ESXi is booting from a drive installed in the optical bay (my original configuration). I have since changed my configuration and swapped the ESXi boot drive from the optical bay to the first hard drive bay. I have had no issue since I made this change.

For my configuration, I used an OWC bracket to replace the optical drive with an SSD. I installed ESXi onto it without issue. During installation, it was connected to a monitor, keyboard, etc. I ran some VMs on it to make sure it worked, and there were zero issues. I was relieved! So, I put it in the rack, wired it up and turned it on. Nothing. The Xserve lit up, and it was clear that it got through POST, but ESXi was clearly not booting.

Long story short, when no monitor is plugged into the Xserve 3,1, it will not automatically boot into ESXi if the boot drive is installed in the optical bay. The Xserve boot options can even be programmed through the front panel, but no configuration will make it reliably boot from the optical bay when a hard drive is installed. It is truly baffling, and if anyone has some insight here, or if it is a problem specific to my particular Xserve, I would love to know.

The solution, in my case, was to plug a keyboard into the Xserve and hold down the Option key for a few minutes while it boots (bringing up the boot options). Once all LED activity had normalized and the fan had settled down, I released the Option key and pressed the arrow keys. I think you only need to press the up arrow, but I always just pressed all of them to be sure. Then I pressed Enter, and ESXi would boot. I have since simply swapped the boot drive to the first drive bay. Ideally, I'd reserve the hot-swap bays for my other drives, but I felt it was too much trouble to keep the boot drive in the optical bay.

The second issue concerns the onboard NIC. Once I had ESXi up and running, everything worked fine for anywhere between a few hours and 2 days, after which the Xserve 3,1 host would disappear from the VCSA and become completely unresponsive (no ping/ssh/etc). The length of time before failure made this issue especially difficult and time consuming to diagnose.

After nearly a month of frustration and disappointment, I determined that ESXi actually continued to run, but all network connectivity would cease. The only solution I have found is to install a third-party NIC and completely avoid using the onboard NIC. Even in standby, the onboard NIC can cause problems, but when it is completely unused, both for management and VM traffic, it no longer causes any problems.

This has been superficially improved with the latest update, but use of the onboard NIC should still be completely avoided: the ESXi host will remain accessible via the VCSA, but its network management will become grayed out after a day or so. I suspect this is a driver issue in ESXi, but I really do not know.


Beyond these 2 issues, I have had no problems. Since the last update, even the performance and hardware status tabs are functional. RDM is not available, but it is not recommended in the first place. The Apple RAID backplane will not be recognized, but this was the case even in ESXi 5 when the Xserve was officially supported by VMware.

I hope that my efforts here will save others a lot of time and frustration. I think that for a lot of IT infrastructures, ESXi on an Xserve might make sense. It can run non-critical OS X services (which are hopefully the only kind of services you're trying to run in OS X).

To summarize, my recommendations for the Xserve 3,1 are:
  •      Completely avoid using the onboard NIC. Silicom NICs are recommended.
  •      Find a standard backplane. The RAID backplane is useless in ESXi.
  •      A 2.5” drive can be installed in the optical bay, but booting from it is problematic.


The Xserve 3,1 with the Silicom NIC installed

The 6 ports are a tight squeeze, but they just fit. My other 2 ESXi hosts are Supermicro nodes, also with Silicom NICs, and I had to use a Dremel to grind off part of the chassis to make all the ports accessible. But the Xserve works out of the box.

The OWC SSD “Data Doubler” bracket in the optical bay. Booting from here is a pain, but putting an additional SSD here works great for host caching.

The standard backplane is difficult to find, but is a great asset for vSphere. It is easy to distinguish it from the RAID backplane which would have a heat sink here.

There are no complications during installation/initial configuration.

Apologies for not having a longer uptime. I updated to ESXi 6.0 U1a 12 days ago, but I've had the Xserve 3,1 up for months. If something changes, I will post an update here, but I am confident that the system is stable.

This is the final stage of my home lab. The Xserve 3,1 is 1 of 3 ESXi hosts. These are accompanied by a primary domain controller (Samba4), a media server (Emby) and a home-grown NAS (CentOS 7). Networking in the back is Ubiquiti. I use this lab to prototype production environments for clients, and of course to run my home media services :-)

Neat way of installing or updating any VIB using just the ESXi Embedded Host Client

A couple of months back, I tossed out an idea on Twitter asking if others would like to see an automatic update mechanism built into the ESXi Embedded Host Client, which would allow users to easily update to newer releases of the Fling versus the current method, which requires copying the VIB to the host and then running a command in the ESXi Shell.
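For reference, that manual method looks something like the following, run from the ESXi Shell or over SSH (the datastore path and VIB file name here are just examples):

# after copying the VIB onto the host (e.g. with scp), install it:
esxcli software vib install -v /vmfs/volumes/datastore1/esxui-signed.vib

(esxcli software vib update can be used instead when a previous version of the VIB is already installed.)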

To no surprise, the feedback was an astounding yes! Literally within a couple of hours, Etienne Le Sueur, one of the two VMware Engineers working on the Fling, shared a screenshot demonstrating that this would be possible. The first release of this feature simply asks for the URL to the updated ESXi Embedded Host Client VIB, and it was included in the v3 release of the Fling.

One additional tidbit that Etienne shared was that, because of the way this feature was implemented, it is not limited to the Embedded Host Client VIB; you can use it for any ESXi VIB. This is done by using the vSphere API and calling the InstallHostPatchV2_Task() method, which allows you to install or update an ESXi VIB from a URL source. More recently, there was a Twitter conversation between myself, Etienne and Christian Mohn about how this capability could be further extended to updating ESXi itself, either from an Image Profile or an offline bundle. For those with a detailed eye, you may have noticed that the same API method can also accept an offline bundle URL, which would make this possible. As of right now, that capability is only included in an internal build of the Embedded Host Client, but perhaps we will see it in a future update of the Fling? 😉

Going back to the original topic of this blog post: to use the VIB install/update mechanism, you first need to upload the ESXi VIB to an HTTP server and then specify its URL. This is fine if you have an existing HTTP server, but if you do not, it is a bit of a pain. There are other methods, such as uploading directly to ESXi's Python-based HTTP server as mentioned by Christian, but that still requires something like SCP, which is an additional step. My goal and hope was to be able to install or update an ESXi VIB, or ESXi itself, using purely the Embedded Host Client. This keeps things simple and does not require things like SSH to be enabled on the ESXi host.

After a bit of brainstorming with Etienne, he found a super clever way of accomplishing this. The idea I had was to make use of an ESXi datastore to store the VIB, which can be uploaded through the Embedded Host Client. By default, there is also an HTTP-based interface to the datastore, but it requires authentication, which would be a problem. The neat suggestion was: why not try specifying the local VMFS path to the ESXi VIB (e.g. /vmfs/volumes/datastore1/my.vib)? It turns out that this works as well!
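In other words, the Update dialog accepts either form, for example (the server name and VIB file name below are made up):

http://webserver.yourdomain.com/esxui-signed.vib     <-- remote URL, requires an HTTP server hosting the VIB
/vmfs/volumes/datastore1/esxui-signed.vib            <-- local VMFS path, after uploading the VIB to a datastore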

With just two easy steps, you can now upload an ESXi VIB and then install or update it using just the Embedded Host Client, with no additional dependencies.

Step 1 - Navigate to the Datastore section in the Embedded Host Client and then upload the ESXi VIB that you wish to install or update.

Step 2 - To install/update the VIB, click on Help in the upper right-hand corner of the Embedded Host Client and select the "Update" option. Specify the local VMFS path to the ESXi VIB and then click on Update to apply it.

Note: A reboot may be required after applying a new VIB. It will be your responsibility to shut down the VMs and reboot the ESXi host for the changes to take effect, if required.

At this point, you should also see a task kicked off to apply the VIB. If any errors are thrown, they will be displayed; otherwise you should see the task complete successfully. For educational purposes, here is a quick screenshot of /var/log/esxupdate.log showing the VIB being applied; this log can be used for further troubleshooting if required.
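If you want to inspect that log yourself, you can view the tail end of it from the ESXi Shell or an SSH session (purely optional, as the install itself does not require shell access):

tail -n 50 /var/log/esxupdate.log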

Hope you enjoyed this neat little trick: with just two easy steps, you can install or update any ESXi VIB using the Embedded Host Client, without additional dependencies or enabling SSH on the ESXi host.

Automating the silent installation of Site Recovery Manager 6.0/6.1 w/Embedded vPostgres DB

For customers looking to automate the latest release of Site Recovery Manager (SRM) 6.0 / 6.1 with an Embedded vPostgres DB, you may have found that my previous deployment scripts for SRM 5.8 no longer work with the latest release. The reason is that SRM 6.x now supports the Platform Services Controller (PSC), and in doing so introduces a couple of new silent installer flags that are now required. With the help of the SRM Engineering team, I was able to modify my script to include these new options for automating the silent installation of both SRM 6.0 and 6.1. You can download the new script, called install_srm6x.bat.

Before using this script, I highly recommend that you take a look at my previous article here, which provides more details on how the script works in general.

The following new silent options have been introduced with SRM 6.x, and they are all required:

  • PLATFORM_SERVICES_CONTROLLER_HOST - The hostname of the Platform Services Controller
  • PLATFORM_SERVICES_CONTROLLER_PORT - The port for the PSC, default is 443 (recommend leaving this the default)
  • SSO_ADMIN_USER - The SSO Administrator account (e.g. administrator@vsphere.local)
  • SSO_ADMIN_PASSWORD - The SSO Administrator password

In addition to the options above, you will still need to populate the following options; the script outlines which ones need to be modified before running it. A rough, illustrative example of the populated variables follows the list below.

  • SRM_INSTALLER - The full path to the SRM 6.x installer
  • DR_TXT_VCHOSTNAME - vCenter Server Hostname
  • DR_TXT_VCUSR - vCenter Server Username
  • DR_TXT_VCPWD - vCenter Server Password
  • VC_CERTIFICATE_THUMBPRINT - vCenter Server SSL SHA1 Thumbprint (Must be in all CAPS)
  • DR_TXT_LSN - SRM Local Site Name
  • DR_TXT_ADMINEMAIL - SRM Admin Email Address
  • DR_CB_HOSTNAME_IP - SRM Server IP/Hostname
  • DR_TXT_CERTPWD - SSL Certificate Password
  • DR_TXT_CERTORG - SSL Certificate Organization Name
  • DR_TXT_CERTORGUNIT - SSL Certification Organization Unit Name
  • DR_SERVICE_ACCOUNT_NAME - Windows System Account to run SRM Service
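To give a sense of how these fit together, here is a rough sketch of what the variable section you edit might look like, with made-up values. This is only an illustration based on the option names above; the downloadable install_srm6x.bat is the authoritative reference for how these variables are defined and passed to the installer.

REM Illustrative values only -- every value below is a placeholder, edit for your environment
SET SRM_INSTALLER=C:\Installers\VMware-srm-6.1.0.exe
SET PLATFORM_SERVICES_CONTROLLER_HOST=psc.yourdomain.local
SET PLATFORM_SERVICES_CONTROLLER_PORT=443
SET SSO_ADMIN_USER=administrator@vsphere.local
SET SSO_ADMIN_PASSWORD=VMware1!
SET DR_TXT_VCHOSTNAME=vcenter.yourdomain.local
SET DR_TXT_VCUSR=administrator@vsphere.local
SET DR_TXT_VCPWD=VMware1!
REM SHA1 thumbprint of the vCenter Server SSL certificate, in all CAPS
SET VC_CERTIFICATE_THUMBPRINT=AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56:78:90:AB:CD:EF:12
SET DR_TXT_LSN=SRM-SiteA
SET DR_TXT_ADMINEMAIL=admin@yourdomain.local
SET DR_CB_HOSTNAME_IP=srm.yourdomain.local
SET DR_TXT_CERTPWD=VMware1!
SET DR_TXT_CERTORG=YourOrg
SET DR_TXT_CERTORGUNIT=YourOrgUnit
SET DR_SERVICE_ACCOUNT_NAME=LocalSystem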

Note: If you deployed either your vCenter Server or PSC using an FQDN, be sure to specify the FQDN for both DR_TXT_VCHOSTNAME and PLATFORM_SERVICES_CONTROLLER_HOST. This is a change in behavior compared to SRM 5.8, which only required the IP Address of the vCenter Server.

If you run into any issues, you can take a look at the logs that are generated. From what I have seen, you will normally get a generic 1603 error code, in which case you need to step back through the logs until you find the actual error.