For those of you who follow me on Twitter, you may have seen a few tweets hinting at a vSphere deployment script that I have been working on. This was something I had initially built for my own lab use, but I figured it could probably benefit the larger VMware community, especially for testing and evaluation purposes. Today, I am pleased to announce the release of my vGhetto vSphere Lab Deployment (VVLD) scripts, which leverage the new PowerCLI 6.5 release, which is partly why I needed to wait until it was available before publishing.

There are literally hundreds, if not more, ways to build and configure a vSphere lab environment. Over the years, I have noticed that some of these methods can be quite complex simply due to their requirements, or incomplete because they only handle a specific portion of the deployment, or they add additional constraints and complexity because they are composed of several other tools and scripts, which can make them hard to manage. One of my primary goals for this project was to be able to stand up a fully functional vSphere environment: not just deploying a vCenter Server Appliance (VCSA) or a couple of Nested ESXi VMs, but the entire vSphere stack, fully configured and ready for use. I also wanted to develop the scripts in a single scripting language that was not only easy to use, so that others could enhance or extend them further, but that also had the broadest support for the various vSphere APIs. Lastly, as a stretch goal, I would love to be able to run the scripts across different OS platforms.

With these goals in mind, I decided to build these scripts using the latest PowerCLI 6.5 release. Not only is PowerCLI super easy to use, but I was able to immediately benefit from some of the new functionality added in the latest release, such as the native vSAN cmdlets, a subset of which I could also use against prior releases of vSphere like 6.0 Update 2. Although not all functionality in PowerCLI has been ported over to PowerCLI Core, you can see where VMware is going with it, and my hope is that in the very near future what I have created can be executed across all OS platforms, whether that is Windows, Linux or Mac OS X, and potentially even ARM-based platforms 🙂


Requirements:

  • 1 x Physical ESXi host running at least ESXi 6.0 Update 2
  • PowerCLI 6.5 R1 installed on a Windows system
  • Nested ESXi 6.0 or 6.5 Virtual Appliance
  • vCenter Server Appliance (VCSA) 6.0 or 6.5

Supported Deployments:

The scripts support deploying both a vSphere 6.0 Update 2 and a vSphere 6.5 environment, and there are two types of deployments for each:

  • Standard - All VMs are deployed directly to the physical ESXi host
  • Self Managed - Only the Nested ESXi VMs are deployed to the physical ESXi host. The VCSA is then bootstrapped onto the first Nested ESXi VM

Below is a quick diagram to help illustrate the two deployment scenarios. The pESXi in gray is what you already have deployed, which must be running at least ESXi 6.0 Update 2; the rest of the boxes are what the scripts will deploy. In the "Standard" deployment, three Nested ESXi VMs will be deployed to the pESXi host and configured with vSAN. The VCSA will also be deployed directly to the pESXi host, and the vCenter Server will be configured to add the three Nested ESXi VMs into its inventory. This is a pretty straightforward and basic deployment that should not surprise anyone. The "Self Managed" deployment is similar; the biggest difference is that rather than the VCSA being deployed directly to the pESXi host as in the "Standard" deployment, it will actually be running within a Nested ESXi VM. In this scenario, we still deploy three Nested ESXi VMs onto the pESXi host, but the first Nested ESXi VM is selected as a "Bootstrap" node, on which we construct a single-node vSAN datastore and then deploy the VCSA. Once the vCenter Server is set up, we add the remaining Nested ESXi VMs into its inventory.

For most users, I expect the "Standard" deployment to be more commonly used, but for more advanced workflows, such as evaluating the new vCenter Server High Availability feature in vSphere 6.5, you may want to use the "Self Managed" deployment option. Obviously, if you select the latter, provisioning will take longer since you are now "double nested"; depending on your underlying physical resources, this can take quite a bit more time to deploy as well as consume more physical resources, since your Nested ESXi VMs must now be larger to accommodate the VCSA. In both scenarios, there is no reliance on additional shared storage: both will create a three-node vSAN cluster, which you can of course expand by simply editing the script.

Deployment Time:

Here is a table breaking down the deployment time for each scenario and vSphere version:

Deployment Type             Duration
vSphere 6.5 Standard        36 min
vSphere 6.0 Standard        26 min
vSphere 6.5 Self Managed    47 min
vSphere 6.0 Self Managed    34 min

Obviously, your mileage will vary based on your hardware configuration and the size of your deployment.


There are four different scripts which cover the scenarios discussed above:


There are six sections towards the top of each script that you will need to edit before running it. Each section is described below and should be pretty self-explanatory.

This section describes the credentials for your physical ESXi server, to which the vSphere lab environment will be deployed:

$VIServer = ""
$VIUsername = "root"
$VIPassword = "vmware123"
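
Before launching a full deployment, it can be worth sanity-checking these credentials with a quick PowerCLI connection test. This is just a sketch (not part of the scripts themselves) using the standard Connect-VIServer, Get-VMHost and Disconnect-VIServer cmdlets against the variables above; it requires a reachable ESXi host:

```powershell
# Verify the physical ESXi credentials before kicking off the deployment
$viConnection = Connect-VIServer -Server $VIServer -User $VIUsername -Password $VIPassword

# Confirm the host version meets the ESXi 6.0 Update 2 minimum requirement
Get-VMHost -Server $viConnection | Select-Object Name, Version, Build

Disconnect-VIServer -Server $viConnection -Confirm:$false
```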

This section defines the number of Nested ESXi VMs to deploy along with their associated IP Address(es). The names are merely the display names of the VMs when deployed. At a minimum, you should deploy at least three hosts, but you can always add additional hosts and the script will automatically take care of provisioning them correctly.

$NestedESXiHostnameToIPs = @{
"vesxi65-1" = ""
"vesxi65-2" = ""
"vesxi65-3" = ""
}
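
Since the script loops over this map, adding a fourth host is just another "name" = "ip" entry. The provisioning loop presumably follows the standard PowerShell hashtable-enumeration pattern; this is only an illustrative sketch, not the script's actual code:

```powershell
# Sketch: enumerate the hostname-to-IP map, one Nested ESXi VM per entry
$NestedESXiHostnameToIPs.GetEnumerator() | Sort-Object -Property Key | ForEach-Object {
    $vmName = $_.Key
    $vmIp   = $_.Value
    Write-Host "Deploying Nested ESXi VM $vmName with IP $vmIp ..."
    # ... OVA import and guestinfo customization would happen here ...
}
```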

This section describes the resources allocated to each Nested ESXi VM. Depending on the deployment type, you may need to increase the resources. For the memory and disk configuration, the unit is GB.

$NestedESXivCPU = "2"
$NestedESXivMEM = "6"
$NestedESXiCachingvDisk = "4"
$NestedESXiCapacityvDisk = "8"

This section describes the VCSA deployment configuration such as the VCSA deployment size, Networking & SSO configurations. If you have ever used the VCSA CLI Installer, these options should look familiar.

$VCSADeploymentSize = "tiny"
$VCSADisplayName = "vcenter65-1"
$VCSAIPAddress = ""
$VCSAHostname = ""
$VCSAPrefix = "24"
$VCSASSODomainName = "vghetto.local"
$VCSASSOSiteName = "virtuallyGhetto"
$VCSASSOPassword = "VMware1!"
$VCSARootPassword = "VMware1!"
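
VCSA deployments are notoriously sensitive to name resolution, so a quick forward-lookup check of the values above can save a failed install. This sketch is not part of the scripts; it simply uses the .NET System.Net.Dns class available in any PowerShell session (skip it if you set $VCSAHostname to an IP address):

```powershell
# Verify that the VCSA hostname resolves to the IP address configured above
$resolved = [System.Net.Dns]::GetHostAddresses($VCSAHostname) |
    Select-Object -ExpandProperty IPAddressToString

if ($resolved -contains $VCSAIPAddress) {
    Write-Host "DNS forward lookup OK: $VCSAHostname -> $VCSAIPAddress"
} else {
    Write-Warning "DNS mismatch: $VCSAHostname resolved to [$resolved], expected $VCSAIPAddress"
}
```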

This section describes the location as well as the generic networking settings applied to BOTH the Nested ESXi VMs and the VCSA.

$VMNetwork = "dv-access333-dev"
$VMDatastore = "himalaya-local-SATA-dc3500-2"
$VMNetmask = ""
$VMGateway = ""
$VMDNS = ""
$VMNTP = ""
$VMPassword = "vmware123"
$VMDomain = ""
$VMSyslog = ""
$VMSSH = "true"
$VMVMFS = "false"

This section describes the configuration of the new vCenter Server on the deployed VCSA.

$NewVCDatacenterName = "Datacenter"
$NewVCVSANClusterName = "VSAN-Cluster"


Additional verbose logging is written to a log file in your current working directory: either vsphere60-vghetto-lab-deployment.log or vsphere65-vghetto-lab-deployment.log, depending on the deployment you have selected.


Once you have saved all your changes, you can run the script. You will be presented with a summary of what will be deployed, and you can verify that everything is correct before proceeding with the deployment. Below is a screenshot of what this looks like:


Sample Execution:

Here is an example of running a vSphere 6.5 "Standard" deployment:

Here is an example of running a vSphere 6.5 "Self Managed" deployment:

Here is an example of running a vSphere 6.0 "Standard" deployment:

If everything is successful, you can now log in to your new vCenter Server, and you should see the following for a "Standard" deployment:

or the following for a "Self Managed" deployment:

I hope you find these scripts as useful as I do. Feel free to enhance them to perform additional functionality or extend them to cover other VMware product deployments, such as NSX or the vRealize products. Enjoy!

22 thoughts on “vGhetto Automated vSphere Lab Deployment for vSphere 6.0u2 & vSphere 6.5”

  1. Hi William.

    This is great! – do you have any plans to release scripts to load a lab like this into workstation instead of ESXi?



  2. William, looking forward to trying this out on my new Skull Canyon NUC (with two NVMe SSD drives). Will this work pointed at that? I know you had some previous posts about bootstrapping into such an environment but I thought perhaps you needed some non-flash storage also?

    • And, it doesn’t work. I believe I need storage or network (a dv switch?) configured on the physical host, which is a 6.0.0U2 booted from USB on my NUC. I think we need clarification on these two parameters:

      $VMNetwork = "dv-access333-dev"
      $VMDatastore = "himalaya-local-SATA-dc3500-2"

      For either config, or at least (I’m trying) the self-managed:
      Do these need to already exist as whatever we set them to be on the physical host? Which means we need physical storage at the lowest physical layer? This isn’t clear from your instructions, which (I gathered and could be completely wrong) said the script installs it all. Too much to wish for of course. 🙂

      I’ll try again with a USB datastore mounted and a dv switch created. But some confirmation would be wonderful, I’d love to get this working.

      Thanks much!

        • [11-22-2016_03-20-24] Deploying the VCSA …
          Task failed. Status: ERROR Progress: 5% Starting VMware Identity Management
          Service… Error: Problem Id: None Component key: idm Detail:
          Encountered an internal error. Traceback (most recent call last): File
          “/usr/lib/vmidentity/firstboot/”, line 2018, in main
          vmidentityFB.boot() File
          “/usr/lib/vmidentity/firstboot/”, line 349, in boot
          self.configureSTS(self.__stsRetryCount, self.__stsRetryInterval) File
          “/usr/lib/vmidentity/firstboot/”, line 1479, in
          configureSTS self.startSTSService() File
          “/usr/lib/vmidentity/firstboot/”, line 1141, in
          startSTSService returnCode = self.startService(self.__sts_service_name,
          self.__stsRetryCount * self.__stsRetryInterval) File
          “/usr/lib/vmidentity/firstboot/”, line 88, in
          startService return service_start(svc_name, wait_time) File
          “/usr/lib/vmware/site-packages/cis/”, line 784, in service_start
          raise ServiceStartException(svc_name) ServiceStartException: { “resolution”:

          null, “detail”: [ { “args”: [
          “vmware-stsd” ], “id”:
          “install.ciscommon.service.failstart”, “localized”: “An error
          occurred while starting service ‘vmware-stsd'”, “translatable”: “An

          error occurred while starting service ‘%(0)s'” } ],
          “componentKey”: null, “problemId”: null } Resolution: This is an
          unrecoverable error, please retry install. If you run into this error again,
          please collect a support bundle and open a support request.
          Collecting the support bundle from the deployed appliance…

          anyone can help with this error?

  3. Hi William,

    This is awesome…I’ve just binned my script for yours, it’s great!

    I had trouble getting the OVF configuration to stick to my nested ESXi hosts (I’m deploying to vCenter (VCSA 6.0u2)) using the “$vm.ExtensionData.ReconfigVM_Task($spec)” method.

    Instead I used Get-OvfConfiguration and set the properties before using Import-VApp with the -OvfConfiguration parameter:

    $ovfConfig = Get-OvfConfiguration -Ovf $NestedESXiApplianceOVA
    $ovfConfig.Common.guestinfo.hostname.Value = $VMName
    $ovfConfig.Common.guestinfo.ipaddress.Value = $VMIPAddress
    $ovfConfig.Common.guestinfo.netmask.Value = $VMNetmask
    $ovfConfig.Common.guestinfo.gateway.Value = $VMGateway
    $ovfConfig.Common.guestinfo.dns.Value = $VMDNS
    $ovfConfig.Common.guestinfo.domain.Value = $VMDomain
    $ovfConfig.Common.guestinfo.ntp.Value = $VMNTP
    $ovfConfig.Common.guestinfo.syslog.Value = $VMSyslog
    $ovfConfig.Common.guestinfo.password.Value = $VMPassword
    $ovfConfig.Common.guestinfo.ssh.Value = $VMSSH
    $ovfConfig.Common.guestinfo.createvmfs.Value = $VMVMFS
    $ovfConfig.NetworkMapping.VM_Network.Value = $network

    Write-Log "Deploying Nested ESXi VM $VMName ..."
    $vm = Import-VApp -Server $vCenter -VMHost $pEsxi -Source $NestedESXiApplianceOVA -OvfConfiguration $ovfConfig -Name $VMName -Location $cluster -Datastore $datastore -DiskStorageFormat thin

    Hope that helps!

  4. Hi William, thanks a lot for this awesome work!

    Just for those folks like me who try to use the script with a system proxy set (even with local bypass): you'll get this kind of error until you disable it:

    2016-11-24 16:15:37,447 – vCSACliInstallLogger – ERROR – An error occurred when connecting to “”: Failed to login to host, as user root: [Errno socket error] [Errno 1] _ssl.c:510: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol

    This looks like a limitation of the Invoke-Expression cmdlet

  5. For those getting this error when running the script: you need to have the portgroup's port binding set to Ephemeral when connecting directly to the ESXi host.

    [11-27-2016_04:46:04] Deploying Nested ESXi VM vesxi65-1 …
    Set-NetworkAdapter : 11/27/2016 4:47:12 PM Set-NetworkAdapter The specified DV portgroup ‘LAB-701’ can not be used for vnic connection: Invalid portgroup type.
    At C:\Users\anthony.spiteri\Dropbox\Files\scripts\vsan_lab_deploy_2..ps1:196 char:51
    + … er $pEsxi | Set-NetworkAdapter -Portgroup $network -confirm:$false | …
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidArgument: (:) [Set-NetworkAdapter], InvalidArgument
    + FullyQualifiedErrorId : Client20_VirtualNetworkServiceImpl_TryGetAvailableDVPortGroup_NotValidPortgroupType,VMware.VimAutomation.ViCore.Cmdlets.Commands.VirtualDevice.SetNetworkAdapter
    Set-NetworkAdapter : 11/27/2016 4:47:12 PM Set-NetworkAdapter The specified DV portgroup ‘LAB-701’ can not be used for vnic connection: Invalid portgroup type.
    At C:\Users\anthony.spiteri\Dropbox\Files\scripts\vsan_lab_deploy_2..ps1:196 char:51
    + … er $pEsxi | Set-NetworkAdapter -Portgroup $network -confirm:$false | …
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidArgument: (:) [Set-NetworkAdapter], InvalidArgument
    + FullyQualifiedErrorId : Client20_VirtualNetworkServiceImpl_TryGetAvailableDVPortGroup_NotValidPortgroupType,VMware.VimAutomation.ViCore.Cmdlets.Commands.VirtualDevice.SetNetworkAdapter
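
    For reference, creating a distributed portgroup with ephemeral binding looks roughly like this (a sketch only; the switch name is a placeholder, and New-VDPortgroup's -PortBinding parameter is assumed to be available in your PowerCLI version):

```powershell
# Create the lab portgroup with Ephemeral port binding so the script can
# attach VM vNICs while connected directly to the ESXi host
$vds = Get-VDSwitch -Name "dvSwitch"    # placeholder switch name
New-VDPortgroup -VDSwitch $vds -Name "LAB-701" -PortBinding Ephemeral
```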

  6. Great work. The deployment works great. My only problem is that the deployed ESXi hosts are not accessible through the vSphere Client or Web GUI, so adding the ESXi hosts to the cluster failed. But this is not new to me. The same thing happens with my freshly installed nested ESXi hosts (not with the appliance). When one boots up for the first time, it is not possible to log in through the vSphere Client or Web GUI with the given DHCP address. My only "fix" was to power off the VM (not shut down) and power it back on. Then the password was set to blank and I could log in at the address from the DHCP server. No difference if no DHCP is active and I have to configure a static IP address for the first time; still unavailable. Any tips on where I should start looking to fix this? The pESXi has the MAC Learn dvFilter configured. This gives me a headache..

    Again, great job.


Thanks for the comment!