KMIP Server Docker Container for evaluating VM Encryption in vSphere 6.5

There are a number of vSphere Security enhancements that were introduced in vSphere 6.5, including the much anticipated VM Encryption feature. To be able to use the new VM Encryption feature, you will need to first set up a Key Management Interoperability Protocol (KMIP) Server, if you do not already have one, and associate it with your vCenter Server. There are plenty of 3rd party vendors who provide KMIP solutions that interoperate with the new VM Encryption feature, but it can usually take some time to get access to product evaluations.

During the vSphere Beta, VMware had provided a sample KMIP Server Virtual Appliance based on PyKMIP, which allowed customers to quickly try out the new VM Encryption feature. Many of you have expressed interest in getting access to this appliance for quick evaluation purposes, and the team is currently working on providing an updated version of the appliance for customers to access. In the meantime, for those who cannot wait for the appliance or would like an alternative way of quickly standing up a sample KMIP Server, I have created a tiny (163 MB) Docker Container which can be easily spun up to provide the KMIP services. I have published the Docker Container on Docker Hub at lamw/vmwkmip. The beauty of the Docker Container is that you do not need to deploy another VM; for resource-constrained lab environments or quick demo purposes, you could even run it directly on the vCenter Server Appliance (VCSA) as shown here, though that is obviously not recommended for production use.

The Docker Container bundles the exact same version of PyKMIP that will be included in the virtual appliance; this is just another consumption mechanism. It is also very important to note that you should NOT be using this for any production workloads or any VMs that you care about. For actual production deployments of VM Encryption, you should be leveraging a production-grade KMIP Server, as PyKMIP stores the encryption keys in memory and they will be lost upon a restart. This is also true for the virtual appliance, so this is really for quick evaluation purposes only.

Note: The bundled PyKMIP is a modified version, and VMware plans to re-contribute the changes back to the PyKMIP open-source project so others can also benefit.

Below are the instructions on using the KMIP Server Docker Container and how to configure it with your vCenter Server. I will assume you have worked with Docker before; if you have not, please have a look at the Docker online resources before continuing further, or wait for the virtual appliance to be posted.

Step 1 - On a system that has a Docker client, run the following command to pull down the Docker Container:

docker pull lamw/vmwkmip

Step 2 - Start the Docker Container by running the following command:

docker run --rm -it -p 5696:5696 lamw/vmwkmip

[Screenshot: sample-kmip-server-for-testing-vm-encryption-0]

As you can see, the PyKMIP service has started successfully and, by default, it is configured to use the standard KMIP port, which is 5696. If you do not want to run the Docker Container in interactive mode, you can run it in daemon mode by running the following command instead:

docker run -d -p 5696:5696 lamw/vmwkmip
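
If you run it in daemon mode, you can verify that the container is up and view the PyKMIP service output using the standard Docker commands (the container ID shown in the docker ps output is what you would pass to docker logs):

docker ps
docker logs <container-id>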

Step 3 - Next, we need to associate the KMIP Server with our vCenter Server. Log in to the vSphere Web Client and, under the vCenter Server object, select Configure->Key Management Servers and add a new KMS. You will need to provide a name/alias, the IP Address of where the Docker Container is running, and the default port number, as shown in the screenshot below.

[Screenshot: sample-kmip-server-for-testing-vm-encryption-1]

Step 4 - Once connected to the KMIP Server, you should be presented with a Trust Certificate dialog, which you just need to accept once.

[Screenshot: sample-kmip-server-for-testing-vm-encryption-2]

Step 5 - If everything was configured correctly and the vCenter Server can communicate with the KMIP Server, you should see both the Connection Status and Certificate Status display green. If you are not getting this, there is most likely a connection issue between your vCenter Server and the Docker Container; check to make sure you do not have any firewalls blocking the connection from where the Docker Container is running.

[Screenshot: sample-kmip-server-for-testing-vm-encryption-3]

At this point, you can now start encrypting your VMs. To do so, you simply apply the VM Encryption Policy to either the full VM (VM Home + VMDKs) or to individual VMDKs and let the Policy Engine do its magic.

[Screenshot: sample-kmip-server-for-testing-vm-encryption-4]

After the VM Storage Policy has been applied successfully, you can view the Encryption status by clicking on the VM Hardware portlet for the VM as shown in the screenshot below.

[Screenshot: sample-kmip-server-for-testing-vm-encryption-5]

Once you are done with your testing, you can remove the VM Encryption storage policy from the VMs and delete the KMS from the vCenter Server. If for whatever reason your KMIP server terminates, you can simply remove the KMS from vCenter Server and relaunch a new instance by going through the setup instructions again. For more information about VM Encryption, please take a look at the official documentation which can be found here. Happy VM Encrypting 🙂

VCSA 6.5 CLI Installer now supports new ovftool argument pass-through feature

I recently discovered a really cool new feature that has been added into the vCenter Server Appliance (VCSA) 6.5 CLI Installer while helping out a fellow colleague. For those of you who have not worked with the VCSA before, you can deploy it using one of two methods: 1) the Guided UI Installer or 2) the Scripted CLI Installer. The latter approach is great for Automation purposes as well as for quickly spinning up a new VCSA, as the UI wizard can get tedious once you have done it a few times. The VCSA CLI Installer reads in a JSON configuration file which defines what you will be deploying, whether that is an Embedded, PSC or VC node, and its respective configuration (networking, password, etc.).

In VCSA 6.5, the CLI Installer introduces a new option in the JSON schema called ovftool.arguments. Here is the description of what this new option provides:

Use this subsection to add arbitrary arguments to the OVF Tool
command that the script generates. You do not need to fill it out in
most circumstances.

First of all, I just want to mention that this option should not be required for general deployments, but it may come in handy for more advanced use cases. Behind the scenes, the CLI Installer takes the human-readable JSON and translates it into a set of OVF properties that are then sent down to ovftool for deployment. Not every single option is implemented in the JSON, and for good reason, as that level of detail should be abstracted away from the end users. However, there may be cases where you need to invoke a specific configuration or trigger a specific ovftool option, and this would allow you to provide what I am calling a "pass-through" to ovftool.

Let me give you one concrete example of how this could be useful and how we can take advantage of this new capability. Since the release of VCSA 6.0, when you enable SSH and log in, you will notice that you are not placed in a regular bash shell but rather a restricted appliancesh interface. From an Automation standpoint, it was somewhat painful if you wanted to change the default, as this feature is not implemented within the JSON configuration file. This meant that if you wanted the bash shell to be the default, you had to either change it manually as part of a post-deployment step or bypass the native CLI Installer and manually reverse engineer the required set of OVF properties needed for the deployment, which is also not ideal.

In the case of changing the default shell, the required ovftool option is the following (described in more detail in this blog post here):

--prop:guestinfo.cis.appliance.root.shell="/bin/bash"

Using this new ovftool argument pass-through, we can now specify that option directly inside the JSON using the following:

"ovftool.arguments" : {
"prop:guestinfo.cis.appliance.root.shell" : "/bin/bash"
}

Below is an example of an updated JSON configuration for deploying an Embedded VCSA 6.5 which includes this new ovftool.arguments property.
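
The following is an abbreviated sketch of what such a configuration might look like, modeled on the stock embedded_vCSA_on_ESXi.json template that ships on the VCSA 6.5 ISO; the hostnames, passwords and addresses are placeholders, so consult the templates on the ISO for the authoritative schema:

{
    "__version": "2.3.0",
    "new.vcsa": {
        "esxi": {
            "hostname": "esxi.primp-industries.com",
            "username": "root",
            "password": "VMware1!",
            "deployment.network": "VM Network",
            "datastore": "datastore1"
        },
        "appliance": {
            "thin.disk.mode": true,
            "deployment.option": "tiny",
            "name": "vcenter65-1"
        },
        "network": {
            "ip.family": "ipv4",
            "mode": "static",
            "ip": "172.30.0.170",
            "prefix": "24",
            "gateway": "172.30.0.1",
            "dns.servers": ["172.30.0.100"],
            "system.name": "vcenter65-1.primp-industries.com"
        },
        "os": {
            "password": "VMware1!",
            "ssh.enable": true
        },
        "sso": {
            "password": "VMware1!",
            "domain-name": "vsphere.local",
            "site-name": "Default-First-Site"
        },
        "ovftool.arguments": {
            "prop:guestinfo.cis.appliance.root.shell": "/bin/bash"
        }
    }
}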

Once the VCSA deployment has successfully completed, you will find that when you SSH to the VCSA, you are now automatically dropped into the bash shell rather than the restricted appliancesh. As you can see, this is a pretty powerful extension of the existing VCSA CLI Installer without compromising the current user experience. Stay tuned for a future blog post on some other interesting use cases this will help enable 🙂

Automating the installation of VUM Update Manager Download Service (UMDS) for Linux in vSphere 6.5

One of the most highly requested features from customers with regards to the adoption of the vCenter Server Appliance (VCSA) is to have vSphere Update Manager (VUM) available as a Virtual Appliance. With the vSphere 6.5 release, this is now a reality, as VUM is now embedded within the VCSA. The VUM service is also automatically enabled and associated with the vCenter Server instance, which means from a customer standpoint, it is zero touch to get VUM up and running!

In addition to VUM being part of the VCSA 6.5, there is also the Update Manager Download Service (UMDS), which can be installed on a separate Linux system. You can find the UMDS installer within the VCSA 6.5 ISO under the umds directory. To install UMDS, there are several prerequisites that you must meet, some of which are documented here. The other requirements which are not documented are the additional OS package dependencies required to run the UMDS installer. While going through this by hand the first time, I found that the following packages were required on an Ubuntu 16.04.1 distribution:

  • perl
  • tar
  • sed
  • psmisc
  • unixodbc
  • postgresql
  • postgresql-contrib
  • odbc-postgresql

For those of you who know me, if I have to perform something manually once, I might as well automate it for the future 🙂 So I decided to create a quick shell script called install_umds65.sh which will allow you to easily deploy UMDS on the latest Ubuntu LTS distribution (16.04.1 at the time of writing). This can be useful for automated deployments or for quickly standing up a lab environment.

When you install UMDS manually, you are prompted for several responses, and the script currently just uses those defaults. If you wish to change them, you simply need to edit the "answer" file that the script generates and provides to the UMDS installer itself.

Here is what the script is doing at a high level:

  1. Extract the UMDS installer into /tmp
  2. Install all OS package dependencies
  3. Create the UMDS installer answer file /tmp/answer
  4. Create the /etc/odbc.ini and /etc/odbcinst.ini configuration files (a sketch of these is shown below)
  5. Update pg_hba.conf to allow the UMDS user to access the DB
  6. Start the Postgres DB
  7. Create the UMDSDB user and set the assigned password
  8. Install UMDS
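
For reference, here is a sketch of what the generated ODBC configuration files might look like, assuming the DSN and database names used in the example execution below; the driver path is where the odbc-postgresql package installs the PostgreSQL ODBC driver on Ubuntu:

# /etc/odbcinst.ini
[PostgreSQL]
Description = PostgreSQL ODBC Driver
Driver = /usr/lib/x86_64-linux-gnu/odbc/psqlodbcw.so

# /etc/odbc.ini
[UMDS_DSN]
Driver = PostgreSQL
Server = localhost
Port = 5432
Database = UMDSDB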

Step 1 - Upload both the UMDS install script (install_umds65.sh) and the UMDS install package found in the VCSA 6.5 ISO to an already-deployed Ubuntu system.

Step 2 - The script needs to run as root and it requires the following five command-line arguments:

  1. UMDS package installer
  2. Name of the UMDS Database
  3. Name of the UMDS DSN Entry
  4. Username for running the UMDS service
  5. Password for the UMDS username

Here is an example of running the script:

sudo ./install_umds65.sh VMware-UMDS-6.5.0-4540462.tar.gz UMDSDB UMDS_DSN umdsuser VMware1!

Step 3 - Once the UMDS installer script completes, you can verify the install by running the following two commands, which provide you with the version of UMDS as well as the current configuration:

/usr/local/vmware-umds/bin/vmware-umds -v

[Screenshot: automate-vum-umds-vsphere-6-5-install-0]

/usr/local/vmware-umds/bin/vmware-umds -G

[Screenshot: automate-vum-umds-vsphere-6-5-install-1]

At this point, you have now successfully installed UMDS. You can use the vmware-umds CLI to add/remove patch repositories as well as initiate a download by using the -D option. Once you have downloaded all of your content, you will need to set up an HTTP server to make it available to the VUM instance in the vCenter Server Appliance (VCSA). You can configure any popular HTTP server such as Nginx or Apache. For my lab environment, I actually just use the tiny HTTP server that Python can provide.
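
For example, to initiate a download of all enabled patch content:

/usr/local/vmware-umds/bin/vmware-umds -D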

To make the content under /var/lib/vmware-umds available, just change into that directory and run the following command:

python -m SimpleHTTPServer

By default, this will serve on port 8000, but you can change it by simply appending a port number like: python -m SimpleHTTPServer 8081. Now if you open a browser to the IP Address and port of the UMDS server, you should see a directory listing of the files. You can take this URL and add it into your VUM instance.
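
Note that on newer distributions where only Python 3 is available, the equivalent built-in HTTP server is invoked as:

python3 -m http.server 8081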

Custom script bundle is now possible with Auto Deploy in vSphere 6.5

It has been some time since I last looked at the Auto Deploy and Host Profile features in vSphere. As a former customer, I still remember that one of the challenges I had in evaluating Auto Deploy, and specifically Host Profiles, was the fact that they did not cover all the possible ESXi configurations. This made it very difficult to operationalize, as you still needed to handle post-configuration through other mechanisms. Trying to keep both solutions in place did not make sense for me, and I ended up opting for the traditional scripted installation method via Kickstart, which I had hooks into to automate the full ESXi configuration.

In vSphere 6.5, there was a huge effort to significantly improve both Auto Deploy and Host Profiles to what customers had expected of these features, especially around bringing parity between the configurations that could be done using the vSphere Clients and/or vSphere APIs and what is available in Host Profiles. In addition, there were also several UI enhancements that now make it possible to use both Auto Deploy and Image Builder directly from the vSphere Web Client, which was never possible before. For more details, be sure to check out the What's New vSphere 6.5 white paper here.

One new feature that I think is worth calling out is the new Script Bundle capability in Auto Deploy. Previously, if a particular configuration was not available via Host Profiles, there was nothing you could really do, and you had to write your own custom post-deployment script to apply to the ESXi host. As I mentioned earlier, in vSphere 6.5, we have closed the gap on the ESXi configurations that were not possible using Host Profiles and will ensure they stay in sync going forward. Having said that, there are still certain configurations that are not possible today, such as creating a custom ESXi firewall rule, for example. For these cases, you had to either hack it up using a method like this or create a custom ESXi VIB, which would then force customers to lower their ESXi host's software acceptance level, which was neither ideal nor acceptable, especially for customers that are security conscious.

With this new Script Bundle capability, customers now have the ability to add a post-deployment script that will run after all the configurations have been applied to a stateless ESXi host that has been provisioned by Auto Deploy. The script must be either a Busybox ash or Python script, as it executes within the ESXi Shell, and all limitations that exist today within that environment still apply when creating these custom scripts. To manage these Script Bundles, there are two new PowerCLI cmdlets called Add-ScriptBundle and Get-ScriptBundle. Unfortunately, this new capability is not available when using the vSphere Web Client; hopefully this is something the team will add in a future update.

A Script Bundle can be composed of multiple scripts which must be contained within a single tarball (gzip-compressed tar archive) with the .tgz extension. Once uploaded, you will be able to associate a Script Bundle with an Auto Deploy rule just like you would with an Image Profile and/or Host Profile. Since I was not able to find much documentation on this feature, I figured a real-life example would be helpful, not only for myself but for anyone who might be interested in leveraging this new capability.

Before getting started, make sure you have a vSphere 6.5 environment (if you need to quickly deploy a full environment, have a look at this blog post here) as well as PowerCLI 6.5 R1 installed on your desktop. You will also need to download the ESXi 6.5 offline depot image, which will be used to boot our ESXi hosts from Auto Deploy.

For our example, we will be using a simple ash script to create a custom ESXi firewall rule. Below is a minimal sketch of what such a script might look like; the service name, port and file name are purely illustrative:
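
#!/bin/sh
# vGhetto.sh - illustrative post-deployment script
# Creates a custom ESXi firewall rule by writing a service definition
# into /etc/vmware/firewall/ and then refreshing the firewall
cat > /etc/vmware/firewall/vGhetto.xml << 'EOF'
<ConfigRoot>
  <service id="0100">
    <id>vGhetto</id>
    <rule id="0000">
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>8080</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>
EOF
esxcli network firewall refresh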

Step 1 - Create the script(s) on a Linux/UNIX system as we will need to use the tar command to bundle them up. In our example, I have named the script vGhetto.sh but you can name it anything.

Step 2 - Create a tarball that only consists of the script(s) you want to include by running the following command (if you have multiple scripts, you will want to list all those files):

tar -cvzf vGhetto-Script.tgz vGhetto.sh

Note: Directories are not allowed within the tarball, and you will get an error when trying to import the script bundle if they are present. Whatever name you choose for the tarball will automatically be used as the name to identify it once you have imported it, so make sure to use a descriptive name that helps you distinguish the different script bundles.

Step 3 - Make sure you enable both the Auto Deploy and Image Builder Service using the vSphere Web Client under the Administration->Services tab of your vCenter Server.

Step 4 - Next, connect to your vCenter Server using the PowerCLI Connect-VIServer cmdlet to start using the Auto Deploy and Image Builder cmdlets. First, we will import our script bundle by running the following command and specifying the path to our script bundle:

Add-ScriptBundle C:\Users\primp\Desktop\vGhetto-Script.tgz

Once imported, you can run the Get-ScriptBundle command to see the list of script bundles that have been added as well as the individual script files in each bundle.

Step 5 - Next, we will import our ESXi 6.5 Image Profile by running the following command:

Add-EsxSoftwareDepot -DepotUrl C:\Users\primp\Desktop\VMware-ESXi-6.5.0-4564106-depot.zip

Once imported, you can run the Get-EsxImageProfile cmdlet to retrieve the different Image Profiles that are included in the offline bundle. In our example, we will be using ESXi-6.5.0-4564106-no-tools.
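
For example, to list just the Image Profile names from the depots that have been added:

Get-EsxImageProfile | Select-Object Name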

Step 6 - Now we will create our new Auto Deploy rule, which you will need to give a name and a list of "Items", which are strings mapping to the names of the Script Bundle and Image Profile that we uploaded earlier. You can also specify a Host Profile if you have created one; for this example, I have opted to leave it out. The other thing you can add is a location where the ESXi host will be attached, whether that is the name of a vSphere Datacenter or a vSphere Cluster. Instead of matching a specific host pattern like a MAC Address or IP Address, I just used -AllHosts; you can change this of course.

New-DeployRule -Name "ESXi-6.5-with-vGhetto-ScriptBundle" -Item "vGhetto-Script","ESXi-6.5.0-4564106-no-tools","Datacenter" -AllHosts | Add-DeployRule

[Screenshot: auto-deploy-vsphere-6-5-script-injection-0]

Once the Auto Deploy rule has been created, you can retrieve it by using the Get-DeployRule command or directly view the configuration from within the vSphere Web Client under the Auto Deploy UI.

[Screenshot: auto-deploy-vsphere-6-5-script-injection-1]

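If you prefer to match specific hosts rather than using -AllHosts, New-DeployRule also accepts a -Pattern parameter; the IP range below is purely illustrative:

New-DeployRule -Name "ESXi-6.5-with-vGhetto-ScriptBundle" -Item "vGhetto-Script","ESXi-6.5.0-4564106-no-tools","Datacenter" -Pattern "ipv4=172.30.0.171-172.30.0.180" | Add-DeployRule
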
Step 7 - Finally, the last step is to power on an ESXi host and watch Auto Deploy do its magic. In my environment, I just used an empty Nested ESXi VM which is probably the quickest way to test Auto Deploy and Host Profiles.

If the ESXi host had successfully booted from Auto Deploy, you should see it under the new "Deployed Hosts" tab as shown in the screenshot below.

[Screenshot: auto-deploy-vsphere-6-5-script-injection-2]

If we go to the console of the ESXi host and enable SSH (since we are using a default Image Profile, SSH is disabled by default), we can confirm that our script executed correctly, and we can also see where the files are placed for troubleshooting purposes. The Script Bundle that we created is stored in the /etc/rc.local.d/autodeploy directory, and it is automatically extracted into the /etc/rc.local.d/autodeploy/scripts directory for execution. I assume the scripts execute in alphabetical/numeric order, so if you have a set of scripts that need to run in a particular order, you may want to prefix them with 001, for example.

[Screenshot: auto-deploy-vsphere-6-5-script-injection-3]

With all the enhancements in vSphere 6.5 for Auto Deploy and Host Profiles, plus the new Script Bundle capability which addresses some of the challenges of stateless deployments, I think we now have a compelling and complete story that customers should consider exploring further. I know I definitely will be spending more time with both of these features going forward. If you have any feedback about this new feature or any other comments on Auto Deploy or Host Profiles, feel free to leave your feedback and I will be sure to forward it over to the Product Manager.

vGhetto Automated vSphere Lab Deployment for vSphere 6.0u2 & vSphere 6.5

For those of you who follow me on Twitter, you may have seen a few tweets from me hinting at a vSphere deployment script that I have been working on. This was something I had initially built for my own lab use, but I figured it could probably benefit the larger VMware community, especially for testing and evaluation purposes. Today, I am pleased to announce the release of my vGhetto vSphere Lab Deployment (VVLD) scripts, which leverage the new PowerCLI 6.5 release; this is partly why I needed to wait until it was available before publishing.

There are literally hundreds, if not more, ways to build and configure a vSphere lab environment. Over the years, I have noticed that some of these methods can be quite complex simply due to their requirements, or incomplete as they only handle a specific portion of the deployment, or add additional constraints and complexity because they are composed of several other tools and scripts, which can make them hard to manage. One of my primary goals for the project was to be able to stand up a fully functional vSphere environment, not just a vCenter Server Appliance (VCSA) or a couple of Nested ESXi VMs, but the entire vSphere stack, fully configured and ready for use. I also wanted to develop the scripts using a single scripting language that was not only easy to use, so that others could enhance or extend them further, but that also has the broadest support for the various vSphere APIs. Lastly, as a stretch goal, I would love to be able to run the scripts across different OS platforms.

With these goals in mind, I decided to build these scripts using the latest PowerCLI 6.5 release. Not only is PowerCLI super easy to use, but I was able to immediately benefit from some of the new functionality added in the latest release, such as the native VSAN cmdlets, a subset of which I could use against prior releases of vSphere like 6.0 Update 2. Although not all functionality in PowerCLI has been ported over to PowerCLI Core, you can see where VMware is going with it, and my hope is that in the very near future, what I have created can be executed across all OS platforms, whether that is Windows, Linux or Mac OS X, and potentially even ARM-based platforms 🙂

Requirements:

  • 1 x Physical ESXi host running at least ESXi 6.0 Update 2
  • PowerCLI 6.5 R1 installed on a Windows system
  • Nested ESXi 6.0 or 6.5 Virtual Appliance
  • vCenter Server Appliance (VCSA) 6.0 or 6.5

Supported Deployments:

The scripts support deploying both a vSphere 6.0 Update 2 and a vSphere 6.5 environment, and there are two types of deployments for each:

  • Standard - All VMs are deployed directly to the physical ESXi host
  • Self Managed - Only the Nested ESXi VMs are deployed to the physical ESXi host. The VCSA is then bootstrapped onto the first Nested ESXi VM

Below is a quick diagram to help illustrate the two deployment scenarios. The pESXi in gray is what you already have deployed, which must be running at least ESXi 6.0 Update 2. The rest of the boxes are what the scripts will deploy. In the "Standard" deployment, three Nested ESXi VMs will be deployed to the pESXi host and configured with vSAN. The VCSA will also be deployed directly to the pESXi host, and the vCenter Server will be configured to add the three Nested ESXi VMs into its inventory. This is a pretty straightforward and basic deployment; it should not surprise anyone. The "Self Managed" deployment is similar; however, the biggest difference is that rather than the VCSA being deployed directly to the pESXi host as in the "Standard" deployment, it will actually be running within a Nested ESXi VM. The way this deployment scenario works is that we still deploy three Nested ESXi VMs onto the pESXi host; however, the first Nested ESXi VM is selected as a "Bootstrap" node on which we construct a single-node vSAN to then deploy the VCSA. Once the vCenter Server is set up, we then add the remaining Nested ESXi VMs into its inventory.

[Screenshot: vsphere-6-5-vghetto-lab-deployment-0]

For most users, I expect the "Standard" deployment to be more commonly used, but for other advanced workflows, such as evaluating the new vCenter Server High Availability feature in vSphere 6.5, you may want to use the "Self Managed" deployment option. Obviously, if you select the latter, provisioning will take longer as you are now "double nested"; depending on your underlying physical resources, it can take quite a bit more time to deploy as well as consume more physical resources, since your Nested ESXi VMs must now be larger to accommodate the VCSA. In both scenarios, there is no reliance on additional shared storage; both will create a three-node vSAN Cluster, which of course you can expand by simply editing the script.

Deployment Time:

Here is a table breaking down the deployment time for each scenario and vSphere version:

Deployment Type             Duration
vSphere 6.5 Standard        36 min
vSphere 6.0 Standard        26 min
vSphere 6.5 Self Managed    47 min
vSphere 6.0 Self Managed    34 min

Obviously, your mileage will vary based on your hardware configuration and the size of your deployment.

Scripts:

There are four different scripts, one for each combination of vSphere version and deployment type discussed above.

Configurations:

There are six sections towards the top of each script that you will need to edit before running it. Each section is described below and should be pretty self-explanatory.

This section describes the credentials for the physical ESXi server to which the vSphere lab environment will be deployed:

$VIServer = "himalaya.primp-industries.com"
$VIUsername = "root"
$VIPassword = "vmware123"

This section defines the number of Nested ESXi VMs to deploy along with their associated IP Address(es). The names are merely the display names of the VMs when deployed. At a minimum, you should deploy at least three hosts, but you can always add additional hosts and the script will automatically take care of provisioning them correctly, as shown in the example after the block below.

$NestedESXiHostnameToIPs = @{
    "vesxi65-1" = "172.30.0.171"
    "vesxi65-2" = "172.30.0.172"
    "vesxi65-3" = "172.30.0.173"
}
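
For example, to have the script provision a fourth host, you would simply add another entry to the hashtable (the name and IP address here are hypothetical):

$NestedESXiHostnameToIPs = @{
    "vesxi65-1" = "172.30.0.171"
    "vesxi65-2" = "172.30.0.172"
    "vesxi65-3" = "172.30.0.173"
    "vesxi65-4" = "172.30.0.174"
}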

This section describes the resources allocated to each Nested ESXi VM. Depending on the deployment type, you may need to increase the resources. For the Memory and Disk configurations, the unit is GB.

$NestedESXivCPU = "2"
$NestedESXivMEM = "6"
$NestedESXiCachingvDisk = "4"
$NestedESXiCapacityvDisk = "8"

This section describes the VCSA deployment configuration such as the VCSA deployment size, Networking & SSO configurations. If you have ever used the VCSA CLI Installer, these options should look familiar.

$VCSADeploymentSize = "tiny"
$VCSADisplayName = "vcenter65-1"
$VCSAIPAddress = "172.30.0.170"
$VCSAHostname = "vcenter65-1.primp-industries.com"
$VCSAPrefix = "24"
$VCSASSODomainName = "vghetto.local"
$VCSASSOSiteName = "virtuallyGhetto"
$VCSASSOPassword = "VMware1!"
$VCSARootPassword = "VMware1!"

This section describes the location as well as the generic networking settings applied to BOTH the Nested ESXi VM and VCSA.

$VMNetwork = "dv-access333-dev"
$VMDatastore = "himalaya-local-SATA-dc3500-2"
$VMNetmask = "255.255.255.0"
$VMGateway = "172.30.0.1"
$VMDNS = "172.30.0.100"
$VMNTP = "pool.ntp.org"
$VMPassword = "vmware123"
$VMDomain = "primp-industries.com"
$VMSyslog = "172.30.0.170"
$VMSSH = "true"
$VMVMFS = "false"

This section describes the configuration of the new vCenter Server from the deployed VCSA.

$NewVCDatacenterName = "Datacenter"
$NewVCVSANClusterName = "VSAN-Cluster"

Logging:

There is additional verbose logging that is written to a log file in your current working directory, either vsphere60-vghetto-lab-deployment.log or vsphere65-vghetto-lab-deployment.log, depending on the deployment you have selected.
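
If you want to follow the log in real time while the script runs, you can use PowerShell's Get-Content cmdlet with the -Wait flag from a second console:

Get-Content .\vsphere65-vghetto-lab-deployment.log -Wait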

Verification:

Once you have saved all your changes, you can then run the script. You will be provided with a summary of what will be deployed, and you can verify that everything is correct before proceeding with the deployment. Below is a screenshot of what this would look like:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-4]

Sample Execution:

Here is an example of running a vSphere 6.5 "Standard" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-1]

Here is an example of running a vSphere 6.5 "Self Managed" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-2]

Here is an example of running a vSphere 6.0 "Standard" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-3]

If everything is successful, you can now log in to your new vCenter Server and you should see either the following for a "Standard" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-5]

or the following for a "Self Managed" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-6]

I hope you find these scripts as useful as I do, and feel free to enhance them to perform additional functionality or extend them to cover other VMware product deployments, such as NSX or the vRealize products. Enjoy!