vCenter Server High Availability (VCHA) PowerCLI 6.5 community module

As some of you may know, I have been spending some time with the new vCenter Server High Availability (VCHA) feature that was introduced in vSphere 6.5. In fact, a few weeks back I published an article on how to enable the new VCHA feature with only a single ESXi host, which allowed me to explore some of the new VCHA APIs without needing a whole lot of resources to start with. Obviously, you would not do this in production 🙂

For those of you who are not familiar with the new VCHA feature, which is only available with the vCenter Server Appliance (VCSA), Feidhlim O'Leary has an excellent write-up that goes over the details and even provides demo videos covering both the "Basic" and "Advanced" workflows of VCHA. I highly recommend you give his blog post a read before moving forward, as this article will assume you understand how VCHA works.

In playing with the new VCHA APIs, I decided to create a few VCHA functions which I thought would be useful to publish as a PowerCLI module for others to use and try out. With that, I have published my VCHA.psm1 module on the PowerCLI Community Repo on GitHub, which includes the following functions:

Name                   Description
Get-VCHAConfig         Retrieves the VCHA configuration
Get-VCHAClusterHealth  Retrieves the VCHA Cluster health
Set-VCHAClusterMode    Sets the VCHA Cluster mode (Enable/Disable/Maintenance)
New-VCHABasicConfig    Creates a new "Basic" VCHA Cluster
Remove-VCHAConfig      Destroys a VCHA Cluster

As noted earlier, a VCHA Cluster can be deployed using either a "Basic" or an "Advanced" workflow. The VCHA PowerCLI module currently only implements the "Basic" workflow. For those interested in the Advanced workflow, you are more than welcome to extend the script, but note that it requires leveraging additional VCHA APIs beyond the ones used in the Basic workflow. Make sure you also have PowerCLI 6.5 R1 installed before trying to use the module.

Here is a screenshot of my vSphere 6.5 environment, which has a self-managed VCSA (required for the Basic workflow). Alternatively, you can have a management cluster that hosts the VCSA you wish to enable VCHA on, as long as it is joined to the same SSO Domain. For management clusters that do not share the same SSO Domain as the VCSA you want to enable VCHA on, you will have to use the Advanced workflow. You must also enable SSH on the VCSA before attempting to configure VCHA or else you will run into an error. This is something VCHA itself requires and has nothing to do with the script; you will see this behavior regardless of whether you use the UI or the API. SSH can be disabled after VCHA is set up.

[Screenshot: vcenter-high-availablity-apis-using-powercli-0]
To create a new VCHA Cluster, we will use the New-VCHABasicConfig command and you will need to specify the following:

  • Name of VCSA VM
  • Name of the VCHA Network (Virtual Portgroup or Distributed Portgroup)
  • Active VCSA HA IP Address / Netmask
  • Passive VCSA HA IP Address / Netmask
  • Witness HA IP Address / Netmask
  • Name of the Passive and Witness vSphere Datastore to use
  • vSphere Credentials to the VCSA

Here is an example command for my environment:
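A sketch of what that looks like follows; the parameter names are assumptions inferred from the input list above (check the module source on GitHub for the actual names), and the IPs, portgroup, and datastore values are placeholders from my lab:

```powershell
# Sketch of a "Basic" VCHA deployment; parameter names are assumptions
# inferred from the required inputs listed above
New-VCHABasicConfig -VCSAVMName "vcenter65-1" `
    -VCHANetwork "VCHA-Network" `
    -ActiveIP "192.168.1.60" -ActiveNetmask "255.255.255.0" `
    -PassiveIP "192.168.1.61" -PassiveNetmask "255.255.255.0" `
    -WitnessIP "192.168.1.62" -WitnessNetmask "255.255.255.0" `
    -PassiveDatastore "datastore1" -WitnessDatastore "datastore1" `
    -VCSACredential (Get-Credential)
```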

Depending on your compute and storage resources, this can take some time while the Passive and Witness VCSAs are being cloned from the Active VCSA. Once the operation has completed, you can refresh the vCenter HA tab in the vSphere Web Client and you should see that VCHA is now enabled, as shown in the screenshot below.

[Screenshot: vcenter-high-availablity-apis-using-powercli-2]
Similar to the UI, we can also retrieve the current configuration as well as the health of the VCHA Cluster.

To get the VCHA Configuration, you can use the Get-VCHAConfig command.

[Screenshot: vcenter-high-availablity-apis-using-powercli-3]
To get the VCHA Cluster health, you can use the Get-VCHAClusterHealth command:

[Screenshot: vcenter-high-availablity-apis-using-powercli-4]
The VCHA Cluster can also be placed into different "Modes" such as Enabled, Disabled, or Maintenance. To do so, you can use the Set-VCHAClusterMode command, which includes boolean flags for each of the modes. For example, if you wanted to disable the VCHA Cluster, you would run the following command:

Set-VCHAClusterMode -Disabled $true

[Screenshot: vcenter-high-availablity-apis-using-powercli-5]
To view the current VCHA Cluster Mode, you can simply use the Get-VCHAConfig command.
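For example, to place the cluster into Maintenance Mode and then verify the change, something like the following should work (the -Maintenance flag name is an assumption, mirroring the -Disabled flag shown above):

```powershell
# Place the VCHA Cluster into Maintenance Mode (flag name assumed)
Set-VCHAClusterMode -Maintenance $true

# Confirm the current VCHA Cluster mode
Get-VCHAConfig
```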

Finally, if you wish to destroy the VCHA Cluster, there is the Remove-VCHAConfig command, which supports two additional flags: one that bypasses the confirmation that you wish to destroy the VCHA Cluster (basically a safety protection), and another that controls whether or not to delete the VMs after the VCHA Cluster has been destroyed. By default, the VCHA APIs do not offer a native way of automatically deleting the VMs; if you have used the UI, you will have seen that it adds this additional functionality, which I have also implemented in the VCHA module. If either flag is omitted, the script will prompt you.

Here is an example of automatically confirming to destroy the VCHA Cluster as well as deleting the VMs afterwards:

Remove-VCHAConfig -DeleteVM $true -Confirm:$false

[Screenshot: vcenter-high-availablity-apis-using-powercli-6]

VUM UMDS Docker Container for vSphere 6.5

Early last week, I published an article on how to automate the deployment of VUM's Update Manager Download Service (UMDS) in vSphere 6.5 on an Ubuntu 14.04 distribution. The interesting backstory to that script is that it started from a Docker Container I had initially built for the VUM UMDS. I found that being able to quickly spin up a UMDS instance using a Docker Container, purely from a testing standpoint, was much easier than needing to deploy a full VM, especially as I have Docker running on my desktop machines. Obviously, there are limitations with using a Docker Container, especially if you plan to use UMDS for a longer duration and need persistence. However, for quick lab purposes, it may just fit the bill, and even with Docker Containers you can use Docker Volumes to help persist the downloaded content.
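As a rough sketch of the volume approach (the image name and the patch-store path inside the container are assumptions for illustration; adjust them to match how you built the image from the repo):

```shell
# Create a named volume so downloaded patch content survives container restarts
docker volume create umds-data

# Run the UMDS container with the patch store mounted on the named volume
# (image name "vum-umds" and store path "/var/lib/vmware-umds" are assumptions)
docker run -d --name umds -v umds-data:/var/lib/vmware-umds vum-umds
```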

You can find the Dockerfile and its respective scripts on my GitHub repo here: https://github.com/lamw/vum-umds-docker

Below are the instructions on how to use the VUM UMDS Docker Container.

Continue reading

vCommunity "shorts" on their experiences w/the VCSA Migration

The feedback from our customers on both the initial release of the vCenter Server Appliance (VCSA) Migration Tool (vSphere 6.0 Update 2m) as well as the updated version included in the release of vSphere 6.5 has been absolutely fantastic! The feedback has not only been positive in terms of customers' experience with using the Migration Tool to go from a Windows-based vCenter Server to the VCSA, but also with their experience with the VCSA itself, which has come a long way from when it was first released back with vSphere 5.0.

As with any customer feedback (the good as well as the bad), I share it directly with the Engineering/Product teams so that they know which areas customers have found useful and which areas we can still improve upon. One source of customer feedback where I see quite a bit of discussion regarding the VCSA Migration Tool is Twitter, and being an active user myself, it also makes it quite easy to collect and share this feedback internally. I even created the #migrate2vcsa hashtag a few years back to make it easy for customers to provide feedback on all things related to the VCSA Migration.

Most recently, I was looking for a better way to share as well as aggregate some of the feedback from Twitter regarding the VCSA Migration Tool. Instead of manually tracking individual tweet links in an email or document, I wanted anyone to be able to get a quick glance at the overall feedback. I started to look around and came across an interesting SaaS solution called Storify, which allows you to tell "stories" using content from various social media sources such as blog posts, YouTube, or Twitter.

Continue reading

KMIP Server Docker Container for evaluating VM Encryption in vSphere 6.5

There are a number of vSphere Security enhancements that were introduced in vSphere 6.5 including the much anticipated VM Encryption feature. To be able to use the new VM Encryption feature, you will need to first setup a Key Management Interoperability Protocol (KMIP) Server if you do not already have one and associate it with your vCenter Server. There are plenty of 3rd party vendors who provide KMIP solutions that interoperate with the new VM Encryption feature, but it usually can take some time to get access to product evaluations.

During the vSphere Beta, VMware provided a sample KMIP Server Virtual Appliance based on PyKMIP, which allowed customers to quickly try out the new VM Encryption feature. Many of you have expressed interest in getting access to this appliance for quick evaluation purposes, and the team is currently working on providing an updated version of the appliance for customers to access. In the meantime, for those who cannot wait for the appliance or would like an alternative way of quickly standing up a sample KMIP Server, I have created a tiny (163 MB) Docker Container which can easily be spun up to provide the KMIP services. I have published the Docker Container on Docker Hub at lamw/vmwkmip. The beauty of the Docker Container is that you do not need to deploy another VM; for resource-constrained lab environments or quick demo purposes, you could even run it directly on the vCenter Server Appliance (VCSA) as shown here (obviously not recommended for production use).

The Docker Container bundles the exact same version of PyKMIP that will be included in the virtual appliance; this is just another consumption mechanism. It is also very important to note that you should NOT be using this for any production workloads or any VMs that you care about. For actual production deployments of VM Encryption, you should be leveraging a production-grade KMIP Server, as PyKMIP stores the encryption keys in memory and they will be lost upon a restart. This is also true of the virtual appliance, so it really is for quick evaluation purposes only.
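As a quick sketch, spinning up the container might look like the following (assuming the service listens on the standard KMIP port, 5696; check the Docker Hub page for the actual port and options):

```shell
# Pull and run the sample KMIP server from Docker Hub,
# exposing the standard KMIP port (5696 is an assumption here)
docker run -d --name kmip -p 5696:5696 lamw/vmwkmip
```

Remember that the keys live in memory only, so removing or restarting the container discards them.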

Note: The version of PyKMIP is a modified version and VMware plans to re-contribute their changes back to the PyKMIP open-source project so others can also benefit.

Below are the instructions on using the KMIP Server Docker Container and how to configure it with your vCenter Server. I will assume you have worked with Docker before; if you have not, please have a look at the Docker online resources before continuing further, or wait for the virtual appliance to be posted.

Continue reading

VCSA 6.5 CLI Installer now supports new ovftool argument pass-through feature

I recently discovered a really cool new feature that has been added to the vCenter Server Appliance (VCSA) 6.5 CLI Installer while helping out a fellow colleague. For those of you who have not worked with the VCSA before, you can deploy it using one of two methods: 1) the Guided UI Installer or 2) the Scripted CLI Installer. The latter approach is great for automation purposes as well as for quickly spinning up a new VCSA, as the UI wizard can get tedious once you have done it a few times. The VCSA CLI Installer reads in a JSON configuration file which defines what you will be deploying, whether that is an Embedded, PSC, or VC node, and its respective configuration (networking, password, etc.).

In VCSA 6.5, the CLI Installer introduces a new option in the JSON schema called ovftool.arguments. Here is the description of what this new option provides:

Use this subsection to add arbitrary arguments to the OVF Tool
command that the script generates. You do not need to fill it out in
most circumstances.

First of all, I just want to mention that this option should not be required for general deployments, but it may come in handy for more advanced use cases. Behind the scenes, the CLI Installer takes the human-readable JSON and translates it to a set of OVF properties that are then sent down to ovftool for deployment. Not every single option is implemented in the JSON, and for good reason, as that level of detail should be abstracted away from the end user. However, there may be cases where you need to invoke a specific configuration or trigger a specific ovftool option, and this new option allows you to provide what I am calling a "pass-through" to ovftool.

Let me give you one concrete example of how this could be useful and how we can take advantage of this new capability. Since the release of VCSA 6.0, when you enable SSH and log in, you will notice that you are not placed in a regular bash shell but rather in the restricted appliancesh interface. From an automation standpoint, it was somewhat painful to change this default, as the setting is not implemented in the JSON configuration file. This meant that if you wanted the bash shell to be the default, you had to either change it manually as a post-deployment step, or bypass the native CLI Installer and manually reverse engineer the required set of OVF properties needed for the deployment, which is also not ideal.
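As a sketch, the subsection simply sits alongside the other top-level sections of your JSON template, with each key passed straight through as an ovftool argument. The OVF property name and value below for defaulting to the bash shell are assumptions for illustration, not documented settings:

```json
{
    "ovftool.arguments": {
        "prop:guestinfo.cis.appliance.root.shell": "/bin/bash"
    }
}
```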

Continue reading