As some of you may know, I have been spending some time with the new vCenter Server High Availability (VCHA) feature that was introduced in vSphere 6.5. In fact, a few weeks back I published an article on how to enable VCHA with only a single ESXi host, which allowed me to explore some of the new VCHA APIs without needing a whole lot of resources to start with. Obviously, you would not do this in production 🙂
For those of you who are not familiar with the new VCHA feature which is only available with the vCenter Server Appliance (VCSA), Feidhlim O'Leary has an excellent write up that goes over the details and even provides demo videos covering both the "Basic" and "Advanced" workflows of VCHA. I highly recommend you give his blog post a read before moving forward as this article will assume you understand how VCHA works.
In playing with the new VCHA APIs, I decided to create a few VCHA functions which I thought would be useful to have as a PowerCLI module for others to use and try out. With that, I have published my VCHA.psm1 module on the PowerCLI Community Repo on GitHub, which includes the following functions:
|Function|Description|
|---|---|
|Get-VCHAConfig|Retrieves the VCHA Configuration|
|Get-VCHAClusterHealth|Retrieves the VCHA Cluster Health|
|Set-VCHAClusterMode|Sets the VCHA Cluster Mode (Enable/Disable/Maintenance)|
|New-VCHABasicConfig|Creates a new "Basic" VCHA Cluster|
|Remove-VCHAConfig|Destroys a VCHA Cluster|
As noted earlier, a VCHA Cluster can be deployed using either a "Basic" or an "Advanced" workflow. The VCHA PowerCLI module currently implements only the "Basic" workflow. For those interested in the Advanced workflow, you are more than welcome to extend the script, but note that it requires leveraging additional VCHA APIs beyond the ones used in the Basic workflow. Make sure you also have PowerCLI 6.5 R1 installed before trying to use the module.
Here is a screenshot of my vSphere 6.5 environment, which has a self-managed VCSA (required for the Basic workflow). Alternatively, you can have a management cluster that hosts the VCSA you wish to enable VCHA on, as long as it is joined to the same SSO Domain. For management clusters that do not share the same SSO Domain as the VCSA you want to enable VCHA on, you will have to use the Advanced workflow. You must also enable SSH on the VCSA before attempting to configure VCHA, or else you will run into an error. This is something VCHA itself requires and has nothing to do with the script; you will see this behavior regardless of whether you use the UI or the API. SSH can be disabled after VCHA is set up.

To use the New-VCHABasicConfig function, you will need the following pieces of information:
- Name of VCSA VM
- Name of the VCHA Network (Virtual Portgroup or Distributed Portgroup)
- Active VCSA HA IP Address / Netmask
- Passive VCSA HA IP Address / Netmask
- Witness HA IP Address / Netmask
- Name of the Passive and Witness vSphere Datastore to use
- vSphere Credentials to the VCSA
Here is an example command for my environment:
New-VCHABasicConfig -VCSAVM "vcenter65-1" -HANetwork "DVPG-VCHA-Network" `
-ActiveHAIp 192.168.1.70 `
-ActiveNetmask 255.255.255.0 `
-PassiveHAIp 192.168.1.71 `
-PassiveNetmask 255.255.255.0 `
-WitnessHAIp 192.168.1.72 `
-WitnessNetmask 255.255.255.0 `
-PassiveDatastore "vsanDatastore" `
-WitnessDatastore "vsanDatastore" `
-VCUsername "firstname.lastname@example.org"
Depending on your compute and storage resources, this can take some time while the Passive and Witness VCSAs are being cloned from the Active VCSA. Once the operation has completed, you can refresh the vCenter HA tab in the vSphere Web Client and you should see that VCHA is now enabled, as shown in the screenshot below.
To get the VCHA Configuration, you can use the Get-VCHAConfig command.
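For example, after importing the module and connecting to the VCSA, you can retrieve both the configuration and the cluster health. The module path, hostname, and credentials below are placeholders for my environment; the parameterless calls assume the functions operate against the currently connected vCenter Server, as the module functions above suggest:

```powershell
# Import the VCHA module (adjust the path to wherever you saved VCHA.psm1)
Import-Module .\VCHA.psm1

# Connect to the vCenter Server Appliance (hostname/credentials are examples)
Connect-VIServer -Server vcenter65-1 -User "administrator@vsphere.local"

# Retrieve the current VCHA configuration
Get-VCHAConfig

# Retrieve the health of the VCHA Cluster
Get-VCHAClusterHealth
```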
The VCHA Cluster can also be placed into different "Modes": Enabled, Disabled, or Maintenance. To do so, you can use the Set-VCHAClusterMode function, which includes a boolean flag for each of the modes. For example, if you wanted to disable the VCHA Cluster, you would run the following command:
Set-VCHAClusterMode -Disabled $true
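Similarly, to place the cluster into Maintenance mode or to re-enable it afterwards, you would flip the corresponding boolean flag. The flag names below are an assumption based on the one-flag-per-mode pattern described above:

```powershell
# Place the VCHA Cluster into Maintenance mode (flag name assumed)
Set-VCHAClusterMode -Maintenance $true

# Re-enable the VCHA Cluster (flag name assumed)
Set-VCHAClusterMode -Enabled $true
```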
Finally, if you wish to destroy the VCHA Cluster, there is the Remove-VCHAConfig command, which supports two additional flags: one that bypasses the confirmation prompt (basically a safety protection), and another that controls whether or not to delete the VMs after the VCHA Cluster has been destroyed. The VCHA APIs do not natively offer a way of automatically deleting the VMs; if you have used the UI, you will see that it adds this additional functionality, which I have also done in the VCHA module. If either flag is omitted, the script will prompt you.
Here is an example of automatically confirming to destroy the VCHA Cluster as well as deleting the VMs afterwards:
Remove-VCHAConfig -DeleteVM $true -Confirm:$false