
VSAN 6.1

Docker Container for the Ruby vSphere Console (RVC)

11/08/2015 by William Lam

The Ruby vSphere Console (RVC) is an extremely useful tool for vSphere Administrators and has been bundled as part of vCenter Server (Windows and the vCenter Server Appliance) since vSphere 6.0. One feature that is only available in the VCSA's version of RVC is the VSAN Observer, which is used to capture and analyze performance statistics for a VSAN environment for troubleshooting purposes.

For customers who are still using the Windows version of vCenter Server and wish to leverage this tool, it is generally recommended that you deploy a standalone VCSA just for the VSAN Observer capability, which does not require any additional licensing. Although it only takes 10 minutes or so to set up, having to download and deploy a full-blown VCSA just to use the VSAN Observer is definitely not ideal, especially if you are resource constrained in your environment. You may also only need the VSAN Observer for a short amount of time, and in a troubleshooting situation, time is of the essence.

I recently came across an internal Socialcast thread in which one of the suggestions was to build a tiny Photon OS VM that already contained RVC. Instead of building a Photon OS image that was specific to RVC, why not just create a Docker Container for RVC? This also means you could pull down the Docker Container from Photon OS or any other system that has Docker installed. In fact, I had already built a Docker Container for some handy VMware utilities, so it was simple enough to add an RVC Docker Container.

The one challenge I had was that the current RVC GitHub repo does not contain the latest vSphere 6.x changes. The fix was simple: I just copied the latest RVC files from a vSphere 6.0 Update 1 deployment of the VCSA (/opt/vmware/rvc and /usr/bin/rvc) and used those to build my RVC Docker Container, which is now hosted on Docker Hub here and includes the Dockerfile in case anyone is interested in how I built it.

To use the RVC Docker Container, you just need access to a Linux Container Host, for example VMware Photon OS, which can be deployed using an ISO or OVA. For instructions on setting that up, please take a look here; it should only take a minute or so. Once logged in, run the following commands to pull down the RVC Docker Container and start the container:

docker pull lamw/rvc
docker run --rm -it lamw/rvc

[Screenshot: ruby-vsphere-console-docker-container-1]
As seen in the screenshot above, once the Docker Container has started, you can access RVC as you normally would. Below is a quick example of logging into one of my VSAN environments and using RVC to run the VSAN Health Check command.

[Screenshot: ruby-vsphere-console-docker-container-0]
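
For reference, a session like the one in the screenshot boils down to just a few commands. The vCenter Server address, credentials and inventory paths below are placeholders for your own environment, and the health check command name assumes the vSphere 6.0 Update 1 version of RVC that is bundled in the container:

rvc administrator@vsphere.local@vcenter.example.com
cd /vcenter.example.com/Datacenter/computers
vsan.health.health_summary VSAN-Cluster
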
If you wish to run the VSAN Observer with the live web server, you will need to map a port on the Linux Container Host to the VSAN Observer port, which defaults to 8010, when starting the RVC Docker Container. To keep things simple, I recommend mapping 80->8010, which you can do by running the following command:

docker run --rm -it -p 80:8010 lamw/rvc

Once the RVC Docker Container has started, you can then start the VSAN Observer with the --run-webserver option, and if you connect to the IP Address of your Linux Container Host using a browser, you should see the VSAN Observer Stats UI.
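
As an example, the commonly documented invocation from within RVC looks like the following, where the cluster path is a placeholder for your environment and --force is typically required because the embedded web server runs without SSL:

vsan.observer /vcenter.example.com/Datacenter/computers/VSAN-Cluster --run-webserver --force

With the 80->8010 mapping above, you would then simply browse to http://<container-host-ip>/ to reach the UI.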

Hopefully this will come in handy for anyone who needs to quickly access RVC.


Filed Under: Docker, VSAN, vSphere 6.0 Tagged With: container, Docker, Photon, ruby vsphere console, rvc, vcenter server appliance, vcsa, vcva, VSAN, VSAN 6.1, vSphere 6.0 Update 1

Automating full configuration of a VSAN Stretched Cluster using RVC

10/23/2015 by William Lam

A couple of weeks back, I spent some time setting up several VSAN Stretched Clusters in my lab for some testing, and although it was extremely easy to set up using the vSphere Web Client, I still prefer to stand up the environment completely automated 🙂

In looking to automate the VSAN Stretched Cluster configuration, I was interested in something that would pretty much work out of the box and not require any additional downloads or setup. The obvious answer was the Ruby vSphere Console (RVC), a really awesome tool that is available as part of vCenter Server, included in both the Windows vCenter Server and the VCSA.

For those of you who have not used RVC before, I highly recommend you give it a try, and you can take a look at this article to see some of the cool features and benefits. I am making use of the RVC script option, which I have written about in the past here, to perform the VSAN Stretched Cluster configuration. One of the new RVC namespaces introduced in vSphere 6.0 Update 1 is vsan.stretchedcluster.*, and the command we are specifically interested in is vsan.stretchedcluster.config_witness.
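
For reference, the command takes the cluster, the witness host and the name of the preferred fault domain as arguments. Invoked interactively inside an RVC session, it would look something like this (the bracketed values are placeholders for inventory paths and a fault domain name in your environment):

vsan.stretchedcluster.config_witness <cluster> <witness_host> <preferred_fault_domain>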

There are a couple of things the script expects from the environment, so I will spend a few minutes covering the pre-reqs and assumptions before diving into the script. I will assume you already have a vCenter Server deployed and configured with an empty inventory. I also assume you have already deployed at least two ESXi hosts and a VSAN Witness VM that meet all the VSAN pre-reqs, such as at least one VSAN-enabled VMkernel interface and the associated disk requirements. Below is a screenshot from the vSphere Web Client of the initial environment.

[Screenshot: automate-the-full-configuration-of-vsan-stretched-cluster-using-rvc-0]
Next, we will need to download the RVC script deploy_stretch_cluster.rb and upload it to your vCenter Server. Before you can execute the script, you will need to edit it and adjust the variables for your environment. Once you have saved the changes, you can run the RVC script with the following command:

rvc -s deploy_stretch_cluster.rb [VC-USERNAME]@localhost
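
For example, when running the script directly on the VCSA with the default SSO administrator account (substitute your own username), the command would be:

rvc -s deploy_stretch_cluster.rb administrator@vsphere.local@localhost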

Here is a screenshot of running the script on the VCSA using Nested ESXi VMs + VSAN Witness VM for the Stretched Clustering configuration:

[Screenshot: automate-the-full-configuration-of-vsan-stretched-cluster-using-rvc-1]
If everything executed successfully, you should see a "Task result: success", which signifies that the VSAN Witness VM was successfully added to the VSAN Stretched Cluster. If we now refresh the vSphere Web Client and look under the Fault Domains configuration of the VSAN Cluster, we can see both our 2-Node VSAN Cluster and the VSAN Witness VM.

[Screenshot: automate-the-full-configuration-of-vsan-stretched-cluster-using-rvc-2]

Hopefully this script can also benefit others who are interested in quickly standing up a VSAN Stretched Cluster, especially for evaluation or testing purposes. Enjoy getting your VSAN on!


Filed Under: Automation, ESXi, VSAN, vSphere 6.0 Tagged With: ruby vsphere console, rvc, stretched cluster, VSAN, VSAN 6.1

Erasing existing disk partitions now available in the vSphere Web Client (vSphere 6.0 Update 1)

09/29/2015 by William Lam

One of the primary challenges when trying to re-purpose existing storage devices is ensuring that all data and existing partitions have been completely removed. Oftentimes, customers end up resorting to third-party tools like GParted, which requires you to boot your server into a LiveCD before you can remove the existing partitions. This is less than ideal, especially if you need to perform this operation across multiple systems.

For customers who wish to re-purpose their existing storage devices for other uses, including VSAN, there is now a new UI option in the vSphere Web Client, introduced in vSphere 6.0 Update 1, to assist with this procedure. I had not seen anyone talk about this feature yet and figured I would share some details, as this is something I have heard customers ask for in the past. You can find this new option (an icon with a disk and an eraser) by clicking on a specific ESXi host, selecting Manage->Storage Adapters and then highlighting the specific storage device you wish to erase, as seen in the screenshot below.

[Screenshot: erase-disk-partition-in-vsphere-web-client-0]
Once the erase partition icon or action is selected, you will be presented with a summary of the existing partitions on the disk and then prompted to confirm that you wish to delete ALL partitions on the disk.

[Screenshot: erase-disk-partition-in-vsphere-web-client-1]
After the operation has successfully completed, you can now re-purpose the storage device for other use like VSAN!

For those of you who are interested from an Automation standpoint, this UI operation actually makes use of an existing vSphere API method that has been around for quite some time, called updateDiskPartitions(), which is found under the StorageSystem manager of an ESXi host. To erase all partitions, you simply pass an empty spec to the API method.
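
If you just want a scriptable equivalent without going through the vSphere API, the same end result can be achieved from the ESXi Shell with partedUtil (a different tool than the API method above, but handy for one-off cleanup). The device name below is a placeholder:

partedUtil getptbl /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX
partedUtil delete /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX 1

The first command lists the partition table, and the second deletes partition 1; repeat the delete for each partition number listed. Needless to say, this is destructive, so double-check the device path first.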

In addition, I want to quickly mention that you will also have the ability to edit and erase existing disk partitions using the ESXi Embedded Host Client Fling, which will be available in a future update. Below is a quick screenshot of what that will look like.

[Screenshot: erase-disk-partition-in-vsphere-web-client-2]


Filed Under: Automation, ESXi, VSAN, vSphere Web Client Tagged With: partition, VSAN, VSAN 6.1, vSphere 6.0 Update 1, vSphere API, vsphere web client, web client

Override default VSAN Maintenance (decommission) Mode in VSAN 6.1

09/14/2015 by William Lam

Earlier this year, an interesting use case was brought up by a customer regarding the use of vSphere Update Manager (VUM) with VSAN enabled ESXi hosts. Everything was working from a functional standpoint, but the customer wanted a way to control the default VSAN decommission mode, which specifies how the data should be moved, if at all, when a host is placed into maintenance mode. There are three supported options: Ensure Accessibility (the default), Evacuate All Data and No Action. Depending on the customer and their use case, there may be valid reasons to use one or the other. For example, if I am shutting down my entire VSAN cluster for some hardware upgrade, I probably do not want any of my data to be migrated, and the No Action setting would be acceptable. When upgrading or patching an ESXi host, some customers have expressed that they would prefer to leverage the Evacuate All Data setting, which is perfectly fine; of course, the maintenance mode operation will take longer, as all the data must be migrated off the host first.

Prior to VSAN 6.1 (included in the vSphere 6.0 Update 1 release), it was not possible to override the default VSAN maintenance mode (decommission mode) option, which defaults to Ensure Accessibility. This was a problem because if you decided you wanted to use a different option, some manual intervention was required when using VUM. The workaround was to place the ESXi host into maintenance mode, either manually or automated through the vSphere API, specifying the desired decommission mode type, before VUM took over and updated the host. Not an ideal solution, but it would work if you needed to override the default.

I thought it would be a nice feature enhancement to be able to override the default VSAN maintenance mode option, which could vary from customer to customer depending on their use case. I got in touch with one of the VSAN Engineers to discuss the use case in more detail, and he agreed that it would be useful to expose this type of capability. In VSAN 6.1, there is now a new ESXi Advanced Setting called DefaultHostDecommissionMode which allows you to specify the default VSAN maintenance mode behavior.

[Screenshot: vsan-6.1-decomission-mode-0]
Below is a table of the three available options (ensureAccessibility is the default) that can be configured:

VSAN Decommission Mode Value   Description
ensureAccessibility            VSAN data reconfiguration should be performed to ensure storage object accessibility
evacuateAllData                VSAN data evacuation should be performed such that all storage object data is removed from the host
noAction                       No special action should take place regarding VSAN data

This ESXi Advanced Setting can also be retrieved and configured using ESXCLI as well as the vSphere API.

To retrieve the current VSAN maintenance mode option using ESXCLI, run the following command:

esxcli system settings advanced list -o /VSAN/DefaultHostDecommissionMode

To configure the default VSAN maintenance mode option using ESXCLI, run the following command:

esxcli system settings advanced set -o /VSAN/DefaultHostDecommissionMode -s [DECOMMISSION_MODE]
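
For example, to change the default behavior to a full data evacuation, you would run:

esxcli system settings advanced set -o /VSAN/DefaultHostDecommissionMode -s evacuateAllData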


Filed Under: ESXCLI, ESXi, VSAN, vSphere 6.0 Tagged With: DefaultHostDecommissionMode, esxi 6.0, maintenance mode, Virtual SAN, VSAN, VSAN 6.1, vSphere 6.0 Update 1

How to deploy and run the VSAN 6.1 Witness Virtual Appliance on VMware Fusion & Workstation?

09/11/2015 by William Lam

One of the most exciting new features in VSAN 6.1 is the new Stretched Clustering capability, which also provides support for a 2-Node ROBO deployment. If you are interested in learning more about the new VSAN 6.1 capabilities, be sure to check out Duncan's blog post here as well as a video on how to configure the new VSAN Stretched Clustering here. Like many of you, I am sure you are looking forward to giving both vSphere 6.0 Update 1 and the new VSAN 6.1 capabilities a spin in your home lab or development environment. By now, you probably know how easy it is to run Nested ESXi on top of your existing vSphere environment. However, not everyone has access to a vSphere environment. The next best thing is using VMware Fusion or Workstation, which also support Nested ESXi; for many of our customers and field folks, this is a great solution, as it allows you to easily play with all the VMware goodies while on the go, especially useful if you travel frequently.

I was interested in setting up a 2-Node VSAN configuration, and as part of the setup, you also need to deploy the new VSAN Witness Virtual Appliance. I wanted to see what it would take to deploy the VSAN Witness Appliance onto VMware Fusion and Workstation, and after a bit of exploration, hair pulling and OVF hackery, I was finally able to work out the process, as shown in the steps below.

Disclaimer: This is not officially supported by VMware, please use this only for evaluation and testing purposes.

Note: If you plan on deploying the VSAN Witness Appliance directly to an ESXi host, you can use the injectOvfEnv method shown here.

Step 1 - Download the VSAN 6.1 Witness Virtual Appliance OVA from here

Step 2 - We need to make a few minor adjustments to the OVF file before we can import it into VMware Fusion/Workstation. Before we do so, we first need to convert the OVA to an OVF using ovftool.

Here is an example of running this on a Mac OS X system:

/Applications/VMware\ OVF\ Tool/ovftool VMware-VirtualSAN-Witness-6.0.0.update01-3029758.ova VMware-VirtualSAN-Witness-6.0.0.update01-3029758.ovf

[Screenshot: run-vsan-6.1-witness-virtual-appliance-on-vmware-fusion-workstation-0]
Once the conversion has completed, you should see a total of 8 files (6 VMDK files, 1 manifest file & 1 OVF file). Before moving on to the next step, you will need to delete the manifest file (the one with the .mf extension). We need to do this because the checksums will no longer be valid after we edit the OVF, and the import will fail.
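
Using the example filename from above, that would be:

rm VMware-VirtualSAN-Witness-6.0.0.update01-3029758.mf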

Step 3 - The first edit we need to make in the OVF is to change the required OVF parameter for specifying the password of the VSAN Witness Appliance from "true" to "false", since neither Fusion nor Workstation supports OVF properties.

Change the following from

<ProductSection ovf:class="vsan" ovf:required="true">

to

<ProductSection ovf:class="vsan" ovf:required="false">

Step 4 - The final edit we need to make in the OVF is to specify the deployment size (tiny, medium or large) for the Witness VM. By default, it will use the medium option. Below is a quick table of the deployment sizes; for evaluation/testing purposes, I suspect most of you will want to stick with the "tiny" configuration.

Deployment Size   vCPU   vMEM   vDISKs                    Components
tiny              2      8GB    1x8GB, 1x15GB, 1x10GB     750
medium            2      16GB   1x8GB, 1x350GB, 1x10GB    22K
large             2      32GB   1x8GB, 1x350GB, 1x10GB    45K

The way this is specified in the OVF is by adding ovf:default="true" to the specific deployment size. As mentioned, by default this is set to the "medium" deployment size (ovf:id="normal"), which looks like the following:

<Configuration ovf:default="true" ovf:id="normal">

If you wish to change this to some other deployment size, you will need to move the ovf:default="true" entry to the deployment size you wish to use. In our case, we will move it to the "tiny" size, which ends up looking like the following:

<Configuration ovf:default="true" ovf:id="tiny">
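
If you would rather script these OVF edits, something like the following sed commands should do it (Mac OS X sed syntax; the patterns assume the attribute order in your OVF matches the snippets above, so verify the resulting file before importing):

sed -i '' 's/<ProductSection ovf:class="vsan" ovf:required="true">/<ProductSection ovf:class="vsan" ovf:required="false">/' VMware-VirtualSAN-Witness-6.0.0.update01-3029758.ovf
sed -i '' 's/<Configuration ovf:default="true" ovf:id="normal">/<Configuration ovf:id="normal">/' VMware-VirtualSAN-Witness-6.0.0.update01-3029758.ovf
sed -i '' 's/<Configuration ovf:id="tiny">/<Configuration ovf:default="true" ovf:id="tiny">/' VMware-VirtualSAN-Witness-6.0.0.update01-3029758.ovf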

Step 5 - Now that we are done with the OVF surgery, we can import our VSAN Witness OVF into VMware Fusion or Workstation using the "Import" option. Ensure you select the "Customize" option after the import has completed to prevent the VM from automatically powering on. This is very important, because we still need to add one final configuration entry to the VMX file for the appliance to be properly configured.

[Screenshot: run-vsan-6.1-witness-virtual-appliance-on-vmware-fusion-workstation-1]
Step 6 - The last and final step is to add a VMX entry that will configure the root password for the VSAN Witness Appliance. This is not necessary when deploying to vCenter Server, which has OVF property support, but it is a problem when deploying to VMware Fusion, Workstation and even directly to ESXi. You will need to add the following entry to the VMX file; be sure to replace the password value (vmware123 in the example below) with one of your choice.

guestinfo.ovfEnv = "<?xml version='1.0' encoding='UTF-8'?><Environment xmlns='http://schemas.dmtf.org/ovf/environment/1' xmlns:oe='http://schemas.dmtf.org/ovf/environment/1'><PropertySection><Property oe:key='vsan.witness.root.passwd' oe:value='vmware123'/></PropertySection></Environment>"

Step 7 - You are now ready to power on your VSAN Witness, and if everything was configured correctly, you should be able to log in to the DCUI of the VSAN Witness, which, as you have probably guessed by now, is running Nested ESXi 🙂

[Screenshot: run-vsan-6.1-witness-virtual-appliance-on-vmware-fusion-workstation-2]
Here is a quick screenshot of configuring my very first 2-Node / Stretched VSAN Cluster, which is comprised of 2 Nested ESXi VMs running on my Mac Mini (which runs vSphere 6.0 Update 1) and my VSAN Witness Appliance running on my iMac using VMware Fusion. If you need additional instructions on configuring a 2-Node / Stretched VSAN Cluster, be sure to check out the how-to video here. I plan to share this feedback internally with Engineering, and hopefully deploying the VSAN Witness on VMware Fusion and Workstation will be much easier in the future.

[Screenshot: run-vsan-6.1-witness-virtual-appliance-on-vmware-fusion-workstation-31]


Filed Under: ESXi, Fusion, Home Lab, OVFTool, VSAN, Workstation Tagged With: guestinfo.ovfEnv, ova, ovf, ovftool, Virtual SAN, VSAN, VSAN 6.1, vSphere 6.0 Update 1, witness
