
virtuallyGhetto


rvc

Reporting vSAN Object distribution across vSAN Disk Groups using PowerCLI

09/26/2017 by William Lam

Several weeks back, while cleaning up my scratch space, where I store random code snippets for the various questions I receive on a regular basis, I came across a nifty little script that I had put together for a particular customer request. I had completely forgotten about it and thought it could come in handy for folks who are curious about how their vSAN Objects are distributed across the vSAN Disk Groups within a vSAN Cluster.

RVC already provides a nice command called vsan.check_limits which gives you a breakdown of the number of components across all disks within a vSAN Cluster, as shown in the screenshot below.


However, this particular customer wanted the breakdown at the Disk Group level rather than per individual disk.

Luckily, all of this information is already exposed through the vSAN Management APIs; you simply need to aggregate it one level up. With that, I created a PowerCLI script called VSANObjectDistribution.ps1 which takes the name of a vSAN Cluster and automatically reports both the number of components distributed across the different vSAN Disk Groups and the amount of storage consumed by those components.
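For a rough idea of the aggregation involved (the actual script is linked above), here is a minimal PowerCLI sketch. It assumes an existing vCenter connection and a hypothetical cluster named "VSAN-Cluster": it pulls per-disk component counts from each host's VsanInternalSystem and rolls them up under each Disk Group's cache device. The property names passed to QueryPhysicalVsanDisks come from vSAN's internal per-disk stats, so treat this as illustrative rather than the real script:

$cluster = Get-Cluster -Name "VSAN-Cluster"
foreach ($vmhost in ($cluster | Get-VMHost)) {
    # Per-disk stats come back as a JSON string keyed by vSAN disk UUID
    $vis = Get-View $vmhost.ExtensionData.ConfigManager.VsanInternalSystem
    $physDisks = $vis.QueryPhysicalVsanDisks(@("lsom_objects_count","uuid","capacityUsed")) | ConvertFrom-Json

    # Each DiskMapping is one vSAN Disk Group: a cache device (Ssd) plus capacity devices (NonSsd)
    foreach ($dm in $vmhost.ExtensionData.Config.VsanHostConfig.StorageInfo.DiskMapping) {
        $components = 0; $used = 0
        foreach ($disk in (@($dm.Ssd) + $dm.NonSsd)) {
            $stats = $physDisks.($disk.VsanDiskInfo.VsanUuid)
            $components += $stats.lsom_objects_count
            $used += $stats.capacityUsed
        }
        [pscustomobject]@{
            Host       = $vmhost.Name
            DiskGroup  = $dm.Ssd.CanonicalName   # the cache device identifies the Disk Group
            Components = $components
            UsedGB     = [math]::Round($used / 1GB, 2)
        }
    }
}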

Here is a screenshot for a 3-Node vSAN Cluster where each ESXi host contains two vSAN Disk Groups:


Since a vSAN Disk Group does not have an identifier of its own, by default I output the Canonical Disk Name of the "Cache" device for the given vSAN Disk Group so you can map it back.

If you prefer to see the vSAN UUID of the "Cache" device instead, simply set the -ShowvSANID parameter to true, as shown in the screenshot below.


To correlate this back to a specific vSAN Disk Group, select the vSAN Disk Group of the ESXi host you are interested in. At the bottom, add the "vSAN UUID" column (highlighted in orange) and you can then compare either that ID or the Canonical Disk Name (highlighted in blue).


Filed Under: Automation, PowerCLI, VSAN Tagged With: components, PowerCLI, rvc, VSAN, vsan.check_limits

How to convert vSAN RVC commands into PowerCLI and/or other vSphere SDKs?

06/27/2017 by William Lam

A common request I see from our field and customers is to make specific vSAN Ruby vSphere Console (RVC) commands more generally available in other vSphere CLIs/SDKs, such as PowerCLI. Funny enough, many folks do not realize that this capability has existed since vSAN 6.2, specifically with the release of the vSAN Management APIs, which expose all vSAN functionality programmatically, whether you are consuming it from the vSphere Web Client, the Embedded Host Client or RVC. All of these tools were built using the vSAN Management APIs.

Although we have supported a variety of vSAN Management SDKs (language bindings) since the API's first release, PowerCLI consumption of the vSAN Management API only arrived recently with PowerCLI 6.5.1, which supports the latest release of vSAN 6.6 and goes all the way back to vSAN 6.2. Even with PowerCLI support, I continue to see vSAN RVC requests come up time after time, and it seems folks still have not made the connection that RVC simply uses the vSAN Management API, just like the UI does.

What is even more interesting is that the source code of RVC can be viewed by anyone to see how each command is implemented and which APIs are being used. RVC is built using rbvmomi (the vSphere SDK for Ruby), which provides access to both the vSphere and vSAN Management APIs. Given the number of requests I have seen, I am going to assume this is not common knowledge, and I figured the best way to show how this works is with a real-world example. I decided to take the vsan.check_limits RVC command and create an equivalent PowerCLI script that uses the vSAN Management API to provide the exact same information.

Note: You will need to know how to use the vSphere/vSAN Management APIs, and knowing a little Ruby can also help. If you are new to the vSAN Management APIs, have a look at this blog post on how to get started.

Here is a screenshot of running the vsan.check_limits RVC command:


Here is a screenshot of running the PowerCLI script that I have created:


As you would expect, the data is exactly the same since they both consume the same underlying vSAN Management API.
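To give a flavor of the PowerCLI side before we dive in: PowerCLI 6.5.1 introduced the Get-VsanView cmdlet, which hands you the vSAN Management API managed objects directly. Here is a minimal sketch (the cluster name is hypothetical; the managed object ID is the fixed identifier these objects use on vCenter):

$cluster = Get-Cluster -Name "VSAN-Cluster"
# Retrieve the vSAN cluster config system, one of the vSAN Management API endpoints
$vccs = Get-VsanView -Id "VsanVcClusterConfigSystem-vsan-cluster-config-system"
# The same underlying API that RVC and the UI consume
$config = $vccs.VsanClusterGetConfig($cluster.ExtensionData.MoRef)
$config.Enabled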

So, how do we get started?

[Read more...]


Filed Under: Automation, PowerCLI, VSAN, vSphere Web Client Tagged With: PowerCLI, ruby vsphere console, rvc, Virtual SAN, VSAN

Docker Container for the Ruby vSphere Console (RVC)

11/08/2015 by William Lam

The Ruby vSphere Console (RVC) is an extremely useful tool for vSphere Administrators and has been bundled as part of vCenter Server (Windows and the vCenter Server Appliance) since vSphere 6.0. One feature that is only available in the VCSA's version of RVC is the VSAN Observer which is used to capture and analyze performance statistics for a VSAN environment for troubleshooting purposes.

For customers who are still using the Windows version of vCenter Server and wish to leverage this tool, the general recommendation is to deploy a standalone VCSA just for the VSAN Observer capability, which does not require any additional licensing. Although it only takes 10 minutes or so to set up, having to download and deploy a full-blown VCSA just to use the VSAN Observer is definitely not ideal, especially if you are resource constrained in your environment. You may also only need the VSAN Observer for a short amount of time, but deploying it could take longer, and in a troubleshooting situation, time is of the essence.

I recently came across an internal Socialcast thread in which one of the suggestions was to build a tiny Photon OS VM that already contained RVC. Instead of building a Photon OS image specific to RVC, why not just create a Docker Container for RVC? That way, you could pull down the Docker Container onto Photon OS or any other system that has Docker installed. In fact, I had already built a Docker Container for some handy VMware Utilities, so it was simple enough to put together an RVC Docker Container as well.

The one challenge I had was that the current RVC GitHub repo does not contain the latest vSphere 6.x changes. The fix was simple: I just copied the latest RVC files from a vSphere 6.0 Update 1 deployment of the VCSA (/opt/vmware/rvc and /usr/bin/rvc) and used them to build my RVC Docker Container, which is now hosted on Docker Hub here and includes the Dockerfile in case anyone is interested in how I built it.

To use the RVC Docker Container, you just need access to a Linux Container Host, for example VMware Photon OS, which can be deployed using an ISO or OVA. For instructions on setting that up, take a look here; it should only take a minute or so. Once logged in, run the following commands to pull down the RVC Docker Container and start it:

docker pull lamw/rvc
docker run --rm -it lamw/rvc

[Screenshot: RVC running inside the Docker Container]
As seen in the screenshot above, once the Docker Container has started, you can access RVC like you normally would. Below is a quick example of logging into one of my VSAN environments and using RVC to run the VSAN Health Check command.

[Screenshot: running the VSAN Health Check command from RVC]
If you wish to run the VSAN Observer with its live web server, you will need to map a port on the Linux Container Host to the VSAN Observer port (8010 by default) when starting the RVC Docker Container. To keep things simple, I would recommend mapping 80->8010, which you can do with the following command:

docker run --rm -it -p 80:8010 lamw/rvc

Once the RVC Docker Container has started, you can start the VSAN Observer with the --run-webserver option, and if you then connect to the IP Address of your Linux Container Host using a browser, you should see the VSAN Observer Stats UI.
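For reference, the invocation from within RVC looks something like this (the inventory path is illustrative and will differ in your environment):

vsan.observer /localhost/<datacenter>/computers/VSAN-Cluster --run-webserver --force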

Hopefully this will come in handy for anyone who needs to quickly access RVC.


Filed Under: Docker, VSAN, vSphere 6.0 Tagged With: container, Docker, Photon, ruby vsphere console, rvc, vcenter server appliance, vcsa, vcva, VSAN, VSAN 6.1, vSphere 6.0 Update 1

Automating full configuration of a VSAN Stretched Cluster using RVC

10/23/2015 by William Lam

A couple of weeks back, I spent some time setting up several VSAN Stretched Clusters in my lab for testing, and although it was extremely easy to set up using the vSphere Web Client, I still prefer to stand up the environment completely automated 🙂

In looking to automate the VSAN Stretched Cluster configuration, I wanted something that would pretty much work out of the box and not require any additional downloads or setup. The obvious answer was the Ruby vSphere Console (RVC), a really awesome tool that ships as part of vCenter Server, in both the Windows vCenter Server and the VCSA.

For those of you who have not used RVC before, I highly recommend you give it a try, and you can take a look at this article to see some of its cool features and benefits. I am making use of the RVC script option, which I have written about in the past here, to perform the VSAN Stretched Cluster configuration. One of the new RVC namespaces introduced in vSphere 6.0 Update 1 is vsan.stretchedcluster.*, and the command we are specifically interested in is vsan.stretchedcluster.config_witness.
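For those curious what the command itself expects, it takes the cluster, the witness host and the preferred fault domain name as arguments; run interactively, it would look something like this (the inventory paths are illustrative):

vsan.stretchedcluster.config_witness /localhost/<dc>/computers/VSAN-Cluster /localhost/<dc>/computers/<witness-host>/hosts/<witness-host> Preferred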

There are a couple of things the script expects from the environment, so I will spend a few minutes covering the pre-reqs and assumptions before diving into the script. I assume you already have a vCenter Server deployed and configured with an empty inventory. I also assume you have already deployed at least two ESXi hosts and a VSAN Witness VM that meet all the VSAN pre-reqs, such as at least one VSAN-enabled VMkernel interface and the associated disk requirements. Below is a screenshot from the vSphere Web Client of the initial environment.

[Screenshot: initial vSphere Web Client inventory]
Next, we need to download the RVC script deploy_stretch_cluster.rb and upload it to your vCenter Server. Before you can execute the script, you will need to edit it and adjust the variables for your environment. Once you have saved the changes, you can run the RVC script with the following command:

rvc -s deploy_stretch_cluster.rb [VC-USERNAME]@localhost

Here is a screenshot of running the script on the VCSA using Nested ESXi VMs + VSAN Witness VM for the Stretched Clustering configuration:

[Screenshot: running deploy_stretch_cluster.rb on the VCSA with Nested ESXi VMs + VSAN Witness VM]
If everything executed successfully, you should see a "Task result: success", which signifies that the VSAN Witness VM was successfully added to the VSAN Stretched Cluster. If we now refresh the vSphere Web Client and look under the Fault Domains configuration of the VSAN Cluster, we see both our 2-Node VSAN Cluster and the VSAN Witness VM.

[Screenshot: Fault Domains showing the 2-Node VSAN Cluster and the VSAN Witness VM]

Hopefully this script can also benefit others who are interested in quickly standing up a VSAN Stretched Cluster, especially for evaluation or testing purposes. Enjoy getting your VSAN on!


Filed Under: Automation, ESXi, VSAN, vSphere 6.0 Tagged With: ruby vsphere console, rvc, stretched cluster, VSAN, VSAN 6.1

How to download offline VSAN HCL file for VSAN Health Check Plugin?

05/16/2015 by William Lam

One of the coolest features of the new VSAN Health Check Plugin is the automatic verification of your underlying hardware (hosts, disks, storage controllers & drivers) by checking it against VMware's VSAN HCL (Hardware Compatibility List).

[Screenshot: VSAN Health Check HCL verification]
The VSAN HCL database can either be downloaded automatically from VMware.com or uploaded manually if you do not have direct or proxied internet access. There was a question this morning on Twitter asking where the offline VSAN HCL file can be downloaded from. I was actually curious as well, and looking through Cormac Hogan's excellent VSAN Health Check documentation, I found the answer at the very end of the document 🙂

http://partnerweb.vmware.com/service/vsan/all.json

To download the offline VSAN HCL file, which is actually just a JSON file, simply load the above URL in a web browser and save the file.
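If you would rather script the download, the same file can be fetched from the command line, for example with PowerShell:

Invoke-WebRequest -Uri "http://partnerweb.vmware.com/service/vsan/all.json" -OutFile all.json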

[Screenshot: saving the VSAN HCL JSON file from the browser]
After you have downloaded the VSAN HCL file, you can either upload it using the vSphere Web Client under the "Health" section of the VSAN Health Plugin, or use the following RVC command, specifying the path to the file:

vsan.health.hcl_update_db /localhost/ -l /root/all.json

As a bonus, I also had some fun parsing the VSAN HCL JSON file. Below is a graph I was able to generate after extracting some useful information using the script found here.
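Purely to illustrate the kind of aggregation involved, here is a rough PowerShell sketch; it assumes the storage controllers live under data.controller with a vendor field, so inspect the JSON first and adjust if the layout differs, as the schema is not formally documented:

$hcl = Get-Content -Raw -Path ./all.json | ConvertFrom-Json
# Count controllers per vendor, largest first
$hcl.data.controller |
    Group-Object -Property vendor |
    Sort-Object -Property Count -Descending |
    Select-Object -Property Name, Count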

[Graph: storage controller distribution extracted from the VSAN HCL file]


Filed Under: Automation, VSAN, vSphere 6.0 Tagged With: hcl, rvc, VSAN

Why is my VSAN Component maximum showing less than 3000?

01/28/2015 by William Lam

This is a question I have seen come up on several occasions, both in the VMTN Community forums and in our internal Socialcast group. I have not seen anyone blog about this topic yet, and figured I would share the answer, since it was a question I had asked myself when I initially set up VSAN. If you are not familiar with VSAN Components, I highly recommend you check out Cormac Hogan's blog article VSAN Part 4: Understanding Objects and Components.

In vSphere 5.5 Update 1, the maximum number of supported components for VSAN is 3000, and it is a per-ESXi-host maximum. What some folks are noticing when they run the RVC vsan.check_limits command against their VSAN Cluster is that the maximum comes up much lower, as seen in the example below.

/localhost/VSAN-Datacenter/computers> vsan.check_limits VSAN-Cluster/
2015-01-28 15:34:25 +0000: Gathering stats from all hosts ...
2015-01-28 15:34:27 +0000: Gathering disks info ...
+--------------------------------+-------------------+-------------------------------------------+
| Host                           | RDT               | Disks                                     |
+--------------------------------+-------------------+-------------------------------------------+
| vesxi55-3.primp-industries.com | Assocs: 30/20000  | Components: 8/750                         |
|                                | Sockets: 17/10000 | naa.6000c2932c3f51f04e4cd395f4a11752: 8%  |
|                                | Clients: 3        | naa.6000c294f6496a99ad756857b9b06f01: 0%  |
|                                | Owners: 5         |                                           |
| vesxi55-2.primp-industries.com | Assocs: 10/20000  | Components: 8/750                         |
|                                | Sockets: 13/10000 | naa.6000c294bde5987d60398e0305978b00: 9%  |
|                                | Clients: 0        | naa.6000c292a964255b82410099360a9b27: 0%  |
|                                | Owners: 0         |                                           |
| vesxi55-1.primp-industries.com | Assocs: 24/20000  | Components: 8/750                         |
|                                | Sockets: 15/10000 | naa.6000c298b69006b820e367b5fde97cbf: 11% |
|                                | Clients: 3        | naa.6000c29db3f272cfb7fb4d08bffad3ab: 0%  |
|                                | Owners: 3         |                                           |
+--------------------------------+-------------------+-------------------------------------------+

The reason for this is the amount of physical memory available to each ESXi host. If you are running VSAN in a Nested ESXi environment, as I am in the example above, I only have 8GB of memory configured for each ESXi host. The number of supported VSAN Components will definitely differ from an actual physical host with more memory, and the nice thing about the vsan.check_limits command is that it is dynamic, based on the actual available resources. Funny enough, the majority of the questions actually came from folks running VSAN in a Nested Environment, which explains why this question keeps popping up.
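If you want to quickly confirm how much physical memory each host in the cluster has, a PowerCLI one-liner (the cluster name is hypothetical) will do:

Get-Cluster -Name "VSAN-Cluster" | Get-VMHost | Select-Object Name, @{N="MemoryGB";E={[math]::Round($_.MemoryTotalGB,1)}}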

If I run the same RVC command in an environment where VSAN is running on real hardware with a decent amount of memory, which most modern systems have these days, then the VSAN Component maximum properly displays the 3000 limit, as expected in the example below.

/localhost/datacenter01/computers> vsan.check_limits vsan-cluster01/
2015-01-28 15:28:47 +0000: Querying limit stats from all hosts ...
2015-01-28 15:28:49 +0000: Fetching VSAN disk info from esx021.vmwcs.com (may take a moment) ...
2015-01-28 15:28:49 +0000: Fetching VSAN disk info from esx022.vmwcs.com (may take a moment) ...
2015-01-28 15:28:49 +0000: Fetching VSAN disk info from esx024.vmwcs.com (may take a moment) ...
2015-01-28 15:28:51 +0000: Done fetching VSAN disk infos
+---------------------------+--------------------+---------------------------------------------------------------------------------+
| Host                      | RDT                | Disks                                                                           |
+---------------------------+--------------------+---------------------------------------------------------------------------------+
| esx021.vmwcs.com          | Assocs: 223/45000  | Components: 97/3000                                                             |
|                           | Sockets: 132/10000 | t10.ATA_____WDC_WD1002FAEX2D00Z3A0________________________WD2DWCATRC061926: 18% |
|                           | Clients: 14        | t10.ATA_____KINGSTON_SH103S3480G__________________00_50026B7226017C69____: 0%   |
|                           | Owners: 29         |                                                                                 |
| esx022.vmwcs.com          | Assocs: 252/45000  | Components: 96/3000                                                             |
|                           | Sockets: 143/10000 | t10.ATA_____KINGSTON_SH103S3480G__________________00_50026B7226017CA2____: 0%   |
|                           | Clients: 14        | t10.ATA_____WDC_WD1002FAEX2D00Z3A0________________________WD2DWCATRC050466: 19% |
|                           | Owners: 38         |                                                                                 |
| esx024.vmwcs.com          | Assocs: 197/45000  | Components: 96/3000                                                             |
|                           | Sockets: 122/10000 | t10.ATA_____ST2000DL0032D9VT166__________________________________5YD73PRP: 8%   |
|                           | Clients: 17        | t10.ATA_____KINGSTON_SH103S3480G__________________00_50026B7226017C5B____: 0%   |
|                           | Owners: 22         |                                                                                 |
+---------------------------+--------------------+---------------------------------------------------------------------------------+

The lesson here is that even though I am a huge supporter of using Nested ESXi to learn about new products and features and how they work from a functional perspective, no amount of Nested ESXi testing can ever replace testing on real hardware.


Filed Under: ESXi, VSAN, vSphere 5.5 Tagged With: components, rvc, Virtual SAN, VSAN, vsan.check_limits

