
virtuallyGhetto



Using the vSphere API to remotely generate ESXi performance support bundles

06/14/2016 by William Lam 2 Comments

This is a follow-up to my previous article on how to run a script using a vCenter Alarm action in the vCenter Server Appliance (VCSA). What I had demonstrated was a pyvmomi (vSphere SDK for Python) script that is triggered automatically to generate a VMware Support Bundle for all ESXi hosts in a given vSphere Cluster. The one caveat I mentioned in that post was that the solution differed slightly from the original request, which was to create an ESXi performance support bundle. The vSphere API only supports creating a generic VMware Support Bundle, which may not be as useful if you are only interested in collecting more granular performance stats for troubleshooting purposes.

After publishing the article, I thought about the problem a bit more and realized there is still a way to solve the original request. Before going into the solution, I wanted to quickly cover how you can generate an ESXi performance support bundle. This can be done directly in the ESXi Shell using something like the following:

vm-support -p -d 60 -i 5 -w /vmfs/volumes/datastore1

or you can use a neat little trick which I blogged about here back in 2011, where you simply open a web browser and run the following:

https://esxi-1.primp-industries.com/cgi-bin/vm-support.cgi?performance=true&interval=5&duration=60

Obviously, the first option is not ideal, as you would need to SSH (generally disabled as a security best practice) into each and every ESXi host, manually run the command, and then copy the support bundle off each system. The second option still requires visiting each and every ESXi host, but it does not require ESXi Shell or SSH access. Neither is ideal from an automation standpoint, especially if these ESXi hosts are already being managed by vCenter Server.
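For reference, the browser URL above can also be constructed programmatically. Below is a minimal sketch (the helper name and defaults are my own, mirroring the vm-support -p -d 60 -i 5 example earlier):

```python
# Hypothetical helper: builds the vm-support.cgi URL for a performance
# support bundle, mirroring "vm-support -p -d 60 -i 5" on the host itself.
def perf_bundle_url(host, duration=60, interval=5):
    return ("https://%s/cgi-bin/vm-support.cgi"
            "?performance=true&interval=%d&duration=%d"
            % (host, interval, duration))
```

For example, `perf_bundle_url("esxi-1.primp-industries.com")` produces the same URL shown above.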

However, the second option is what gave me the lightbulb idea! I recalled that a couple of years back I had blogged about a way to efficiently transfer files to a vSphere Datastore using the vSphere API. That solution leveraged a neat little vSphere API method called AcquireGenericServiceTicket(), which is part of the vCenter Server sessionManager. Using this method, we can request a ticket for a specific URL, granting a one-time HTTP request directly against an ESXi host. This means I can connect to vCenter Server using the vSphere API, retrieve all ESXi hosts in a given vSphere Cluster, request a one-time ticket per host to remotely generate an ESXi performance support bundle, and then download it locally to the VCSA (or any other place where you can run the pyvmomi sample).
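The per-host download step can be sketched roughly as follows. This is a simplified illustration rather than the downloadable script itself: it assumes a ticket id has already been obtained from AcquireGenericServiceTicket(), and the vmware_cgi_ticket cookie name follows the approach from my earlier file-transfer post.

```python
import ssl
import urllib.request

def ticket_cookie(ticket_id):
    # The one-time ticket id is presented to the ESXi host as a cookie.
    return "vmware_cgi_ticket=%s" % ticket_id

def download_bundle(url, ticket_id, dest):
    # Stream the generated support bundle to a local file. Certificate
    # verification is disabled here for lab use only.
    req = urllib.request.Request(url, headers={"Cookie": ticket_cookie(ticket_id)})
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp, open(dest, "wb") as f:
        while True:
            chunk = resp.read(65536)
            if not chunk:
                break
            f.write(chunk)
```

Keep in mind the bundle generation itself can take several minutes per host before the download completes.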

Download the pyvmomi script: generate_esxi_perf_bundle_from_vsphere_cluster.py

Here is the sample log output when triggering this script from vCenter Alarm in the VCSA:

2016-06-08 19:29:42;INFO;Cluster passed from VC Alarm: Non-VSAN-Cluster
2016-06-08 19:29:42;INFO;Creating directory /storage/log/esxi-support-logs to store support bundle
2016-06-08 19:29:42;INFO;Requesting Session Ticket for 192.168.1.190
2016-06-08 19:29:42;INFO;Waiting for Performance support bundle to be generated on 192.168.1.190 to /storage/log/esxi-support-logs/vmsupport-192.168.1.190.tgz
2016-06-08 19:33:19;INFO;Requesting Session Ticket for 192.168.1.191
2016-06-08 19:33:19;INFO;Waiting for Performance support bundle to be generated on 192.168.1.191 to /storage/log/esxi-support-logs/vmsupport-192.168.1.191.tgz

I have also created a couple more scripts exercising some additional use cases that I think customers may also find useful. Stay tuned for those additional articles later this week.

UPDATE (06/16/16) - There was another question internally asking whether other types of ESXi Support Bundles could also be generated using this method, and the answer is yes. You simply need to specify the types of manifests you would like to collect, such as HungVM, for example.

To list the available manifests and their respective IDs, you can perform this operation once manually by opening a browser and specifying the following URL:

https://192.168.1.149/cgi-bin/vm-support.cgi?listmanifests=true

To list the available Groups and their respective IDs, you can specify the following URL:

https://192.168.1.149/cgi-bin/vm-support.cgi?listgroups=true

Here is an example URL constructed using some of these parameters:

https://192.168.1.149/cgi-bin/vm-support.cgi?manifests=HungVM:Coredump_VM%20HungVM:Suspend_VM&groups=Fault%20Hardware%20Logs%20Network%20Storage%20System%20Userworld%20Virtual&vm=FULL_PATH_TO_VM_VMX
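Constructing these URLs by hand gets error-prone once multiple manifests and groups are involved, since the space-separated lists must be percent-encoded. Here is a hypothetical helper (the function name and signature are my own) that produces URLs in the same shape as the example above:

```python
from urllib.parse import quote

def bundle_url(host, manifests=(), groups=()):
    # Manifest ids (e.g. HungVM:Coredump_VM) keep their colons; list
    # entries are joined with spaces, which encode to %20.
    params = []
    if manifests:
        params.append("manifests=" + quote(" ".join(manifests), safe=":"))
    if groups:
        params.append("groups=" + quote(" ".join(groups)))
    return "https://%s/cgi-bin/vm-support.cgi?%s" % (host, "&".join(params))
```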

The following VMware KB 2005715 may also be useful, as it provides additional examples of using these parameters.


Filed Under: Automation, ESXi, vSphere Tagged With: esxi, performance, python, pyVmomi, support bundle, vm-support, vm-support.cgi, vSphere API

Quick Tip – iPerf now available on ESXi

03/15/2016 by William Lam 14 Comments

The other day I was looking to get a baseline of the built-in ethernet adapter in my recently upgraded vSphere home lab running on the Intel NUC. I decided to use iPerf for my testing, a commonly used command-line tool that helps measure network performance. I also found a couple of helpful articles on this topic from well-known VMware community members Erik Bussink and Raphael Schitz. Erik's article here outlines how to run the iPerf client/server using a pair of virtual machines running on top of two ESXi hosts. Although the overhead of the VMs should be negligible, I was looking for a way to benchmark the ESXi hosts directly. Raphael's article here looked promising, as he found a way to create a custom iPerf VIB which can run directly on ESXi.

I was about to download the custom VIB when I remembered that the VSAN Health Check plugin in the vSphere Web Client also provides proactive network performance tests that can be run in your environment. I was curious which tool was being leveraged for this capability, and after a quick search on the ESXi filesystem I found that it was actually iPerf. The iPerf binary is located at /usr/lib/vmware/vsan/bin/iperf and appears to have been bundled with ESXi starting with the vSphere 6.0 release, from what I can tell.

UPDATE (10/02/18) - It looks like iPerf3 is now back in both ESXi 6.5 Update 2 as well as the upcoming ESXi 6.7 Update 1 release. You can find the iPerf binary under /usr/lib/vmware/vsan/bin/iperf3.

One interesting thing I found when trying to run iPerf in "server" mode is that it would always fail with the following error:

bind failed: Operation not permitted

The only way I found to work around this issue was to copy the iPerf binary to another file, such as iperf.copy, which then allowed me to start iPerf in "server" mode. You can do so by running the following command in the ESXi Shell:

cp /usr/lib/vmware/vsan/bin/iperf /usr/lib/vmware/vsan/bin/iperf.copy

Running iPerf in "client" mode works as expected; the copy is only needed when running in "server" mode. To perform the test, I used both my Apple Mac Mini and the Intel NUC, which was running ESXi with no VMs.

I ran the iPerf "Server" on the Intel NUC by running the following command:

/usr/lib/vmware/vsan/bin/iperf.copy -s -B [IPERF-SERVER-IP]

Note: If you have multiple network interfaces, you can specify which interface to use with the -B option and passing the IP Address of that interface.

I ran the iPerf "Client" on the Mac Mini by running the following command and specifying the address of the iPerf "Server":

/usr/lib/vmware/vsan/bin/iperf -m -t 300 -c [IPERF-SERVER] -fm

I also disabled the ESXi firewall before running the test, which you can do by running the following command:

esxcli network firewall set --enabled false

Here is a screenshot of my iPerf test running between the Mac Mini and the Intel NUC. Hopefully this will come in handy for anyone needing to run basic network performance tests between two ESXi hosts without having to set up additional VMs.
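If you want to capture the result programmatically rather than eyeballing the client output, the throughput figure can be pulled from the summary line. A small sketch, assuming the typical "-fm" (Mbits/sec) client output format:

```python
import re

# Matches the throughput figure in an iPerf client summary line, e.g.
# "[  3]  0.0-300.0 sec  33413 MBytes  934 Mbits/sec"
SUMMARY = re.compile(r"([\d.]+)\s+Mbits/sec")

def parse_mbits(line):
    # Return the Mbits/sec value from a summary line, or None if absent.
    m = SUMMARY.search(line)
    return float(m.group(1)) if m else None
```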



Filed Under: ESXi, vSphere 6.0 Tagged With: esxi, iperf, network, performance, vSphere 6.0 Update 1, vSphere 6.0 Update 2

Simulating vSphere Performance Metrics using VCSIM

04/01/2014 by William Lam 7 Comments

A really useful tool that I leverage from time to time is VCSIM (vCenter Simulator), which can be found within the VCSA (vCenter Server Appliance). VCSIM allows you to easily "simulate" a custom vSphere inventory that can be used for a variety of use cases, including custom reports using the vSphere API/CLIs. There are additional capabilities that VCSIM provides, and one that I have not explored much is the simulation of vSphere performance metrics. Having received a couple of inquiries regarding VCSIM and performance metrics, I figured this would be a good opportunity to explore the feature in a bit more detail.

Disclaimer: This is not officially supported by VMware, use at your own risk.

Before getting started, you should familiarize yourself with VCSIM by reading these two articles here and here.

By default, VCSIM does not generate any performance metrics. If you wish to include them, you will need to ensure the following two lines are added to your VCSIM configuration file:

<perfCounterInfo>vcsim/model/PerfCounterInfo.xml</perfCounterInfo>
<metricMetadata>vcsim/model/metricMetadata.cfg</metricMetadata>

The first file contains the performance metric definitions that are supported, and the second file specifies which metrics will be simulated. To demonstrate the performance metric capabilities of VCSIM, I will be using the following configuration files, which you can simply copy/paste:

vghetto-perf-vcsim.cfg - This will be our VCSIM configuration file

<simulator>
  <enabled>true</enabled>
  <initInventory>vcsim/model/vghetto-perf-inventory.cfg</initInventory>
  <hostConfigLocation>vcsim/model/hostConfig</hostConfigLocation>
  <perfCounterInfo>vcsim/model/PerfCounterInfo.xml</perfCounterInfo>
  <metricMetadata>vcsim/model/metricMetadata.cfg</metricMetadata>
  <datastore>
     <global>1</global>
     <cluster>2</cluster>
     <local>1</local>
     <prefix>vghetto</prefix>
  </datastore>
</simulator>

vghetto-perf-inventory.cfg - This will be our VCSIM inventory configuration file

<config>
  <inventory>
    <dc>1</dc>
    <host-per-dc>0</host-per-dc>
    <vm-per-host>0</vm-per-host>
    <poweron-vm-per-host>0</poweron-vm-per-host>
    <cluster-per-dc>1</cluster-per-dc>
    <host-per-cluster>2</host-per-cluster>
    <rp-per-cluster>1</rp-per-cluster>
    <vm-per-rp>3</vm-per-rp>
    <poweron-vm-per-rp>3</poweron-vm-per-rp>
    <dv-portgroups>0</dv-portgroups>
  </inventory>
  <prefix>vGhetto-</prefix>
  <worker-threads>1</worker-threads>
  <synchronous>true</synchronous>
</config>

To ensure everything is working, we can start VCSIM by issuing the following command on the VCSA:

vmware-vcsim-start /etc/vmware-vpx/vcsim/model/vghetto-perf-vcsim.cfg

If everything is working, you should be able to log in using the vSphere Web/C# Client and view the small inventory we just created. If you can see the inventory, go ahead and stop VCSIM by issuing the following command:

vmware-vcsim-stop false

Note: To ensure your inventory is not destroyed each time, you should pass in the 'false' flag; otherwise it will automatically be deleted. This is useful if you want to preserve your inventory across subsequent restarts.

For all performance metric configurations, you will only need to edit the metricMetadata.cfg file. VCSIM supports four different types of stats models:

  1. Constant
  2. Linear
  3. Square
  4. Triangle

We will take a look at stats models 2-4, since Constant is not all that interesting 🙂 For our examples, we will look at each stats model using the datastore.datastoreIops.average metric.

Linear Stats Model:


<Metric id="datastore.datastoreIops.average">
  <Instance id="Default">
    <StatsModel>
      <Type>Linear</Type>
      <Values>0,20,10,30,0</Values>
      <Periods>600,300,600,900</Periods>
    </StatsModel>
  </Instance>
  <Instance id="HostDatastore"/>
</Metric>

Square Stats Model:


<Metric id="datastore.datastoreIops.average">
  <Instance id="Default">
    <StatsModel>
      <Type>Square</Type>
      <Values>0,10,0,20,0</Values>
      <Periods>300,300,600,600,300</Periods>
    </StatsModel>
  </Instance>
  <Instance id="HostDatastore"/>
</Metric>

Triangle Stats Model:


<Metric id="datastore.datastoreIops.average">
  <Instance id="Default">
    <StatsModel>
      <Type>Triangle</Type>
      <Values>0,20,10,30,0</Values>
      <Periods>600,300,600,900</Periods>
    </StatsModel>
  </Instance>
  <Instance id="HostDatastore"/>
</Metric>
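To build some intuition for how Values and Periods shape the simulated series: each period describes the transition from one value to the next. The sketch below is my own interpretation of the Triangle model's piecewise-linear ramps (not VCSIM source code):

```python
def sample_stats_model(values, periods, t):
    # Interpolate a Triangle-style stats model: ramp from values[i] to
    # values[i+1] over periods[i] seconds. Past the last period, hold
    # the final value.
    start = 0
    for i, period in enumerate(periods):
        end = start + period
        if t <= end:
            frac = (t - start) / float(period)
            return values[i] + frac * (values[i + 1] - values[i])
        start = end
    return values[-1]
```

For the Triangle example above, sampling at t=300 gives 10.0, halfway up the first 0-to-20 ramp over the initial 600-second period.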

I thought it was pretty neat that the VCSIM developers included a couple of stats models that can be leveraged right out of the box! As you can see, it is pretty easy to enable various performance metrics simply by identifying the metric(s) you are interested in, specifying the stats model, and then starting up VCSIM. The other neat question I have been asked before is whether VCSIM can simulate performance metrics for specific vSphere entities. I originally thought the answer was no, until I played with the performance metric simulator a bit more and realized there is a List property you can use to specify the objects for which you want data to be displayed.

Here is an example of the same performance metric we have been looking at, but enabled for only two datastores:


<Metric id="datastore.datastoreIops.average">
  <Instance id="List">
    <List>vghettoDS_vGhetto-DC0_C0_0,vghettoDS_vGhetto-DC0_C0_1</List>
    <StatsModel>
      <Type>Triangle</Type>
      <Values>0,20,10,30,0</Values>
      <Periods>600,300,600,900</Periods>
    </StatsModel>
  </Instance>
  <Instance id="HostDatastore"/>
</Metric>

Note: One thing I noticed while playing with the performance metric simulator is that sometimes the object in the UI is blank when using the vSphere Web Client. With the vSphere C# Client it displays fine, and the same is true when using the vSphere API to query these metrics.

Hopefully this was a good overview of how the VCSIM performance metrics feature works. I know a couple of internal folks have used VCSIM in conjunction with vCenter Operations, and I am curious to see what other neat use cases exist for the performance metrics feature. Also, if you have created a really cool metricMetadata.cfg configuration file, feel free to share it with the rest of the community!


Filed Under: Not Supported, VCSA, vSphere Tagged With: performance, vcsa, vcsim, vcva

Hidden vCenter Debugging Performance Metrics

08/04/2011 by William Lam 1 Comment

While extracting the new performance metrics in vSphere 5 for a blog post, I came across a metric type I had never noticed before: vcDebugInfo. These performance metrics seem to deal with some of the internal performance counters in vCenter, such as lock statistics, MoRef (Managed Object Reference) counts, etc. The majority of these metrics are available in either collection level 1 or 4, the latter being the highest level, containing all statistics. VMware's best practice is to only enable collection level 1 or 2; levels 3 and 4 should only be enabled under VMware supervision for debugging purposes, which is most likely when some of these stats come in handy. So be warned: these are probably not supported by VMware.

I found it interesting that these metrics are hidden from the vSphere Client UI, yet they can easily be extracted through the vSphere API. It is just amazing what goodies you can find when going through the APIs 🙂

Metric Stat Level Description
vcDebugInfo
maximum.millisecond.activationlatencystats 4 The latency of an activation operation in vCenter
minimum.millisecond.activationlatencystats 4 The latency of an activation operation in vCenter
summation.millisecond.activationlatencystats 1 The latency of an activation operation in vCenter
maximum.number.activationstats 4 Activation operations in vCenter
minimum.number.activationstats 4 Activation operations in vCenter
summation.number.activationstats 1 Activation operations in vCenter
maximum.millisecond.hostsynclatencystats 4 The latency of a host sync operation in vCenter
minimum.millisecond.hostsynclatencystats 4 The latency of a host sync operation in vCenter
summation.millisecond.hostsynclatencystats 1 The latency of a host sync operation in vCenter
maximum.number.hostsyncstats 4 The number of host sync operations in vCenter
minimum.number.hostsyncstats 4 The number of host sync operations in vCenter
summation.number.hostsyncstats 1 The number of host sync operations in vCenter
maximum.number.inventorystats 4 vCenter inventory statistics
minimum.number.inventorystats 4 vCenter inventory statistics
summation.number.inventorystats 1 vCenter inventory statistics
maximum.number.lockstats 4 vCenter locking statistics
minimum.number.lockstats 4 vCenter locking statistics
summation.number.lockstats 1 vCenter locking statistics
maximum.number.lrostats 4 vCenter LRO statistics
minimum.number.lrostats 4 vCenter LRO statistics
summation.number.lrostats 1 vCenter LRO statistics
maximum.number.miscstats 4 Miscellaneous statistics
minimum.number.miscstats 4 Miscellaneous statistics
summation.number.miscstats 1 Miscellaneous statistics
maximum.number.morefregstats 4 Managed object reference counts in vCenter
minimum.number.morefregstats 4 Managed object reference counts in vCenter
summation.number.morefregstats 1 Managed object reference counts in vCenter
maximum.number.scoreboard 4 Object counts in vCenter
minimum.number.scoreboard 4 Object counts in vCenter
summation.number.scoreboard 3 Object counts in vCenter
maximum.number.sessionstats 4 The statistics of client sessions connected to vCenter
minimum.number.sessionstats 4 The statistics of client sessions connected to vCenter
summation.number.sessionstats 1 The statistics of client sessions connected to vCenter
maximum.number.systemstats 4 The statistics of vCenter as a running system such as thread statistics and heap statistics
minimum.number.systemstats 4 The statistics of vCenter as a running system such as thread statistics and heap statistics
summation.number.systemstats 1 The statistics of vCenter as a running system such as thread statistics and heap statistics
maximum.number.vcservicestats 4 vCenter service statistics such as events, alarms, and tasks
minimum.number.vcservicestats 4 vCenter service statistics such as events, alarms, and tasks
summation.number.vcservicestats 1 vCenter service statistics such as events, alarms, and tasks

Filed Under: Uncategorized Tagged With: api, performance, vSphere 4, vSphere 4.1, vSphere 5

New Performance Metrics In vSphere 5

08/03/2011 by William Lam 2 Comments

I recently had to look at some performance metrics in my vSphere 5 lab and was curious whether VMware had documented all the new performance metrics. I headed over to the vSphere 5 API reference guide and, to my surprise, it was exactly the same as the vSphere 4 API reference guide. Yet looking at the vSphere Client, it was obvious there were new performance metrics for features such as Storage DRS that did not exist in vSphere 4.

Using a method similar to a previous post about Power performance metrics, I extracted all the new metrics in vSphere 5 and created the following table, which includes the metric name (rollup, units and internal name), collection level, and description. There are a total of 129 new performance metrics, including those for Storage DRS and HBR (Host Based Replication).

Hopefully this will be fixed in the API documentation when vSphere 5 GAs, as I recall providing the same feedback during the beta program.

Metric Stat Level Description
cpu
average.MHz.capacity.provisioned 3 Capacity in MHz of the physical CPU cores
average.MHz.capacity.entitlement 1 CPU resources devoted by the ESX scheduler to virtual machines and resource pools
average.MHz.capacity.usage 3 CPU usage in MHz during the interval
average.MHz.capacity.demand 2 The amount of CPU resources VMs on this host would use if there were no CPU contention or CPU limit
average.percent.capacity.contention 2 Percent of time the VMs on this host are unable to run because they are contending for access to the physical CPU(s)
average.number.corecount.provisioned 2 The number of physical cores provisioned to the entity
average.number.corecount.usage 2 The number of virtual processors running on the host
average.percent.corecount.contention 1 Time the VM is ready to run, but is unable to run due to co-scheduling constraints
average.MHz.capacity.demand 2 The amount of CPU resources VMs on this host would use if there were no CPU contention or CPU limit
average.percent.latency 2 Percent of time the VM is unable to run because it is contending for access to the physical CPU(s)
latest.MHz.entitlement 2 CPU resources devoted by the ESX scheduler
average.MHz.demand 2 The amount of CPU resources a VM would use if there were no CPU contention or CPU limit
summation.millisecond.costop 2 Time the VM is ready to run, but is unable to due to co-scheduling constraints
summation.millisecond.maxlimited 2 Time the VM is ready to run, but is not run due to maxing out its CPU limit setting
summation.millisecond.overlap 3 Time the VM was interrupted to perform system services on behalf of that VM or other VMs
summation.millisecond.run 2 Time the VM is scheduled to run
datastore
latest.millisecond.maxTotalLatency 3 Highest latency value across all datastores used by the host
average.KBps.throughput.usage 2 usage
average.millisecond.throughput.contention 2 contention
summation.number.busResets 2 busResets
summation.number.commandsAborted 2 commandsAborted
latest.number.datastoreReadBytes 2 Storage DRS datastore bytes read
latest.number.datastoreWriteBytes 2 Storage DRS datastore bytes written
latest.number.datastoreReadIops 1 Storage DRS datastore read I/O rate
latest.number.datastoreWriteIops 1 Storage DRS datastore write I/O rate
latest.number.datastoreReadOIO 1 Storage DRS datastore outstanding read requests
latest.number.datastoreWriteOIO 1 Storage DRS datastore outstanding write requests
latest.number.datastoreNormalReadLatency 2 Storage DRS datastore normalized read latency
latest.number.datastoreNormalWriteLatency 2 Storage DRS datastore normalized write latency
latest.number.datastoreReadLoadMetric 4 Storage DRS datastore metric for read workload model
latest.number.datastoreWriteLoadMetric 4 Storage DRS datastore metric for write workload model
latest.number.datastoreMaxQueueDepth 1 Storage I/O Control datastore maximum queue depth
disk
average.KBps.throughput.usage 3 Aggregated disk I/O rate
average.millisecond.throughput.contention 3 Average amount of time for an I/O operation to complete
summation.number.scsiReservationConflicts 2 Number of SCSI reservation conflicts for the LUN during the collection interval
average.percent.scsiReservationCnflctsPct 2 Number of SCSI reservation conflicts for the LUN as a percent of total commands during the collection interval
average.kiloBytes.capacity.provisioned 3 provisioned
average.kiloBytes.capacity.usage 2 usage
average.percent.capacity.contention 1 contention
hbr
average.number.hbrNumVms 4 Current Number of Replicated VMs
average.KBps.hbrNetRx 4 Average amount of data received per second
average.KBps.hbrNetTx 4 Average amount of data transmitted per second
managementAgent
average.MHz.cpuUsage 3 Amount of Service Console CPU usage
mem
average.kiloBytes.capacity.provisioned 3 Total amount of memory configured for the VM
average.kiloBytes.capacity.entitlement 1 Amount of host physical memory the VM is entitled to, as determined by the ESX scheduler
average.kiloBytes.capacity.usable 2 Amount of physical memory available for use by virtual machines on this host
average.kiloBytes.capacity.usage 1 Amount of physical memory actively used
average.percent.capacity.contention 2 Percentage of time the VM is waiting to access swapped, compressed, or ballooned memory
average.kiloBytes.capacity.usage.vm 2 vm
average.kiloBytes.capacity.usage.vmOvrhd 2 vmOvrhd
average.kiloBytes.capacity.usage.vmkOvrhd 2 vmkOvrhd
average.kiloBytes.capacity.usage.userworld 2 userworld
average.kiloBytes.reservedCapacity.vm 2 vm
average.kiloBytes.reservedCapacity.vmOvhd 2 vmOvhd
average.kiloBytes.reservedCapacity.vmkOvrhd 2 vmkOvrhd
average.kiloBytes.reservedCapacity.userworld 2 userworld
average.percent.reservedCapacityPct 3 Percent of memory that has been reserved either through VMkernel use, by userworlds, or due to VM memory reservations
average.kiloBytes.consumed.vms 2 Amount of physical memory consumed by VMs on this host
average.kiloBytes.consumed.userworlds 2 Amount of physical memory consumed by userworlds on this host
average.percent.latency 2 Percentage of time the VM is waiting to access swapped or compressed memory
average.kiloBytes.entitlement 2 Amount of host physical memory the VM is entitled to, as determined by the ESX scheduler
average.kiloBytes.lowfreethreshold 2 Threshold of free host physical memory below which ESX will begin reclaiming memory from VMs through ballooning and swapping
none.kiloBytes.llSwapUsed 4 Space used for caching swapped pages in the host cache
average.KBps.llSwapInRate 2 Rate at which memory is being swapped from host cache into active memory
average.KBps.llSwapOutRate 2 Rate at which memory is being swapped from active memory to host cache
average.kiloBytes.overheadTouched 4 Actively touched overhead memory (KB) reserved for use as the virtualization overhead for the VM
average.kiloBytes.llSwapUsed 4 Space used for caching swapped pages in the host cache
maximum.kiloBytes.llSwapUsed 4 Space used for caching swapped pages in the host cache
minimum.kiloBytes.llSwapUsed 4 Space used for caching swapped pages in the host cache
none.kiloBytes.llSwapIn 4 Amount of memory swapped-in from host cache
average.kiloBytes.llSwapIn 4 Amount of memory swapped-in from host cache
maximum.kiloBytes.llSwapIn 4 Amount of memory swapped-in from host cache
minimum.kiloBytes.llSwapIn 4 Amount of memory swapped-in from host cache
none.kiloBytes.llSwapOut 4 Amount of memory swapped-out to host cache
average.kiloBytes.llSwapOut 4 Amount of memory swapped-out to host cache
maximum.kiloBytes.llSwapOut 4 Amount of memory swapped-out to host cache
minimum.kiloBytes.llSwapOut 4 Amount of memory swapped-out to host cache
net
average.KBps.throughput.provisioned 2 Provisioned pNic I/O Throughput
average.KBps.throughput.usable 2 Usable pNic I/O Throughput
average.KBps.throughput.usage 3 Average vNic I/O rate
summation.number.throughput.contention 2 Count of vNic packet drops
average.number.throughput.packetsPerSec 2 Average rate of packets received and transmitted per second
average.KBps.throughput.usage.vm 3 Average pNic I/O rate for VMs
average.KBps.throughput.usage.nfs 3 Average pNic I/O rate for NFS
average.KBps.throughput.usage.vmotion 3 Average pNic I/O rate for vMotion
average.KBps.throughput.usage.ft 3 Average pNic I/O rate for FT
average.KBps.throughput.usage.iscsi 3 Average pNic I/O rate for iSCSI
average.KBps.throughput.usage.hbr 3 Average pNic I/O rate for HBR
average.KBps.bytesRx 2 Average amount of data received per second
average.KBps.bytesTx 2 Average amount of data transmitted per second
summation.number.broadcastRx 2 Number of broadcast packets received during the sampling interval
summation.number.broadcastTx 2 Number of broadcast packets transmitted during the sampling interval
summation.number.multicastRx 2 Number of multicast packets received during the sampling interval
summation.number.multicastTx 2 Number of multicast packets transmitted during the sampling interval
summation.number.errorsRx 2 Number of packets with errors received during the sampling interval
summation.number.errorsTx 2 Number of packets with errors transmitted during the sampling interval
summation.number.unknownProtos 2 Number of frames with unknown protocol received during the sampling interval
power
summation.joule.energy 3 Total energy used since last stats reset
average.percent.capacity.usagePct 3 Current power usage as a percentage of maximum allowed power
average.watt.capacity.usable 2 Current maximum allowed power usage
average.watt.capacity.usage 2 Current power usage
storageAdapter
latest.millisecond.maxTotalLatency 3 Highest latency value across all storage adapters used by the host
average.millisecond.throughput.cont 2 Average amount of time for an I/O operation to complete
average.percent.OIOsPct 3 The percent of I/Os that have been issued but have not yet completed
average.number.outstandingIOs 2 The number of I/Os that have been issued but have not yet completed
average.number.queued 2 The current number of I/Os that are waiting to be issued
average.number.queueDepth 2 The maximum number of I/Os that can be outstanding at a given time
average.millisecond.queueLatency 2 Average amount of time spent in the VMkernel queue, per SCSI command, during the collection interval
average.KBps.throughput.usage 4 The storage adapter's I/O rate
storagePath
average.millisecond.throughput.cont 2 Average amount of time for an I/O operation to complete
latest.millisecond.maxTotalLatency 3 Highest latency value across all storage paths used by the host
summation.number.busResets 2 Number of SCSI-bus reset commands issued during the collection interval
summation.number.commandsAborted 2 Number of SCSI commands aborted during the collection interval
average.KBps.throughput.usage 2 Storage path I/O rate
sys
latest.second.osUptime 4 Total time elapsed, in seconds, since last operating system boot-up
vcResources
average.kiloBytes.buffersz 4 buffersz
average.kiloBytes.cachesz 4 cachesz
average.number.diskreadsectorrate 4 diskreadsectorrate
average.number.diskwritesectorrate 4 diskwritesectorrate
virtualDisk
average.millisecond.throughput.cont 2 Average amount of time for an I/O operation to complete
average.KBps.throughput.usage 2 Virtual disk I/O rate
summation.number.commandsAborted 2 commandsAborted
summation.number.busResets 2 busResets
latest.number.readOIO 2 Average number of outstanding read requests to the virtual disk during the collection interval
latest.number.writeOIO 2 Average number of outstanding write requests to the virtual disk during the collection interval
latest.number.readLoadMetric 2 Storage DRS virtual disk metric for the read workload model
latest.number.writeLoadMetric 2 Storage DRS virtual disk metric for the write workload model
Share this...
  • Twitter
  • Facebook
  • Linkedin
  • Reddit
  • Pinterest

Filed Under: Uncategorized Tagged With: api, esxi5, performance, vSphere 5

Where are the "Power" Perf Metrics in the vSphere API?

10/26/2010 by William Lam Leave a Comment

A recent question was posed on the VMTN developer forum on how to obtain the new power utilization metrics using the vSphere API. These performance metrics were introduced with the release of vSphere 4.x and can be seen using either esxtop or resxtop on an ESX or ESXi host by specifying the "p" option for power.

You can also get these counters by using the vSphere Client and using the Advanced Charts:

This actually seemed like a simple enough question: just point the user to the vSphere API reference documentation under the perfManager. After taking a second look, however, it appears that no such metric exists in the documentation from VMware:

After a few minutes of digging around, I found that the Power metrics do in fact exist but were not properly documented when they were first introduced. I wrote a quick vSphere SDK for Perl script called perfQuery.pl to look for metrics related to "power", and I identified the following:

As you can see, these match up with those seen in the vSphere Client, and I output each metric with its rollup type, units, internal name, and description. While writing this script, I also noticed two other performance metric types that existed and were not documented by VMware. Here is a mapping of the vSphere Client chart options to the vSphere API perfManager metric keys; the last three (power, vcResources and vcDebugInfo) are undocumented by VMware:

vSphere Client Chart Option vSphere API Perf Metric Key Documented
Cluster Services clusterServices yes
CPU cpu yes
Management Agent managementAgent yes
Memory mem yes
Network net yes
Resource Scheduler rescpu yes
Storage Capacity disk yes
Datastore datastore yes
Disk disk yes
Virtual Disk virtualDisk yes
Storage Adapter storageAdapter yes
Storage Path storagePath yes
System sys yes
Virtual Machine Operations vmop yes
Power power no
vCenter Resources vcResources no
vCenter Debug Info vcDebugInfo no

Using the script and a performance metric key, you can query either all metric types or just a specific one you are interested in. This is helpful for those metrics that have not been publicly documented by VMware. The power metrics, however, should have been documented, and I believe this to be a documentation bug that was missed by VMware.
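The metric names in the tables above follow a rollup.units.internalName convention. As a small illustration of working with extracted counter data (the tuple layout here is my own, not the perfQuery.pl output format):

```python
def counter_key(rollup, units, name):
    # Compose the rollup.units.internalName display form used in the
    # tables above, e.g. average.watt.capacity.usage.
    return "%s.%s.%s" % (rollup, units, name)

def counters_in_group(counters, group):
    # counters: iterable of (group, rollup, units, name) tuples, as one
    # might extract them from perfManager.perfCounter via the vSphere API.
    return [counter_key(r, u, n) for g, r, u, n in counters if g == group]
```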

Download: perfQuery.pl

If you are interested in learning more about vSphere statistics and performance monitoring, I highly recommend checking out Luc Dekens' three-part series (Part 1, Part 2 and Part 3) on vSphere performance monitoring. Even though his posts are specific to PowerCLI, all the concepts discussed apply to all the vSphere SDKs when dealing with performance monitoring using the vSphere APIs.


Filed Under: Uncategorized Tagged With: performance, vsphere sdk for perl, vstorage api

Author

William Lam is a Senior Staff Solution Architect working in the VMware Cloud team within the Cloud Services Business Unit (CSBU) at VMware. He focuses on Automation, Integration and Operation for the VMware Cloud Software Defined Datacenters (SDDC)
