
virtuallyGhetto


storage drs

Configuring per-VMDK IOPS reservations in vSphere 6.0

05/20/2015 by William Lam 1 Comment

One of the new features in vSphere 6.0 that was quickly mentioned at the end of Duncan Epping's What's New Storage DRS blog post is the ability to configure an IOPS reservation on a per-VMDK basis, which is now integrated with both Storage IO Control (SIOC) and Storage DRS. As Duncan mentioned at the end of his article, this feature is only consumable through the vSphere API, and given that, it may not be widely known or used. This topic recently surfaced in an internal thread on how to set the IOPS reservation, and below are the details if you wish to leverage this new vSphere 6.0 storage platform capability.

To be able to use this new feature, there are two requirements:

  1. You need to set the IOPS reservation value on an individual VMDK, using the property under the VMDK's StorageIOAllocationInfo called, not surprisingly, reservation.
  2. All ESXi hosts mounting the datastore must be running vSphere 6.0.

To be clear, this reservation property has been around since vSphere 5.5, but it only supported local datastores. In vSphere 6.0, shared datastores are now supported, and both SIOC and Storage DRS are aware of the reservation.

To exercise this vSphere API, I have created a simple PowerCLI script called configurePerVMDKIOPS.ps1 (works with both vSphere 5.x & 6.0), which you will need to edit to include your vCenter Server, the name of the VM on which you wish to set the IOPS reservation, the VMDK label, and the IOPS value.

Here is an example output for configuring a VM named Photon with an IOPS reservation of 2000 on Hard Disk 1:

[Screenshot: configure-per-vmdk-iops-reservations]
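
For those curious what the script is doing under the covers, here is a minimal PowerCLI sketch of the underlying vSphere API call. This is a sketch only, not the contents of configurePerVMDKIOPS.ps1; the vCenter name is a placeholder, while the VM name, disk label, and IOPS value match the example above.

# Connect to vCenter (placeholder server name)
Connect-VIServer -Server vcenter.example.com

$vm = Get-VM -Name "Photon"
$disk = Get-HardDisk -VM $vm | Where-Object { $_.Name -eq "Hard disk 1" }

# Some disks may not have an allocation object yet, so create one if needed
if (-not $disk.ExtensionData.StorageIOAllocation) {
    $disk.ExtensionData.StorageIOAllocation = New-Object VMware.Vim.StorageIOAllocationInfo
}

# The per-VMDK IOPS reservation lives under the disk's StorageIOAllocationInfo
$disk.ExtensionData.StorageIOAllocation.Reservation = 2000

# Build a reconfigure spec that edits the existing virtual disk device
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$deviceChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
$deviceChange.Operation = [VMware.Vim.VirtualDeviceConfigSpecOperation]::edit
$deviceChange.Device = $disk.ExtensionData
$spec.DeviceChange = @($deviceChange)

# Apply the change through the vSphere API
$vm.ExtensionData.ReconfigVM($spec)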
I have been told that the plan is to make this configurable in the vSphere Web Client in the future. Though honestly, why would anyone want to perform this change across multiple VMs by hand when you can quickly and efficiently automate it across your environment with a simple script? 😉


Filed Under: Automation, vSphere 6.0 Tagged With: iops reservation, PowerCLI, sioc, storage drs, storage io control, StorageIOAllocationInfo, vSphere 6.0, vSphere API

Automatically Remediating SvMotion / VDS Issue Using vCenter Alarms

04/20/2012 by William Lam 8 Comments

UPDATE 07/13/2012 - vSphere 5.0 Update 1a has just been released, which resolves this issue. Please take a look here for more details on the patch, as this script is no longer required.

In my previous article, Identifying & Fixing Virtual Machines Affected By SvMotion / VDS Issue, I provided a script for users to easily identify the impacted VMs as well as a way to remediate them. Though the issue was only temporarily fixed, as any of the remediated VMs can be re-impacted if they are Storage vMotioned again (manually, or automatically by Storage DRS). This meant that users would need to re-run these scripts every so often to ensure their environment is not affected by this problem.

I decided to look into a more automated and hands-off approach in which a Storage vMotion of a VM will automatically trigger the execution of the remediation script. I was able to accomplish this by leveraging vCenter Alarms and running a script on the vCenter Server (here's a cool thing I did with alarms a while back).

Disclaimer: This script is not officially supported by VMware, please test this in a development environment before using on production systems.

You can create the alarm at any level of the inventory hierarchy, but I would recommend placing it at least at the datacenter or cluster level. The alarm type will be for a VirtualMachine, and we use "monitor for specific events". For the trigger, we will need to use "VM migrated" and set the status to "Unset", which will not create an alarm icon when it is triggered.

You might wonder why we selected "VM migrated" versus "VM relocated". This is because a Storage vMotion starts out just like a vMotion, and whether you manually perform a vMotion or a Storage vMotion, only this event type will be triggered. Since this single event is triggered by two completely different operations, it has an interesting impact which we will discuss in a bit.

Next, we need to create an action for this alarm, which will be running a command. You will need to specify the full path to perl.exe (assuming you are using my script, which is based on the vSphere SDK for Perl; you will need to have the vCLI installed on the vCenter Server) as well as the path to the alarm script, which in this example is called alarm.pl. Also ensure you set the green->yellow action to execute once.
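
For example, the command field of the alarm action might look like the following (both paths are hypothetical; adjust them to match your Perl installation and script location):

C:\Perl\bin\perl.exe C:\alarm.pl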

You will need to create the alarm.pl script on your vCenter Server and here is what it looks like:

#!/usr/bin/perl -w
# William Lam
# http://www.virtuallyghetto.com/
 
use strict;
use warnings;
 
my $scriptlocation = "C:\\querySvMotionVDSIssue.pl";
my $server = "localhost";
my $username = "VC-USERNAME";
my $password = "VC-PASSWORD";
my $debug = 0;
 
###########################
# DO NOT MODIFY PAST HERE #
###########################
 
my $start1 = "from";
my $start2 = "to";
my $end = ",";
 
# extract VMware env variables from alarm
my $eventstring = $ENV{'VMWARE_ALARM_EVENTDESCRIPTION'};
my $vmname = $ENV{'VMWARE_ALARM_EVENT_VM'};
 
my @sourcehost = $eventstring =~ /$start1 (.*?)$end/;
my @destinationhost = $eventstring =~ /$start2 (.*?)$end/;
 
 
# Output environmental variables to see what's up
if($debug) {
  open(FILE,">C:\\output.txt");
  foreach my $key (keys %ENV) {
    print FILE $key . "=" . $ENV{$key} . "\n";
  }
  close(FILE);
}

# if the source/destination host is the same, it means we had a Storage vMotion instead of a vMotion,
# so we execute the remediation script on the VM
if($sourcehost[0] eq $destinationhost[0]) {
  `"$scriptlocation --server $server --username $username --password $password --vmname $vmname --fix true"`;
}

You will need to fill in the script location (in this example, I have all the scripts stored in C:\) and you will also need to populate the credentials which will be used to execute the script.

Earlier we mentioned that both a Storage vMotion and a vMotion trigger the same event; because of that, we need to be able to identify when a Storage vMotion actually happens in order to run the script. The alarm.pl script above is executed when the alarm is triggered, and using the VMware-specific environment variables that are populated by the vCenter Alarm, we can parse the event description to figure out whether it was a vMotion or a Storage vMotion. Once we confirm it is a Storage vMotion, we then execute the remediation script from my previous article.
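
To illustrate, here is a hypothetical set of alarm environment variables for a Storage vMotion (the exact event description wording may vary between vSphere versions, but it follows the "from <host>, <datastore> to <host>, <datastore>" pattern the regular expressions above rely on):

VMWARE_ALARM_EVENT_VM=VM1
VMWARE_ALARM_EVENTDESCRIPTION=Migration of virtual machine VM1 from esxi-01.lab.local, datastore1 to esxi-01.lab.local, datastore2 completed

Because the host captured after "from" matches the host captured after "to", the script concludes this was a Storage vMotion and kicks off the remediation.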

Note: Ensure you download the latest version of the querySvMotionVDSIssue.pl script from the previous article, as it has been updated to handle single-VM remediation and is targeted for this use case.

Now, to verify that our alarm is functioning as expected, we can perform a manual Storage vMotion of a VM; we should see our alarm.pl execute, and after the Storage vMotion has completed, we should see some VM reconfiguration tasks from our remediation script.

So there you have it, you no longer have to worry about running the script every so often to ensure your VMs are not being impacted by the SvMotion / VDS problem. Again, I would like to stress that though we are able to automate this remediation, this is not a real solution, and VMware is actively working on a fix for this problem.

If you have any questions, feel free to leave a comment.


Filed Under: Uncategorized Tagged With: alarm, distributed virtual switch, dvportgroup, dvs, storage drs, svmotion, vds

Identifying & Fixing Virtual Machines Affected By SvMotion / VDS Issue

04/16/2012 by William Lam 24 Comments

UPDATE 07/13/2012 - vSphere 5.0 Update 1a has just been released, which resolves this issue. Please take a look here for more details on the patch, as this script is no longer required.

Duncan Epping recently wrote an article about Clarifying the SvMotion / VDS problem in which he describes the scenario that would impact your VMs as well as a way to remediate those impacted VMs. I would recommend you go through Duncan's article before moving any further.

The challenge now is how to easily identify all VMs that are currently impacted by this problem in your environment. The answer is, of course, automation and leveraging the vSphere API! I created the following vSphere SDK for Perl script called querySvMotionVDSIssue.pl, which searches for all VMs that are connected to a VDS and checks whether or not each VM's expected dvPort (.dvsdb) file exists in the appropriate datastore. To use the script, you just need a system with the vCLI installed, or you can use the vMA appliance.
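
For reference, here is a rough PowerCLI sketch of the same check the Perl script performs. This is not the script itself; it assumes the .dvsData/<switch-uuid>/<port-key> file layout on the datastore that Duncan's article describes, and the VM name is a placeholder.

# Placeholder VM name
$vm = Get-VM -Name "VM1"

# Datastore that holds the VM's configuration (VMX) file
$vmxDatastore = Get-Datastore -Id $vm.DatastoreIdList[0]
$dsBrowser = Get-View $vmxDatastore.ExtensionData.Browser

# Find NICs backed by a distributed virtual port
$dvsNics = $vm.ExtensionData.Config.Hardware.Device | Where-Object {
    $_.Backing -is [VMware.Vim.VirtualEthernetCardDistributedVirtualPortBackingInfo]
}

foreach ($nic in $dvsNics) {
    $switchUuid = $nic.Backing.Port.SwitchUuid
    $portKey    = $nic.Backing.Port.PortKey

    # Search for the expected dvPort file; note a missing .dvsData folder
    # throws an exception, which also indicates the VM is impacted
    $result = $dsBrowser.SearchDatastore("[$($vmxDatastore.Name)] .dvsData/$switchUuid", $null)
    if (-not ($result.File | Where-Object { $_.Path -eq $portKey })) {
        Write-Host "$($vm.Name): port $portKey is missing its .dvsdb file"
    }
}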

UPDATE: The script has now been updated to support remediation for VMs connected to both a VMware VDS as well as a Cisco N1KV. The solution, thanks to one of our internal engineers, was to "move" the VM's dvport from one port to another, all while staying within the existing dvPortgroup, which also forces the creation of the .dvsdb port file. Once the dvport move has successfully completed, we move it back to the original dvport it initially resided on. We no longer have to rely on creating a temporary dvPortgroup, and best of all, we can now remediate both VDS and N1KV. The script now combines both the "query" and "remediation" into a single script. Please take a look at the examples below on usage.

Disclaimer: This script is not officially supported by VMware, please test this in a development environment before using on production systems.

Here is a sample output of the script running in "query" mode:

Only impacted VMs will be listed in the output. To remediate, I have combined the remediation script into the query script; if you wish to remediate ALL VMs that were listed as being impacted, you can specify the --fix flag with the value "true". This will go ahead and remediate all impacted VMs that were listed before.
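
For example, a hypothetical invocation (the --server, --username, and --fix flags follow the usage shown in the alarm script above; adjust the server and credentials for your environment):

./querySvMotionVDSIssue.pl --server vcenter50-1 --username root --fix true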

Here is a sample output of the script running in "remediation" mode:

In the screenshot above, you may have noticed a few interesting details with VM3 and VM4. If you run out of dvports in a dvPortgroup, the script will automatically increase the number of ports to satisfy the swap (a maximum of 10, due to the number of ethernet interfaces a VM can have). Once the VM has been remediated, the dvPortgroup will be reconfigured back to its originally configured number of ports, as shown with VM3.

If you have an impacted VM that is connected to an ephemeral dvPortgroup, we will not be able to remediate it due to the nature of how an ephemeral binding works. You will get a message on the specific interface, and you will need to manually remediate using the steps outlined by Duncan or using the "old" remediation script, which creates a temporary dvPortgroup (again, this works for a VMware VDS only).

If you run into any issues or have questions, feel free to leave a comment.


Filed Under: Uncategorized Tagged With: distributed virtual switch, dvportgroup, dvs, storage drs, svmotion, vds

Automating Storage DRS & Datastore Cluster Management in vSphere 5

07/27/2011 by William Lam 1 Comment

Storage DRS is probably one of, if not the, coolest features in vSphere 5. Storage DRS allows you to cluster your datastores into what is known as a datastore cluster (storage pod) and automatically balances both your storage I/O and capacity, just like DRS does with your compute. The user interface is extremely easy to use, but as always, if you need to click through several screens to get to the outcome, some automation can never hurt 🙂

I decided to create a vSphere SDK for Perl script called datastoreClusterManagement.pl which allows you to automate all aspects of creating and managing your storage pod/cluster. You will need a system that has the vCLI installed or you can use VMware vMA 5 to run the script. You will also need to connect to a vCenter Server 5 for all SDRS operations.

The script supports 8 different operations, which are described below:

Operation               Description
List                    List all available datastore clusters
Query                   Query details for a specific datastore cluster
Create                  Create a datastore cluster
Delete                  Delete a datastore cluster (datastores are left intact)
Add Datastore           Add datastore(s) to an existing datastore cluster
Remove Datastore        Remove datastore(s) from an existing datastore cluster
Enter Maintenance Mode  Put a datastore into maintenance mode
Exit Maintenance Mode   Take a datastore out of maintenance mode

Here is an example of performing the "list" operation: 

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation list

Here is an example of performing the "query" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation query --pod homer-NFS-pod

Here is an example of performing the "create" operation w/single datastore:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation create --datacenter MN-physical --enable_sdrs true --enable_sdrs_iometric true --pod moe-NFS-pod --sdrs_automation automated --sdrs_evaluate_period 480 --sdrs_imbal_thres 30 --sdrs_latency 15 --sdrs_util_diff 20 --sdrs_util_space 60 --datastore himalaya-NFS-moe-primp-1

Here is an example of performing the "create" operation w/datastore file:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation create --datacenter MN-physical --enable_sdrs true --enable_sdrs_iometric true --pod moe-NFS-pod --sdrs_automation automated --sdrs_evaluate_period 480 --sdrs_imbal_thres 30 --sdrs_latency 15 --sdrs_util_diff 20 --sdrs_util_space 60 --datastore_file dsfile
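
For those curious what the "create" operation does under the covers, here is a minimal PowerCLI sketch of the equivalent vSphere API calls. This is a sketch only, not the Perl script's implementation; the datacenter, pod, and datastore names are taken from the examples above, and only a couple of the SDRS settings are shown.

# Locate the datacenter's datastore folder, where storage pods live
$dc = Get-Datacenter -Name "MN-physical"
$datastoreFolder = Get-View $dc.ExtensionData.DatastoreFolder

# Create the storage pod (datastore cluster)
$podRef = $datastoreFolder.CreateStoragePod("moe-NFS-pod")

# Enable SDRS and the I/O metric, mirroring --enable_sdrs/--enable_sdrs_iometric
$spec = New-Object VMware.Vim.StorageDrsConfigSpec
$spec.PodConfigSpec = New-Object VMware.Vim.StorageDrsPodConfigSpec
$spec.PodConfigSpec.Enabled = $true
$spec.PodConfigSpec.IoLoadBalanceEnabled = $true
$spec.PodConfigSpec.DefaultVmBehavior = "automated"

$srm = Get-View (Get-View ServiceInstance).Content.StorageResourceManager
$srm.ConfigureStorageDrsForPod($podRef, $spec, $true)

# Add a datastore to the cluster by moving it into the pod folder
$ds = Get-Datastore -Name "himalaya-NFS-moe-primp-1"
(Get-View $podRef).MoveIntoFolder(@($ds.ExtensionData.MoRef))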

Here is an example of performing the "delete" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation delete --pod moe-NFS-pod

Here is an example of performing the "add_datastore" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation add_datastore --pod moe-NFS-pod --datastore himalaya-NFS-moe-primp-2

Here is an example of performing the "remove_datastore" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation remove_datastore --pod moe-NFS-pod --datastore himalaya-NFS-moe-primp-1

Note: Both the "add_datastore" and "remove_datastore" operations support a single datastore and/or a datastore file.

Here is an example of performing the "ent_maint" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation ent_maint --pod homer-NFS-pod --datastore himalaya-NFS-moe-primp-5

Here is an example of performing the "exi_maint" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation exi_maint --pod homer-NFS-pod --datastore himalaya-NFS-moe-primp-5
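
For reference, the underlying API methods for these two maintenance mode operations live on the Datastore managed object itself. Here is a minimal PowerCLI sketch (the datastore name is taken from the example above):

# Get the API view of the datastore
$dsView = Get-View (Get-Datastore -Name "himalaya-NFS-moe-primp-5").Id

# Enter maintenance mode; returns Storage DRS recommendations to evacuate the datastore
$result = $dsView.DatastoreEnterMaintenanceMode()

# Exit maintenance mode (synchronous wrapper around DatastoreExitMaintenanceMode_Task)
$dsView.DatastoreExitMaintenanceMode()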

There are also complete perldocs for this script, which can be accessed using the following command:

perldoc datastoreClusterManagement.pl


Filed Under: Automation, vSphere Tagged With: esxi5, SDRS, storage drs, storagePod, vSphere 5
