
virtuallyGhetto


Tag: vmdk

Updates to VMDK partitions & disk resizing in VCSA 6.5

11/07/2016 by William Lam 9 Comments

Similar to the vCenter Server Appliance (VCSA) 6.0 release, the new VCSA 6.5 is also composed of multiple virtual machine disks (VMDKs). Each VMDK maps to a specific function and OS partition within the VCSA. There are now a total of 12 VMDKs, two of which are new in vSphere 6.5: vSphere Update Manager (VUM) and Image Builder. The following table provides a breakdown of the VMDKs in VCSA 6.5 compared to VCSA 6.0:

Disk   | 6.0 Size                          | 6.5 Size | Purpose                        | Mount Point
VMDK1  | 12GB                              | 12GB     | / and Boot                     | / and /boot
VMDK2  | 1.2GB                             | 1.8GB    | VCSA's RPM packages            | N/A (not mounted after install)
VMDK3  | 25GB                              | 25GB     | Swap                           | SWAP
VMDK4  | 25GB                              | 25GB     | Core                           | /storage/core
VMDK5  | 10GB                              | 10GB     | Log                            | /storage/log
VMDK6  | 10GB                              | 10GB     | DB                             | /storage/db
VMDK7  | 5GB                               | 15GB     | DBLog                          | /storage/dblog
VMDK8  | 10GB                              | 10GB     | SEAT (Stats, Events and Tasks) | /storage/seat
VMDK9  | 1GB                               | 1GB      | Net Dumper                     | /storage/netdump
VMDK10 | 10GB                              | 10GB     | Auto Deploy                    | /storage/autodeploy
VMDK11 | N/A (previously Inventory Service, 5GB) | 10GB | Image Builder                 | /storage/imagebuilder
VMDK12 | N/A                               | 100GB    | Update Manager                 | /storage/updatemgr

In addition to the VMDK/partition changes, there are a couple of enhancements for when you need to increase disk capacity in the VCSA. Just like in VCSA 6.0, you can still hot-extend any of the VMDKs while the system is running.

  • The first change is that the old vpxd_servicecfg command, which was used to expand the logical volume(s) and make the new storage capacity available to the OS/application, has been replaced with the following command: /usr/lib/applmgmt/support/scripts/autogrow.sh
  • The second change is that instead of having to run the above command over SSH, which may be disabled by default, there is now a new Virtual Appliance Management Interface (VAMI) REST API that can be called remotely: POST /appliance/system/storage/resize (see the sketch after this list)
  • The final difference is that in previous releases, you could only resize the Embedded VCSA or External VCSA node, but not the Platform Services Controller (PSC) node. In 6.5, this has changed and you can apply this method to any of the VCSA nodes. Thanks to Blair for reminding me of this one!
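
For a rough illustration of calling this new endpoint remotely, below is a minimal PowerShell sketch against the 6.5 CIS REST endpoints. The hostname is a placeholder, and you may need to handle certificate validation in a lab environment:

$vcsa  = "vcsa.lab.local"   # hypothetical hostname
$creds = Get-Credential
$auth  = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($creds.UserName):$($creds.GetNetworkCredential().Password)"))
# Create an API session token
$session = Invoke-RestMethod -Method Post -Uri "https://$vcsa/rest/com/vmware/cis/session" -Headers @{ Authorization = "Basic $auth" }
# Call the resize operation, which expands any logical volume whose backing VMDK has grown
Invoke-RestMethod -Method Post -Uri "https://$vcsa/rest/appliance/system/storage/resize" -Headers @{ "vmware-api-session-id" = $session.value }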

Let's walk through an example of increasing the Net Dumper partition (VMDK9) by exercising this new VAMI API.

Step 1 - Login to the VCSA using SSH and run a quick "df -h" to check the current size of your Net Dumper partition, which by default is 1GB as seen in the screenshot below.

[Screenshot: df -h output showing the default 1GB /storage/netdump partition]
Step 2 - Next, we will increase the VMDK to 5GB. In this example, I am using the vSphere Web Client, but if you want to completely automate this process end-to-end, you can use the vSphere API/PowerCLI to perform this operation (a PowerCLI sketch follows the screenshot below).

[Screenshot: increasing the VMDK to 5GB in the vSphere Web Client]
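
If you prefer to script Step 2, here is a minimal PowerCLI sketch of the same disk resize; the vCenter hostname, VM name and disk label are assumptions for this lab (VMDK9 shows up as "Hard disk 9" in the VM's hardware list):

Connect-VIServer -Server vcenter.lab.local
# Hot-extend the Net Dumper disk (VMDK9) from 1GB to 5GB while the VCSA is running
$vm = Get-VM -Name "VCSA-65"
Get-HardDisk -VM $vm -Name "Hard disk 9" | Set-HardDisk -CapacityGB 5 -Confirm:$false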
Step 3 - To quickly try out the new VAMI API, we will use the new vSphere API Explorer that is included in VCSA 6.5. Simply open a web browser and enter the following URL: https://[VCSA-HOSTNAME]/apiexplorer. Select the "appliance" API, then click on the login button and enter your vCenter Server credentials.

[Screenshot: vSphere API Explorer with the appliance API selected]
Step 4 - Scroll down to the POST /appliance/system/storage/resize operation and expand it. To call this API, just click on the "Try it out" button. If the operation completes successfully, you should see a 200 response as shown in the screenshot below.

[Screenshot: 200 response from POST /appliance/system/storage/resize in the API Explorer]
Steps 3 and 4 can also be performed directly through PowerCLI using the new CIS cmdlets (Connect-CisServer & Get-CisService), which expose the new VAMI APIs. Below is a quick snippet that performs the exact same operation:

Connect-CisServer -Server 192.168.1.150 -User administrator@vsphere.local -Password VMware1!
# Retrieve the VAMI storage service and invoke the resize operation
$diskResize = Get-CisService -Name 'com.vmware.appliance.system.storage'
$diskResize.resize()

Step 5 - Lastly, we can log back into the VCSA and re-run the "df -h" command to verify the new storage capacity.

[Screenshot: df -h output showing the expanded /storage/netdump partition]


Filed Under: Automation, VCSA, vSphere 6.5 Tagged With: autogrow.sh, PowerCLI, REST API, vami, vcenter server appliance, vcsa, VCSA 6.5, vcva, vmdk, vSphere 6.5

New method of enabling Multiwriter VMDK flag in vSphere 6.0 Update 1 (UI + API)

10/19/2015 by William Lam 20 Comments

Prior to vSphere 6.0, in order for multiple Virtual Machines to share a VMFS-backed VMDK, the Multiwriter VMDK flag had to be enabled by adding a specific VM Advanced Setting, as shown in VMware KB 1034165. Customers who were accustomed to this old method may have found that it no longer works, regardless of whether the configuration was applied using the vSphere Web/C# Client or the vSphere API.

To provide a better user experience, this behavior was changed in vSphere 6.0 and a new API was introduced for enabling and disabling the Multiwriter VMDK flag. In vSphere 6.0, there is now a new sharing attribute on the Virtual Disk backing property which accepts one of two values: sharingMultiWriter or sharingNone. In my opinion, this is a positive change, as we too often rely on VM Advanced Settings as a generic "catch all" for enabling or configuring various settings instead of adding proper APIs to a VM.
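
To give a concrete idea of what this looks like, here is a minimal PowerCLI sketch that edits an existing disk's backing through the vSphere API's ReconfigVM_Task; the VM name and disk label are assumptions, and the VM must be powered off when editing an existing VMDK:

$vm   = Get-VM -Name "OracleRAC-Node1"
$disk = Get-HardDisk -VM $vm -Name "Hard disk 2"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$devChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
$devChange.Operation = "edit"
$devChange.Device = $disk.ExtensionData
# The new vSphere 6.0 sharing attribute on the disk backing: sharingMultiWriter or sharingNone
$devChange.Device.Backing.Sharing = "sharingMultiWriter"
$spec.DeviceChange = @($devChange)
$vm.ExtensionData.ReconfigVM_Task($spec)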

Although there is now a proper API which can help enable new Automation use cases, one thing that was still lacking was an easy way to enable the Multiwriter VMDK flag using the UI. In vSphere 6.0 Update 1, we have introduced a new UI dropdown option called "Sharing" in the vSphere Web Client for configuring the Multiwriter VMDK flag, which can be found in the Virtual Disk section when editing a VM, as shown in the screenshot below.

[Screenshot: new "Sharing" dropdown for a Virtual Disk in the vSphere Web Client]
Note: The new Sharing property is only available in the vSphere Web Client UI and is not available in the vSphere C# Client. If you need to configure the Multiwriter VMDK flag and do not have access to the vSphere Web Client, you can use the vSphere API to help automate this configuration change.

UPDATE (06/27/16) - Created two scripts which now cover scenarios where VM is online and/or offline.

For those interested in Automating the Multiwriter VMDK flag, I have created two PowerCLI scripts called: configureMultiwriterVMDK.ps1 (offline VM configuration) and addMultiwriterVMDK.ps1 (online VM configuration) which demonstrates this new vSphere API.

The first script, configureMultiwriterVMDK.ps1, allows you to enable the Multiwriter flag for an existing VMDK that has already been added to a VM. This operation must be done while the VM is powered off. To use the script, you will need to specify the name of the VM as well as the label of the VMDK on which you wish to enable the Multiwriter VMDK flag (e.g. Hard disk 2). Below is an example of running the script.

[Screenshot: example run of configureMultiwriterVMDK.ps1]
The second script, addMultiwriterVMDK.ps1, allows you to hot-add a new VMDK with the Multiwriter flag enabled to a VM. This operation is done while the VM is powered on, which is a common workflow for customers needing to hot-add storage to an existing cluster solution, such as Oracle RAC, while the system is running. To use the script, there are a few variables you will need to edit (example values follow the list):

  • vmName - The name of the VM you wish to perform the operation on
  • vmdkFileNamePath - The full datastore path to the name of the underlying VMDK. See the script for more information, but the syntax will look like "[datastore-name] vm-home-dir/vmdk-name.vmdk"
  • diskSizeGB - The capacity of the VMDK to add (GB)
  • diskControllerNumber - The SCSI controller number (0-3)
  • diskUnitNumber - The unit number (0-16)
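
For example, a hypothetical set of values for a second cluster node sharing a disk that lives in the first node's VM directory (all names below are illustrative):

$vmName               = "OracleRAC-Node2"
$vmdkFileNamePath     = "[datastore1] OracleRAC-Node1/OracleRAC-Node1_1.vmdk"
$diskSizeGB           = 20
$diskControllerNumber = 1
$diskUnitNumber       = 0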

Filed Under: Automation, vSphere 6.0, vSphere Web Client Tagged With: multiwriter, vmdk, vSphere API, vsphere web client

Multiple VMDKs in VCSA 6.0?

03/09/2015 by William Lam 10 Comments

One thing you might notice after deploying the new VCSA 6.0 is that it now includes 11 VMDKs. If you are like me, you are probably wondering why there are so many. Past releases of the VCSA contained only two VMDKs: the first disk was used for both the OS and the various VMware applications like vCenter Server, vSphere Web Client, etc., and the second disk was where all the application data was stored, such as the VCDB, SSODB, logs, etc.

There were several challenges with this design. One issue was that you could not easily increase the disk capacity for a particular application component. If you needed more storage for the VCDB but not for your logs or other applications, you had no choice but to increase the entire volume. In fact, this was a pretty painful process because a logical volume manager (LVM) was not used, which meant you needed to stop the vCenter Server service, add a new disk, format it, and then copy all the data from the old volume to the new one. Another problem with the old design was that you could not apply Storage QoS to important data, such as placing the VCDB on a faster tier of storage or your log data on a slower and cheaper tier, by leveraging something like VM Storage Policies, which work on a per-VMDK basis.

For these reasons, VCSA 6.0 is now comprised of 11 individual VMDKs as seen in the screenshot below.

[Screenshot: the 11 VMDKs of VCSA 6.0 in the vSphere Web Client]
Here is a useful table that I have created which provides the mappings of each of the VMDKs to their respective functions.

Disk   | Size  | Purpose                        | Mount Point
VMDK1  | 12GB  | / and Boot                     | / and /boot
VMDK2  | 1.2GB | Temp Mount                     | /tmp/mount
VMDK3  | 25GB  | Swap                           | SWAP
VMDK4  | 25GB  | Core                           | /storage/core
VMDK5  | 10GB  | Log                            | /storage/log
VMDK6  | 10GB  | DB                             | /storage/db
VMDK7  | 5GB   | DBLog                          | /storage/dblog
VMDK8  | 10GB  | SEAT (Stats, Events and Tasks) | /storage/seat
VMDK9  | 1GB   | Net Dumper                     | /storage/netdump
VMDK10 | 10GB  | Auto Deploy                    | /storage/autodeploy
VMDK11 | 5GB   | Inventory Service              | /storage/invsvc

In addition, increasing disk capacity for a particular VMDK has been greatly simplified, as VCSA 6.0 now uses LVM to manage each of the partitions. You can now increase disk space for a particular volume on the fly while the vCenter Server is still running, and the changes take effect immediately. You can refer to this article here for the process, as it is a simple two-step process.
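
As a rough end-to-end sketch of that two-step process in PowerCLI (the VM name, disk label and credentials are placeholders; the second step runs the in-guest autogrow command via Invoke-VMScript, which requires VMware Tools):

# Step 1: hot-extend the Log disk (VMDK5) while the appliance is running
$vm = Get-VM -Name "VCSA-60"
Get-HardDisk -VM $vm -Name "Hard disk 5" | Set-HardDisk -CapacityGB 20 -Confirm:$false
# Step 2: expand the backing logical volume inside the appliance
Invoke-VMScript -VM $vm -ScriptType Bash -ScriptText "vpxd_servicecfg storage lvm autogrow" -GuestUser root -GuestPassword "VMware1!"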

Here are some useful commands to get more details of the filesystem structure in the new VCSA.

lsblk

[Screenshot: lsblk output on VCSA 6.0]

lsscsi

[Screenshot: lsscsi output on VCSA 6.0]


Filed Under: VCSA, vSphere 6.0 Tagged With: lsscsi, lsblk, lvm, SEAT, vcsa, vcva, vmdk, vSphere 6.0

Emulating an SSD Virtual Disk in a VMware Environment

07/03/2013 by William Lam 28 Comments

I continue to be amazed every day at all the awesome features and challenges being tackled by our VMware Engineering organization, and yesterday was another example of that. There was a question posed internally about emulating an SSD device for a Nested ESXi environment running in VMware Fusion. I figured this would be an easy answer and pointed the user to a blog article I had written a few years ago on how to fake an SSD device in ESXi using SATP claim rules via ESXCLI. It turns out one of the engineers knew of a better way of emulating an SSD Virtual Disk that can be consumed beyond just Nested ESXi VMs, but also by any other guestOS that supports SSD devices.

So why would you want to emulate an SSD device? Well, for a vSphere environment, you may want to try out the new Swap to Host Cache feature from a functional perspective to see how it would work. You might be developing a script to enable this feature, and having a "fake" SSD device would allow you to write and test such a script. For other guestOSes, maybe you want to see how the system would react to an SSD device; perhaps drivers or configurations may be needed, and you would like to run through those processes before installing a real SSD device.

The solution is actually quite simple: it is just an advanced setting in the Virtual Machine's configuration file (VMX), which can also be appended using either the vSphere Web Client, vSphere C# Client or the vSphere API. This setting is only supported on Virtual Machines that are running virtual hardware 8 or greater. To configure a specific virtual disk to appear as an SSD, you just need to add the following:

scsiX:Y.virtualSSD = 1

where X is the controller ID and Y is the disk ID of the Virtual Disk.
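
If you would rather apply the setting through the API than edit the VMX by hand, here is a minimal PowerCLI sketch; the VM name is a placeholder, and the example marks disk 1 on SCSI controller 0:

# Mark SCSI 0:1 as an SSD; the setting is read when the VM powers on
$vm = Get-VM -Name "Nested-ESXi-01"
New-AdvancedSetting -Entity $vm -Name "scsi0:1.virtualSSD" -Value 1 -Confirm:$false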

This configuration presents the mediumRotationRate field of SCSI inquiry page 0xB1 to the guestOS with the value set to 1, and the guest will then report the disk as a solid-state device. As you can see, this can benefit more than just running Nested ESXi; you can also perform various tests on other guestOSes where you require a "fake" SSD device.

Note: Though you can emulate an SSD device, it is no substitute for an actual SSD device, and any development or performance tests done in a simulated environment should also be vetted on a real SSD device, especially when it comes to performance.

It is also important to note that the reporting of an SSD device depends highly on the guestOS. Here is a high-level table of how some common guestOSes recognize SSD devices.

GuestOS               | SSD Reporting
Windows 8             | IDE, SCSI and SATA disks can be recognized as SSDs
Windows 7             | IDE and SATA disks can be recognized as SSDs, but SCSI as mechanical
Linux (Ubuntu & RHEL) | IDE, SCSI and SATA disks can be recognized as SSDs
Mac OS X              | SATA disks can be recognized as SSDs, but IDE and SCSI as mechanical

Here is a screenshot of a Nested ESXi host with an emulated SSD device:

Here is a screenshot of the new Windows 8.1 Preview with an emulated SSD device:

Note: Though I demonstrated this using vSphere, this also works with VMware Fusion (tested personally), Workstation and Player. The only requirements are that you are running virtual hardware 8 or greater and that your guestOS supports reporting SSD devices.

From a Nested ESXi perspective, I will definitely be using this method instead of using ESXCLI to go through the SATP claim rules, as this is much easier to remember. I would also like to thank Regis Duchesne for sharing this tip, and Srinivas Singavarapu and the virtual devices team for developing this awesome feature. You guys ROCK!


Filed Under: Uncategorized Tagged With: esxi, solid state drive, ssd, virtual disk, vmdk, vSphere

How to Create an SE Sparse (Space-Efficient) Disk in vSphere 5.1

09/05/2012 by William Lam 8 Comments

You may have heard that with the upcoming release of vSphere 5.1, a new virtual machine disk format will be introduced called SE Sparse (Space-Efficient). One of its features is the ability to reclaim unused blocks from within the guestOS. I would highly recommend you check out a recent blog post, vSphere 5.1 Storage Enhancements – Part 2: SE Sparse Disks by Cormac Hogan, for more details about the new SE Sparse disk format as well as other storage improvements in vSphere 5.1.

As Cormac points out, this new disk format will initially be leveraged by VMware View (in a future release, from my understanding), as additional integrations are required to use this feature beyond just the new SE Sparse disk format. Having said that, the SE Sparse disk format is a feature of the vSphere 5.1 platform, and as such, you do have the ability to create an SE Sparse disk.

Disclaimer: This is for educational purposes only, this is not officially supported by VMware. Please test this in a development environment before using it on actual systems.

There are two methods by which you can create an SE Sparse disk: directly in the ESXi Shell of an ESXi 5.1 host, or remotely by connecting to an ESXi 5.1 host.

Option 1 - Using vmkfstools in the ESXi Shell

Though it may not be documented, you can easily create a new VMDK with the new SE Sparse disk format by running the following command (10GB disk in this example):

vmkfstools -c 10g -d sesparse WindowsXP.vmdk

Here is a screenshot of the new SE Sparse disk descriptor file to prove we have successfully created a new VMDK using the new format:

Option 2 - Using vSphere 5.1 API w/modified remote version of vmkfstools

As mentioned, the SE Sparse disk format is a feature of the vSphere 5.1 platform and, as such, you can also leverage the vSphere 5.1 API to create a new VMDK using the virtualDiskManager and the new SeSparseVirtualDiskSpec.

Note: Even though the vSphere API reference mentions the ability to set the grain size via the grainSizeKb property, I have found that it is not possible; just leaving it blank will automatically default to 1024KB (1MB), which might be a system default for now.
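
Here is a minimal PowerCLI sketch of that API call, assuming a connection to vCenter; the datastore path and datacenter name are placeholders, and grainSizeKb is deliberately left unset per the note above:

$vdm  = Get-View (Get-View ServiceInstance).Content.VirtualDiskManager
$dc   = (Get-Datacenter -Name "Datacenter").ExtensionData
$spec = New-Object VMware.Vim.SeSparseVirtualDiskSpec
$spec.DiskType    = "seSparse"
$spec.AdapterType = "lsiLogic"
$spec.CapacityKb  = 10GB / 1KB   # 10GB expressed in KB
# GrainSizeKb left unset so it defaults to 1024KB
$vdm.CreateVirtualDisk_Task("[datastore1] WindowsXP.vmdk", $dc.MoRef, $spec)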

You can download the modified version of the remote vmkfstools, called vmkfstools-lamw, which requires the installation of vCLI 5.1 or vMA 5.1.

Here is an example of creating the same 10GB VMDK using the new SE Sparse disk format:

./vmkfstools-lamw --server 172.30.0.187 --username root -c 10G -d sesparse "[datastore1] WindowsXP.vmdk"

After you have created your new SE Sparse disk, the next logical step is to assign it to a virtual machine. Since this is a new feature in vSphere 5.1, you will need to use the new vSphere Web Client to perform the operation, as the legacy vSphere C# Client is not aware of this new disk type. You will also need to ensure that the virtual machine is using ESXi 5.1 compatibility (virtual hardware version 9) or later.

Once you have added your newly created disk from the datastore, it should show up in the vSphere Web Client as Flex-SE for the disk type.

Additional Resources:

  • What's New In vSphere 5.1 Storage Whitepaper
  • Space-Efficient Sparse Virtual Disks and VMware View



Filed Under: Uncategorized Tagged With: api, esxi5.1, sesparse, vmdk, vmkfstools, vSphere 5.1, vsphere sdk for perl

How to Query VM Disk Format in vSphere 5

09/25/2011 by William Lam 5 Comments

Prior to vSphere 5, it was not trivial to identify the particular disk format of a given virtual machine's disk. Using the vSphere Client, you would see a virtual machine's disk displayed as either thin or thick. The problem with this is that the "thick" format can be either:

  • zeroedthick - A thick disk has all space allocated at creation time and the space is zeroed on demand as the space is used
  • eagerzeroedthick - An eager zeroed thick disk has all space allocated and wiped clean of any previous contents on the physical media at creation time. Such disks may take longer to create than other disk formats.

Users were not able to distinguish the exact type using the vSphere Client or the vSphere 4 APIs. With the release of vSphere 4, VMware introduced a new property in the vSphere 4 API called eagerlyScrub, which was supposed to help identify whether a virtual disk was allocated as an eagerzeroedthick disk. Unfortunately, there may have been a bug with the property, as it never got modified regardless of whether a disk was created as zeroedthick or eagerzeroedthick.

The only method I was aware of for truly figuring out the disk format was to manually parse the virtual machine's vmware.log file to identify the disk type, which I wrote a script for in 2009.

During the vSphere 5 beta, I noticed the vSphere Client UI now properly displays all three virtual machine disk formats: zeroedthick (displayed as flat), thin and eagerzeroedthick (displayed as thick).

Seeing that VMware now displays the three different formats, I wanted to see if it was possible to extract this information using the vSphere 5 APIs and not have to rely on the hack of reading the vmware.log files. It turns out that the eagerlyScrub property now functions properly when a VMDK is provisioned or has been inflated/converted to the eagerzeroedthick format. I wrote a simple vSphere SDK for Perl script called getVMDiskFormat.pl which allows you to extract the disk formats of all virtual machines, connecting to either vCenter or directly to an ESX(i) host.

The script allows for two types of output: console (directly on the console) or csv (creates .csv file)

If you select csv output, by default it will be stored in a file called "vmDiskFormat.csv". You also have the option of specifying the filename by using the --filename flag and providing a name of your choosing.

You can then load the csv file into Excel and easily sort through the various disk format types.
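
For those using PowerCLI rather than the vSphere SDK for Perl, a rough equivalent using the same backing properties would look something like the following sketch (not the script itself):

# Classify each disk by its backing flags: thin, eagerzeroedthick or zeroedthick
Get-VM | Get-HardDisk | Select-Object Parent, Name, @{N="Format";E={
    $backing = $_.ExtensionData.Backing
    if ($backing.ThinProvisioned) { "thin" }
    elseif ($backing.EagerlyScrub) { "eagerzeroedthick" }
    else { "zeroedthick" }
}}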

All of this is already included in the latest version of the VMware vSphere Health Check Report 5.0 if you want a centralized report that includes virtual machine disk format.


Filed Under: Uncategorized Tagged With: api, eagerzeroedthick, esxi 5, thin, vmdk, vSphere 5, vsphere sdk for perl, zeroedthick
