One thing you might notice after deploying the new VCSA 6.0 is that it now includes 11 VMDKs. If you are like me, you are probably asking why there are so many. If you look at past releases of the VCSA, it only contained two VMDKs. The first disk was used for both the OS and the various VMware applications like vCenter Server, the vSphere Web Client, etc., and the second disk was where all the application data was stored, such as the VCDB, SSODB, logs, etc.

There were several challenges with this design. One issue was that you could not easily increase the disk capacity for a particular application component. If you needed more storage for the VCDB but not for your logs or other applications, you had no choice but to grow the entire volume. This was actually a pretty painful process because a logical volume manager (LVM) was not used either, which meant you had to stop the vCenter Server service, add a new disk, format it, and then copy all the data from the old volume to the new one. Another problem with the old design was that you could not apply Storage QoS to important data: you might want the VCDB on a faster tier of storage and your log data on a slower, cheaper tier by leveraging something like VM Storage Policies, which work on a per-VMDK basis.
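To make the old pain concrete, the pre-6.0 manual growth procedure looked roughly like this (a sketch only; the device name /dev/sdc and the mount points are illustrative, and the exact service name may vary by release):

```shell
# Old (pre-6.0) approach: no LVM, so growing a volume meant downtime
# and a full data copy. Device/path names below are assumptions.
service vmware-vpxd stop            # stop the vCenter Server service
mkfs.ext3 /dev/sdc                  # format the newly added, larger disk
mkdir -p /mnt/newdisk
mount /dev/sdc /mnt/newdisk
cp -a /storage/. /mnt/newdisk/      # copy all data to the new volume
# ...then update /etc/fstab so the new disk mounts at /storage, and reboot.
```

Compare this with the LVM-based online growth in 6.0 described below.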

For these reasons, the VCSA 6.0 now comprises 11 individual VMDKs, as seen in the screenshot below.

[Screenshot: vSphere Web Client showing the 11 VMDKs of a deployed VCSA 6.0]
Here is a useful table that I have created which maps each of the VMDKs to its respective function.

Disk     Size    Purpose                        Mount Point
VMDK1    12GB    / and Boot                     / and /boot
VMDK2    1.2GB   Temp Mount                     /tmp/mount
VMDK3    25GB    Swap                           SWAP
VMDK4    25GB    Core                           /storage/core
VMDK5    10GB    Log                            /storage/log
VMDK6    10GB    DB                             /storage/db
VMDK7    5GB     DBLog                          /storage/dblog
VMDK8    10GB    SEAT (Stats, Events & Tasks)   /storage/seat
VMDK9    1GB     NetDumper                      /storage/netdump
VMDK10   10GB    AutoDeploy                     /storage/autodeploy
VMDK11   5GB     Inventory Service              /storage/invsvc

In addition, increasing disk capacity for a particular VMDK has been greatly simplified, as the VCSA 6.0 now uses LVM to manage each of the partitions. You can now increase disk space for a particular volume on the fly, while the vCenter Server is still running, and the changes take effect immediately. You can refer to this article here for the simple two-step process.
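As a rough sketch, the two steps look like this (VMDK6 and /storage/db are just an example; run the commands from an SSH session on the appliance):

```shell
# Step 1: In the vSphere Web Client (or via the CLI), increase the size
# of the VMDK backing the volume you need to grow, e.g. VMDK6 for /storage/db.

# Step 2: On the VCSA, tell it to expand its logical volumes into the
# newly available space. This runs online; no vCenter restart is needed.
vpxd_servicecfg storage lvm autogrow

# Verify that the volume picked up the extra capacity:
df -h /storage/db
```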

Here are some useful commands to get more details about the filesystem structure in the new VCSA.

lsblk

[Screenshot: lsblk output on the VCSA 6.0]

lsscsi

[Screenshot: lsscsi output on the VCSA 6.0]
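If you want to tailor the output, both tools accept options, and since the appliance uses LVM you can also inspect the volume layout directly (standard util-linux, lsscsi, and LVM flags, nothing VCSA-specific):

```shell
# One-line-per-device view of each disk with its size and mount point:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Map each SCSI device to its /dev node and include device sizes:
lsscsi -s

# LVM's own view: which physical volumes back which logical volumes.
pvs
lvs
```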

10 thoughts on “Multiple VMDKs in VCSA 6.0?”

  1. Looks nice! What is the recommended upgrade path from 5.5 to 6.0 with the VCSA? Is it at all possible with regards to the new disk layout?

  2. It’s about time that this sort of thinking arrived at VMware. I wish it would break out at most of the companies I have worked for. Not only the benefits of expanding disks, but increased performance, better insight into what I/O is being generated by what, and I’m sure readers here can think of many others. Coming from an HPC background, separation of app components to dedicated drives is a must for low latency and velocitized output. (I know it’s not a word, yet.)

  3. For experimental purposes, it’s worth pointing out that it’s easy to reconfigure VCSA 6.0 to use fewer virtual disks, and in the most obvious possible way: drop it down to single user mode, create your desired target disk layout, copy everything over using cp (or cpio, rsync, or whatever, so long as your choice of tool can be made to preserve symlinks, file ownership, and permission bits), then adjust /etc/fstab and other Linux boot files as appropriate for your new configuration, using (freely available) SLES reference material where necessary. Given a sufficiently similar target configuration (and a boot disk), I’m sure you could use filesystem cloning in place of my file-by-file copies, but given my limited (non-)application, consolidating everything back down to a single filesystem seemed simpler.

    The (non-)application in question was curiosity: I wanted to see if the appliance would run under KVM on a little QNAP NAS box (dual-core Bay Trail Atom CPU, 8GB RAM w/6.5 available to the guest, virtual disks on a USB 3.0-attached SATA SSD) whose Web virtualization UI didn’t support adding more than four virtual disks. Which it did, and with full out-of-box support for paravirtualized storage and networking, to boot. Performance in my tiny lab environment (3 hosts, a couple dozen VMs) was surprisingly tolerable, though noticeably worse than on, say, a 2011 Mac mini with the recommended amount of RAM.
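A minimal sketch of the copy step this comment describes, assuming the appliance is in single-user mode and the new single filesystem is mounted at /mnt/target (a hypothetical path; the rsync flags are standard):

```shell
# Copy the running system onto the new single filesystem, preserving
# symlinks, ownership, permissions, timestamps (-a), hard links (-H),
# ACLs (-A), and extended attributes (-X). Pseudo-filesystems and the
# target itself are excluded.
rsync -aHAX --exclude=/proc --exclude=/sys --exclude=/dev \
      --exclude=/mnt / /mnt/target/

# Afterwards, edit /mnt/target/etc/fstab to drop the old /storage/*
# mounts and point / at the new device, then fix up the bootloader
# configuration before rebooting into the consolidated layout.
```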

  4. If I wanted to enable VFRC on my vCenter, which VMDKs would benefit the most?

    Thanks!

Thanks for the comment!