Just wanted to give folks a heads up on an issue that a colleague of mine recently identified when provisioning Virtual Appliances (OVF/OVA) onto a VSAN datastore using the vSphere Web Client. He found that regardless of the VSAN Storage Policy selected, whether the default VSAN Storage Policy or a custom one, the Virtual Appliance is always Thick provisioned.

This behavior only occurs when using the vSphere Web Client; it is not observed with either the vSphere C# Client or the ovftool CLI. My understanding of the issue is that there are two ways a VM can be provisioned as Thin: the "old" method, which explicitly specifies the disk allocation type (Thin vs. Thick), and the "new" method, which uses VM Storage Policies. To maintain backwards compatibility for older clients, a client that specifies Thick provisioning will actually override the VM Storage Policy, even if the Object Space Reservation (OSR) capability is set to 0 (Thin provisioned). Since you can no longer specify the disk allocation type in the vSphere Web Client, the default behavior is to not Thin provision, hence the Thick provisioning result even though the default VSAN Storage Policy has OSR set to 0.
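For reference, the OSR capability maps to the VSAN proportionalCapacity rule in SPBM. As a rough illustration only, here is a sketch of creating a custom Thin policy using PowerCLI's SPBM cmdlets (the policy name "VSAN-Thin" is made up, and an active Connect-VIServer session is assumed):

$osrRule = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.proportionalCapacity") -Value 0
$ruleSet = New-SpbmRuleSet -AllOfRules $osrRule
New-SpbmStoragePolicy -Name "VSAN-Thin" -AnyOfRuleSets $ruleSet

A VM deployed with a policy like this should be Thin provisioned, which is exactly what the Web Client OVF workflow fails to honor in this case.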

Note: Thick provisioning in VSAN (proportionalCapacity = 100) is actually defined as Thin provisioned with a reservation, so there is a guarantee that space is available for the object. It is not accurate to compare this to Zeroed Thick or Eager Zeroed Thick in the VMFS/NFS world, as VSAN is an Object Store.

Engineering has already been engaged and is currently investigating the issue. We have also asked for a VMware KB to be published, so hopefully once that goes up, folks can subscribe to that for more details and updates.

In the meantime, since it is actually pretty difficult to see whether you have been affected by this issue, I have created a simple PowerCLI script called Get-VSANPolicy.ps1 which will allow you to quickly scan through your VMs and identify any that have been Thick provisioned while residing on a VSAN datastore. You can pipe either all VMs (Get-VM *) or a specific set of VMs into the script.

The following example retrieves all VMs that start with "Photon-Deployed-From-*" and extracts their current VSAN VM Storage Policy for both VM Home and individual VMDKs. Here, we can see that both VMs are using the default VSAN VM Storage Policy.

Get-VM "Photon-Deployed-From-*" | Get-VSANPolicy -datastore "vsanDatastore"

Let's now search only for VMs that have been Thick provisioned by using the -thick option and setting it to true. Here we can see that the OVF we provisioned through the vSphere Web Client is the only VM listed.

Get-VM "Photon-Deployed-From-*" | Get-VSANPolicy -datastore "vsanDatastore" -thick $true

If we want more details on the underlying VM Storage Policy that was applied, we can also set the -details option to true. Here we can clearly see that the second VM has proportionalCapacity=100, which means Thick provisioned.

Get-VM "Photon-Deployed-From-*" | Get-VSANPolicy -datastore "vsanDatastore" -thick $true -details $true

Luckily, the fix is quite easy, thanks to Paudie O'Riordan who found that it is as simple as re-applying the VSAN VM Storage Policy! (Policy Based Management FTW!) This means there is no need to perform unnecessary Storage vMotions to convert the VM from Thick to Thin; it is literally a couple of clicks in the UI.

UPDATE (07/15/16) - Thanks to reader Jose, it looks like using the vSphere Web Client to re-apply the VSAN VM Storage Policy will correctly apply the policy to the VM/VMDKs, but it does not reclaim the underlying storage. It is recommended that you use the PowerCLI script below to re-apply the policy, which will properly reclaim the underlying storage and correctly reflect the storage utilization.

As with anything, I still prefer automation, and with that I have created a second script to help with the remediation. This is also a PowerCLI script, called Set-VSANPolicy.ps1, which accepts a list of VMs and the name of the VSAN VM Storage Policy that you wish to re-apply.

Here is an example of running the script and remediating two VMs that contain multiple VMDKs:

Set-VSANPolicy -listofvms $listofvms -policy $vsanpolicy
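For those curious what the re-apply step looks like outside the script, a rough equivalent using PowerCLI's native SPBM cmdlets might look like the following sketch (the policy and VM names are placeholders, and a connected vCenter session is assumed; the actual Set-VSANPolicy.ps1 script is what was tested for this fix):

$policy = Get-SpbmStoragePolicy -Name "Virtual SAN Default Storage Policy"
$vm = Get-VM -Name "Photon-Deployed-From-OVA-1"
# Re-apply the policy to the VM Home object
Get-SpbmEntityConfiguration -VM $vm | Set-SpbmEntityConfiguration -StoragePolicy $policy
# Re-apply the policy to each individual VMDK
Get-HardDisk -VM $vm | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy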

If you now re-run the first script, you should see that you no longer have any Thick provisioned VMs (this may take some time depending on the size of your VMs).

15 thoughts on “Heads Up: OVF/OVA always deployed as Thick on VSAN when using vSphere Web Client”

  1. I encountered this too, but in a good way for me! I was testing the Data Domain Virtual Edition with VxRail, which uses Virtual SAN, and for performance reasons ‘thick’ disks are recommended. Deploying the OVA resulted in a Thick provisioned VM (2 disks), however when I added an additional 1TB Disk it was ‘Thin’. I created a ‘Thick’ Storage Policy and applied it to the VM, and subsequent Disks deployed for testing. https://pbradz.wordpress.com/2016/05/26/vxrail-and-data-domain-virtual-edition/

  2. Hi, I am using PowerCLI 6.3 Release 1 but the Get-VSANPolicy command is not available in PowerCLI. Is there any extension required to be installed?

    • Ranjna,

      Get-VSANPolicy is not a native PowerCLI cmdlet, but rather a function I created. You’ll need to download the script which I have referenced in the post above to use it 🙂

  3. Hi William,

    I have faced this behavior since day one with v6.1. I read your post the same day you posted it and tried the workaround, but my results are still contradictory. Using the PS script to check the Storage Policy, everything seems to be alright, but at the VM level the disks are still thick (when you check through “Edit Settings”). I made the assumption that at the VSAN level it's thin and it doesn't matter that the VM config shows thick. Also, I didn't see any reduction in consumed space (it consumed the full space).

    But I made another test. I deployed the OVF (vROps) using PowerCLI, forcing it to use thin. To my surprise, the vApp is consuming the right amount of space now. Rather than close to 300GB of used space, it's now consuming 37GB. Even checking the free space of the vsanDatastore, I now have close to 400GB free, whereas before using the workaround I had 100GB free.

    I have screenshots if you want to see what I'm describing.


    • Hi Jose,

      The vSphere Web Client will not correctly show the Disk Allocation due to this known issue; the PowerCLI script is what you should use to validate.

      • Hi William,

        I found the reason for this behavior.

        – If you re-apply the Storage Policy through the Web Client and check the status with the script, the value is proportionalCapacity=0, but the VMDK space is not reclaimed (it seems this method through the Web Client doesn't work).
        – But if you use the SET script to force the application of the Storage Policy, it works properly. The proportionalCapacity=0 was already there, but something happens through the script that isn't happening with the Web Client.

        Ultimately, after trying this process with many VMs, the only approach that worked properly was applying the Storage Policy with the PowerCLI script. Using the Web Client to re-apply the Storage Policy never worked; it changed the value of proportionalCapacity to 0 but didn't release space on the datastore.


        • Hi Jose,

          Thanks for the feedback. There was a recent KB that was published on this recommendation, so I’ll share this back internally to have them recommend the PowerCLI script. I’ll also update my article with this information. Thanks again!

  4. This prompted me to do some testing around the presentation of EZT disks on vSAN (used for multi-writer functionality in Oracle RAC clusters).

    You can effectively “hide” EZT VMDKs on a vSAN datastore because if they're assigned a storage policy with 0% OSR, either purposefully or by accident, then the datastore reports usage as though the VMDKs are thin. The ways I've currently found to view the actual storage used by the VMDK:
    — Datastore browser
    — vsan.vm_object_info in RVC
    — Fat client view of the VM

    I’d imagine there are also API calls that will surface the same info, but I’m not as well versed on that front.

    • I recall running the latest version, which was PowerCLI 6.3 R1, but didn't run into the bug you mentioned. Since then, I've upgraded to several other versions for beta testing, so I can't say 100%. Let me ping Alan Renouf to see if he's able to chime in on this.

      • Did this ever get figured out? I’m still having the same issue with PowerCLI / Powershell 5.0

  5. I was totally shocked after migrating my VMs from an “old” SAN datastore to a new VSAN cluster with Veeam migration.

    Used before: 7.38 TB
    Used After: 7.06 TB
    Savings: 322.11 GB
    Ratio: 1.05x

    After executing Set-VSANPolicy, I get these results.

    Used before: 6.45 TB
    Used After: 4.09 TB
    Savings: 2.36 TB
    Ratio: 1.58x

    Brilliant solution, thanks a lot.

    • @Mario

      Did you read the blog post? It literally offers multiple solutions (Storage vMotion, manually re-applying the storage policy via the GUI, a PowerShell script for batch fixing multiple VMs).
