
Changing the default size of the ESX-OSData volume in ESXi 7.0

05/02/2020 by William Lam

In ESXi 7.0, a new partition scheme was introduced, which also brings along a new set of storage requirements. These changes are explained in the official documentation here, and VMware KB 77009 contains some additional information which can be helpful. Storage changes are never easy, but this one was necessary not only to better support some of the current capabilities but, more importantly, to set up the foundation for future ESXi capabilities.

The biggest change to the partition layout is the consolidation of the VMware Tools Locker, Core Dump and Scratch partitions into a new ESX-OSData volume (based on VMFS-L). This new volume can vary in size (up to 138GB) depending on a number of factors, including the type of ESXi boot media (USB, SD-Card, local disk) as well as the size of the device itself, as explained in the official documentation.

Judging from comments on Twitter and Reddit and the direct inquiries I have received, this new behavior seems to impact smaller homelabs the most, where a fresh install of ESXi 7.0 has been performed. Folks have shared that their ESX-OSData volume has taken up 120GB, which can be quite significant on the smaller disks that are common in homelabs. I normally install ESXi on a USB device and I also use vSAN, which behaves differently, and I have also not yet upgraded my physical ESXi host (E200-8D) to 7.0.

I performed a fresh installation of ESXi 7.0 (running as a Nested ESXi VM) configured with 1TB of storage, and here is what the filesystem layout now looks like:


We can see that the ESX-OSData volume takes up ~119.75GB, which is not too bad for a 1TB volume, but I can understand this may not be ideal if you have something smaller such as a 250GB to 512GB disk. Due to the size of the local device, the boot options mentioned in the KB would not be helpful, and I was curious whether the ESX-OSData volume size could be made configurable. After doing some research, it looks like the size of ESX-OSData can be specified using an ESXi boot option (entered with SHIFT+O during the initial boot of the installer) called autoPartitionOSDataSize.
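
For reference, a rough sketch of what this looks like at the installer boot prompt: press SHIFT+O and append the option to whatever boot line is already shown (the exact contents of that existing line can vary by release; the value is in MB and 4096 below is just an illustrative choice):

  > runweasel autoPartitionOSDataSize=4096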

UPDATE (12/17/20) - Official support for specifying the size of ESX-OSData has been added in the ESXi 7.0 Update 1c release with a new ESXi kernel boot option called systemMediaSize, which takes one of four values:

  • small = 33GB
  • medium = 69GB
  • default = 138GB (default behavior)
  • max = Consumes all available space

If you do not require or do not have 138GB for ESX-OSData, you can override the default behavior by appending this option with the desired value (e.g. systemMediaSize=small). It is worth noting that with this setting, the smallest ESX-OSData volume you can configure is 33GB. For homelabs or environments which require less than this, you would have to fall back to the autoPartitionOSDataSize parameter, which is not officially supported, as mentioned below.
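
As a quick illustration, appending the new option at the installer boot prompt (SHIFT+O) might look something like the sketch below; the existing contents of the boot line will vary by build:

  > runweasel systemMediaSize=small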

Disclaimer: This may not be officially supported by VMware as it deviates from the system defaults and can have other unintended behaviors. Use at your own risk.

I performed another fresh installation of ESXi 7.0, this time passing in autoPartitionOSDataSize=4096 (the value is in MB), and we can now see that our ESX-OSData volume is no longer using 120GB as before.


You should size your ESX-OSData volume to be greater than 4GB to ensure that coredump files can be created; as you can see in my example, 50% has already been used up. Since this new volume will store some pretty important files, you should really give yourself a buffer if you decide to deviate from the system defaults.
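
If you want to verify the resulting layout yourself from the ESXi Shell or over SSH, one simple way is shown below; treat it as a sketch, since the exact volume labels and output columns will differ per host:

  esxcli storage filesystem list   # ESX-OSData shows up as a VMFS-L volume (label beginning with OSDATA-)
  df -h                            # quick view of per-volume size and usage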

Interestingly, I also ran another experiment in which I upgraded from ESXi 6.7 Update 3 to ESXi 7.0. Here is the partition layout before the upgrade


and here is the partition layout after the upgrade, on the exact same 1TB disk, which was completely empty. I suspect the size of ESX-OSData was due to my selection of preserving the existing VMFS volume.
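
If you would like to compare the raw partition tables yourself before and after an upgrade, partedUtil can be run from the ESXi Shell; a sketch is below, and note that the device name is only an example and will differ on your host:

  partedUtil getptbl /vmfs/devices/disks/mpx.vmhba0:C0:T0:L0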



Comments

  1. nosbigys says

    05/02/2020 at 2:39 pm

    William – thanks for posting this. When you say “performed fresh install using autoPartitionOSDataSize” – did you set this value as part of the “install” directive line in the kickstart, or as a separate standalone config option?

    • William Lam says

      05/02/2020 at 2:47 pm

      As mentioned in the blog post, this is an ESXi boot option. You can either append it interactively (SHIFT+O) when the installer boots up OR, if you're doing it via Kickstart, append it to the kernelopt line just like you would any other ESXi boot option.
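
      For example, a minimal sketch of a kernelopt line in the installer's boot.cfg might look like the following; the kickstart path and any existing options are just placeholders and will depend on how you build your install media:

        kernelopt=ks=cdrom:/KS.CFG autoPartitionOSDataSize=4096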

  2. kios says

    05/02/2020 at 5:30 pm

    William, an explanation about the new storage requirements when installing on USB/SD would be great. There is only a little information about running with USB only and a deprecated mode, and that you should redirect scratch like before, but I cannot find an explanation of the deprecated mode and what its impact is.

  3. André says

    05/03/2020 at 3:26 am

    Thanks a lot for clarifying things. Yet another reason why it proves valuable to follow your blog!
    With the minimum recommended size of 32GB for HDD installations, and taking the two 4GB bootbank partitions into account, "autoPartitionOSDataSize=24576" (24GB) should be a good compromise for physical test/lab environments with limited disk space.

  4. Maher AlAsfar says

    05/04/2020 at 5:01 am

    Hi William. I noticed when using your vSphere 7 OVA Nested ESXi template that I get a warning about the core dump being missing and needing to be configured, and I have not found a way to fix it yet. Is this related to this blog post?

    • William Lam says

      05/04/2020 at 10:50 am

      Not directly related to this blog post, but it's a new warning message in case you don't have a coredump configured. See https://www.virtuallyghetto.com/2020/05/quick-tip-suppress-new-core-dump-warning-in-esxi-7-0.html for how to toggle it off; I don't do so by default in case folks want to set one up afterwards.

  5. Jesse says

    05/07/2020 at 3:55 pm

    Just an added note, this boot parameter does *not* work if you’re installing to a USB device.

  6. bravo says

    07/01/2020 at 10:07 pm

    Hi, William
    Thanks for your information. I tried to install ESXi 7.0 on a local SSD drive with a capacity of around 128GB. After finishing the setup, I don't see the local drive as a datastore. Is the reason I don't see it that the VM image or data is stored there and it can't be used?

  7. Albert says

    09/05/2020 at 4:04 am

    Hi William, can you do a fresh install of ESXi 7.0 to a 128GB or 256GB SD card or USB without needing an additional local disk or ramdisk? I read the storage requirements article but it is confusing, as it says for SD or USB you always need an additional local disk or it will run in deprecated mode.

  8. Erik Bakker says

    01/17/2021 at 5:48 am

    Hi William,

    Yesterday I decided to reinstall my homelab with version 7.0.1c and I wanted to re-use (as I did before) the autoPartitionOSDataSize parameter.

    It turns out that in the latest version this parameter is not only unsupported but no longer works at all. No problem, so I decided to use the systemMediaSize parameter as you mentioned (this worked!).

    It also turns out that the information in this blog post is not exactly accurate.

    You say the values are 'small, medium, default and max', but actually they are 'min, small and max'.

    See: https://kb.vmware.com/s/article/81166

    The sizes mentioned in the VMware article are also incorrect: min means 25GB and not 33GB, and small was 55GB and not 69GB (I tried it in my lab).

    Hope this helps someone

    Regards
