A really cool new capability introduced in vSphere 6.7 is support for the extremely fast memory technology known as non-volatile memory (NVM), also referred to as persistent memory (PMem). Customers can now benefit from the high data transfer rates of volatile memory combined with the persistence and resiliency of traditional storage. As of this blog post, both Dell and HP have Persistent Memory support, and you can see the lists of supported devices and systems here and here.


PMem can be consumed in one of two methods:

  • Virtual PMem (vPMem) - In this mode, the GuestOS IS PMem-aware and can consume the PMem device directly, which is exposed to the VM as a new virtual NVDIMM device. This requires VM Virtual Hardware 14 as well as a GuestOS that is PMem-aware
  • Virtual PMem Disks (vPMemDisk) - In this mode, the GuestOS is NOT PMem-aware and does not have access to the physical PMem device. Instead, a new virtual PMem hard disk can be created and attached to a VM. To ensure the PMem hard disk is placed on the PMem datastore as part of this workflow, a new PMem VM Storage Policy is applied automatically. There are no additional GuestOS or VM Virtual Hardware requirements for this scenario, which makes it great for legacy OSes that are not PMem-aware

Customers who may want to familiarize themselves with these new PMem workflows, especially for Automation or educational purposes, could definitely benefit from the ability to simulate PMem in their vSphere environment prior to obtaining a physical PMem device. Fortunately, this is something you can actually do if you have some additional spare memory from your physical ESXi host.

Disclaimer: This is not officially supported by VMware. Unlike a real physical PMem device where your data will be persisted upon a reboot, the simulated method will NOT persist your data. Please use this at your own risk and do not place important or critical VMs using this method.

In ESXi 6.7, there is an advanced boot option which enables you to simulate or "fake" PMem by consuming a percentage of your physical ESXi host's memory and allocating that to form a PMem datastore. You can append this boot option during the ESXi boot-up process (e.g. Shift+O) or you can easily manage it using ESXCLI, which is my preferred method.
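For example, if you go the boot-time route, you would append the kernel option at the Shift+O prompt; a minimal sketch, assuming you want to allocate 25% of host memory:

fakePmemPct=25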

Run the following command and replace the value with the desired percentage for PMem allocation:

esxcli system settings kernel set -s fakePmemPct -v 25

Note: To disable fake PMem, simply set the value back to 0.
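For example:

esxcli system settings kernel set -s fakePmemPct -v 0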

You can also verify whether fake PMem is enabled and view its currently configured value by running the following command:

esxcli system settings kernel list -o fakePmemPct
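If you just need the value for scripting purposes, ESXCLI's alternate output formatters can help; a quick sketch using the keyvalue formatter (the exact field names in the output may vary by ESXi build):

esxcli --formatter=keyvalue system settings kernel list -o fakePmemPct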

For the changes to take effect, you will need to reboot your ESXi host.
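If you prefer to stay in the shell for this step as well, you can reboot using ESXCLI; note that the host must be placed into maintenance mode first, and the reason string below is just an example:

esxcli system maintenanceMode set -e true
esxcli system shutdown reboot -r "Applying fakePmemPct change"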

Once the ESXi host has been rebooted, you can confirm the changes were applied by logging directly into the Embedded ESXi Host Client (https://[ESX-IP]/ui) of your ESXi host, where you should now see that a new PMem datastore has been automatically created, as shown in the screenshot below. You now have a PMem datastore that has been constructed using a portion of your physical ESXi host's memory, how cool is that!? In case it was not obvious, do not place important or critical VMs that you wish to persist upon a reboot on this datastore. This should only be used for educational or testing purposes, you have been WARNED AGAIN.
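You should also be able to confirm this from the ESXi Shell; a quick sketch, assuming the PMem datastore is reported by the filesystem list (the grep filter is just for readability):

esxcli storage filesystem list | grep -i pmem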


vPMem Workflow

Using either a new or an existing vHW 14 VM, you should now be able to add a new NVDIMM device using either the vSphere Web or H5 Client.

As part of the vPMem workflow, an NVDIMM controller will automatically be added for you when using the UI. From here, you will be able to see the available amount of PMem storage for consumption, so you can allocate accordingly.

At this point, the rest of the configuration happens within the GuestOS, as the steps depend on how that OS consumes a PMem device. One easy way to verify the workflow is working is by running Nested ESXi as the GuestOS. It turns out ESXi itself can actually consume a virtual PMem device, and after the ESXi VM boots up, you can log in to its Embedded Host Client UI, where you should see a PMem datastore just like you did on your physical ESXi host 🙂 Hats off to our Engineers for enabling this path, especially for learning/testing purposes.
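If you are using a PMem-aware Linux distribution as the GuestOS instead, a minimal sketch of consuming the virtual NVDIMM might look like the following, assuming the guest exposes it as /dev/pmem0 (device names and DAX support will vary by kernel and filesystem):

ls /dev/pmem*                       # the virtual NVDIMM should show up as a pmem block device
mkdir -p /mnt/pmem                  # mount point for the PMem-backed filesystem
mkfs -t ext4 /dev/pmem0             # create a DAX-capable filesystem
mount -o dax /dev/pmem0 /mnt/pmem   # mount with DAX for direct, byte-addressable access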

vPMemDisk Workflow

Create a new VM and you should see an option during the Storage selection to specify either a Standard or PMem datastore. The latter option will store all VMDKs on the PMem datastore, and you just need to select a regular vSphere datastore for the VM home directory, as shown in the screenshot below.

When specifying the capacity of your VMDK, you will also see the available amount of PMem storage that you can allocate from, along with the default PMem VM Storage Policy applied to ensure correct placement of the VMDK.

Note: You can also consume vPMemDisk with an existing VM by attaching a newly created PMem hard disk.

Although the above is purely for education and learning purposes, I am curious to see whether folks might consider using vPMemDisk in their vSphere home lab environments as a way to easily accelerate certain workloads, perhaps for demos, especially if they have additional memory to spare on their physical ESXi hosts.

