
virtuallyGhetto


ssd

Quick Tip – Crucial NVMe SSD not recognized by ESXi 6.7 & 7.0

05/19/2019 by William Lam 65 Comments

If you own or have recently purchased a Crucial NVMe SSD such as the CT1000P1SSD8 (1TB M.2 NVMe SSD) or CT500P1SSD8 (500GB M.2 NVMe SSD), please be aware that these devices may not be recognized by ESXi after upgrading to the latest release. Thanks to Pete Lindley (OCTO for End-User Computing), who reached out last week regarding the observation as well as a workaround for the problem. This was also quite timely, as I recently purchased a Crucial M.2 NVMe SSD and would have also run into this problem.

It turns out these Crucial devices were working fine while running on ESXi 6.5 Update 2 but are no longer recognized in the latest release, ESXi 6.7 Update 2. It is unclear whether support for these SSDs was removed intentionally or unintentionally, but in either case, these devices are not officially on VMware's Hardware Compatibility List (HCL).

UPDATE (07/29/20) - Over the past few months, a number of folks have shared feedback that using the trick mentioned below for ESXi 7.0, they have had success with ESXi detecting their NVMe SSDs. I wanted to share some of the models and/or vendors that folks have reported success with. I will keep this list updated, so feel free to leave a comment below.

  • OWC Aura Pro X2 2TB NVMe
  • ADATA XPG
  • Sabrent

UPDATE (06/13/20) - Thanks to reader Dave, it looks like this trick also works with ESXi 7.0, but the filename has changed. Simply copy the nvme.v00 VIB from ESXi 6.5 Update 2 and replace it on the ESXi 7.0 system (either live under /bootbank or as part of the installer), but rename the file to nvme_pci.v00, which is the new filename for the NVMe driver.

UPDATE (05/23/19) - After speaking with a few folks who took a closer look, the issue is due to the fact that we added support for the NVMe 1.3 spec in the latest ESXi 6.7 Update 2 release, but because these are "consumer" devices that do not conform to the latest specification, the driver is unable to claim them. This is another good reminder that using components not on the VMware HCL is always a risk from a home lab perspective. In general, Samsung and Intel NVMe SSDs usually work quite well without issues, but it is always good to do some research. I think Engineering is looking to see if there are other workarounds for the future, but for now, you can use the workaround below.

The easy workaround that Pete found was to simply replace the NVMe driver from ESXi 6.7 Update 2 (1.2.2.27-1vmw.670.2.48.13006603) with the one found in ESXi 6.5 Update 2 (1.2.1.34-1vmw.650.2.50.8294253). To do so, simply copy nvme.v00 to /bootbank from either an existing ESXi 6.5 Update 2 system or directly from the ISO. Please note, any future updates or patches to the ESXi host will most likely overwrite the replaced driver.
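
Here is a minimal sketch of what that file swap might look like from the ESXi Shell, assuming you have already uploaded the ESXi 6.5 Update 2 copy of nvme.v00 to /tmp on the host (the /tmp staging path is just an example):

# On ESXi 6.7, overwrite the NVMe driver in the active bootbank
cp /tmp/nvme.v00 /bootbank/nvme.v00

# On ESXi 7.0, the driver filename changed, so rename it during the copy
cp /tmp/nvme.v00 /bootbank/nvme_pci.v00

# Reboot the host so the replaced driver is loaded
reboot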


Filed Under: ESXi, Home Lab, Not Supported, vSphere 6.5, vSphere 6.7, vSphere 7.0 Tagged With: Crucial, ESXi 6.5 Update 2, ESXi 6.7 Update 2, M.2, NVMe, nvme.v00, ssd

SMART drive data now available using vSAN Management 6.6 API

04/19/2017 by William Lam 1 Comment

One of the major storage enhancements introduced in vSphere 5.1 as part of the new I/O Device Management (IODM) framework was the addition of SMART (Self-Monitoring, Analysis and Reporting Technology) data for monitoring FC, FCoE, iSCSI and SAS protocol statistics, which is especially useful for monitoring the health of an SSD device. Historically, there was no public vSphere API to consume this information, and customers had to rely on ESXCLI, which is not very friendly from a programmatic standpoint.
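
For reference, this is roughly what the ESXCLI route looks like when run directly on an ESXi host (the device identifier below is just a placeholder for one of your own devices):

# Display SMART data for a specific storage device
esxcli storage core device smart get -d naa.XXXXXXXXXXXXXXXX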


One of the nice enhancements that was introduced in vSAN 6.6 from an API standpoint is that you can now access SMART data using the vSAN Management 6.6 API. The other really cool thing about this enhancement is that although this API was added under the vSAN Management API, you do not actually have to be using vSAN to be able to use this new API!

There are two methods by which you can access the SMART data:

  • vCenter Server - When connecting to a vCenter Server, you can access the VsanQueryVcClusterSmartStatsSummary() method, which is available as part of the VsanVcClusterHealthSystem; you simply provide it the name of a vSphere Cluster.
  • ESXi Host - When connecting directly to an ESXi host, you can access the VsanHostQuerySmartStats() method which is available as part of the HostVsanHealthSystem.

To demonstrate how these two new APIs work, I have created two sample scripts: vsan-smarts-data-sample.py using the vSAN Management SDK for Python and VSANSmartsData.ps1 using the new PowerCLI Get-VsanView cmdlet.

Here is an example of running the python sample:

python vsan-smarts-data-sample.py -s 192.168.1.200 -u '*protected email*' -p 'VMware1!' -c VSAN-Cluster


Here is an example of running the PowerCLI sample:

Get-VSANSmartsData -Cluster VSAN-Cluster


Filed Under: Automation, ESXi, PowerCLI, VSAN, vSphere 6.5 Tagged With: esxcli, PowerCLI, pyVmomi, SMART, ssd, VSAN 6.6

Apple Mac Pro 6,1 PCIe SSD issue resolved w/ESXi 6.0 Update 2

03/15/2016 by William Lam 6 Comments

Early last year, the new Apple Mac Pro 6,1 (aka black can design) was certified and fully supported on vSphere 6.0, which I had blogged about here. Several months later, customers discovered that some of the newer Mac Pro 6,1 units were shipping with a different model of PCIe SSD device than what was originally released at GA. This was problematic because ESXi was not aware of this newer device and could not detect it during or after installation. Although a workaround was identified for customers looking to install either ESXi 5.x or 6.x on the newer Apple Mac Pros, it definitely was not ideal.

It has taken a bit longer than expected, but the issue has now been resolved with the latest release of ESXi 6.0 Update 2. A similar fix will be available for customers running ESXi 5.5 in a future update. You can find the direct download for ESXi 6.0 Update 2 in the link below, which includes a pointer to the release notes in case you are interested in the other fixes included in this release.

  • vSphere ESXi 6.0u2 - https://my.vmware.com/web/vmware/details?downloadGroup=ESXI60U2&productId=491&rPId=10348

Filed Under: Apple, ESXi, vSphere Tagged With: apple, esxi, mac pro, ssd, vSphere 6.0 Update 2

Quick stats for the VSAN HCL

06/13/2014 by William Lam 3 Comments

I noticed there was a new blog post this morning from Wade Holmes on an update to the VSAN HCL, and I thought it might be useful to provide some quick stats on all the partners who have supported components listed on the VSAN HCL, such as storage controllers, SSDs and MDs. As of today (06/13/14), the information below is the latest from the VSAN HCL. I will make adjustments to the Google doc as updates are made to the VSAN HCL.

Disclaimer: The VMware VSAN HCL should still be used as the official source when selecting components for your VSAN environment.

Total VSAN Storage Controllers: 89
GDoc for All VSAN Controllers - https://docs.google.com/spreadsheets/d/1FHnGAHdQdCbmNJMyze-bmpTZ3cMjKrwLtda1Ry32bAQ

Vendor Controllers
Cisco 2
Dell 5
Fujitsu 11
HP 7
IBM 6
Intel 18
LSI 37
SuperMicro 3

Note: If you would like to help contribute to the "Community" VSAN storage controller queue depth list, please take a look at this article for more details.

Total VSAN SSDs: 110
GDoc for All VSAN SSDs - https://docs.google.com/spreadsheets/d/1FHnGAHdQdCbmNJMyze-bmpTZ3cMjKrwLtda1Ry32bAQ/edit#gid=858526558

Vendor SSDs
Cisco 5
Dell 15
EMC 5
Fujitsu 4
Fusion-IO 15
Hitachi 9
HP 15
IBM 9
Intel 12
Micron 7
Samsung 3
SanDisk 6
Virident Systems 5

Total VSAN MDs: 97
GDoc for All VSAN MDs - https://docs.google.com/spreadsheets/d/1FHnGAHdQdCbmNJMyze-bmpTZ3cMjKrwLtda1Ry32bAQ/edit#gid=1993745998 

Vendor MDs
Cisco 8
Dell 20
Fujitsu 13
Hitachi 1
HP 19
IBM 20
Lenovo 3
Seagate 13

Filed Under: VSAN, vSphere 5.5 Tagged With: esxi 5.5, hdd, md, ssd, storage controller, VSAN, vSphere 5.5

How to run Nested ESXi on the vCloud Hybrid Service?

05/02/2014 by William Lam 7 Comments

Today I was granted access to VMware's vCloud Hybrid Service and the first order of business for me, of course, was to provision a Nested ESXi VM! After going through the vCHS UI (which is very slick and easy to use, by the way) and the vCloud Director UI, I realized the ESXi guestOS type had not been enabled on the backend of the vCloud Director Database. This totally makes sense, as vCHS is a production-ready service and they definitely would not want to run anything that is not officially supported.

Having said that, I can see the benefits to customers who would like to build out a Nested ESXi environment on vCHS for lab purposes instead of having to manage their own. Some customers even leverage Nested ESXi as part of their development and testing of software, and it can be challenging at times to quickly spin up a brand new environment. Instead, they can go to vCHS and, with just a couple of clicks in the UI or automatically using the vCloud APIs, provision a couple of Nested ESXi instances for testing. You can easily discard the resources once you are done or keep them running a bit longer.

Having worked with vCloud Director in the past, I knew that you could import an OVF/OVA and I thought maybe I could just import the Nested ESXi OVF templates that I built and potentially work around this vCHS "limitation" 🙂

Disclaimer: Nested ESXi and Nested Virtualization are not officially supported by VMware, nor are they supported on vCHS

I tried to upload one of the OVF templates that I built, but it turns out vCloud Director does not support the Dynamic Disks feature, so I had to perform two additional steps.

Step 1 - Download one of the following Nested ESXi OVF templates

  • Single Nested ESXi VM Template
  • 3-Node VSAN Nested ESXi VM Template
  • 32-Node VSAN Nested ESXi VM Template

Step 2 - Import the OVF template into an existing vSphere environment and ensure you are doing so using the vSphere Web Client, as some of the properties may not be imported properly otherwise

Step 3 - Once deployed, go ahead and re-export the image to an OVF/OVA (I chose OVA as it is a single file). This will generate the empty VMDKs for you, so the image should still be very small (< 1MB)

Step 4 - Log in to your vCHS account and click on your Virtual Datacenter. Select Virtual Machines and then click on Manage in vCloud Director. Import the OVF/OVA that you have just exported

Step 5 - Once the import has been completed, you now have a Virtual Machine configured with the correct guestOS type, which should be VMware ESXi 5.x, as seen in the screenshot below

[Screenshot: nested-esxi-on-vchs-2]
Step 6 - At this point, you can either mount an ESXi ISO through your browser or upload it into the vCloud Director Catalog so you can mount it locally and begin your installation of ESXi. Below is a screenshot of 3 Nested ESXi VMs running on vCHS

[Screenshot: nested-esxi-on-vchs-3]
Note: It looks like some of the advanced VM settings that are part of my OVF template are ignored as part of the vCloud Director import. This means that if you would like to run a Nested VSAN environment on vCHS, you will not be able to rely on the SSD emulation setting; instead, you will need to run through the ESXCLI claim rules to mark particular disks as "SSD" devices. It would have been really nice if vCloud Director preserved all the advanced VM settings, but at least you can still run a Nested VSAN environment.

So there you have it, Nested ESXi running on vCHS! I am kind of curious if this is the first instance of a Nested ESXi VM running on vCHS without having admin access on the backend system?

Note: One limitation to be aware of is that since the backend of vCloud Director is not properly enabled for Nested Virtualization support, you will NOT be able to run Nested VMs on top of the Nested ESXi instances. This is due to the lack of a Network Pool with both Promiscuous Mode & Forged Transmits enabled, which is a requirement for proper Nested VM connectivity. I wonder if vCHS should provide Nested Virtualization capabilities? I know I definitely would like to see it, what do you think? Leave a comment if you have some thoughts on this topic.

UPDATE (05/4/14) - If you wish to run a Nested VSAN environment on vCHS, you will need to take a look at this blog post here on how to "fake" an SSD on one of the devices by using ESXCLI claim rules. The reason for this is that you will not be able to leverage the other method of emulating an SSD device via an advanced setting, as that requires access to the underlying vSphere environment, which you will not have in vCHS.
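
For completeness, the ESXCLI claim rule approach looks roughly like the following (the device identifier is just a placeholder; refer to the linked post for the full walkthrough):

# Add a claim rule that tags the device as an SSD
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=mpx.vmhba1:C0:T1:L0 --option="enable_ssd"

# Reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0

# Verify the device is now reported with "Is SSD: true"
esxcli storage core device list -d mpx.vmhba1:C0:T1:L0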


Filed Under: ESXi, Nested Virtualization, VSAN, vSphere Tagged With: esxi, nested, nested virtualization, ssd, vCHS, vcloud hybrid service, VSAN

VSAN Flash/MD capacity reporting

04/29/2014 by William Lam Leave a Comment

One of the capabilities available with VSAN when creating a VM Storage Policy is the ability to specify the amount of Flash to reserve for a Virtual Machine object as a read cache. For Virtual Machines that require high levels of performance, you can assign this policy to the Virtual Machine and VSAN will ensure a percentage of the Flash capacity is provided to your workload.

[Screenshot: vsan-flash-md-capacity-report-3-NEW]
A couple of weeks back I was asked whether it was possible to report on the total amount of Flash capacity available to a VSAN Cluster, including what has been reserved and what is in use. I thought this was a great idea, as users would probably want to be able to see their utilization over time and ensure they do not over-provision their Flash capacity.

For those of you who have used RVC, this information is somewhat available today using the vsan.disks_stats command. The only problem is that this information is only provided at a per-device level for each ESXi host and not in an aggregate view for the entire VSAN Cluster.
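
As a point of reference, the RVC command is run against a cluster path, along these lines (the datacenter and cluster names are placeholders):

# From within RVC, show per-device VSAN stats for the hosts in a cluster
vsan.disks_stats /localhost/Datacenter/computers/VSAN-Cluster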

[Screenshot: vsan-flash-md-capacity-report-0]
Leveraging the work I had done earlier exploring the VSAN API and looking at the VSAN component count, I was able to extract the necessary information to provide an aggregate view. To demonstrate this functionality, I have created two sample scripts: a vSphere SDK for Perl script called vsanFlashAndMDCapacity.pl and a PowerCLI script called vsanFlashAndMDCapacity.ps1.

Disclaimer: These scripts are provided for informational and educational purposes only. They should be thoroughly tested before attempting to use them in a production environment.

Both scripts work exactly the same way; you just need to connect them to a vCenter Server that has at least one VSAN Cluster. The script will automatically search for all VSAN-enabled vSphere Clusters and provide the following information:

  • Total SSD Capacity
  • Total SSD Reserved Capacity
  • Total SSD Used Capacity
  • Total MD Capacity
  • Total MD Reserved Capacity
  • Total MD Used Capacity

Here is an example screenshot for the vSphere SDK for Perl script:

[Screenshot: vsan-flash-md-capacity-report-1]
Here is an example screenshot for the PowerCLI script:

[Screenshot: vsan-flash-md-capacity-report-2]
One question I had while looking at the results was what the "Used" property actually means. After learning about the details from engineering, I think this is best explained with an example.

Let's say there are 2 VSAN objects:

  • Object1: Configured size: 100GB, space reservation 10%, actual data written 5GB.
  • Object2: Configured size: 100GB, space reservation 10%, actual data written 15GB.

This would mean:

Object1:
Configured/Provisioned: 100GB
Reserved: 10GB
Physical Used: 5GB
Used: 10GB

Object2:
Configured/Provisioned: 100GB
Reserved: 10GB
Physical Used: 15GB
Used: 15GB

The "Used" property is then calculated as the MAX(Physical Used, Reserved). I have also shared this information with engineering, perhaps they may consider adding this information to RVC 🙂 If you think this is something you would like to see in RVC, please leave a comment.


Filed Under: Automation, VSAN, vSphere 5.5 Tagged With: esxi 5.5, flash, PowerCLI, ssd, VSAN, vSphere 5.5, vSphere API

