virtuallyGhetto

vmfs

Thunderbolt 3 enclosures with (Single, Dual & Quad) M.2 NVMe SSDs for ESXi

06/03/2019 by William Lam 15 Comments

Thunderbolt 3 (TB3), and eventually USB 4, is a really fascinating technology, and I believe it still has so much untapped potential, especially when looking at Remote/Branch Office (ROBO), Edge and IoT types of deployments. TB3 was initially limited to Apple-based platforms, but in the last couple of years adoption has been picking up across a number of PC desktops/laptops, including the latest generations of Intel NUCs, which are quite popular for vSphere/vSAN/NSX Home Labs. My hope with USB 4 is that in the near future we will start to see servers with this interface show up in the datacenter 🙂

In the meantime, I have been doing some work with TB3 from a home lab standpoint. Some of you may have noticed my recent work on enabling Thunderbolt 3 to 10GbE for ESXi, and it should be no surprise that the next logical step was TB3 storage. Using a Thunderbolt interface to connect to external storage, usually Fibre Channel, is something many of our customers have been doing for quite some time. In fact, I have a blog post from a few years back which goes over some of the solutions customers have implemented, with the majority use case being virtualizing macOS on ESXi for iOS/macOS development. These solutions were usually not cheap and involved a sizable amount of infrastructure (e.g. storage arrays, network switches, etc.) but worked very well for large vSphere/macOS-based environments.

[Read more...] about Thunderbolt 3 enclosures with (Single, Dual & Quad) M.2 NVMe SSDs for ESXi


Filed Under: ESXi, Home Lab, VSAN, vSphere Tagged With: homelab, M.2, NVMe, thunderbolt 3, vmfs, VSAN

Configure new automatic Space Reclamation (VMFS UNMAP) using vSphere 6.5 APIs

10/31/2016 by William Lam 6 Comments

Since its first introduction in vSphere 5.5, VMFS UNMAP, also known as Space Reclamation for VMFS-based datastores, has been a pretty popular storage capability in vSphere. A commonly asked question from customers is when the "automatic" capability will return. Well, it looks like it is now back in the upcoming vSphere 6.5 release, as blogged about here by Duncan Epping. Below is a screenshot of where you can find the setting. VMFS UNMAP is now enabled by default, and you will need a VMFS 6 datastore to take advantage of this new feature.

[Screenshot: vmfs-unmap-vsphere-65-api-0]
For customers who wish to automate the configuration of the VMFS UNMAP capability, whether that is to check the current settings or to enable/disable it, there are some new vSphere 6.5 APIs which differ from the previous implementations. To change the VMFS UNMAP setting, there is a new vSphere API called UpdateVmfsUnmapPriority() which accepts the UUID of a VMFS 6 datastore as well as an unmapPriority property, which can either be "low", meaning it is enabled, or "none", meaning it is disabled. To view the current VMFS UNMAP setting, there is a new property under the Datastore->Info->Vmfs object called UnmapPriority.
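For those who prefer to call the vSphere API directly, below is a minimal pyvmomi sketch of reading and updating the setting. The connection details and datastore name are placeholders, and the vmfsUuid keyword name is an assumption based on the API reference:

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import atexit, ssl

# Placeholder connection details
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
atexit.register(Disconnect, si)
content = si.RetrieveContent()

# Locate the VMFS 6 datastore by name (same datastore as in the PowerCLI examples below)
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
ds = next(d for d in view.view if d.name == 'mini-local-datastore-hdd')

# Current setting lives under Datastore->Info->Vmfs ("low" = enabled, "none" = disabled)
print(ds.info.vmfs.unmapPriority)

# Change the setting through the StorageSystem of a host that has the datastore mounted
storage_system = ds.host[0].key.configManager.storageSystem
storage_system.UpdateVmfsUnmapPriority(vmfsUuid=ds.info.vmfs.uuid, unmapPriority='low')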

To demonstrate this new vSphere API, I have created two small PowerCLI functions called Get-VMFSUnmap and Set-VMFSUnmap which can be downloaded from here.

Here is an example of retrieving the current VMFS UNMAP settings:

Get-Datastore "mini-local-datastore-hdd" | Get-VMFSUnmap

[Screenshot: vmfs-unmap-vsphere-65-api-1]
Here is an example of enabling automatic VMFS UNMAP setting:

Get-Datastore "mini-local-datastore-hdd" | Set-VMFSUnmap -Enabled $true

[Screenshot: vmfs-unmap-vsphere-65-api-2]


Filed Under: Automation, vSphere 6.5 Tagged With: PowerCLI, unmap, vmfs, vSphere 6.5, vSphere API

Getting Started with Tech Preview of Docker Volume Driver for vSphere

05/31/2016 by William Lam 8 Comments

A couple of weeks ago, I got an early sneak peek at some of the work being done in VMware's Storage and Availability Business Unit (SABU) on providing storage persistency for Docker Containers in a vSphere-based environment. Today, VMware has open sourced a new Docker Volume Driver for vSphere (Tech Preview) that will enable customers to easily take advantage of their existing vSphere Storage (VSAN, VMFS and NFS) and provide persistent storage access to Docker Containers running on top of the vSphere platform. Both Developers and vSphere Administrators will have familiar interfaces for managing and interacting with these Docker Volumes from vSphere, which we will explore further below.

The new Docker Volume Driver for vSphere is comprised of two components: the first is the vSphere Docker Volume Plugin, which is installed inside of a Docker Host (VM) and allows you to instantiate new Docker Volumes. The second is the vSphere Docker Volume Driver, which is installed on the ESXi host and handles the VMDK creation and the mapping of Docker Volume requests back to the Docker Hosts. If you have shared storage on your ESXi hosts, you can have a VM on one ESXi host create a Docker Volume and have a completely different VM on another ESXi host mount the exact same Docker Volume. Below is a diagram to help illustrate the different components that make up the Docker Volume Driver for vSphere.
[Diagram: docker-volume-driver-for-vsphere-00]
Below is a quick tutorial on how to get started with the new Docker Volume Driver for vSphere.

Pre-Requisites

  • vSphere ESXi 6.0+
  • vSphere Storage (VSAN, VMFS or NFS) for ESXi host (shared storage required for multi-ESXi host support)
  • Docker Host (VM) running Docker 1.9+ (recommend using VMware Photon 1.0 RC OVA but Ubuntu 10.04 works as well)

Getting Started

Step 1 - Download the vSphere Docker Volume Plugin (RPM or DEB) and vSphere Docker Volume Driver VIB for ESXi

Step 2 - Install the vSphere Docker Volume Driver VIB on the ESXi host by SCP'ing the VIB to the host and then running the following command, specifying the full path to the VIB:

esxcli software vib install -v /vmware-esx-vmdkops-0.1.0.tp.vib -f

[Screenshot: docker-volume-driver-for-vsphere-1]
Step 3 - Install the vSphere Docker Volume Plugin by SCP'ing the RPM or DEB file to your Docker Host (VM) and then running one of the following commands:

rpm -ivh docker-volume-vsphere-0.1.0.tp-1.x86_64.rpm
dpkg -i docker-volume-vsphere-0.1.0.tp-1.x86_64.deb

[Screenshot: docker-volume-driver-for-vsphere-2]

Creating Docker Volumes on vSphere (Developer)

To create your first Docker Volume on vSphere, a Developer only needs access to a Container Host (VM), such as PhotonOS, that has the vSphere Docker Volume Plugin installed. They can then use the familiar Docker CLI to create a Docker Volume like they normally would; there is nothing they need to know about the underlying infrastructure.

Run the following command to create a new Docker Volume called vol1 with the capacity of 10GB using the new vmdk driver:

docker volume create --driver=vmdk --name=vol1 -o size=10gb

We can list all the Docker Volumes that are available by running the following command:

docker volume ls

We can also inspect a specific Docker Volume by running the following command and specifying the name of the volume:

docker volume inspect vol1

[Screenshot: docker-volume-driver-for-vsphere-3]
Let's actually do something with this volume now by attaching it to a simple Busybox Docker Container using the following command:

docker run --rm -it -v vol1:/mnt/volume1 busybox

[Screenshot: docker-volume-driver-for-vsphere-4]
As you can see from the screenshot above, I have now successfully accessed the Docker Volume that we had created earlier and I am now able to write to it. If you have another VM that resides on the same underlying shared storage, you can also mount the Docker Volume that you had just created from a different system.

Pretty straightforward and easy, right? Happy Developers 🙂

Managing Docker Volumes on vSphere (vSphere Administrator)

For the vSphere Administrators, you must be wondering: did I just give my Developers full access to the underlying vSphere Storage to consume as much storage as possible? Of course not, we have not forgotten about our VI Admins, and we have some tools to help. Today, there is a CLI utility located at /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py which runs directly in the ESXi Shell (hopefully this will turn into an API in the future) and provides visibility into how much storage is being consumed (provisioned and used) by the individual Docker Volumes, as well as who created them and their respective Virtual Machine mappings.

Let's take a look at a quick example by logging into the ESXi Shell. To view the list of Docker Volumes that have been created, run the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls

You should see the name of the Docker Volume that we created earlier and the respective vSphere Datastore it was provisioned to. At the time of writing, these were the only two properties displayed out of the box. You can add additional columns by using the -c option, as in the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls -c volume,datastore,created-by,policy,attached-to,capacity,used

[Screenshot: docker-volume-driver-for-vsphere-5]
Now we get a bunch more information, such as which VM created the Docker Volume, the BIOS UUID of the VM the Docker Volume is currently attached to, the VSAN VM Storage Policy that was used (applicable to VSAN environments only), and the provisioned and used capacity. In my opinion, this should be the default set of columns, and this is something I have fed back to the team, so perhaps this will be the default when the Tech Preview is released.

One thing to be aware of is that the Docker Volumes (VMDKs) are automatically provisioned onto the same underlying vSphere Datastore as the Docker Host VM (which makes sense, given it needs to be able to access them). In the future, it may be possible to specify where you want your Docker Volumes to be provisioned. If you have any feedback on this, be sure to leave a comment on the Issues page of the GitHub project.

Docker Volume Role Management

Although not yet implemented in the Tech Preview, it looks like VI Admins will also have the ability to create Roles that restrict the types of Docker Volume operations that a given set of VM(s) can perform, as well as the maximum amount of storage that can be provisioned.

Here is an example of what the command would look like:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py role create --name DevLead-Role --volume-maxsize 100GB --rights create,delete,mount --matches-vm photon-docker-host-*

Docker Volume VSAN VM Storage Policy Management

Since VSAN is one of the supported vSphere Storage backends for the new Docker Volume Driver, VI Admins will also have the ability to create custom VSAN VM Storage Policies that can then be specified during Docker Volume creation. Let's take a look at how this works.

To create a new VSAN Policy, you will need to specify the name of the policy and provide the set of VSAN capabilities, formatted using the same syntax found in the esxcli vsan policy getdefault command. Here is a mapping of the VSAN capabilities to the attribute names:

VSAN Capability Description          VSAN Capability Key
Number of failures to tolerate       hostFailuresToTolerate
Number of disk stripes per object    stripeWidth
Force provisioning                   forceProvisioning
Object space reservation             proportionalCapacity
Flash read cache reservation         cacheReservation

Run the following command to create a new VSAN Policy called FTT=0, which sets Failures to Tolerate to 0 and Force Provisioning to true:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy create --name FTT=0 --content '(("hostFailuresToTolerate" i0) ("forceProvisioning" i1))'

[Screenshot: docker-volume-driver-for-vsphere-6]
If we now go back to our Docker Host, we can create a second Docker Volume called vol2 with a capacity of 20GB, this time also specifying our new VSAN Policy called FTT=0, by running the following command:

docker volume create --driver=vmdk --name=vol2 -o size=20gb -o vsan-policy-name=FTT=0

We can also easily see which VSAN Policies are in use by listing all policies with the following command:
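Presumably this is the policy ls operation of the admin CLI, though the exact syntax may differ in the Tech Preview:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy ls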

[Screenshot: docker-volume-driver-for-vsphere-7]
All VSAN Policies and Docker Volumes (VMDKs) that are created are stored under a folder called dockvols in the root of the vSphere Datastore, as shown in the screenshot below.

[Screenshot: docker-volume-driver-for-vsphere-8]
Hopefully this gave you a nice overview of what the Docker Volume Driver for vSphere can do in its first release. Remember, this is still a Tech Preview, and our Engineers would love to get your feedback on the things you like, new features you would like to see, or things we can improve on. The project is on GitHub, which you can visit here, and if you have any questions or run into bugs, be sure to submit an issue here or contribute back!


Filed Under: Automation, Cloud Native, Docker, ESXi, VSAN, vSphere Tagged With: cloud native apps, container, Docker, docker volume, esxi, nfs, vmdkops_admin.py, vmfs, VSAN

Quick Tip – How to tell if a VMFS datastore is local or not using new vSphere 5.5 API?

05/27/2014 by William Lam 3 Comments

I was recently working on a script for a friend that collects some basic information about VMFS-based Datastores. While going through the vSphere 5.5 API Reference, I noticed a new property introduced in the vSphere 5.5 API called "local". It looks like we now have a simple way of checking whether a given VMFS Datastore is local or not. In the past, the only semi-reliable method was to check whether the "multipleHostAccess" property was set to true, which meant it was a shared VMFS. This was not very reliable, as a remote VMFS Datastore might only have been exported to one host so far, and the other major caveat is that this property was only available when connecting to a vCenter Server.

To demonstrate this new API property, I have created an example vSphere SDK for Python (pyvmomi) sample called: list_datastore_info.py
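The script covers more detail, but the core of the check comes down to just a few lines of pyvmomi. Here is a minimal, self-contained sketch (connection details are placeholders) that prints the local flag for each VMFS datastore:

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import atexit, ssl

# Placeholder connection details; this works against an ESXi host or vCenter Server
si = SmartConnect(host='esxi.example.com', user='root', pwd='VMware1!',
                  sslContext=ssl._create_unverified_context())
atexit.register(Disconnect, si)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    # Only VMFS datastores carry the new vSphere 5.5 "local" property
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        print("{} local={}".format(ds.name, ds.info.vmfs.local))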

Here is a screenshot of running the script directly against an ESXi host (you can also connect to a vCenter Server):

[Screenshot: pyvmomi-list-datastore-0]
The script also supports a --json|-j output option; here is an example of that:

[Screenshot: pyvmomi-list-datastore-1]
If you want to format the JSON output in a more friendly manner, you can pipe the output to python -mjson.tool:

[Screenshot: pyvmomi-list-datastore-2]


Filed Under: ESXi, vSphere 5.5 Tagged With: datastore, esxi 5.5, local datastore, vmfs, vSphere 5.5

Quick Tip – Correlating VMFS Datastore to Storage Device Using ESXCLI

07/15/2013 by William Lam 1 Comment

There was a question on Twitter this morning from AJ Kuftic on whether it is possible to display the mapping of a VMFS Datastore to its respective storage device using ESXCLI. Josh Coen beat me to the answer this morning, but yes, it is possible using ESXCLI. I thought I would still share this quick tip, as it may not be obvious, especially when you need this information while performing storage maintenance or troubleshooting with your storage administrator.

For those of you who are familiar with the legacy esxcfg-* commands, this information can be retrieved using the following command:

esxcfg-scsidevs -m

You can also retrieve the same information by using the following ESXCLI command (which can also be executed remotely):

esxcli storage vmfs extent list

As you can see from the output of both commands, we can easily identify the name of the VMFS datastore and the specific storage device it is mapped to, along with a few other pieces of information. I prefer the ESXCLI method, as it is nicely formatted and includes a title header for each property.


Filed Under: Uncategorized Tagged With: datastore, esxcli, esxi, vmfs, vSphere

How to Format and Create VMFS5 Volume using the CLI in ESXi 5

07/19/2011 by William Lam 38 Comments

VMware always recommends formatting and creating a new VMFS volume using the vSphere Client, as it automatically aligns your VMFS volume. However, if you do not have access to the vSphere Client or you want to format additional VMFS volumes via a kickstart, you can do so using the CLI and the partedUtil utility under /sbin.

~ # /sbin/partedUtil
Not enough arguments
Usage:
Get Partitions : get
Set Partitions : set ["partNum startSector endSector type attr"]*
Delete Partition : delete
Resize Partition : resize
Get Partitions : getptbl
Set Partitions : setptbl

With ESXi 5, an MBR (Master Boot Record) partition table is no longer used and has been replaced with a GPT (GUID Partition Table) partition table. There is also only a single 1MB block size, versus the 2, 4 and 8MB block sizes that were also available in ESX(i) 4.x.

We can view the partitions of a device by using the "getptbl" option, to ensure we don't have an existing VMFS volume:

~ # /sbin/partedUtil "getptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0"
gpt
652 255 63 10485760

Next we will need to create a partition by using the "setptbl" option:

/sbin/partedUtil "setptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0" "gpt" "1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0"

The "setptbl" accepts 3 arguments:

  • diskName
  • label
  • partitionNumber startSector endSector type/GUID attribute

The diskName in this example is the full path to the device which is /vmfs/devices/disks/mpx.vmhba1:C0:T2:L0

The label will be gpt

The last argument is actually a string comprised of five individual parameters:

  • partitionNumber - Pretty straightforward
  • startSector - This will always be 2048 for 1MB alignment for VMFS5
  • endSector - This needs to be calculated based on the size of your device; for the example device above, the "getptbl" geometry of 652 cylinders, 255 heads and 63 sectors works out to 652 * 255 * 63 - 1 = 10474379
  • type/GUID - This is the GUID key for a particular partition type; for VMFS it will always be AA31E02A400F11DB9590000C2911D1B8
  • attribute - Set to 0 in all of the examples here

To view all GUID types, you can use the "showGuids" option:

~ # /sbin/partedUtil showGuids
Partition Type       GUID
vmfs                 AA31E02A400F11DB9590000C2911D1B8
vmkDiagnostic        9D27538040AD11DBBF97000C2911D1B8
VMware Reserved      9198EFFC31C011DB8F78000C2911D1B8
Basic Data           EBD0A0A2B9E5443387C068B6B72699C7
Linux Swap           0657FD6DA4AB43C484E50933C84B4F4F
Linux Lvm            E6D6D379F50744C2A23C238F2A3DF928
Linux Raid           A19D880F05FC4D3BA006743F0F84911E
Efi System           C12A7328F81F11D2BA4B00A0C93EC93B
Microsoft Reserved   E3C9E3160B5C4DB8817DF92DF00215AE
Unused Entry         00000000000000000000000000000000

Once all three arguments are specified, we can create the partition:

~ # /sbin/partedUtil "setptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0" "gpt" "1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
0 0 0 0
1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0

UPDATE (01/15) - Here is a quick shell snippet that you can use to automatically calculate the end sector as well as create the VMFS5 volume:

# DEVICE is the full path to the disk (e.g. /vmfs/devices/disks/mpx.vmhba1:C0:T2:L0); first wipe any existing partition table
partedUtil mklabel ${DEVICE} msdos
# Calculate the end sector from the device geometry (cylinders * heads * sectors - 1)
END_SECTOR=$(eval expr $(partedUtil getptbl ${DEVICE} | tail -1 | awk '{print $1 " \\* " $2 " \\* " $3}') - 1)
# Create a single VMFS partition starting at sector 2048 for 1MB alignment
/sbin/partedUtil "setptbl" "${DEVICE}" "gpt" "1 2048 ${END_SECTOR} AA31E02A400F11DB9590000C2911D1B8 0"
# Format the partition as VMFS5 with a 1MB block size
/sbin/vmkfstools -C vmfs5 -b 1m -S $(hostname -s)-local-datastore ${DEVICE}:1

Note: You can also use the above to create a VMFS-based datastore on a USB device; however, that is not officially supported by VMware, and performance with USB-based devices will vary depending on the hardware and the speed of the USB connection.

We can verify by running the "getptbl" option on the device that we formatted:

~ # /sbin/partedUtil "getptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0"
gpt
652 255 63 10485760
1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Finally, we will create the VMFS volume using our favorite vmkfstools; the syntax is the same as in previous releases of ESX(i):

~ # /sbin/vmkfstools -C vmfs5 -b 1m -S himalaya-SSD-storage-3 /vmfs/devices/disks/mpx.vmhba1:C0:T2:L0:1
Checking if remote hosts are using this device as a valid file system. This may take a few seconds...
Creating vmfs5 file system on "mpx.vmhba1:C0:T2:L0:1" with blockSize 1048576 and volume label "himalaya-SSD-storage-3".
Successfully created new volume: 4dfdb7b0-8c0dcdb5-e574-0050568f0111

Now you can refresh the vSphere Client or run vim-cmd hostsvc/datastore/refresh to view the new datastore that was created.


Filed Under: Automation, ESXi Tagged With: esxi5, gpt, partedUtil, usb, vmfs, vSphere 5
