As hinted in my earlier blog post, you can indeed set up a vSAN Witness using the ESXi-Arm Fling running on a Raspberry Pi (rPI) 4b (8GB) model. In fact, you can even set up a standard 2-Node or 3-Node vSAN Cluster using the exact same technique. For those familiar with vSAN and the vSAN Witness, we will need at least two storage devices: one for the caching tier and one for the capacity tier.
For the rPI, this means we are limited to using USB storage devices and luckily, vSAN can actually claim and consume USB storage devices. For a basic homelab, this is probably okay, but if you want something a bit more reliable, you can look into using a USB 3.0 to M.2 NVMe chassis. An M.2 NVMe device should definitely provide more resiliency compared to a typical USB stick you might have lying around. From a capacity point of view, I ended up using two 32GB USB keys, which should be plenty for a small setup, but you can always look at purchasing larger-capacity devices given how cheap USB storage is.
Disclaimer: ESXi-Arm is a VMware Fling which means it is not a product and therefore it is not officially supported. Please do not use it in Production.
With the disclaimer out of the way, I think this is a fantastic use case for an inexpensive vSAN Witness which could be running at a ROBO/Edge location or simply supporting your homelab. The possibilities are certainly endless and I think this is where the ESXi-Arm team would love to hear whether this is something customers would even be interested in and please share your feedback to help with priorities for both the ESXi-Arm and vSAN team.
In my setup, I have two Intel NUC 9 Pro units, which make up my 2-Node vSAN Cluster, and an rPI as my vSAN Witness. Detailed instructions can be found below, including a video for those wanting to see the vSAN Witness in action by powering on an actual workload 😀
- Since ESXi-Arm is based on vSphere 7.0, make sure both your vCenter Server Appliance (VCSA) and ESXi-x86 hosts are running 7.0 and NOT 7.0 Update 1
- VCSA 7.0 - (7.0c Build 16749653 or 7.0d Build 16620007)
- ESXi-x86 7.0 - (7.0 Build 15843807 or 7.0b Build 16324942)
Step 1 - Install ESXi-Arm Fling on rPI 4. For detailed instructions, please refer to the ESXi-Arm documentation.
Step 2 - We need to disable the USB Arbitrator service so that ESXi can see the two USB storage devices. To do so, SSH to the rPI and run the following commands to stop the service and prevent it from starting on boot:
/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off
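If you want to confirm the service is actually stopped before proceeding, the init script supports a status query (a quick sanity check; the exact output wording may vary by build):

```shell
# Should report that the USB arbitrator is not running
/etc/init.d/usbarbitrator status
```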
Step 3 - To allow claiming of the USB storage devices for vSAN and tagging one of the devices for the "capacity" tier, the following two ESXi Advanced Settings must be enabled by running the following commands:
esxcli system settings advanced set -o /Disk/AllowUsbClaimedAsSSD -i 1
esxcli system settings advanced set -o /VSAN/AllowUsbDisks -i 1
Step 4 - At this point, you will need to identify the device IDs of the two USB storage devices (which should not have any partitions) that will be used to construct the vSAN Datastore for the vSAN Witness. To do so, run the following command and make note of the IDs, which should be in the form of mpx.vmhbaXX:
vdq -q
Step 5 - Next, we need to create a claim rule to add the enable_ssd option for both of our USB storage devices, which will then allow us to tag one of the devices for our "capacity" tier. Run the following commands, replacing the mpx.vmhbaXX values with those in your environment.
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL --device=mpx.vmhba33:C0:T0:L0 --option=enable_ssd
esxcli storage core claiming unclaim --type device --device=mpx.vmhba33:C0:T0:L0
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL --device=mpx.vmhba34:C0:T0:L0 --option=enable_ssd
esxcli storage core claiming unclaim --type device --device=mpx.vmhba34:C0:T0:L0
esxcli storage core claimrule load
esxcli storage core claimrule run
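To sanity-check that the enable_ssd option was applied, you can list the device details and look for the Is SSD field (substitute your own device IDs from Step 4):

```shell
# After reclaiming, each device should report "Is SSD: true"
esxcli storage core device list -d mpx.vmhba33:C0:T0:L0 | grep -i ssd
```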
Step 6 - Now we need to tag one of the USB storage devices as our "capacity" tier by running the following command:
esxcli vsan storage tag add -d mpx.vmhba34:C0:T0:L0 -t capacityFlash
If we now re-run the vdq command from Step 4, you will see that both USB storage devices are now reported as "SSD" and one of them is marked as IsCapacityFlash, as shown in the screenshot below.
Step 7 - Lastly, we need to enable vSAN traffic on our VMkernel interface; you can do this either in the vSphere UI or via the CLI. In the example here, I am using the CLI since you are already logged into the rPI:
esxcli vsan network ip add -i vmk0
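You can verify that the vSAN traffic tag was applied by listing the vSAN-enabled VMkernel interfaces:

```shell
# Lists VMkernel interfaces tagged for vSAN traffic; vmk0 should appear
esxcli vsan network list
```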
At this point, we are now ready to create our vSAN Cluster using the rPI as a vSAN Witness Node. If you have not already attached the rPI to vCenter Server, go ahead and do so.
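For convenience, the host-side portion of the setup can be sketched as a single script. This is a sketch, not something to run blindly: the two device IDs are placeholders from my environment and must be replaced with the mpx.vmhbaXX values reported on your host, and the script also stops the running USB arbitrator service in addition to disabling it at boot.

```shell
#!/bin/sh
# Sketch of the host-side vSAN Witness prep on the rPI (ESXi-Arm Fling).
# Replace CACHE_DEV/CAPACITY_DEV with the device IDs from your environment.
CACHE_DEV="mpx.vmhba33:C0:T0:L0"
CAPACITY_DEV="mpx.vmhba34:C0:T0:L0"

# Disable the USB arbitrator so ESXi can see the USB storage devices
/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off

# Allow USB devices to be claimed as SSD and consumed by vSAN
esxcli system settings advanced set -o /Disk/AllowUsbClaimedAsSSD -i 1
esxcli system settings advanced set -o /VSAN/AllowUsbDisks -i 1

# Tag both devices with enable_ssd and reclaim them
for DEV in "$CACHE_DEV" "$CAPACITY_DEV"; do
    esxcli storage nmp satp rule add -s VMW_SATP_LOCAL --device="$DEV" --option=enable_ssd
    esxcli storage core claiming unclaim --type device --device="$DEV"
done
esxcli storage core claimrule load
esxcli storage core claimrule run

# Mark one device as the "capacity" tier
esxcli vsan storage tag add -d "$CAPACITY_DEV" -t capacityFlash

# Enable vSAN traffic on the management VMkernel interface
esxcli vsan network ip add -i vmk0
```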
Step 8 - In the vSphere Cluster on which you wish to enable vSAN, select Configure->vSAN->Services and click Configure to start the configuration wizard. In our setup, we will set up a 2-Node vSAN Cluster; when asked for the vSAN Witness host, go ahead and locate the rPI, which should pass all compatibility checks.