virtuallyGhetto

Virtual SAN

Restoring VSAN VM Storage Policies without vCenter Part 1: Using cmmds-tool

11/22/2013 by William Lam 5 Comments

A scenario that I have been looking into recently while testing VSAN in my lab is what happens when vCenter Server is no longer available and what impact that might have on your environment. From a configuration perspective, VSAN works very similarly to vSphere HA, where vCenter Server is only required for the initial VSAN Cluster configuration. Once the ESXi hosts have been added to the VSAN Cluster, vCenter Server is no longer part of the picture from a functional perspective and the ESXi hosts know how to communicate with each other within the VSAN Cluster. We can even build a single VSAN node to help bootstrap vCenter Server itself for greenfield deployments.

So what does that leave us with? Well, the Virtual Machines of course. The Virtual Machines will continue to run without any impact whether or not vCenter Server is available. VSAN will continue to govern and maintain compliance for the VM Storage Policies that have been assigned to each and every Virtual Machine. However, in the scenario where you cannot restore vCenter Server, which is the primary place where the VM Storage Policies are stored, and you need to build out a new environment, how do you go about restoring the VM Storage Policies?

Well, it turns out that vCenter Server is not the only place where the VM Storage Policies are stored. To ensure that VSAN can continue enforcing the policies that have been assigned to each Virtual Machine and their associated VMDKs, a copy of the VM Storage Policies is distributed amongst all the ESXi hosts within the VSAN Cluster. In this first article I will demonstrate how to recover the VM Storage Policies for a particular Virtual Machine running on an ESXi host where vCenter Server is no longer available, using a utility located in the ESXi Shell called cmmds-tool. In part two of the article I will demonstrate the same recovery process but leveraging the vSphere API, which will be more user friendly.

Disclaimer: The cmmds-tool is not meant for general troubleshooting; you should only use it under VMware GSS/Engineering supervision. If you choose to use it, do so at your own risk.

In the ESXi Shell, there is a nifty little VSAN utility called cmmds-tool, which stands for Clustering Monitoring, Membership and Directory Services. This tool allows you to perform a variety of operations and queries against the VSAN nodes and their associated objects. One interesting command is the "find" operation, which will allow us to look up a specific VM Storage Policy; a bit more on this later.

Let's say we have a Virtual Machine called VSAN-VM-1 and it is associated with three VM Storage Policies called Copper, Aluminum and Platinum. We have one for the VM Home and one for each of the two VMDKs. Here is a screenshot of what that looks like in the vSphere Web Client:

Now let's say vCenter Server is somehow lost or unrecoverable for whatever reason, but we still have access to the ESXi host and the running Virtual Machine. Let's go ahead and recover the VM Storage Policies so we can then rebuild a new vCenter Server and re-create the policies.

Step 1 - We need to first identify a couple of pieces of information. The first is going to be the UUID of the VM Home directory (VSAN uses UUIDs for all its objects). Login to the ESXi Shell of the ESXi host that is currently hosting the Virtual Machine and run the following command:

vim-cmd vmsvc/getallvms | grep [DISPLAY-NAME-OF-YOUR-VM]
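
As a concrete example using the VM from this walkthrough, the command and the portion of the output we care about would look roughly like the following. The output line is an approximation (the trailing guest OS and VM version columns are trimmed here and the exact layout varies slightly between builds); the MoRef ID (1), VM name and home directory UUID are the values used later in this post.

vim-cmd vmsvc/getallvms | grep VSAN-VM-1
1    VSAN-VM-1    [vsanDatastore] 51108952-6e91-b30b-a5ab-005056ad9acf/VSAN-VM-1.vmx    ...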

The VM Home directory UUID will be part of the Virtual Machine directory name, which can be seen in the screenshot above highlighted in green. Make a note of that UUID as you will need it in a later step. You should also make a note of the Virtual Machine MoRef ID, which is the first numeric value on the left-hand side of the output. In this example, I have 1 as the MoRef ID.

Step 2 - Next we need to identify the UUID for each of the VMDKs for that given Virtual Machine. To do so, we need to take a look at the descriptor file for each of the VMDKs in the Virtual Machine home directory. You can use vim-cmd vmsvc/get.filelayout [VM-MOREF-ID] to get the VMDK paths or you can change into the Virtual Machine directory and cat out the files. In my example I have the following two VMDK descriptor files:

/vmfs/volumes/vsanDatastore/51108952-6e91-b30b-a5ab-005056ad9acf/VSAN-VM-1.vmdk
/vmfs/volumes/vsanDatastore/51108952-6e91-b30b-a5ab-005056ad9acf/VSAN-VM-1_1.vmdk

You can just grep for the keyword "vsan" by using the following command (replacing the path of your VMDKs):

grep "vsan" /vmfs/volumes/vsanDatastore/51108952-6e91-b30b-a5ab-005056ad9acf/VSAN-VM-1.vmdk

From the output you will see a vsan:// path and the UUID associated with each VMDK; please make a note of the UUID for each VMDK. We are now ready to query the VM Storage Policy configuration, which will help us rebuild the policy in our new vCenter Server.

Step 3 - To look up the VM Home VM Storage Policy, run the following command and specify the UUID of the VM Home in Step 1:

cmmds-tool find -t POLICY -u 51108952-6e91-b30b-a5ab-005056ad9acf -f json

The VM Storage Policy configuration is stored in the "content" field, and you will need to translate the properties back to the VSAN policy you have defined. As part of the output you will also see a property called spbmProfileId, which is the unique identifier for the VM Storage Policy and which you can query if you are using the VM Storage Policy APIs that were introduced in vSphere 5.5.

Here is a table that will help you translate the keys to the appropriate VSAN Policies:

VSAN Capability Description             VSAN Capability Key
Number of failures to tolerate          hostFailuresToTolerate
Number of disk stripes per object       stripeWidth
Force provisioning                      forceProvisioning
Object space reservation                proportionalCapacity
Flash read cache reservation            cacheReservation
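
To make the translation concrete, suppose (purely as an illustration) that the "content" field decoded to the policy below; using the table, that would map to "Number of failures to tolerate = 1" and "Number of disk stripes per object = 2" in the re-created VM Storage Policy.

(("hostFailuresToTolerate" i1) ("stripeWidth" i2))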

Step 4 - To look up the VMDK VM Storage Policies, we will perform the same command and just replace the UUID with our VMDK UUIDs.
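
Using one of the VMDK UUIDs noted in Step 2 (shown here as a placeholder), that would be:

cmmds-tool find -t POLICY -u [VMDK-UUID] -f json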

Once you have recorded the configurations for each of the VM Storage Policies, you can then head over to your new vCenter Server, re-create the VM Storage Policies and then re-associate the policies with the Virtual Machines.

As you can see, the steps to recover a VSAN VM Storage Policy are not too difficult but can be a bit tedious. In the next article, we will simplify this by leveraging the vSphere API, which has access to the same CMMDS system but makes querying the VM Storage Policy super easy by only requiring the user to provide the name of the Virtual Machine.

Filed Under: VSAN, vSphere 5.5 Tagged With: cmmds-tool, esxi 5.5, Virtual SAN, vm storage policy, vm storage profile, VSAN, vSphere 5.5

How to run Nested ESXi on top of a VSAN datastore?

11/07/2013 by William Lam 29 Comments

Today I found an interesting article on my Twitter timeline regarding some issues when trying to install and run Nested ESXi on top of a VSAN datastore. Shortly after the ESXi installation begins, the following error message is observed:

This program has encountered an error:

Error (see log for more info):
Could not format a vmfs volume.
Command '/usr/sbin/vmkfstools -C vmfs5 -b 1m -S datastore1
/vmfs/devices/disks/mpx.vmhba1:C0:T0:L0:3' exited with status 1048320

This of course piqued my interest given the topic; I would have expected this to just work and thought this might have been some misconfiguration. I decided to try this out in my lab and to my surprise, I encountered the exact same problem.

Here is a quick screenshot of error message:

I pinged a couple of folks from the VSAN development team to see if this was a known issue and, if so, why it was occurring. After a couple of email exchanges, it turns out the problem is with a SCSI-2 reservation being generated as part of creating a default VMFS datastore. Even though VMFS-5 no longer uses SCSI-2 reservations, the underlying LVM (Logical Volume Manager) driver for VMFS still requires it. Since VSAN does not make use of SCSI-2 reservations, it did not make sense to support it and hence the issue. Having said that, since Nested Virtualization is heavily used at VMware, the VSAN development team has come up with a nifty solution as they too hit this problem early on during the development of VSAN. Big thanks to Christian Dickmann (Tech Lead on the VSAN Engineering team) for providing this little tidbit.

Disclaimer: Nested Virtualization is not officially supported by VMware nor are the configuration changes described below, please use at your own risk.

To get around this problem, the VSAN team added an advanced ESXi setting that will "fake" SCSI Reservations, and this needs to be configured on the ESXi hosts providing the VSAN datastore.

Run the following ESXCLI command (either locally in the ESXi Shell or remotely):

esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1
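
If you want to double-check the value afterwards, or revert the change later, the corresponding list and set operations should do the trick (these are simply the read-back and undo of the command above):

esxcli system settings advanced list -o /VSAN/FakeSCSIReservations
esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 0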

A system reboot is not required and this change can be done live on the ESXi host prior to starting the ESXi installation. Once this is done, you will be able to proceed with installing Nested ESXi on top of a VSAN datastore. Here is a screenshot of my Nested ESXi VM running on top of a Nested ESXi VSAN datastore 🙂

Hopefully this workaround will be useful for anyone running VSAN who would like to make full use of this storage by running Nested ESXi for development or testing.

Filed Under: Nested Virtualization, VSAN, vSphere 5.5 Tagged With: esxi 5.5, nested, nested virtualization, scsi reservation, Virtual SAN, VSAN, vSphere 5.5

Additional steps required to completely disable VSAN on ESXi host

09/26/2013 by William Lam 11 Comments

Something that I noticed while working with VSAN in my lab is that when you disable VSAN on your vSphere Cluster, the disks that were used for VSAN in each of the ESXi hosts are no longer available for use afterwards. If you want to use one of the disks for creating a regular VMFS volume or even for vSphere Flash Read Cache, the disks will not show up as available devices. The reason this occurs is that the disks still contain a VSAN partition, and this is not automatically removed when disabling VSAN.

You can view the partition details by using partedUtil and specifying the "getptbl" option along with the device.
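
For example, using the SSD device that shows up later in this post (substitute your own device name under /vmfs/devices/disks/), the command would look something like this:

partedUtil getptbl /vmfs/devices/disks/naa.6000c29c581358c23dcd2ca6284eec79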

Now I could use partedUtil to clear the partition, but there is actually a nice ESXCLI command that can be used to remove the disks used in a VSAN disk group and this will automatically clear the VSAN partition. The ESXCLI command is:

esxcli vsan storage remove -s [SSD-DEVICE-ID]

When I tried to run the command, I was surprised to get the following error message:

Unable to remove device: Can not destroy disk group for SSD naa.6000c29c581358c23dcd2ca6284eec79 : storage auto claim mode is enabled

It turns out that when you use "Automatic" claiming mode while enabling VSAN on your vSphere Cluster, that configuration is left enabled on the ESXi host even after disabling VSAN. This then prevents you from destroying the disk group. So there is an extra step required if you chose automatic mode, and you will need to run the following ESXCLI command to disable it:

esxcli vsan storage automode set --enabled false

If you are not sure, you can always perform a "get" operation to check whether automatic claim mode is enabled. Once that has been disabled, you will be able to destroy the disk group by running the original command above:
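
Putting that together, the sequence on the host would look something like the commands below; the "get" operation is simply the read-back counterpart of the "set" command above, and the device name is carried over from the earlier error message:

esxcli vsan storage automode get
esxcli vsan storage remove -s naa.6000c29c581358c23dcd2ca6284eec79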

The remove operation only requires the SSD device front-ending the VSAN disk group, and you can identify the SSD by running "esxcli vsan storage list". I did find it odd that disabling VSAN on your vSphere Cluster did not completely disable the automatic mode on the ESXi host, and I have already filed a bug request to get that fixed.

Filed Under: VSAN, vSphere 5.5 Tagged With: esxcli, esxi 5.5, Virtual SAN, VSAN, vSphere 5.5

How to bootstrap vCenter Server onto a single VSAN node Part 2?

09/09/2013 by William Lam 47 Comments

In this article, I will provide a step-by-step walkthrough on how to set up and configure a single VSAN node that will allow you to deploy a vCenter Server onto a VSAN datastore. This initial "bootstrapping" can help when initially building out your VSAN cluster and can come in handy for greenfield deployments and potentially for brownfield deployments as well. Before getting started, make sure you have taken a look at How to bootstrap vCenter Server onto a single VSAN node Part 1.

Environment:

  • 3 physical hosts
  • Each host has a small iSCSI boot LUN for the ESXi installation (this could also be another local disk or USB/SD card)
  • Each host has a single SSD and a single SATA disk (minimum)

Step 1 - Install ESXi 5.5 onto your physical hosts. We technically only need one host to begin the process, but you will probably want to have two additional hosts ready unless you do not care about your vCenter Server being able to recover if there are any hardware issues.

Step 2 - You will need to modify the default VSAN storage policy on the ESXi host on which you plan to provision your vCenter Server. It looks like this behavior changed during the VSAN beta and when VSAN was GA'ed yesterday with vSphere 5.5 Update 1. You will need to run the following two ESXCLI commands to enable "force provisioning":

esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

You can confirm you have the correct VSAN default policy by running the following ESXCLI command:

~ # esxcli vsan policy getdefault
Policy Class  Policy Value
------------  --------------------------------------------------------
cluster       (("hostFailuresToTolerate" i1))
vdisk         (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vmnamespace   (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vmswap        (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))

We start off with our first ESXi host and as you can see from the screenshot below, we do not have additional datastores that we can use to provision our vCenter Server.

Step 3 - You will need to identify the disks that you will be using on the first ESXi host to contribute to the VSAN datastore. You can do so by running the following ESXCLI command:

esxcli storage core device list

To get specific details on a particular device such as identifying whether it is an SSD or regular HDD, you can specify the -d option and the device name.
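
For example, to inspect the SSD used in this walkthrough (device name taken from the step below; the output includes an "Is SSD" field that indicates whether the device is flash), you could run:

esxcli storage core device list -d naa.50026b72270126ff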

Once you have identified the disks you will be using, make a note of the disk names as they will be needed in the upcoming steps. As mentioned, in my environment I only have a single SSD and a single HDD, and their respective device names are naa.50026b72270126ff & naa.5000c500331bca77.

Step 4 - Before we can create our VSAN datastore, we need to first create a VSAN cluster. One of the parameters that is needed when going through this "bootstrapping" method without a vCenter Server is a unique UUID to identify the VSAN cluster. The UUID is in the format of "nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn" where n is a hexadecimal value. You can easily generate one within the ESXi Shell by leveraging the following Python snippet:

python -c 'import uuid; print(str(uuid.uuid4()));'

Step 5 - To create a VSAN cluster, we will use the following ESXCLI command and specify the UUID from the previous step for the -u option:

esxcli vsan cluster join -u UUID

UPDATE (02/11/15) - In vSphere 6, you no longer have to perform step 4 to generate a UUID. There is now a new ESXCLI command which will automatically create a VSAN Cluster and generate a UUID automatically by running the following command:

esxcli vsan cluster new

Once the VSAN cluster has been created, you can retrieve information about the VSAN cluster by running the following ESXCLI command:

esxcli vsan cluster get

Step 6 - Next we need to add the disks from our ESXi host to create our single-node VSAN datastore. To do so, we will need the disk device names from our earlier step for both the SSD and HDDs and run the following ESXCLI command:

esxcli vsan storage add -s SSD-DISK-ID -d HDD-DISK-ID
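
As a concrete example using the device names identified in Step 3 of this environment:

esxcli vsan storage add -s naa.50026b72270126ff -d naa.5000c500331bca77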

The -d option specifies regular HDD disks and the -s option specifies an SSD disk. If you have more than one HDD disk, you will need to specify multiple -d entries. You can also take a look at the disks being contributed to the VSAN datastore by running the following ESXCLI command:

esxcli vsan storage list

Step 7 - To save us one additional step, you can also enable the VSAN traffic type on the first ESXi host using ESXCLI, and you can also do this for the other two hosts in advance. This step does not necessarily have to be done now, as it can be done later using the vSphere Web Client once the vCenter Server is available. You will need to either create or select an existing VMkernel interface to enable the VSAN traffic type, and you can do so by running the following ESXCLI command:

esxcli vsan network ipv4 add -i VMK-INT
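
For example, assuming the management VMkernel interface vmk0 is the one you want to use for VSAN traffic (substitute your own interface name):

esxcli vsan network ipv4 add -i vmk0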

At this point, you now have a valid VSAN datastore for your single ESXi host! You can verify this by logging into the vSphere C# Client and you should see the VSAN datastore mounted to your ESXi host.

At this point, you are now ready to deploy your vCenter Server 5.5 onto the VSAN datastore. The next series of steps outline the deployment of the VCSA for completeness of the article.

Step 8 - Deploy the VCSA 5.5 OVA/OVF onto the VSAN datastore and power on the VM.

UPDATE: You can skip Steps 9-11 by leveraging ovftool 4.0 to inject the required OVF properties when deploying the VCSA; take a look at this article for more details.

Step 9 - Since you cannot configure the OVF properties for the VCSA, you will notice that networking is not configured (unless you happen to have DHCP on the network). If you are like most Enterprise customers, you will not have DHCP running in your environment and you will need to configure a static IP.

Step 10 - Log in to the VCSA console, where we will use the VAMI CLI /opt/vmware/share/vami/vami_set_network to configure the IP Address for the VCSA. Here is an example of what that command would look like:

/opt/vmware/share/vami/vami_set_network eth0 STATICV4 172.24.68.14 255.255.255.0 172.24.68.1

For more details on the syntax, you can refer to this blog article here. At this point, you should be able to ping your VCSA and verify connectivity.

Step 11 (Optional) - In addition to IP connectivity, you may also want to configure your DNS Server and DNS search domain before configuring the VCSA application. You can do this by using the VAMI CLI /opt/vmware/share/vami/vami_set_dns, and for the search domain you would need to add the entry to /etc/resolv.conf
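
The search domain entry is just a standard resolv.conf line; for example, appending it from the console could look like the following (the domain shown is only a placeholder):

echo "search example.com" >> /etc/resolv.conf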

Step 12 - You now are ready to configure the VCSA. Open a browser and connect to https://[VCSA-IP]:5480 and proceed through the VCSA setup wizard.

Step 13 - Once the VCSA has been configured, you can now login to the vSphere Web Client and create a Datacenter object and then a vSphere Cluster and enable VSAN. Make sure you also enter your VSAN beta license key under the "Manage" section of the vSphere Cluster before you can use VSAN.

Step 14 - Add all three of your ESXi hosts to the vSphere Cluster. If you recall, earlier we had enabled the VSAN traffic type on our first ESXi host; if you did not run the command on the remaining ESXi hosts, you will need to do so using the vSphere Web Client under the "Networking" section of each ESXi host.

Step 15 - Once all three ESXi hosts have been added to the vSphere Cluster, we should now see their local storage contributed to the VSAN datastore under the "General" tab

Step 16 (Optional) - If for whatever reason the disks do not get automatically claimed, you can click on "Disk Management" and manually claim them. If you selected "Automatic" mode when enabling VSAN, the disks on each ESXi host should automatically be handled by VSAN. However, they may not be claimed if the disks are being seen as "remote" versus "local" devices.

Step 17 - The final thing I would recommend is to configure the VCSA to automatically startup and shutdown when the ESXi host reboots. To do so, login to the ESXi host using the vSphere C# Client and click on "Virtual Machine Startup/Shutdown" under the Configuration tab.

So there you have it! You are now running the vCenter Server on top of the VSAN datastore without having to initially set up a local VMFS or rely on an external NFS volume to deploy your vCenter Server and build up to the full VSAN cluster. By leveraging this bootstrap method, you can easily stand up a fully self-contained storage and compute cluster, which is ideal for an SMB or ROBO environment. The best part about this setup is that the VCSA will use the default VSAN storage policy, which is to tolerate at least one failure, and as you add your 2nd and 3rd ESXi hosts, you will automatically have resiliency for the VCSA.

Filed Under: VCSA, VSAN, vSphere 5.5, vSphere 6.0 Tagged With: esxcli, esxi 5.5, vcsa, vcva, Virtual SAN, VSAN, vSphere 5.5

How to bootstrap vCenter Server onto a single VSAN node Part 1?

09/06/2013 by William Lam 18 Comments

By now, I am sure you have heard about VMware Virtual SAN (VSAN) and you are probably anxious to give it a spin once the beta becomes publicly available in the very near future. I have been doing some testing in my lab with VSAN, not Nested VSAN, but on actual physical hardware. While getting started, I hit an interesting challenge given my physical hardware configuration and also this being a greenfield deployment.

Let me explain what I mean by this. In my lab, I have three physical hosts and each contains a single SSD and a single SATA drive. Each host has been provisioned with a small 5GB iSCSI boot LUN that is used to install ESXi (this could have also been another local disk or even a USB/SD card). Though VSAN itself is built into the VMkernel, the management of the VSAN cluster, configurations and policies is all performed through vCenter Server. So for a greenfield deployment, you would need to first deploy a vCenter Server, which would then require you to consume at least one of the local disks. This is the good ol' chicken and egg problem!

In my environment, this was a problem because I only have a single SSD and SATA disk and I would not be able to setup a VSAN datastore for all three hosts at once. This meant I had to do the following steps:

  1. Create a local VMFS volume on the first ESXi host
  2. Deploy vCenter Server and then create a VSAN Cluster
  3. Add the two other ESXi host to the VSAN Cluster
  4. Storage vMotion the vCenter Server to the VSAN Datastore
  5. Destroy the local VMFS datastore on first ESXi host (existing VMFS partitions will not work with VSAN) & delete partitions
  6. Add the first ESXi host to VSAN Cluster

As you can see this can get a bit complicated and potentially error prone when needing to destroy VMFS volumes ...

I figured there had to be a better way, and I was probably not going to be the only one hitting this scenario for greenfield and potentially even brownfield deployments. In talking to Christian Dickmann, a Tech Lead for the VSAN project, I learned about a really cool feature of VSAN in which you can actually bootstrap vCenter Server onto a single VSAN node! This is possible due to the tight integration of VSAN within the VMkernel, and the best part about this solution is that it is fully SUPPORTED by VMware. From an operational perspective, this deployment workflow is much easier and more intuitive than the process listed above. This also allows you to maximize the use of your hardware investment by running both your core infrastructure VMs as well as your regular workloads on the VSAN datastore, which is great for small or ROBO offices.

In my environment, I start out with a single ESXi 5.5 host which contains a single SSD and SATA disk and I create single VSAN node from that ESXi host and contribute its storage to the VSAN datastore. I then deploy a vCenter Server for which I am using the VCSA (vCenter Server Appliance) for quick and easy deployment. The default policy for VSAN is to automatically ensure there is at least one additional replica of the VM as new ESXi compute nodes join the VSAN cluster.

Once the vCenter Server is online, I can then create a vSphere Cluster and enable it with VSAN and add all three ESXi 5.5 hosts to the vSphere Cluster. This will then contribute all their storage to the VSAN datastore all while the vCenter Server is happily running. Once the other ESXi hosts join the VSAN cluster, we will automatically get replication between the other nodes to ensure our vCenter Server is replicated and of course you can change this policy.

As you can see, this is a much simpler setup than having to start out with an existing VMFS or even NFS datastore to initially store the vCenter Server and then create the VSAN datastore and migrate the vCenter Server over. I also like how I can start deploying my infrastructure with a single ESXi host and then slowly bring in additional ESXi hosts (just make sure you do it in a timely fashion, as you have a SPOF until then). In part two of this article, I will go into more details on how to configure the single VSAN node and bootstrap vCenter Server. In the meantime, if you have not checked out these awesome articles by some of my VMware colleagues, I would highly recommend you give them a read, especially Cormac's awesome VSAN series!

Here is How to bootstrap vCenter Server onto a single VSAN node Part 2?

If you are interested in testing out VSAN, be sure to sign up for the beta here!

Cormac Hogan

  • VSAN Part 1 – A first look at VSAN
  • VSAN Part 2 – What do you need to get started?
  • VSAN Part 3 – It is not a Virtual Storage Appliance
  • VSAN Part 4 – Understanding Objects and Components
  • VSAN Part 5 – The role of VASA

Duncan Epping

  • Introduction to VMware vSphere Virtual SAN
  • How do you know where an object is located with Virtual SAN?

Dave Hill

  • VMware VSAN – Virtual SAN – How to configure

Filed Under: VCSA, VSAN, vSphere, vSphere 5.5 Tagged With: esxcli, esxi 5.5, vcsa, vcva, Virtual SAN, VSAN, vSphere 5.5

How to quickly setup and test VMware VSAN (Virtual SAN) using Nested ESXi

09/02/2013 by William Lam 48 Comments

Last week at VMworld 2013, VMware announced the release of vSphere 5.5, which includes a variety of exciting new features. One of the most anticipated features introduced in this release is VMware Virtual SAN (VSAN), which will initially be available as a public beta. One question that I heard repeatedly throughout the VMworld conference was whether it would be possible to test VSAN in a nested ESXi environment. The answer is absolutely! This is a great way to learn about VSAN and how it works from a functional perspective before procuring the necessary hardware.

Disclaimer: Running VSAN in a nested ESXi environment is not officially supported nor is it a replacement for actual testing on actual physical hardware.

Before getting started, I would highly recommend you check out the following resources from my good friend Cormac Hogan, which include a detailed VSAN walkthrough as well as what looks to be an awesome series of articles on how VSAN works:

  • VSAN Walkthrough
  • VSAN Part 1 - A first look at VSAN
  • VSAN Part 2 - What do you need to get started

Requirements:

  • Environment running either vSphere 5.1 or 5.5 and access to the vSphere Web Client.

Configuration:

A Nested ESXi VM configured with the following minimal resources:

  • 2 vCPU
  • 5GB Memory (ESXi 5.5 now requires a minimum of 4GB vs 2GB in previous releases, but VSAN requires a minimum of 5GB, with 6GB recommended)
  • 2GB Disk for ESXi 5.5 installation
  • 4GB Disk for an "Emulated" SSD
  • 8GB Disk for HDD

Easy Method:

Instead of having you go through the process of building a Nested ESXi VM with all the prerequisites (which includes steps from here and here), I have pre-built a VSAN Nested ESXi VM template (217Kb) that you can just download and import into your environment and begin the installation process.

Download either:

  • Single VSAN Nested ESXi VM Template
  • 3-Node VSAN Nested ESXi VM Template
  • 32-Node VSAN Nested ESXi VM Template

and connect to your vCenter Server 5.1 or 5.5 using the vSphere Web Client and import the OVF into your environment (do not use the vSphere C# Client as the import does not persist VHV configuration). Once you have imported the VM, you can then mount the ESXi 5.5 ISO and begin the installation. All three VMDKs have been thin provisioned and you can change the capacity during deployment.

Slightly Harder Method:

If you wish to build the Nested ESXi VM yourself, then you can follow these instructions:

Step 1 - Create a new VM and when you get to the compatibility screen, select either "ESXi 5.1 or greater" or "ESXi 5.5 or greater" depending on the version of vSphere you are running

Step 2 - For the GuestOS select "Other" and "Other (64-bit)"

Step 3 - We will need to customize the following virtual hardware configuration:

  • Change vCPU to 2
  • Click on CPU drop down and enable "Expose hardware assisted virtualization to the guest OS"
  • Change Memory to 4GB
  • Change the initial VMDK to 2GB or whatever value you wish to use for ESXi installation
  • Add second VMDK with 4GB or whatever value you wish to use for "emulated" SSD
  • Add third VMDK with 8GB or whatever value you wish to use for the HDD
  • Click on the VM Options tab at the top and select the "Advanced" drop down box. We will need to add the following entry: scsi0:1.virtualSSD = 1 (see the sketch just after this list). For more details please refer to this article
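
If you prefer to see what these customizations roughly translate to in the VM's .vmx file, they correspond approximately to the entries below; vhv.enable is my understanding of the parameter behind the "Expose hardware assisted virtualization to the guest OS" checkbox on vSphere 5.x, so treat this as an approximation rather than a definitive reference:

vhv.enable = "TRUE"
scsi0:1.virtualSSD = "1"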

Step 4 - Click OK to provision the VM. Once it has been deployed, you will need to re-configure the guest OS to "VMware ESXi 5.x" using the vSphere C# Client for vSphere 5.1 or the vSphere Web Client for vSphere 5.5. At this point, you will have the same VM image as in the Easy Method and you are now ready to install ESXi 5.5.

When you install ESXi 5.5, you should see the following three disks as shown in the screenshot below, ensure you install ESXi on the 2GB disk:

Prior to enabling VSAN on the particular vSphere Cluster, make sure you enable the new VSAN traffic type on one of your VMkernel interfaces for each of your ESXi hosts; this is required for VSAN communication.
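
If you prefer the command line over the Web Client for this prerequisite, the traffic type can also be enabled per host with ESXCLI; vmk0 here is just an assumed management VMkernel interface, so substitute your own:

esxcli vsan network ipv4 add -i vmk0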

If all the prerequisites have been met, you can now easily enable VSAN by simply checking the VSAN box when editing the vSphere Cluster. In just a few minutes you should see diskgroups automatically created (assuming you selected Automatic mode) consuming both the emulated SSD and HDD and the creation of the vsanDatastore which will be available on all ESXi hosts within that vSphere Cluster.

You can also use the same method of emulating an SSD in a Nested ESXi VM to functionally test the new vSphere Flash Read Cache (vFRC) feature.

Filed Under: VSAN, vSphere 5.5 Tagged With: nested, ssd, vflash, vFRC, Virtual SAN, VSAN, vSphere 5.5
