Many of us who worked with classic ESX back in the day can recall that one of the scariest things about an install, re-install, or upgrade of an ESX host with SAN-attached storage was the risk of accidentally installing ESX onto one of the LUNs that housed our virtual machines. As a precaution, most vSphere administrators would ask their storage administrators to either disable/unplug the ports on the switch or temporarily mask away the LUNs at the array during an install or upgrade.

Another trick that gained popularity due to its simplicity was unloading the HBA drivers before the installation of ESX began, usually as part of the %pre section of a kickstart installation. This ensured that your SAN LUNs would not be visible during the installation, and it was much faster than involving your storage administrators. With the release of ESXi, this trick no longer works. There have been several enhancements in the ESXi kickstart that allow you to target specific types of disks during installation, but it is still possible for your SAN LUNs to be visible during the installation.

I know the question of disabling the HBA drivers in ESXi comes up pretty frequently, and I had just assumed it was not possible. A recent question on the same topic on our internal Socialcast site got me thinking. With some research and testing, I found a way to do this by leveraging LUN masking at the ESXi host level using ESXCLI. My initial thought was to mask based on the HBA adapter (C:*T:*L:*), but that would still be somewhat manual depending on your various host configurations.

That solution was not ideal, but with help from some of our VMware GSS engineers (Paudie/Daniel), I learned that you can create claim rules based on a variety of criteria, one of which is the transport type. This meant that I could create a claim rule to mask all LUNs with one of the following supported transport types: block, fc, iscsi, iscsivendor, ide, sas, sata, usb, parallel or unknown.

Here are the commands to run if you wish to create a claim rule to mask away all LUNs that are FC-based:

esxcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
esxcli storage core claimrule load
esxcli storage core claiming unclaim -t plugin -P NMP
esxcli storage core claimrule run
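To confirm the masking rule took effect, and to undo it once you no longer need it, the standard ESXCLI claim-rule subcommands can be used. A quick sketch (rule ID 2012 matches the example above):

```shell
# List the current claim rules; rule 2012 with plugin MASK_PATH should appear
esxcli storage core claimrule list

# Later, remove the masking rule and re-run the claim rules so the LUNs return
esxcli storage core claimrule remove -r 2012
esxcli storage core claimrule load
esxcli storage core claimrule run
```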

Another option mentioned by Paudie was that you could also mask based on a particular driver, such as the Emulex driver (lpfc680). To see which driver is backing a particular adapter, you can run the following ESXCLI command:

esxcli storage core adapter list

Here is a screenshot of a sample output:

For more details about creating claim rules, be sure to use the --help option or take a look at the ESXCLI documentation starting on pg 88 here.

Now this is great, but how do we go about automating this a bit further, since the claim rules would still need to be created by a user before starting an ESXi installation and removed post-installation? I started testing a customized ESXi 5 ISO that would "auto-magically" create the proper claim rules and remove them afterwards, and with some trial and error, I was able to get it working.

The process is exactly the same as laid out in an earlier article, How to Create Bootable ESXi 5 ISO & Specifying Kernel Boot Option, but instead of tweaking the kernelopt in boot.cfg, we will be appending a custom mask.tgz file that contains our "auto-magic" claim rule script. Here is what the script looks like:
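The script itself appears as an image in the original post; based on the description that follows and a reader's corrected commands in the comments, a reconstruction looks roughly like this. Treat it as a sketch rather than the verbatim original; in particular, localcli (rather than esxcli) is what works in the installer environment, and the __CLEANUP_MASKING__ heredoc marker comes from a reader's comment:

```shell
# Reconstruction of the "auto-magic" claim rule script (not the verbatim original)

# Mask all FC LUNs before the ESXi installer scans for disks
localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run

# Queue removal of the masking rule in /etc/rc.local so it is gone after reboot
cat >> /etc/rc.local << __CLEANUP_MASKING__
localcli storage core claimrule remove -r 2012
__CLEANUP_MASKING__

# init.d script that strips the above entry out of /etc/rc.local on first boot
cat > /etc/init.d/maskcleanup << __CLEANUP_MASKING__
sed -i 's/localcli.*//g' /etc/rc.local
rm -f /etc/init.d/maskcleanup
__CLEANUP_MASKING__

chmod +x /etc/init.d/maskcleanup
```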

The script above creates a claim rule to mask all FC LUNs before the installation of ESXi starts, ensuring that the FC LUNs will not be visible during the installation. It also appends a claim rule removal to /etc/rc.local, which actually executes before the installation is complete but does not take effect since it is not loaded. This ensures the claim rule is automatically removed before rebooting, and we also create a simple init.d script to clean up this entry upon first boot. All said and done, you will not be able to see your FC LUNs during the installation, but they will show up after the first reboot.

Disclaimer: Please ensure you do proper testing in a lab environment before using in Production.

To create the custom mask.tgz file, follow the steps below, then take the resulting mask.tgz file and follow the article above to create a bootable ESXi 5 ISO.

  1. Create the following directory: mkdir -p test/etc/rc.local.d
  2. Change into the "test/etc/rc.local.d" directory, create a script, and copy the above lines into it
  3. Set the execute permission on the script: chmod +x
  4. Change back into the root of the "test" directory and run the following command: tar cvf mask.tgz *
  5. Update the boot.cfg as noted in the article and append mask.tgz to the module list.
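Steps 1 through 4 can be sketched as a short shell session. The script name mask.sh is my placeholder (the post leaves it unnamed), and the script body here is just the FC masking rule from earlier:

```shell
# Step 1: directory layout expected inside the mask.tgz module
mkdir -p test/etc/rc.local.d

# Step 2: drop the claim rule script in place (mask.sh is a placeholder name)
cat > test/etc/rc.local.d/mask.sh << 'EOF'
localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run
EOF

# Step 3: make the script executable
chmod +x test/etc/rc.local.d/mask.sh

# Step 4: archive from the root of "test" so paths inside the tgz are relative
(cd test && tar cvf mask.tgz *)
```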

Once you create your customized ESXi 5 ISO, you can boot it up and perform either a clean installation or an upgrade without having to worry about SAN LUNs being seen by the installer. Though these steps are specific to ESXi 5, they should also work with ESXi 4.x (the ESXCLI syntax may need to be changed), but please do verify before using in a production environment.

You can easily leverage this in a kickstart deployment by adding the claim rule creation in the %pre section and the claim rule removal in %post, ensuring that upon first boot everything is ready to go. Take a look at this article for more kickstart tips/tricks in ESXi 5.
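A minimal kickstart sketch of that idea, reusing the FC masking rule from earlier (the busybox interpreter and rule ID 2012 are assumptions carried over from the examples above, not a tested template):

```shell
# %pre: mask FC LUNs before the installer scans for disks
%pre --interpreter=busybox
localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run

# %post: remove the masking rule so the LUNs are visible again on first boot
%post --interpreter=busybox
localcli storage core claimrule remove -r 2012
```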

13 thoughts on “Disable LUN During ESXi Installation”

  1. While very cool, doesn’t this accomplish essentially the same thing?
    install --firstdisk=ata_piix
    or in ESXi 4.1
    autopart --firstdisk=ata_piix

    Above obviously assumes same hardware configs using your examples.

    I’m asking out of complete ignorance. :)

    • @Chris,

      This is one of the enhancements I alluded to in the article for kickstart, and it doesn’t always work from what I hear. Also, I’ve seen many requests where people just don’t want those LUNs to be visible at all in case someone performs a manual installation/upgrade, which is where the initial request for the customized ISO came from.

  2. Nice script, minor corrections needed though:

    Your first code block should read:

    esxcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc

    and so on, instead of ‘esxclicli’. Looks like someone replaced ‘local’ in ‘localcli’ with ‘esxcli’, instead of just ‘esx’ ;)

    Similar issue in your script:

    "sed -i 's/esxcli.*//g' /etc/rc.local"
    will not replace anything in your rc.local, as you’re using localcli. Other than that very nice!

  3. Sorry for the late post, but if I wanted to mask away all iSCSI LUNs instead of FC, would the command be as follows? There were two options for -R: iscsi and iscsivendor. According to the 5.5 documentation, the iscsi variable isn’t currently being used, therefore I used the variable iscsivendor.


    localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R iscsivendor
    localcli storage core claimrule load
    localcli storage core claiming unclaim -t plugin -P NMP
    localcli storage core claimrule run

    cat >> /etc/rc.local << __CLEANUP_MASKING__
    localcli storage core claimrule remove -r 2012
    __CLEANUP_MASKING__

    cat > /etc/init.d/maskcleanup << __CLEANUP_MASKING__
    sed -i 's/localcli.*//g' /etc/rc.local
    rm -f /etc/init.d/maskcleanup
    __CLEANUP_MASKING__

    chmod +x /etc/init.d/maskcleanup

    • Sorry, this is not something I’ve tried specifically. The best advice I can give is to test this in a lab environment; you should know fairly quickly, as you can jump into the console and check whether any of your iSCSI LUNs are visible during the install, and you can always add a sleep to “pause” the install while you check.

Thanks for the comment!