virtuallyGhetto


esxi4.1

Nested Virtualization Resources

10/04/2012 by William Lam 7 Comments

Here is a consolidated page of all the articles I have written about Nested Virtualization (nested ESXi, Hyper-V, etc.) and all the goodies that are "Not Supported".

vSphere / vCloud 5.1

  • Having Difficulties Enabling Nested ESXi in vSphere 5.1?
  • How to Enable Nested ESXi & Other Hypervisors in vSphere 5.1
  • How to Enable Nested ESXi & Other Hypervisors in vCloud Director 5.1

vSphere / vCloud 5.0

  • How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5
  • The Missing Piece In Creating Your Own Ghetto vSEL Cloud

Additional Info/Tips/Tricks

  • Nested ESXi 5.1 Supports VMXNET3 Network Adapter Type
  • How to Configure Nested ESXi 5 to Support EVC Clusters
  • How to Enable Nested vFT (virtual Fault Tolerance) in vSphere 5
  • How to Install VMware VSA in Nested ESXi 5 Host Using the GUI
  • Cool Undocumented Features in vCloud Director 1.5
  • The Missing Piece In Creating Your Own Ghetto vSEL Cloud
  • Nested Virtualization APIs For vSphere & vCloud Director 5.1
  • How To Enable Nested ESXi Using VXLAN In vSphere & vCloud Director 
  • Will Intel’s VMCS Shadowing Feature Benefit VMware’s Nested Virtualization?
  • How to run Nested RHEV Hypervisor on ESXi? 
  • How to quickly setup and test VMware VSAN (Virtual SAN) using Nested ESXi
  • How to run Nested ESXi on top of a VSAN datastore? 
  • VMware Tools for Nested ESXi 
  • Why is Promiscuous Mode & Forged Transmits required for Nested ESXi?
  • How to properly clone a Nested ESXi VM?

Filed Under: Uncategorized Tagged With: amd-v, ept, esxi, esxi 5, esxi4, esxi4.1, esxi5.1, hyper-v, intel vt, nested, rvi, vhv, virtual hardware virtualization, vSphere, vSphere 4, vSphere 5, vSphere 5.1

Disable LUN During ESXi Installation

04/17/2012 by William Lam 14 Comments

Many of us who worked with classic ESX back in the day can recall that one of the scariest things during an install/re-install or upgrade of an ESX host with SAN-attached storage was the potential risk of accidentally installing ESX onto one of the LUNs that housed our Virtual Machines. As a precaution, most vSphere administrators would ask their Storage administrators to either disable/unplug the ports on the switch or temporarily mask away the LUNs at the array during an install or upgrade.

Another trick that gained popularity due to its simplicity was unloading the HBA drivers before the installation of ESX began, usually as part of the %pre section of a kickstart installation. This ensured that your SAN LUNs were not visible during the installation, and it was much faster than involving your Storage administrators. With the release of ESXi, this trick no longer works. There have been several enhancements in the ESXi kickstart that allow you to specify specific types of disks during installation; however, it is still possible that you could see your SAN LUNs during the installation.

I know the question about disabling the HBA drivers for ESXi comes up pretty frequently, and I had just assumed it was not possible. A recent question on the same topic on our internal Socialcast site got me thinking. With some research and testing, I found a way to do this by leveraging LUN masking at the ESXi host level using ESXCLI. My initial thought was to mask based on the HBA adapter (C:*T:*L:*), but this would still be somewhat manual depending on your various host configurations.

The above solution was not ideal, but with help from some of our VMware GSS engineers (Paudie/Daniel), I learned that you could create claim rules based on a variety of criteria, one of which is the transport type. This meant that I could create a claim rule to mask all LUNs that had one of the following supported transport types: block, fc, iscsi, iscsivendor, ide, sas, sata, usb, parallel or unknown.

Here are the commands to run if you wish to create a claim rule to mask away all LUNs that are FC based:

# Add rule 2012 to mask every path whose transport type is "fc"
esxcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
# Load the new rule into the VMkernel
esxcli storage core claimrule load
# Release any paths currently claimed by the NMP plugin
esxcli storage core claimrule run
esxcli storage core claiming unclaim -t plugin -P NMP
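
To confirm the masking rule is in place before kicking off the installation, you can list the loaded claim rules (a quick sanity check, not part of the original steps):

esxcli storage core claimrule list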

Another option mentioned by Paudie was that you could also mask based on a particular driver, such as the Emulex driver (lpfc680). To see which driver is backing a particular adapter, you can run the following ESXCLI command:

esxcli storage core adapter list

Here is a screenshot of a sample output:
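
Based on that output, a driver-based masking rule might look like the following sketch (the rule ID is arbitrary and the driver name must match what the adapter list reports), followed by the same load/unclaim/run sequence shown above:

esxcli storage core claimrule add -r 2013 -P MASK_PATH -t driver -D lpfc680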

For more details about creating claim rules, be sure to use the --help option or take a look at the ESXCLI documentation starting on pg 88 here.

Now this is great, but how do we go about automating this a bit further, since the claim rules would still need to be created by a user before starting an ESXi installation and removed post-installation? I started doing some testing with creating a customized ESXi 5 ISO that would "auto-magically" create the proper claim rules and remove them afterwards, and with some trial/error I was able to get it working.

The process is exactly the same as laid out in an earlier article, How to Create Bootable ESXi 5 ISO & Specifying Kernel Boot Option, but instead of tweaking the kernelopt in the boot.cfg, we will just be appending a custom mask.tgz file that contains our "auto-magic" claim rule script. Here is what the script looks like:

#!/bin/ash

# Mask all FC paths before the installer scans for disks
localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run

# Append removal of the mask rule to /etc/rc.local so the rule is cleaned
# up automatically (see the explanation below for when this takes effect)
cat >> /etc/rc.local << __CLEANUP_MASKING__
localcli storage core claimrule remove -r 2012
__CLEANUP_MASKING__

# Self-deleting init.d script that scrubs the cleanup entry on first boot
cat > /etc/init.d/maskcleanup << __CLEANUP_MASKING__
sed -i 's/localcli.*//g' /etc/rc.local
rm -f /etc/init.d/maskcleanup
__CLEANUP_MASKING__

chmod +x /etc/init.d/maskcleanup

The script above creates a claim rule to mask all FC LUNs before the installation of ESXi starts; this ensures that the FC LUNs will not be visible during the installation. It also appends a claim rule removal to /etc/rc.local, which actually executes before the installation is complete but does not take effect since it is not loaded. This ensures the claim rule is automatically removed before rebooting, and we also create a simple init.d script to clean up this entry upon first boot. All said and done, you will not be able to see your FC LUNs during the installation, but they will show up after the first reboot.

Disclaimer: Please ensure you do proper testing in a lab environment before using in Production.

To create the custom mask.tgz file, follow the steps below (condensed into shell commands after the list), then take the mask.tgz file and follow the article above in creating a bootable ESXi 5 ISO.

  1. Create the following directory: mkdir -p test/etc/rc.local.d
  2. Change into the "test/etc/rc.local.d" directory and create a script called mask.sh, copying the lines above into it
  3. Set the execute permission on the script: chmod +x mask.sh
  4. Change back into the root of the "test" directory and run the following command: tar cvf mask.tgz *
  5. Update the boot.cfg as noted in the article and append mask.tgz to the module list.
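
The same steps condensed into shell commands (a sketch; "test" is just the staging directory from the steps above, and mask.sh is the script shown earlier):

mkdir -p test/etc/rc.local.d
cp mask.sh test/etc/rc.local.d/
chmod +x test/etc/rc.local.d/mask.sh
cd test
tar cvf mask.tgz *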

Once you create your customized ESXi 5 ISO, you can just boot it up and perform either a clean installation or an upgrade without having to worry about SAN LUNs being seen by the installer. Though these steps are specific to ESXi 5, they should also work with ESXi 4.x (the ESXCLI syntax may need to be changed), but please do verify before using in a production environment.

You can easily leverage this in a kickstart deployment by adding the claim rule creation in the %pre section and the claim rule removal in the %post section, ensuring that upon first boot everything is ready to go; a sketch follows below. Take a look at this article for more details on kickstart tips/tricks in ESXi 5.
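
Here is a minimal sketch of what those kickstart stanzas might look like (using the FC masking rule from above; adjust the transport type and rule ID for your environment):

%pre --interpreter=busybox
# Mask all FC paths before the installer enumerates disks
localcli storage core claimrule add -r 2012 -P MASK_PATH -t transport -R fc
localcli storage core claimrule load
localcli storage core claiming unclaim -t plugin -P NMP
localcli storage core claimrule run

%post --interpreter=busybox
# Remove the masking rule so the FC LUNs reappear on first boot
localcli storage core claimrule remove -r 2012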


Filed Under: Automation, ESXi Tagged With: esxi 5, esxi4.1, kickstart, ks.cfg, LUN

How to Access USB Storage in ESXi Shell

03/13/2012 by William Lam 27 Comments

While performing some experiments in my home lab, I needed to access a USB storage key directly on my ESXi host (not passed through to a VM) and found it required a small trick after some tinkering. I thought I would share the process in case it comes in handy for others.

Disclaimer: This is mainly for educational and testing purposes as this is not officially supported by VMware. Please use at your own risk.

Before I begin, you should know that only USB storage devices formatted with FAT16 can be accessed in the ESXi Shell; this applies to both ESXi 4.1 and 5.0.

Step 1 - Login to the ESXi Shell via SSH and disable the USB arbitrator service (enabled by default to allow pass-through of USB devices to your VMs) using the following command: /etc/init.d/usbarbitrator stop

Step 2 - Plug your USB device into your ESXi host. You can verify it is visible using two ESXCLI commands: list the storage device with esxcli storage core device list | grep -i usb, or view the mounted filesystems with esxcli storage filesystem list

Step 3 - Lastly, after you verify the USB device can be seen by the ESXi host, you can of course browse and access your USB device by looking under /vmfs/volumes/

To re-enable pass-through of USB devices to your VMs, you just need to start the usbarbitrator service.
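
For reference, here is the whole sequence as a sketch (the volume name under /vmfs/volumes/ will vary by device):

/etc/init.d/usbarbitrator stop
esxcli storage core device list | grep -i usb
esxcli storage filesystem list
ls /vmfs/volumes/
/etc/init.d/usbarbitrator start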


Filed Under: ESXi, Home Lab, Not Supported Tagged With: esxi 5, esxi4.1, lsusb, usb

Datastore File Management using vCLI vifs

03/09/2012 by William Lam Leave a Comment

There are many useful scripts bundled with the VMware vCLI; one such script that is not very well known is the vifs utility, which provides datastore file management. When you right-click on a datastore and browse it using the vSphere Client, you can create a new folder, download/upload, delete and move files.

Using the vCLI's vifs utility, you can perform the same set of operations via the command-line; behind the scenes it uses the vSphere API fileManager to perform these operations. You can also browse a datastore with just a web browser: point it to the following address: https://[ESXI_HOSTNAME]/folder and you can access the datastores by clicking through the links.

To browse the datastore using vifs, you will need vCLI installed on either a Windows/Linux system or you may use VMware vMA.

To browse a specific datastore on an ESXi host, you will first need to list the available datastores using the following command: vifs --server [SERVER] --username [USERNAME] --listds

Once you have identified the datastore you are interested in, use the --dir flag to list the contents of a directory and its sub-directories with the following command: vifs --server [SERVER] --username [USERNAME] --dir '[DATASTORENAME]'

Note: The datastore name must be in brackets, '[datastorename]', which is how a datastore path is identified in the vSphere API. To list sub-directories, you will need a space between the datastore name and the directory name, and do not forget to quote the parameter.

Let's say you would like to download one of those .vmx configuration files; you can do so with the --get flag using the following command:

vifs --server [SERVER] --username [USERNAME] --get '[DATASTORENAME] somedir/somefile.vmx' .

Note: In the example above, we are downloading the file to the current working directory, denoted by the "." (period). If you wish to download it somewhere else, or even rename the file, you will need to specify the full path to the destination.
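
Putting it together with hypothetical values (the hostname, datastore and paths are made up for illustration):

vifs --server esxi01.example.com --username root --listds
vifs --server esxi01.example.com --username root --dir '[datastore1]'
vifs --server esxi01.example.com --username root --dir '[datastore1] myvm'
vifs --server esxi01.example.com --username root --get '[datastore1] myvm/myvm.vmx' .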


If you wanted to automate the downloading of, say, all .vmx configuration files, it might be pretty tedious to run through the directory discovery by hand, so here is a quick, more user-friendly shell script called getVMVMX.sh that allows you to easily download all .vmx configurations for a given datastore.

To use the script, you will need vCLI installed on either a Linux system or VMware vMA, and be sure to set the executable permission on the shell script. You will need to specify the credentials for the ESX(i) host and the specific datastore from which you wish to either "list" or "download" all .vmx configuration files.

Using the --listds flag, first identify the datastore you wish to use. Next, use the following command to "list" all .vmx configuration files: ./getVMVMX.sh [ESXI_SERVER] [USERNAME] "[PASSWORD]" [DATASTORE] list

To download all .vmx configuration files, you will use the following command:

./getVMVMX.sh [ESXI_SERVER] [USERNAME] "[PASSWORD]" [DATASTORE] download [FOLDER]

where FOLDER is a directory that will automatically be created for you to store all the .vmx configuration files.

Note: You can easily modify the script to add an additional "for loop" at the beginning to automatically download .vmx configurations for all datastores. I will leave that as an exercise for the reader.
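
In the same spirit, here is a rough sketch of what such a wrapper around vifs might look like (this is not the actual getVMVMX.sh, and the output parsing in particular may need adjusting for your vifs version):

#!/bin/bash
# fetch_vmx.sh - sketch: download every .vmx file from one datastore via vifs
SERVER=$1; USERNAME=$2; PASSWORD=$3; DATASTORE=$4; DEST=$5
mkdir -p "$DEST"
VIFS="vifs --server $SERVER --username $USERNAME --password $PASSWORD"
# Top-level entries ending in "/" are VM directories
for DIR in $($VIFS --dir "[$DATASTORE]" | grep '/$'); do
    DIR=${DIR%/}
    # Grab any .vmx file found inside the VM directory
    for VMX in $($VIFS --dir "[$DATASTORE] $DIR" | grep '\.vmx$'); do
        $VIFS --get "[$DATASTORE] $DIR/$VMX" "$DEST/$VMX"
    done
done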

So if you ever need to grab a vmware.log file for a specific VM or upload an ISO to a datastore, you can do so from the command-line using the vifs utility that is bundled with the vCLI.


Filed Under: Uncategorized Tagged With: esxi 5, esxi4, esxi4.1, esxi5, vcli, vSphere

Ghetto webAccess for ESXi

12/12/2011 by William Lam 14 Comments

I got the idea for this post a few months back after noticing several questions on the VMTN forums about how to enable webAccess for ESXi. With ESXi, the webAccess interface is no longer available as it was with classic ESX. After seeing the question and randomly browsing through the various flings on VMware Labs, I noticed an interesting fling called Ops Panel for ESX. Ops Panel provides a simple JavaScript that leverages the vSphere MOB to perform basic power operations on virtual machines, and it is loaded onto the homepage of a classic ESX host remotely using Greasemonkey.

I immediately wondered if I could run the JavaScript directly on an ESX or ESXi host without the use of Greasemonkey. With a quick tweak of the default index.html homepage, I was able to get a simple "ghetto" webAccess running on both an ESX and ESXi host. I also ran into several bugs: one dealt with how the power state of a virtual machine is reported differently by the ESX(i) 4.0, 4.1 and 5.0 APIs, and another with a recent fix for a CSRF (Cross-Site Request Forgery) vulnerability in ESX(i) 4.1 Update 1, which made it difficult to get Ops Panel running on more than just ESX(i) 4.0.

I reached out to the fling creator, Ivan Donchev, and he was kind enough to help with the issues I ran into and also provided an updated version of his script to properly handle both the power state and the CSRF workaround. He published an update to his script a few weeks back supporting both ESX 4 and ESXi 5, but missed ESX(i) 4.1 support due to the limited amount of testing. This was an easy fix, and I modified the script to include support for ESX(i) 4.1 and also changed the default power-off operation to a guestOS shutdown. The modified version of the script can be downloaded here.

When you browse to the homepage of your ESX(i) host, you will be prompted to login, which requires the same credentials as logging in to the host directly using the vSphere Client or vSphere MOB.

Once you have logged in, it will search for all virtual machines running on the host and generate the list of virtual machines and their respective power states.

You can then perform the appropriate power operation such as a power on, shutdown or suspend using the icons on the right. This can be really useful if you don't have access to vCenter Server, vSphere Client or SSH access to the host but just have a web browser.

To load the Ops Panel script on an ESX(i) host, you will need to copy both the modified index.html and the Ops Panel script into the host's docroot:

Note: These instructions are applicable to both ESX and ESXi, but with ESXi it is important that the commands to copy both the modified index.html and the Ops Panel script into the docroot are re-executed after each reboot, as these changes are not persisted across reboots on ESXi hosts.
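
A minimal sketch of those copy commands (the script filename is an assumption; on ESXi the docroot lives at /usr/lib/vmware/hostd/docroot):

# Preserve the stock homepage, then install the modified page and the script
cp /usr/lib/vmware/hostd/docroot/index.html /usr/lib/vmware/hostd/docroot/index.html.bak
cp index.html /usr/lib/vmware/hostd/docroot/index.html
cp ops_panel.js /usr/lib/vmware/hostd/docroot/ops_panel.js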

You can also add this to your kickstart file by appending the lines above in your %firstboot stanza so you automatically get Ops Panel after install; a sketch follows below. Though this will not give you the full webAccess that classic ESX did, it is definitely a useful way to quickly get to your virtual machines and perform simple power operations using a web browser.
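
A sketch of that %firstboot stanza (assuming the two files have been staged on a local datastore; the paths and filenames are hypothetical):

%firstboot --interpreter=busybox
# Install the modified homepage and Ops Panel script after the first boot
cp /vmfs/volumes/datastore1/index.html /usr/lib/vmware/hostd/docroot/index.html
cp /vmfs/volumes/datastore1/ops_panel.js /usr/lib/vmware/hostd/docroot/ops_panel.js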


Filed Under: Uncategorized Tagged With: esx4, esx4.1, esxi 5, esxi4.1, kickstart, mob, web access

ghettoVCB + ghettoVCB-restore Updates

11/28/2011 by William Lam 6 Comments

I finally got a chance this weekend to finish up the documentation on some of the new feature enhancements and bug fixes for both ghettoVCB and ghettoVCB-restore. One of the biggest changes is that ghettoVCB and ghettoVCB-restore are now bundled together, and ghettoVCB-restore is now version controlled on github just like ghettoVCB. This has been on the backlog for a while and I am sorry it took this long to get implemented.

Here are the release notes for the enhancements/fixes for both ghettoVCB + ghettoVCB-restore. I hope you enjoy these updates, and if you have any issues, please report them in the ghettoVCB VMTN group.

ghettoVCB 

Enhancements:

  • ghettoVCB & ghettoVCB-restore are now packaged together and both scripts are versioned on github
  • ESXi 5 firewall check for email port (check FAQ #33 for more details)
  • New EMAIL_DELAY_INTERVAL netcat variable to control slow SMTP servers (see the snippet after this list)
  • ADAPTER_TYPE (buslogic,lsilogic,ide) no longer needs to be manually specified; the script auto-detects it based on the VMDK descriptor file
  • Using the symlink -f parameter for quicker unlink/re-link in the RSYNC use case
  • Updated documentation, including NFS issues (check FAQ #19 for more details, including the new VMware KB 1035332 article)
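
For reference, a sketch of how that variable is set among the script's user-definable variables (the value is a delay in seconds; the default in your copy may differ):

# delay between email data transmissions for slow SMTP servers
EMAIL_DELAY_INTERVAL=2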

Fixes:

  • vSphere 4.1 Update 2 introduced a new vim-cmd snapshot.remove param; the script has been updated to detect this param change

ghettoVCB-restore

Enhancements:

  • Support for ESX(i) 5.0
  • Combined ghettoVCB + ghettoVCB-restore scripts
  • ghettoVCB-restore is now versioned on github

Filed Under: Uncategorized Tagged With: esxi4, esxi4.1, esxi5, ghettoVCB, ghettovcb-restore

