virtuallyGhetto

ESXi

How to Access USB Storage in ESXi Shell

03/13/2012 by William Lam 27 Comments

While performing some experiments in my home lab, I needed to access a USB storage key directly on my ESXi host (not passed through to a VM) and found that it required a small trick after some tinkering. I thought I would share the process in case it comes in handy for others.

Disclaimer: This is mainly for educational and testing purposes as this is not officially supported by VMware. Please use at your own risk.

Before I begin, you should know that only USB storage devices formatted with FAT16 can be accessed in the ESXi Shell. This applies to both ESXi 4.1 and 5.0.

Step 1 - Log in to the ESXi Shell via SSH and disable the USB Arbitrator service (enabled by default to allow pass-through of USB devices to your VMs) using the following command:

/etc/init.d/usbarbitrator stop

Step 2 - Plug your USB device into the ESXi host. You can then verify it with two ESXCLI commands: list the storage device with

esxcli storage core device list | grep -i usb

or view the mounted filesystems with

esxcli storage filesystem list

Step 3 - Lastly, after you have verified that the USB device can be seen by the ESXi host, you can browse and access it by looking under /vmfs/volumes/
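For example, if the USB key mounted with the volume label USBKEY (a hypothetical label; yours will differ), you could list its contents with:

ls /vmfs/volumes/USBKEY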

To re-enable pass-through of USB devices to your VMs, you just need to start the usbarbitrator service.
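For example:

/etc/init.d/usbarbitrator start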

Filed Under: ESXi, Home Lab, Not Supported Tagged With: esxi 5, esxi4.1, lsusb, usb

The Missing Piece In Creating Your Own Ghetto vSEL Cloud

10/31/2011 by William Lam 21 Comments

A while back I discovered an undocumented flag called "esxvm" in the SQL statements of the new vCloud Director 1.5 installer that suggested the possibility of deploying nested ESXi hosts in vCD. After further investigation, however, the flag only enables the automated configuration of an ESXi 5 parameter (vhv.allow), which is required to run nested ESXi 4.x/5.x hosts, as part of preparing a new ESXi 5 host in vCloud Director. There was still a missing piece to the puzzle to enable this functionality within the vCloud Director user interface.
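For reference, this is the same nested-virtualization setting you would otherwise add by hand to /etc/vmware/config on a standalone ESXi 5.0 host (shown here only for context; vCloud Director automates it when ESX_VM mode is enabled):

vhv.allow = "TRUE"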

The answer eventually came from attending a recent session at VMworld 2011 in Las Vegas, CIM1436 - Virtual SE Lab (vSEL): Building the VMware Hybrid Cloud, by Ford Donald of VMware. I will not go into detail about what vSEL is; if you would like more information, take a look at the blog post The Demo Cloud at VMworld Copenhagen or check out Ford's VMworld presentation online. In one of his slides, Ford describes the necessary steps to enable nested ESXi, called ESX_VM mode, in vCloud Director, which actually consists of two parts:

  • Enable nested virtualization and 64-bit vVM support in vSphere 5
  • Enable special mode in vCloud Director called ESX_VM to allow for vSphere 4 and 5 hosts as valid guestOS types

There are also some additional steps that are required after enabling ESX_VM mode:

  • Preparing or re-preparing ESXi 5 hosts
  • Allowing for Promiscuous Mode in vCD-NI or VLAN-backed Network Pool

********************* DISCLAIMER *********************
This is not a supported configuration by VMware and this can disappear at any time, use at your own risk 

********************* DISCLAIMER *********************

Note: I will assume the reader has a good understanding of how to install/configure vCloud Director and how it works. I will not be going into any details in configuring or installing vCD, you can find plenty of resources on the web including here, here, here and here. I will also assume you understand how to configure vCD-NI and VLAN-backed network pools in vCloud Director and how they work.

The first part is to enable nested virtualization (nested ESXi) support within the ESXi 5 hosts when they are being prepared by vCloud Director, by executing the following SQL statement as noted in my earlier blog post Cool Undocumented Features in vCloud Director 1.5:

UPDATE config SET value='true' WHERE name='extension.esxvm.enabled';

The second part is to update the vCloud Director database to add support for both vSphere 4 and 5 hosts as valid guestOS types:

INSERT INTO guest_osfamily (family,family_id) VALUES ('VMware ESX/ESXi',6);

INSERT INTO guest_os_type (guestos_id,display_name, internal_name, family_id, is_supported, is_64bit, min_disk_gb, min_memory_mb, min_hw_version, supports_cpu_hotadd, supports_mem_hotadd, diskadapter_id, max_cpu_supported, is_personalization_enabled, is_personalization_auto, is_sysprep_supported, is_sysprep_os_packaged, cim_id, cim_version) VALUES (seq_config.NextVal,'ESXi 4.x', 'vmkernelGuest', 6, 1, 1, 8, 3072, 7,1, 1, 4, 8, 0, 0, 0, 0, 107, 40);

INSERT INTO guest_os_type (guestos_id,display_name, internal_name, family_id, is_supported, is_64bit, min_disk_gb, min_memory_mb, min_hw_version, supports_cpu_hotadd, supports_mem_hotadd, diskadapter_id, max_cpu_supported, is_personalization_enabled, is_personalization_auto, is_sysprep_supported, is_sysprep_os_packaged, cim_id, cim_version) VALUES (seq_config.NextVal, 'ESXi 5.x', 'vmkernel5Guest', 6, 1, 1, 8, 3072, 7,1, 1, 4, 8, 0, 0, 0, 0, 107, 50);

To apply these SQL statements to your vCloud Director 1.5 database, you will need to log in to either your Oracle or SQL Server database and manually execute the statements using the account that you originally created.

Here is an example of executing the SQL statements on an Oracle Express 11g database (Oracle Express is not officially supported by VMware):
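Here is a minimal sketch of what that looks like with sqlplus, assuming a hypothetical vcloud/vcloudpass database account (substitute your own credentials, run the two INSERT statements the same way, and remember to commit):

sqlplus vcloud/vcloudpass@XE
SQL> UPDATE config SET value='true' WHERE name='extension.esxvm.enabled';
SQL> COMMIT;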

As you can see, we first create a new guest_osfamily type called "VMware ESX/ESXi" and provide a unique family_id; from a default installation of vCloud Director 1.5, the next available value will be 6. Next, we create the two new guest OS types, "ESXi 4.x" and "ESXi 5.x", and again provide a unique guestos_id; from a default installation of vCloud Director 1.5, the next available values will be 81 and 82. If any errors are thrown about a constraint being violated, the ids may already be in use; you can always query the table to see what the next value is or select a new id.

Once you have executed the SQL statements, you will need to restart the vCloud Director cell for the changes to take effect, and if you have already prepared ESXi 5 hosts, you will need to re-prepare them.
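On the vCD cell (a RHEL system), restarting the cell is typically done with:

service vmware-vcd restart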

If you prefer not to do this manually, you can take a look at my blog post Automating vCloud Director 1.5 & Oracle DB Installation, which has been updated to allow you to enable ESX_VM mode with your vCloud Director 1.5 installation. There is a new flag in the vcd.rsp file called ENABLE_NESTED_ESX which can be toggled to true/false; it will automatically execute the SQL statements as part of the post-installation of vCloud Director 1.5 and restart the vCD cell for you.
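For example, the relevant line in vcd.rsp would look something like this (the exact format is an assumption; refer to the vcd.rsp from that blog post):

ENABLE_NESTED_ESX=true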

Here is a screenshot if you decide to enable this flag:

Finally, the last configuration tweak is to enable both promiscuous mode and forged transmits in either your vCD-NI or VLAN-backed network pool, which is a requirement for running nested ESXi hosts. Locate the name of your network pool to identify the corresponding distributed portgroup.

Next, you can either use the vCD API or log in to your vCenter Server and enable promiscuous mode for that specific distributed portgroup.

UPDATE: Thanks to @DasNing - You can also enable promiscuous mode by executing the following SQL query (substitute the name of your network pool): UPDATE network_pool SET promiscuous_mode='1' WHERE name='<network pool name>';

We are finally done with all the configurations!

If you successfully completed the above, when you go to create a new virtual machine in vCloud Director, you should now see a new Operating System Family called "VMware ESX/ESXi".

Within this new OS family, you can now provision a new ESXi 4.x or ESXi 5.x guest OS.

Here is an example of my own vGhettoPod, which includes a vMA5 and a vESXi 5 host that I can use to perform various types of testing in my home lab.

Now you can create your own ghetto vSEL cloud using VMware vSphere 5, vCloud Director 1.5 and vShield 5!

Filed Under: Automation, ESXi, Nested Virtualization, Not Supported, Uncategorized Tagged With: esxi5, esxvm, nested, vcd, vcloud director, vsel, vSphere 5

How to Create and Modify vgz (vmtar) Files on ESXi 3.x/4.x

08/09/2011 by William Lam 4 Comments

There were several questions today on the VMTN community forums with regards to manipulating .vgz files in ESXi, also known as vmtar files. Due to the sparse amount of information on the web, I wanted to document some of the common operations that can be performed on the vmtar files. I will not be going over the use cases for manipulating or creating custom vmtar files, but here is one use case.

UPDATE (10/16/18) - For ESXi 6.5+, please use the following commands; the example below uses the s.v00 file:

Decompress file:

gunzip < s.v00 > s.v00.xz
xz --single-stream --decompress < s.v00.xz > s.v00.vtar
vmtar -v -x s.v00.vtar -o s.v00.tar
tar -xvf s.v00.tar

Compress file:

tar -cvf s.v00-new.tar bin/ etc/ lib/ lib64/ opt/ usr/ var/
vmtar -v -c s.v00-new.tar -o s.v00.vtar
xz --single-stream --compress < s.v00.vtar > s.v00.xz
xz --single-stream --compress < s.v00.vtar > s.v00

You can find some of these vmtar files with the .vgz extension in the ESXi installation ISO; here are a few highlighted in red:

To operate on existing vmtar files, you will need access to an ESXi host via the ESXi Shell, using the /sbin/vmtar utility.

Usage: vmtar {[-x vtar/vgz-file] [-c tar/tgz-file] [-v] -o destination} | -t < vtar/vgz-file

In this example, we will copy the install.vgz to an ESXi host to perform some operations.
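For example, you could copy the file over with scp (esxi-host is a hypothetical hostname; SSH must be enabled on the host):

scp install.vgz root@esxi-host:/tmp/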

To list the contents of a vmtar file, you will need to use the -t option:
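For example:

vmtar -t < install.vgz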

To extract the contents of a vmtar file, you will need to use the -x and -o options:

vmtar -x install.vgz -o install.tar

Note: The output will be a standard tar file which will then need to be extracted before getting to the actual contents

To extract the tar file, we will be using the tar utility:
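For example:

tar -xvf install.tar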

Let's say we made a change to one of the files and now would like to re-create the vmtar file. We will first need to tar up the contents by using the tar utility again:
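For example, assuming your working directory contains only the extracted contents:

tar -cvf install.tar *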

To verify the contents were all tarred up, we can view the contents by using the following command:

tar -tf install.tar

Now we will create the vmtar file using the vmtar utility:

vmtar -c install.tar -o install.vgz

We can confirm the contents by using vmtar -t option once again:

vmtar -t < install.vgz

If you decide to create your own custom vmtar files and want to verify the file layout, you can use vmkramdisk to assist you. Using the vdf command, make note of the number of tardisks that have been mounted up.

Also make note of the filesystem layout by performing an "ls" on / (slash):

Now let's say you wanted to create a directory called virtuallyGhetto with a file in that directory called foobar and you wanted it to be mounted up under /

Here are the steps to perform the above:
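Here is a minimal sketch of those steps, assuming vmkramdisk mounts a .vgz file passed as its only argument (the unmount command shown further below suggests this syntax):

mkdir virtuallyGhetto
touch virtuallyGhetto/foobar
tar -cf virtuallyGhetto.tar virtuallyGhetto
vmtar -c virtuallyGhetto.tar -o virtuallyGhetto.vgz
vmkramdisk virtuallyGhetto.vgz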

Do you notice anything different? How about performing an "ls" on / (slash) again?

To umount the vmtar disk, you would use the following command:

vmkramdisk -u virtuallyGhetto.vgz

Filed Under: Automation, ESXi, Not Supported Tagged With: esxi 6.5, esxi 6.7, esxi4.1, vgz, vmtar, vSphere 4.1

How to Enable Nested vFT (virtual Fault Tolerance) in vSphere 5

07/31/2011 by William Lam 5 Comments

The ability to enable virtual Fault Tolerance on nested virtual machines running in vESX(i) is not a new feature in vSphere 5; vFT has been an unsupported capability since vSphere 4 and was initially identified by Simon Gallagher. The process is exactly the same in vSphere 5: three configuration options need to be set on the virtual machine that will be enabled with FT, not on the vESXi VM.

replay.supported = "true"
replay.allowFT = "true"
replay.allowBTOnly = "true"

During the beta of vSphere 5, I did enable vFT, but on an offline virtual machine to conserve unnecessary compute resources. Today there was a question on the beta community about configuring vFT for vSphere 5, and I wanted to quickly validate that the configurations still hold true. I ran into an interesting error when trying to enable vFT: the power-on of the secondary virtual machine failed with the following error:

This was not an error I had seen before in vSphere 4 and looking at the vmkernel and vmware.log files, I noticed the following:

2011-07-31T17:31:39.314Z| vcpu-0| [vob.vmotion.stream.keepalive.read.fail] vMotion migration [ac1e0050:1312133702562144] failed to read stream keepalive: Connection closed by remote host, possibly due to timeout
2011-07-31T17:31:39.314Z| vcpu-0| [msg.checkpoint.precopyfailure] Migration to host <> failed with error Connection closed by remote host, possibly due to timeout (0xbad003f).
2011-07-31T17:31:39.324Z| vcpu-0| Migrate: secondary failure during migration: error Connection closed by remote host, possibly due to timeout.

I tried changing the advanced option on the vESX(i) host to increase the vMotion timeout but continued to hit the same error. I decided to look more closely at the first error message, "failed to read stream keepalive", and found an advanced ESX(i) setting called /Migrate/VMotionStreamDisable; this advanced option has been available since ESX(i) 4.x.

I decided to disable the vMotion stream and, to my surprise, FT was then able to power on the secondary virtual machine and no longer ran into that error.

Note: You may or may not run into this error message and the configuration may not be necessary. If you enable vFT on an offline VM, you should not have any issues as long as you meet the minimum Fault Tolerance requirements.

You can configure the advanced ESXi option using either esxcli or legacy esxcfg-advcfg commands:

esxcli system settings advanced set -o /Migrate/VMotionStreamDisable -i 1
esxcfg-advcfg -s 1 /Migrate/VMotionStreamDisable
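If you want to check the current value before (or after) changing it:

esxcli system settings advanced list -o /Migrate/VMotionStreamDisable
esxcfg-advcfg -g /Migrate/VMotionStreamDisable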

It is important to understand that even though you can set up vESX(i) hosts and play with some of the advanced functionality such as vMotion and FT, the actual behavior is unpredictable, as these configurations are unsupported by VMware. This is of course also a great feature for home labs and for studying for VMware certifications such as the VCP and VCAP-DCA, but that should be the extent of leveraging these unsupported configurations.

Filed Under: ESXi, Nested Virtualization, Not Supported Tagged With: esxi5, fault tolerance, nested ft, vft, vSphere 5

How to Add a Splash of Remote Color to ESXi Shell

07/23/2011 by William Lam 6 Comments

This morning I noticed a very interesting retweet by fellow vExpert Wil van Antwerpen from another vExpert, Richard Cardona (you may know him as rcardona2k on the VMTN Community Forums), about a neat little trick with the remote ESXi Shell (previously known as remote TSM).

Those of you who log in remotely via SSH to the ESXi Shell (previously known as unsupported mode and Tech Support Mode) know that you can run the DCUI utility remotely by just typing "dcui". The remote DCUI works just like it does on the direct console, with the exception of displaying the famous yellow and black screen that we are familiar with.

Richard came upon a neat little trick: by changing the terminal type from the default "xterm" to "linux", the yellow and black colors can be enabled when using the remote DCUI.

Before launching DCUI utility, you will need to run the following command on the ESXi Shell:

export TERM=linux

Next, just type "dcui" and hit enter.
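Alternatively, you can combine both into a single command:

TERM=linux dcui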

Here is an example of running remote DCUI in color on ESXi 5

Here is an example of running remote DCUI in color on ESXi 4.1

Note: As you can see, this is not a new trick in vSphere 5; it has been around since the 4.x days. One big change with vSphere 5, however, is the full resolution of the DCUI, which many have complained about in the past.

If you are interested in other ways of customizing the DCUI, take a look at this blog post How to add a splash of color to ESXi DCUI Welcome Screen

Don't forget to play some cool soundtrack music when using the DCUI 😉

Filed Under: ESXi, Not Supported Tagged With: dcui, esxi4, esxi5, vSphere 4, vSphere 5

How to Format and Create VMFS5 Volume using the CLI in ESXi 5

07/19/2011 by William Lam 38 Comments

VMware always recommends formatting and creating a new VMFS volume using the vSphere Client, as it automatically aligns your VMFS volume. However, if you do not have access to the vSphere Client, or you want to format additional VMFS volumes via a kickstart, you can do so using the CLI and the partedUtil utility under /sbin.

~ # /sbin/partedUtil
Not enough arguments
Usage:
Get Partitions : get
Set Partitions : set ["partNum startSector endSector type attr"]*
Delete Partition : delete
Resize Partition : resize
Get Partitions : getptbl
Set Partitions : setptbl

With ESXi 5, an MBR (Master Boot Record) partition table is no longer used; it has been replaced with a GPT (GUID Partition Table) partition table. There is also only a single 1MB block size, versus the 2, 4, and 8MB block sizes that were available in ESX(i) 4.x.

We can view the partitions of a device by using the "getptbl" option and ensure we don't have an existing VMFS volume:

~ # /sbin/partedUtil "getptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0"
gpt
652 255 63 10485760

Next we will need to create a partition by using the "setptbl" option:

/sbin/partedUtil "setptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0" "gpt" "1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0"

The "setptbl" accepts 3 arguments:

  • diskName
  • label
  • partitionNumber startSector endSector type/GUID attribute

The diskName in this example is the full path to the device which is /vmfs/devices/disks/mpx.vmhba1:C0:T2:L0

The label will be gpt

The last argument is actually a string comprised of 5 individual parameters:

  • partitionNumber - Pretty straightforward
  • startSector - This will always be 2048 for 1MB alignment for VMFS5
  • endSector - This will need to be calculated based on the size of your device
  • type/GUID - This is the GUID key for a particular partition type; for VMFS it will always be AA31E02A400F11DB9590000C2911D1B8
  • attribute - This is set to 0 in all of the examples in this post
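The endSector can be derived from the geometry line that "getptbl" returns: multiply its first three values together and subtract one. For the example device above, that is 652 x 255 x 63 - 1 = 10474379, which is exactly the endSector used in the "setptbl" command (the shell snippet in the update below automates this calculation).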

To view all GUID types, you can use the "showGuids" option:

~ # /sbin/partedUtil showGuids
Partition Type       GUID
vmfs                 AA31E02A400F11DB9590000C2911D1B8
vmkDiagnostic        9D27538040AD11DBBF97000C2911D1B8
VMware Reserved      9198EFFC31C011DB8F78000C2911D1B8
Basic Data           EBD0A0A2B9E5443387C068B6B72699C7
Linux Swap           0657FD6DA4AB43C484E50933C84B4F4F
Linux Lvm            E6D6D379F50744C2A23C238F2A3DF928
Linux Raid           A19D880F05FC4D3BA006743F0F84911E
Efi System           C12A7328F81F11D2BA4B00A0C93EC93B
Microsoft Reserved   E3C9E3160B5C4DB8817DF92DF00215AE
Unused Entry         00000000000000000000000000000000

Once you have the 3 arguments specified, we can now create the partition:

~ # /sbin/partedUtil "setptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0" "gpt" "1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
0 0 0 0
1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 0

UPDATE (01/15) - Here is a quick shell snippet that you can use to automatically calculate the end sector as well as create the VMFS5 volume:

# Example device path; substitute the path to your own device
DEVICE=/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0
partedUtil mklabel ${DEVICE} msdos
# End sector = product of the first three values on the getptbl geometry line, minus 1
END_SECTOR=$(eval expr $(partedUtil getptbl ${DEVICE} | tail -1 | awk '{print $1 " \\* " $2 " \\* " $3}') - 1)
/sbin/partedUtil "setptbl" "${DEVICE}" "gpt" "1 2048 ${END_SECTOR} AA31E02A400F11DB9590000C2911D1B8 0"
/sbin/vmkfstools -C vmfs5 -b 1m -S $(hostname -s)-local-datastore ${DEVICE}:1

Note: You can also use the above to create a VMFS-based datastore on a USB device; however, that is not officially supported by VMware, and performance with a USB-based device will vary depending on the hardware and the speed of the USB connection.

We can verify by running the "getptbl" option on the device that we formatted:

~ # /sbin/partedUtil "getptbl" "/vmfs/devices/disks/mpx.vmhba1:C0:T2:L0"
gpt
652 255 63 10485760
1 2048 10474379 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Finally, we will create the VMFS volume using our favorite vmkfstools; the syntax is the same as in previous releases of ESX(i):

~ # /sbin/vmkfstools -C vmfs5 -b 1m -S himalaya-SSD-storage-3 /vmfs/devices/disks/mpx.vmhba1:C0:T2:L0:1
Checking if remote hosts are using this device as a valid file system. This may take a few seconds...
Creating vmfs5 file system on "mpx.vmhba1:C0:T2:L0:1" with blockSize 1048576 and volume label "himalaya-SSD-storage-3".
Successfully created new volume: 4dfdb7b0-8c0dcdb5-e574-0050568f0111

Now you can refresh the vSphere Client or run vim-cmd hostsvc/datastore/refresh to view the new datastore that was created.

Filed Under: Automation, ESXi Tagged With: esxi5, gpt, partedUtil, usb, vmfs, vSphere 5
