virtuallyGhetto


ESXi 6.0

UEFI PXE boot is possible in ESXi 6.0

10/09/2015 by William Lam 20 Comments

A couple of days ago I received an interesting question from colleague Paudie O'Riordan, who works over in our Storage and Availability Business Unit at VMware. He was helping a customer who was interested in PXE booting/installing ESXi using UEFI, which is short for Unified Extensible Firmware Interface. Historically, we only had support for PXE booting/installing ESXi using the BIOS firmware. You could also boot an ESXi ISO using UEFI, but we did not have support for UEFI when it came to booting/installing ESXi over the network using PXE and other variants such as iPXE/gPXE.

For those of you who may not know, UEFI is meant to eventually replace the legacy BIOS firmware. There are many benefits with using UEFI over BIOS, a recent article that does a good job of explaining the differences can be found here. In doing some research and pinging a few of our ESXi experts internally, I found that UEFI PXE boot support is actually possible with ESXi 6.0. Not only is it possible to PXE boot/install ESXi 6.x using UEFI, but the changes in the EFI boot image are also backwards compatible, which means you could potentially PXE boot/install an older release of ESXi.

Note: Auto Deploy still requires legacy BIOS firmware; UEFI is not currently supported. This is something we will be addressing in the future, so stay tuned.

Not having worked with ESXi and UEFI before, I thought this would be a great opportunity to give this a try in my homelab, which would also allow me to document the process in case others were interested. For my PXE server, I am using CentOS 6.7 Minimal (64-Bit), which runs both the DHCP and TFTP services, but you can use any distro that you are comfortable with.

Step 1 - Download and install CentOS 6.7 Minimal (64-Bit)

Step 2 - Login to the CentOS system via terminal and perform the following commands which will update the system and install the DHCP and TFTP services:

yum -y update
yum -y install dhcp tftp-server

Step 3 - Download and upload an ESXi 6.x ISO to the CentOS system. In this example, I am using the latest ESXi 6.0 Update 1 image (VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso).

Step 4 - Extract the contents of the ESXi ISO to the TFTP directory by running the following commands:

mount -o loop VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso /mnt/
cp -rf /mnt/ /var/lib/tftpboot/esxi60u1
umount /mnt/
rm VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso

Step 5 - Copy the ESXi UEFI bootloader image (bootx64.efi) to the root of the extracted ESXi directory as mboot.efi by running the following command:

cp /var/lib/tftpboot/esxi60u1/efi/boot/bootx64.efi /var/lib/tftpboot/esxi60u1/mboot.efi

Step 6 - Next, we need to edit our DHCP configuration file /etc/dhcp/dhcpd.conf to point our hosts to the mboot.efi image. Below is an example configuration which you will need to adapt to the network configuration of your environment. If you are running the TFTP server on another system, you will need to change the next-server property to the address of that system; otherwise, specify the same IP address as the DHCP server.

default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
authoritative;
log-facility local7;
allow booting;
allow bootp;
option client-system-arch code 93 = unsigned integer 16;
 
class "pxeclients" {
   match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
   # specifies the TFTP Server
   next-server 192.168.1.180;
   if option client-system-arch = 00:07 or option client-system-arch = 00:09 {
      # PXE over EFI firmware
      filename = "esxi60u1/mboot.efi";
   } else {
      # PXE over BIOS firmware
      filename = "esxi60u1/pxelinux.0";
   }
}
 
subnet 192.168.1.0 netmask 255.255.255.0 {
    option domain-name "primp-industries.com";
    option domain-name-servers 192.168.1.1;
    host vesxi60u1 {
        hardware ethernet 00:50:56:ad:f7:4b;
        fixed-address 192.168.1.199;
    }
}
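
Note that the BIOS fallback in the example above points to esxi60u1/pxelinux.0, which is not part of the ESXi ISO; it comes from the SYSLINUX distribution (VMware's PXE documentation has historically referenced SYSLINUX 3.86). A minimal sketch, assuming you have downloaded and unpacked a compatible SYSLINUX build (the source path below is a placeholder and may vary between versions):

# copy the legacy BIOS PXE bootloader into the TFTP directory
cp /path/to/syslinux-3.86/core/pxelinux.0 /var/lib/tftpboot/esxi60u1/pxelinux.0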

Step 7 - Next, we will need to edit our TFTP configuration file /etc/xinetd.d/tftp to enable the TFTP service by modifying the following line from yes to no:

disable = no
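
For reference, this is roughly what the stock /etc/xinetd.d/tftp file looks like on CentOS 6 once the disable line has been flipped (your file may differ slightly):

service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        disable         = no
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}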

Step 8 - By default, ESXi's boot.cfg configuration file references all of its packages with a leading / path. We need to remove that prefix, which can easily be done by running the following command:

sed -i 's/\///g' /var/lib/tftpboot/esxi60u1/boot.cfg
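
To illustrate what the command does, a typical boot.cfg from the ISO contains entries roughly like the following (module list truncated here for brevity):

bootstate=0
title=Loading ESXi installer
kernel=/tboot.b00
kernelopt=runweasel
modules=/b.b00 --- /jumpstrt.gz --- /useropts.gz --- /k.b00

After the sed command the leading slashes are gone (e.g. kernel=tboot.b00), so the files are resolved relative to the esxi60u1 directory on the TFTP server.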

Step 9 - Finally, we need to restart both the TFTP (under xinetd) and DHCP services. For testing purposes, I have also disabled the firewall for IPv4/IPv6; in a real production environment you will probably want to open only the ports required for TFTP/DHCP (a sketch of such rules follows the commands below).

/etc/init.d/xinetd restart
/etc/init.d/dhcpd restart
/etc/init.d/iptables stop
/etc/init.d/ip6tables stop
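
If you prefer to keep iptables enabled, a minimal sketch of rules that allow just DHCP and TFTP on CentOS 6 might look like the following (adjust to your own firewall policy):

# load the TFTP connection-tracking helper so the data transfer
# (which comes back from an ephemeral port) is treated as RELATED
modprobe nf_conntrack_tftp
iptables -I INPUT -p udp --dport 67:68 -j ACCEPT   # DHCP/BOOTP
iptables -I INPUT -p udp --dport 69 -j ACCEPT      # TFTP
service iptables save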

We can now boot up either a physical host that is configured to use UEFI firmware or we can easily test using Nested ESXi. The only change we need to make to our ESXi VM is to set the firmware mode from BIOS to EFI, which can be done using the vSphere Web/C# Client as shown in the two screenshots below:

[Screenshots: uefi-pxe-boot-esxi-6.0-0, uefi-pxe-boot-esxi-6.0-1]
If everything was successfully configured, we should now see our system PXE boot into the ESXi installer using UEFI, as seen in the screenshot below.

[Screenshot: uefi-pxe-boot-esxi-6.0-2]
If you run into any issues, I would recommend checking the system logs on your PXE server (/var/log/messages) to see if there are any errors. You can also troubleshoot by manually using a TFTP client to connect to your TFTP server and ensure you are able to pull down files such as boot.cfg by running the following commands:

tftp [PXE-SERVER]
get esxi60u1/boot.cfg

For additional resources on scripted installation of ESXi, also referred to as Kickstart, be sure to take a look here. I also would like to give a big shoutout and thanks to Tim Mann, one of the engineers responsible for adding UEFI support into ESXi, for answering some of my questions while I was setting up my environment.


Filed Under: Automation, ESXi, vSphere 6.0 Tagged With: bios, boot.cfg, bootx64.efi, dhcp, efi, esxi 6.0, kickstart, mboot.efi, pxe boot, tftp, UEFI, vSphere 6.0

Override default VSAN Maintenance (decommission) Mode in VSAN 6.1

09/14/2015 by William Lam Leave a Comment

Earlier this year, there was an interesting use case brought up by a customer regarding the use of vSphere Update Manager (VUM) with VSAN enabled ESXi hosts. Everything was working from a functional standpoint, but the customer wanted a way to control the default VSAN decommission mode, which specifies how the data should be moved, if at all, when a host is placed into maintenance mode. There are three supported options: Ensure Accessibility (the default), Evacuate All Data and No Action. Depending on the customer and their use case, there may be valid reasons to use one or the other. For example, if I am shutting down my entire VSAN cluster for a hardware upgrade, I probably do not want any of my data to be migrated and the No Action setting would be acceptable. During an upgrade or patching of an ESXi host, some customers have expressed that they would prefer to leverage the Evacuate All Data setting, which is perfectly fine; of course, maintenance mode would take longer as all the data must be migrated off the host first.

Prior to VSAN 6.1 (included in the vSphere 6.0 Update 1 release), it was not possible to override the default VSAN maintenance mode (decommission mode) option, which defaults to Ensure Accessibility. This was a problem because if you decided you wanted to use a different option, some manual intervention was required from the user when using VUM. The workaround for the customer was to either manually, or using the vSphere API, automate the ESXi host maintenance mode operation and specify the decommission mode type before VUM would take over and update the host. Not an ideal solution, but it would work if you needed to override the default.

I thought it would be a nice feature enhancement to be able to override the default VSAN maintenance mode option, which could vary from customer to customer depending on their use case. I got in touch with one of the VSAN Engineers to discuss the use case in more detail and he agreed that it would be useful to expose this type of capability. In VSAN 6.1, there is now a new ESXi Advanced Setting called DefaultHostDecommissionMode which allows you to specify the default VSAN maintenance mode behavior.

[Screenshot: vsan-6.1-decomission-mode-0]
Below is a table of the three available options (ensureAccessibility is default) that can be configured:

VSAN Decommission Mode Value  Description
ensureAccessibility  VSAN data reconfiguration should be performed to ensure storage object accessibility
evacuateAllData  VSAN data evacuation should be performed such that all storage object data is removed from the host
noAction  No special action should take place regarding VSAN data

This ESXi Advanced Setting can also be retrieved and configured using ESXCLI as well as the vSphere API.

To retrieve the current VSAN maintenance mode option using ESXCLI, run the following command:

esxcli system settings advanced list -o /VSAN/DefaultHostDecommissionMode

To configure the default VSAN maintenance mode option using ESXCLI, run the following command:

esxcli system settings advanced set -o /VSAN/DefaultHostDecommissionMode -s [DECOMMISSION_MODE]
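
Since this is just an ESXi Advanced Setting, it can also be managed through the vSphere API. Here is a minimal PowerCLI sketch, assuming a hypothetical host name; the value passed to Set-AdvancedSetting must be one of the three options from the table above:

# retrieve the current VSAN decommission mode for a given host
$vmhost = Get-VMHost -Name "esxi-01.primp-industries.com"
Get-AdvancedSetting -Entity $vmhost -Name "VSAN.DefaultHostDecommissionMode"

# change the default decommission mode to evacuateAllData
Get-AdvancedSetting -Entity $vmhost -Name "VSAN.DefaultHostDecommissionMode" | Set-AdvancedSetting -Value "evacuateAllData" -Confirm:$false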


Filed Under: ESXCLI, ESXi, VSAN, vSphere 6.0 Tagged With: DefaultHostDecommissionMode, esxi 6.0, maintenance mode, Virtual SAN, VSAN, VSAN 6.1, vSphere 6.0 Update 1

ESXi 6.0 on Apple Xserve 2,1

08/20/2015 by William Lam 7 Comments

I really enjoy hearing from my readers, especially when they share some of the unique challenges they have come across or boundaries they have pushed with our VMware software. Several weeks back I received an interesting email from a reader named John Clendenen who was able to get ESXi 6.0 running on both his Apple Xserve 3,1 as well as Xserve 2,1! To be honest, I was pretty surprised to hear that this worked and not only that, there was not a whole lot that John had to do to get it working. I thought this was a pretty cool setup and asked if John was interested in sharing more details with the VMware/Mac OS X community in case others were interested.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

*** This is a guest blog post from John Clendenen ***

For the past 5 years, I have lived in New York where I work at various print and post-production studios. The IT situation in many of these locations is often home-grown and sub-adequate, if not barely functional, so I took it upon myself to learn OS X Server administration. I have a background in computer science and networking, so it wasn't a huge leap to step into administration. I was able to gradually build one studio's IT infrastructure from a single AFP share with incessant permissions problems to an Open Directory driven, single sign-on infrastructure with mobile homes, messaging, etc. However, with each OS X Server update, things broke, and I was spending a lot of time putting out fires.

This led me to pursue virtualization in hopes of isolating services on separate machines, without telling the studio to go buy 8 Apple Mac Minis. However, I needed to learn how to do this before pitching it to them, so I looked on Craigslist for some cheap hardware. I found an Xserve 2,1, which I was able to talk down to $100. I also found a brief post in a thread saying that the Xserve 2,1 ran ESXi 5.5 without issue. I figured I'd take the plunge and just resell it on eBay if it didn't work out.

Since then, my home lab has grown considerably, and I have learned that the best way to provide Mac services is simply to use Linux (I've had a great experience with Netatalk). That said, Open Directory, Software Update, Caching and a few other services still need to be run on a Mac, so it's still necessary to have the hardware. For a while, I had 3 Xserves, all purchased very cheaply, running VMs. I just sold one, and will sell another in the next month or two in favor of some Supermicro hardware (I really am mostly running Linux VMs at this point). I'll keep the remaining one to run a few Mac OS X VMs. I'm still working on getting Samba 4 to work as the PDC, but once that is running smoothly, I'll have a functional environment that I can take to the studios where I work (with approved hardware of course). Then I'll have an improved work experience, while also pulling in some extra income on the installation/maintenance.

Anyway, you’ve come here to read about running ESXi 6.0 on an Xserve 2,1. Well, it works. It’s 7 years old, and not officially supported, but you already know that. So, if you were considering it, I doubt anything here will dissuade you. On top of that, there’s not much for me to say other than, it works. No tricks to it, just install and you’re off.

[Screenshot: pic1]
That said, I do have some recommendations and tips to help you get the most out of this hardware. Two months ago, I swapped out the RAID card for the standard backplane/interconnect and upgraded the RAM. It hasn't skipped a beat since.

My System

This is my custom 13U home rack with sound insulation and active airflow. Networking is in the back (Ubiquiti). It sits in a lofted storage area with an A/C and vents into the bathroom.

[Screenshot: pic8]
Here you see an Xserve 3,1 on top of an Xserve 2,1. There's a blank unit because I just sold an Xserve 2,1 on eBay, and the other 2,1 will be for sale soon as well to make room for a 4 node 2U Supermicro server. The NAS is composed of a 4U head unit and a 4U JBOD running CentOS. Last, of course, is a 2U CyberPower UPS, which is really just there to condition the power and keep brownouts from taking down the system.

I have about a dozen VMs running between the 2 Xserves. I have each Mac OS X service separated on its own installation. This way I can run updates per service. It's especially nice to have Open Directory separate from less important services. I also have Debian, CentOS and OpenBSD VMs running various services. Getting the Xserve 3,1 running ESXi 6 is possible, but more problematic. Now that I have it working, I'm dropping the 2,1s simply because the processors aren't multithreaded. I am currently working on a companion article to this one, detailing my experience with the Xserve 3,1, so that information will be available soon.

Storage

1. Don’t use the RAID backplane/interconnect or count on using it. ESXi 6 does not recognize it, and RDM appears to work at first and then will crash your VM and never show up again. You can have it installed in the Xserve without any issue, but you’ll get a lot more mileage out of the hardware if you have the standard backplane/interconnect.

The backplane you want appears periodically on eBay. The part number is: 661-4648

2. If/once you have the non-RAID backplane/interconnect, keep in mind that it is SATA II and will only support 3Gb/s. I am using 3 old 500GB WD RE3s, but I'd recommend using some older SSDs that will max out the SATA II interface without paying for speed you can't use. Be sure to consult the Apple Drive Module compatibility page to make sure you have the right drive caddies. They all fit, but they don't all work.

Apple Drive Module Compatibility Chart: https://support.apple.com/en-us/HT1219

[Screenshot: pic2]
3. PCIe flash is a good idea; whether you use it for cache, VMs or as the ESXi boot disk, it is by far the fastest storage option. I have not invested in it, but the good people at Mushkin have told me their Scorpion PCIe flash will work in ESXi 6. Please contact them yourself before investing in one of their cards, as I have not tested them. While this will give you the best performance out of your Xserve 2,1, it seems like overkill to me, but hey, if you want to push this thing, you might find some meaningful performance gains here.

Mushkin Scorpion PCIE Flash: http://www.poweredbymushkin.com/index.php/products/solid-state-drives.html

4. It might occur to you that replacing the optical drive with an SSD might be a good idea. While MCE Technologies makes an "Optibay" for the Xserve 2,1, the connection is IDE, so this is not recommended. I also don't know if ESXi would recognize it. My gut says probably, but again, it's too slow to be useful. It isn't that cheap either.

MCE Technologies Optibay: http://store.mcetech.com/mm/merchant.mvc?Store_code=MTOS&Screen=PROD&Product_Code=OBSXGB-XS

Network

There are a lot of options here. You can really plug in any network card that you want, as long as you can at least find an anecdotal account of it working on ESXi 6.

I have one such anecdote for you, and it is also a strong suggestion. The company Silicom makes/made several models of gigabit NICs which are all incredibly inexpensive, are all over eBay and work in ESXi 6. Buy the 6 port model. It's cheap and you'll get great scaling with ESXi load balancing across 8 gigabit ports (the 6 ports on the NIC plus the Xserve's 2 onboard ports).

The Xserve 2,1 has the added advantage here of an optional PCI-X riser. If your model has one, or you find one for cheap, you can save even more on the Silicom NIC. The PCIe models go for $60-$80 on eBay, while the PCI-X models go for $40.

[Screenshot: pic3]

Memory

ECC DDR2 is pretty cheap and easy to find used. I recommend memory4less though. I had to return some RAM that was mislabeled from a random eBay distributor. Memory4less will get it right. http://www.memory4less.com/

[Screenshot: pic4]

Processors

One great perk of the Xserve 2,1 is that you can upgrade the single processor hardware to dual processor. You can pick up an additional processor on eBay for $40 or so, but you'll need to get a heat sink as well. The single processor units come with a fake aluminum heat sink, but do not use it. You want a copper one. I believe the heat sinks in the Xserve 1,1 are the same. Don't forget the thermal paste.

[Screenshot: pic5]

Minor Issues

1. The Performance tab throws an error.

[Screenshot: pic6]
2. Not sure about the hardware sensors, but it looks like not everything is working even if it’s showing up. I did not do any testing here.

[Screenshot: pic7]
Stay tuned for Part II of John's guest blog post on running ESXi 6.0 on the Xserve 3,1.


Filed Under: Apple, ESXi, Not Supported, vSphere 6.0 Tagged With: apple, esxi 6.0, notsupported, osx, xserve

How to VMFork aka Instant Clone Nested ESXi?

08/03/2015 by William Lam 15 Comments

[Image: vmfork-aka-instant-clone]
The VMware Flings team recently released an update to the existing PowerCLI Extensions which now exposes the new VMFork aka Instant Clone capability that was introduced in vSphere 6.0. The Fling contains a set of PowerCLI Extension Modules which in turn provide new PowerCLI cmdlets for accessing the Instant Clone feature. The idea behind the Fling is to help VMware understand how customers would like to consume the Instant Clone feature, not only from a CLI point of view but also from an API and UI standpoint. Prior to this, Instant Clone was only available through the use of either Horizon View or the Big Data Extensions product. I think this is a great opportunity for customers and partners to help shape how Instant Clone should be consumed more generally.

One of the use cases I had in my mind when I had first heard about the Instant Clone feature was to be able to quickly instantiate new Nested ESXi VMs. When I got the opportunity to help test out early prototypes of the Instant Clone cmdlet to help provide feedback and usability improvements, I knew I had to give Nested ESXi a try!

Requirements:

  • Fresh installation of Nested ESXi 6.0 in VM (unconfigured)
  • PowerCLI 6.0 Release 1
  • Instant Clone PowerCLI Extensions Fling
  • Nested ESXi 6.0 Instant Clone Scripts

High level process:

  1. A "preparation" script will be manually uploaded & executed within the Nested ESXi VM (Parent VM) to prep the system for Instant Cloning
  2. Once the Parent VM is quiesced, both the pre/post customization scripts are uploaded to the Parent VM automatically. The "pre-customization" script is then executed within the Parent VM, which properly sets up the library path to the VMware Tools binary (applicable to ESXi 6.0 only), and the VM is then placed in a ready state for creating Instant Clones
  3. As new Instant Clones (Child VMs) are spun up, the "post-customization" script is automatically executed to add additional configurations and, most importantly, ensure the newly created Instant Cloned Nested ESXi VMs have unique network identities

Note: For Instant Cloning regular OSes, only steps 2 and 3 are really needed. Due to a known issue with VMware Tools for Nested ESXi, I have found that it is easier to prepare the Nested ESXi VM prior to quiescing and creating Instant Clones from the Parent VM.

Instructions:

Step 1 - Download and install both PowerCLI 6.0 Release 1 & Instant Clone PowerCLI Extensions Fling.

Step 2 - Perform a fresh Nested ESXi 6.0 installation in a VM, do not configure additional settings outside of enabling ESXi Shell and SSH.

Step 3 - Download the Nested ESXi 6.0 Instant Clone Scripts which contains the following four files:

  • prep-esxi60.sh - Prepares the Nested ESXi VM and ensures that new Child VMs will not retain the Parent VM's MAC address, which is baked into several places
  • pre-esxi60.sh - Pre-customization script which is used to properly set up the library paths to use the VMware Tools daemon to retrieve guest properties from the PowerCLI script
  • post-esxi60.sh - Post-customization script which is used to apply the networking configuration and hostname, for example
  • vmfork-esxi60.ps1 - An example PowerCLI script which issues the Instant Clone cmdlets

Note: For out of the box use, the only script that needs to be modified is the PowerCLI "vmfork-esxi60.ps1" script; the rest of the scripts should work with little to no modification, assuming you have followed the instructions thus far.

Step 4 - Upload prep-esxi60.sh to the Nested ESXi 6.0 VM (Parent VM) and then execute it using either the ESXi Shell over SSH or through a VMRC session. If you use SSH, you will notice that the script hangs; that is because the VMkernel interface is deleted as part of the script.

Step 5 - Next, we need to make a few edits to the vmfork-esxi60.ps1 script to update the name of your ESXi VM, along with its credentials and the full path to both the pre and post customization scripts. Below is an example of the variables that you will need to edit:

$parentvm = 'vESXi6'
$parentvm_username = 'root'
$parentvm_password = 'vmware123'
$precust_script = 'C:\Users\lamw\Desktop\vmfork\esxi60\pre-esxi60.sh'
$postcust_script = 'C:\Users\lamw\Desktop\vmfork\esxi60\post-esxi60.sh'

The section shown below, which contains the customization properties that are passed down to the guest OS for configuration as part of the Instant Clone process, will also need to be edited.

$configSettings = @{
'hostname' = "$vmname.primp-industries";
'ipaddress' = "192.168.1.$_";
'netmask' = '255.255.255.0';
'gateway' = '192.168.1.1';
}

Step 6 - Lastly, it is time to run the script by issuing the following command:

.\vmfork-esxi60.ps1

[Screenshot: instant-clone-nested-esxi-0]
If everything was successful, you should see a couple of new powered on Instant Cloned Nested ESXi VMs that have been fully customized and ready for use!

[Screenshot: instant-clone-nested-esxi-1]
Note: There have been a couple of times where newly Instant Cloned VMs have not been properly customized, and when looking in the Instant Clone logs under /var/tmp/quiesce.log you may find an "Unable to fork" error message. I usually have to re-quiesce the Parent VM, which I do by reverting back to a snapshot that captures the state after Step 4. Once I re-run the PowerCLI script, I am able to successfully deploy N-number of Instant Cloned Nested ESXi VMs. For additional best practices and tips/tricks, be sure to check out this blog post here.

Big thanks to Jim Mattson for some of his earlier research and work on this topic which made implementing these scripts much easier.


Filed Under: Automation, ESXi, Nested Virtualization, vSphere 6.0 Tagged With: esxi 6.0, fling, instant clone, nested, nested virtualization, PowerCLI, vmfork

Heads up: ESXi 5.x & 6.0 unable to detect newer Apple Mac Pro 6,1 local SSD Device

07/27/2015 by William Lam 12 Comments

Over the last couple of weeks there have been several reports coming in from customers that the local SSD device found in the newer Apple Mac Pro 6,1 is no longer being detected by ESXi. Starting with ESXi 5.5 Patch03 and ESXi 6.0, the Apple Mac Pro 6,1 was officially supported, but it looks like the latest Mac Pro 6,1 units being shipped contain a slightly different local SSD device which is not recognized by ESXi.

This is not the first time this has happened; when the 2014 Mac Mini was first released, it too had a similar issue in which a custom VIB was required to get the internal device recognized by ESXi. There is an internal bug (PR 1487494) that is currently tracking the issue and if you are also experiencing this problem, please file an SR and have the GSS Engineer attach your case to this bug.

In the meantime, there is an unofficial workaround which was discovered by one of my readers (Mr. Spock): by installing the community SATA-XAHCI VIB from Andreas Peetz's VIB Depot site, both ESXi 5.5 and ESXi 6.0 will then recognize the local SSD device. You will need to either use VMware Image Builder or Andreas' ESXi-Customizer tool to create a custom image if you decide to install ESXi directly on the local SSD device. I personally would recommend installing ESXi on a USB device, which would allow you to install the VIB post-installation without requiring a custom ESXi image.
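
If you go the USB route, installing the VIB afterwards from the ESXi Shell would look roughly like the following sketch; the VIB filename is a placeholder, so substitute the actual sata-xahci VIB you downloaded from the VIB Depot:

# community VIBs require lowering the acceptance level
esxcli software acceptance set --level=CommunitySupported
# install the VIB from a local path (placeholder filename)
esxcli software vib install -v /tmp/sata-xahci-<version>.vib
# reboot so the new driver mapping takes effect
reboot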


Filed Under: Apple, ESXi, vSphere 5.5, vSphere 6.0 Tagged With: apple, esxi 5.5, esxi 6.0, mac, mac pro

How to create custom ESXi boot menu to support multiple Kickstart files?

06/11/2015 by William Lam 23 Comments

I recently received a question from one of my readers who was looking to migrate from ESXi 4.1 to a newer version, and one of the challenges they faced was around their ESXi scripted installs, better known as ESXi Kickstart. Previously, they had relied on a custom syslinux boot menu to be able to select a specific Kickstart configuration file that resided locally on a bootable ESXi image (USB, ISO or CDROM), as a PXE/DHCP environment was not allowed in their environment. There was a small change to how the ESXi boot files are referenced between ESXi 4.x and ESXi 5.x/6.x, and a new boot.cfg configuration file is now used, which I had written about here with respect to scripted installs when ESXi 5.0 was first released.

Luckily, even with these changes one can still use a custom menu with ESXi 5.x/6.x and be able to select a specific Kickstart configuration based on user input. Here is a screenshot example of a custom ESXi image that I built providing three different install options, each of which maps to a different Kickstart configuration that can either live locally on the boot media or be retrieved remotely.

[Screenshot: bootable-esxi-image-with-multiple-kickstart-option]
The first thing you should be aware of if you plan to boot the custom ESXi image from local media such as USB, CDROM or ISO is that the path to the Kickstart file must be in all UPPER CASE, which is mentioned in VMware KB 1026373. The next caveat that I found in my testing is that if you plan to store the local Kickstart files inside of a directory within the ESXi image, the name of the directory can not be too long. I would recommend using "ks", as "kickstart" apparently was too long.

After you have extracted the contents of an ESXi ISO which you have downloaded, you will want to create a root directory called "ks" which will contain the different Kickstart configuration files. Here is an example of what the structure looks like (a rough sketch of the commands is shown after the listing):

ks
├── ks1.cfg
├── ks2.cfg
└── ks3.cfg
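
For reference, here is a minimal sketch of the commands to lay down this structure; the working directory, ISO filename and Kickstart filenames below are placeholders:

mkdir -p /tmp/custom-esxi
mount -o loop VMware-VMvisor-Installer-6.x.iso /mnt/
cp -rf /mnt/* /tmp/custom-esxi/
umount /mnt/
mkdir /tmp/custom-esxi/ks
cp ks1.cfg ks2.cfg ks3.cfg /tmp/custom-esxi/ks/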

Next, you will need to edit the isolinux.cfg file which comes by default within the ESXi ISO. This is where you will add the different Kickstart options that a user will be able to select from. In this first example, we will look at referencing the Kickstart files locally on the media, which can be either USB or CDROM, and you will need to ensure you specify the right boot option as shown here in the VMware documentation. The path to the Kickstart file needs to be appended to the line that contains the boot.cfg reference, and you must ensure you include "+++" at the end of that line.

Here is an example of referencing a Kickstart file that lives on a USB device under this path /ks/ks.cfg:

APPEND -c boot.cfg ks=usb:/KS/KS.CFG +++

Here is an example of my isolinux.cfg for the boot menu that I have shown above which provides three different options mapping to three different Kickstart configuration files:

DEFAULT menu.c32
MENU TITLE vGhetto Custom ESXi 6.0 Boot Menu
NOHALT 1
PROMPT 0
TIMEOUT 80
LABEL Ghetto Install
  KERNEL mboot.c32
  APPEND -c boot.cfg ks=cdrom:/KS/KS1.CFG +++
  MENU LABEL ^1 Ghetto Install
LABEL A bit More Ghetto Install
  KERNEL mboot.c32
  APPEND -c boot.cfg ks=cdrom:/KS/KS2.CFG +++
  MENU LABEL ^2 A bit More Ghetto Install
LABEL Super Ghetto ESXi Install
  KERNEL mboot.c32
  APPEND -c boot.cfg ks=cdrom:/KS/KS3.CFG +++
  MENU LABEL ^3 Super Ghetto ESXi Install
LABEL hddboot
  LOCALBOOT 0x80
  MENU LABEL ^Boot from local disk

As I mentioned earlier, the Kickstart configuration file can either be retrieved locally or it can also be retrieved remotely using one of the following supported protocols: http, https, ftp & nfs, as shown here in the VMware documentation.

Here is an example of isolinux.cfg for a boot menu which references both a local kickstart as well as one that remotely lives on a web server:

DEFAULT menu.c32
MENU TITLE vGhetto Custom ESXi 6.0 Boot Menu
NOHALT 1
PROMPT 0
TIMEOUT 80
LABEL Ghetto Install
  KERNEL mboot.c32
  APPEND -c boot.cfg ks=cdrom:/KS/KS1.CFG +++
  MENU LABEL ^1 Ghetto Install
LABEL A bit More Ghetto Install
  KERNEL mboot.c32
  APPEND -c boot.cfg ks=http://172.30.0.108/ks/ks2.cfg +++
  MENU LABEL ^2 A bit More Ghetto Install
LABEL Super Ghetto ESXi Install
  KERNEL mboot.c32
  APPEND -c boot.cfg ks=http://172.30.0.108/ks/ks3.cfg +++
  MENU LABEL ^3 Super Ghetto ESXi Install
LABEL hddboot
  LOCALBOOT 0x80
  MENU LABEL ^Boot from local disk
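
Once your isolinux.cfg and Kickstart files are in place, the extracted directory still needs to be repackaged into a bootable ISO. A rough sketch using mkisofs, run against the extracted directory (the paths below are placeholders and mirror the options VMware documents for building custom installer ISOs):

cd /tmp/custom-esxi
mkisofs -relaxed-filenames -J -R -o /tmp/custom-esxi60.iso -b isolinux.bin -c boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table .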

For additional ESXi Kickstart resources and examples, be sure to check out my pages here.


Filed Under: Automation, ESXi, vSphere 5.5, vSphere 6.0 Tagged With: boot.cfg, esxi, esxi 5, esxi 5.5, esxi 6.0, kickstart, ks.cfg, pxelinux

