
virtuallyGhetto


tftp

Using vSphere Auto Deploy to Netboot ESXi onto Apple Mac Hardware

01/17/2017 by William Lam 4 Comments

Last week I published an article demonstrating, for the first time, how to netboot an ESXi installation onto Apple Mac Hardware. As you can imagine, this was very exciting news for our VMware/Apple customers, who historically have not had this capability. Customers can now automate and install ESXi over the network onto their Apple Mac Hardware, just as they would for non-Apple hardware.

With the ability to boot ESXi over the network on Apple Mac Hardware, it is now also possible for customers to take advantage of the vSphere Auto Deploy feature. Auto Deploy allows customers to quickly provision ESXi hosts at scale and integrates directly with vCenter Server to automatically join hosts and apply specifically defined host configuration policies. This is a great time to check out Auto Deploy, especially with all the new enhancements introduced in vSphere 6.5, such as custom script bundles.

Below are the instructions on how to set up Auto Deploy to work with Apple Mac Hardware.

[Read more...] about Using vSphere Auto Deploy to Netboot ESXi onto Apple Mac Hardware

Filed Under: Apple, Automation, ESXi, vSphere 6.0, vSphere 6.5 Tagged With: apple, auto deploy, BSDP, esxi 6.0, esxi 6.5, iPXE, mac mini, mac pro, snponly64.efi.vmw-hardwired, tftp, tramp

How to Netboot install ESXi onto Apple Mac Hardware?

01/13/2017 by William Lam 11 Comments

The ability to perform an ESXi Scripted Installation over the network has been a basic capability for non-Apple hardware customers since the initial release of classic ESX. However, for customers who run ESXi on Apple Mac Hardware (first introduced in vSphere 5.0), remotely booting and installing ESXi over the network has not been possible; customers could only dream of a capability that many of us have probably taken for granted.

Unlike traditional scripted network installations, which commonly use the Preboot eXecution Environment (PXE), Apple Mac Hardware uses Apple's own Boot Service Discovery Protocol (BSDP), which ESXi and other OSes do not support. In addition, very few DHCP servers even support BSDP (at least this was true four years ago when I initially inquired about this topic). It was expected that if you were going to Netboot a server (the equivalent of PXE/Kickstart in the Apple world), you would be running a Mac OS X system. Even if you had set this up, a Netboot installation was wildly different from a traditional PXE installation, and it would have been difficult to near impossible to get it working with an ESXi image. With no viable solution over the years, it was believed that a Netboot installation of ESXi onto Mac Hardware might simply not be possible.

tl;dr - If you are interested in the background behind the eventual solution, continue reading. If not and you just want the goods, jump down a bit further. Though I do think it is pretty interesting and worth getting the full context 🙂

[Read more...] about How to Netboot install ESXi onto Apple Mac Hardware?

Filed Under: Apple, Automation, ESXi, vSphere 5.5, vSphere 6.0, vSphere 6.5 Tagged With: apple, BSDP, esxi 5.5, esxi 6.0, esxi 6.5, iPXE, kickstart, mac, mac mini, mac pro, mboot.efi, Netboot, snponly.efi, tftp

UEFI PXE boot is possible in ESXi 6.0

10/09/2015 by William Lam 20 Comments

A couple of days ago I received an interesting question from fellow colleague Paudie O'Riordan, who works in our Storage and Availability Business Unit at VMware. He was helping a customer who was interested in PXE booting/installing ESXi using UEFI, which is short for Unified Extensible Firmware Interface. Historically, we only supported PXE booting/installing ESXi using the BIOS firmware. You could also boot an ESXi ISO using UEFI, but we did not support UEFI when it came to booting/installing ESXi over the network using PXE and variants such as iPXE/gPXE.

For those of you who may not know, UEFI is meant to eventually replace the legacy BIOS firmware. There are many benefits to using UEFI over BIOS; a recent article that does a good job of explaining the differences can be found here. After doing some research and pinging a few of our ESXi experts internally, I found that UEFI PXE boot support is actually possible with ESXi 6.0. Not only is it possible to PXE boot/install ESXi 6.x using UEFI, but the changes to the EFI boot image are also backwards compatible, which means you could potentially PXE boot/install an older release of ESXi.

Note: Auto Deploy still requires legacy BIOS firmware; UEFI is not currently supported. This is something we will be addressing in the future, so stay tuned.

Not having worked with ESXi and UEFI before, I thought this would be a great opportunity to give it a try in my homelab, which would also let me document the process in case others are interested. For my PXE server, I am using CentOS 6.7 Minimal (64-Bit), which runs both the DHCP and TFTP services, but you can use any distro you are comfortable with.

Step 1 - Download and install CentOS 6.7 Minimal (64-Bit)

Step 2 - Login to the CentOS system via terminal and perform the following commands which will update the system and install the DHCP and TFTP services:

yum -y update
yum -y install dhcp tftp-server
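
If you would also like these services to come back after a reboot, you can enable them now (a minimal sketch for CentOS 6, where TFTP runs under xinetd and SysV services are managed with chkconfig):

chkconfig dhcpd on
chkconfig xinetd on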

Step 3 - Download an ESXi 6.x ISO and upload it to the CentOS system. In the example here, I am using the latest ESXi 6.0 Update 1 image (VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso).
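
For example, you could copy the ISO over from your workstation using scp (assuming the PXE server address used in the DHCP configuration below and root's home directory as the destination):

scp VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso root@192.168.1.180:/root/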

Step 4 - Extract the contents of the ESXi ISO to the TFTP directory by running the following commands:

mount -o loop VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso /mnt/
cp -rf /mnt/ /var/lib/tftpboot/esxi60u1
umount /mnt/
rm VMware-VMvisor-Installer-6.0.0.update01-3029758.x86_64.iso
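
As a quick sanity check, you can confirm the extracted contents landed in the TFTP root (boot.cfg in particular will be needed again in Step 8):

ls /var/lib/tftpboot/esxi60u1/boot.cfg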

Step 5 - Copy the custom ESXi bootloader image (bootx64.efi) to the root of the extracted ESXi directory as mboot.efi by running the following command:

cp /var/lib/tftpboot/esxi60u1/efi/boot/bootx64.efi /var/lib/tftpboot/esxi60u1/mboot.efi

Step 6 - Next, we need to edit our DHCP configuration file /etc/dhcp/dhcpd.conf to point our hosts to the mboot.efi image. Below is an example configuration; you will need to replace it with the network configuration of your environment. If you are running the TFTP server on another system, change the next-server property to the address of that system; otherwise, specify the same IP address as the DHCP server.

default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
authoritative;
log-facility local7;
allow booting;
allow bootp;
option client-system-arch code 93 = unsigned integer 16;
 
class "pxeclients" {
   match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
   # specifies the TFTP Server
   next-server 192.168.1.180;
   if option client-system-arch = 00:07 or option client-system-arch = 00:09 {
      # PXE over EFI firmware
      filename = "esxi60u1/mboot.efi";
   } else {
      # PXE over BIOS firmware
      filename = "esxi60u1/pxelinux.0";
   }
}
 
subnet 192.168.1.0 netmask 255.255.255.0 {
    option domain-name "primp-industries.com";
    option domain-name-servers 192.168.1.1;
    host vesxi60u1 {
        hardware ethernet 00:50:56:ad:f7:4b;
        fixed-address 192.168.1.199;
    }
}
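
Before restarting anything, it is worth validating the new configuration; dhcpd can test a config file without starting the daemon:

dhcpd -t -cf /etc/dhcp/dhcpd.conf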

Step 7 - Next, we need to edit our TFTP configuration file /etc/xinetd.d/tftp to enable the TFTP service by changing the disable line from yes to no:

disable = no
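
For reference, the full xinetd stanza on a stock CentOS 6 tftp-server install typically looks similar to the following (your file may differ slightly; the important settings are server_args, which points at the TFTP root, and disable):

service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /var/lib/tftpboot
        disable         = no
}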

Step 8 - By default, ESXi's boot.cfg configuration file references all of its packages under the / path. We need to strip those leading slashes, which can easily be done by running the following command:

sed -i 's/\///g' /var/lib/tftpboot/esxi60u1/boot.cfg
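
To illustrate the effect (a trimmed sketch; the exact module list varies by build), boot.cfg lines such as:

kernel=/tboot.b00
modules=/b.b00 --- /jumpstrt.gz --- /useropts.gz

become:

kernel=tboot.b00
modules=b.b00 --- jumpstrt.gz --- useropts.gz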

Step 9 - Finally, we need to restart both the TFTP (under xinetd) and DHCP services. For testing purposes, I have also disabled the IPv4/IPv6 firewalls; in a real production environment, you will probably want to open only the ports required for TFTP/DHCP.

/etc/init.d/xinetd restart
/etc/init.d/dhcpd restart
/etc/init.d/iptables stop
/etc/init.d/ip6tables stop
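
If you would rather keep the firewall running, a minimal alternative (assuming the stock CentOS 6 iptables service) is to allow just the DHCP and TFTP ports, UDP 67 and UDP 69 respectively:

iptables -I INPUT -p udp --dport 67 -j ACCEPT
iptables -I INPUT -p udp --dport 69 -j ACCEPT
service iptables save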

We can now boot up either a physical host that is configured to use UEFI firmware, or we can easily test using Nested ESXi. The only change we need to make to the ESXi VM is setting the firmware mode from BIOS to EFI, which can be done using the vSphere Web/C# Client as shown in the two screenshots below:

[Screenshots: uefi-pxe-boot-esxi-6.0-0, uefi-pxe-boot-esxi-6.0-1]
If everything was configured successfully, we should now see the system PXE boot into the ESXi installer using UEFI, as seen in the screenshot below.

[Screenshot: uefi-pxe-boot-esxi-6.0-2]
If you run into any issues, I recommend checking the system logs on your PXE server (/var/log/messages) for errors. You can also troubleshoot by manually using a tftp client to connect to your TFTP server and confirming that you can pull down files such as boot.cfg:

tftp [PXE-SERVER]
get esxi60u1/boot.cfg

For additional resources on scripted installation of ESXi, also referred to as Kickstart, be sure to take a look here. I would also like to give a big shoutout and thanks to Tim Mann, one of the engineers responsible for adding UEFI support to ESXi, for answering some of my questions while I was setting up my environment.


Filed Under: Automation, ESXi, vSphere 6.0 Tagged With: bios, boot.cfg, bootx64.efi, dhcp, efi, esxi 6.0, kickstart, mboot.efi, pxe boot, tftp, UEFI, vSphere 6.0

vGhetto Lab #NotSupported Slides Posted

10/17/2012 by William Lam Leave a Comment

As promised, here are the slides from my #NotSupported session at VMworld Europe, where I continued the theme of home labs with my vGhetto Lab #NotSupported presentation.

The idea behind the vGhetto Lab is to easily set up a vSphere home lab without too much effort and, most importantly, using as few resources as possible. This is all accomplished with the following:

  • Physical host running ESXi 5.x
  • ESXi 5.x offline depot image
  • VCSA 5.x (vCenter Server Appliance)

In addition to the above, you will also need to download the vGhetto Lab scripts, which are shown in the video.
Here are some additional details on how to quickly get set up with your own vGhetto Lab.

Step 1 - After installing ESXi 5.x on your physical host, you will need to deploy the VCSA. Make sure you add a second network interface to the VCSA as shown in the presentation. In my example, I created another vSwitch with no uplinks and a portgroup for the Auto Deploy network.

Step 2 - Once the VCSA is powered on, go ahead and SCP the scripts to the virtual machine. The first script we need to execute is setupNetwork.sh, in which you will need to edit the following variables:

VCENTER_IP_ADDRESS_1=192.168.1.150
VCENTER_NETMASK_1=255.255.255.0
VCENTER_GATEWAY=192.168.1.1
VCENTER_IP_ADDRESS_2=172.30.0.1
VCENTER_NETMASK_2=255.255.255.0
VCENTER_HOSTNAME=vcenter.primp-industries.com
DOMAIN_LIST=primp-industries.com
DNS_LIST=192.168.1.1

Note: To ensure that you do not accidentally run the script without changing out the variables, there is another variable called ACTUALLY_READ_SCRIPT that needs to be changed from "no" to "yes" else the script will not execute.
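
Assuming the flag appears as a simple VAR=value assignment (a sketch; check the script itself before running it), you could flip it and launch the script like so:

sed -i 's/ACTUALLY_READ_SCRIPT=.*/ACTUALLY_READ_SCRIPT=yes/' setupNetwork.sh
sh setupNetwork.sh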

Step 3 - Next, we need to configure the vCenter Server by executing configureVCSA51.sh, which configures the embedded SSO database as well as the vCenter Server database. You do not need to edit any variables in this script.

Step 4 - Finally, we need to configure our DHCP, TFTP, and Auto Deploy services, as well as extract the ESXi offline depot image and prepare it for use with Auto Deploy. You will need to edit the following variables before running the setupvGhettoLab.sh script.

DHCP_SUBNET=172.30.0.0
DHCP_NETMASK=255.255.255.0
DHCP_START_RANGE=172.30.0.100
DHCP_END_RANGE=172.30.0.200
DHCP_INTEFACE=eth1
TFTP_SERVER=172.30.0.1
VCSA_SERVER=192.168.1.150
ESXI_OFFLINE_DEPOT=/root/VMware-ESXi-5.1.0-799733-depot.zip
ESXI_REPO_PATH=/etc/vmware-vpx/docRoot
ESXI_REPO_DIR=ESXi-5.1.0

Note: For the Image Profile and Auto Deploy rule creation, if you would like the script to execute the commands automatically instead of echoing them to the screen, remove the "echo" statement as well as the double quotes so that the last three lines look like this:

pxe-profile-cmd create $(cat /tmp/VIBS) ${ESXI_REPO_DIR}
rule-cmd create -i pxe:${ESXI_REPO_DIR} ${AUTO_DEPLOY_RULE} vendor=='VMware, Inc.'
rule-set-cmd set ${AUTO_DEPLOY_RULE}

Step 5 - You are now ready to create your nested ESXi virtual machines. You can use RVC as shown in the presentation (there is a slide at the very end that lists the commands), or you can connect to the vSphere Web Client and create the ESXi virtual machines the traditional way via the GUI.

After updating the DHCP configuration with the new MAC addresses from your nested ESXi virtual machines, you should see Auto Deploy automatically provision your ESXi hosts and join them to the VCSA you deployed earlier. An example host entry is sketched below.
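
This is a hypothetical dhcpd.conf host stanza for one nested ESXi VM (made-up MAC address, with a fixed address kept outside the dynamic range defined above):

host nested-esxi01 {
    hardware ethernet 00:50:56:aa:bb:cc;
    fixed-address 172.30.0.50;
}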

Additional Links:

  • vInception #NotSupported Slides Posted

Filed Under: Uncategorized Tagged With: appliance, auto deploy, dhcp, esxi, esxi5.1, notsupported, ruby vsphere console, rvc, tftp, vcsa, vcva, vmworld, vSphere, vSphere 5.1
