virtuallyGhetto

esxi5.1

Nested Virtualization Resources

10/04/2012 by William Lam 7 Comments

Here is a consolidated page of all the articles I have written about Nested Virtualization (nested ESXi, Hyper-V, etc.) and all the goodies that are "Not Supported".

vSphere / vCloud 5.1

  • Having Difficulties Enabling Nested ESXi in vSphere 5.1?
  • How to Enable Nested ESXi & Other Hypervisors in vSphere 5.1
  • How to Enable Nested ESXi & Other Hypervisors in vCloud Director 5.1

vSphere / vCloud 5.0

  • How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5
  • The Missing Piece In Creating Your Own Ghetto vSEL Cloud

Additional Info/Tips/Tricks

  • Nested ESXi 5.1 Supports VMXNET3 Network Adapter Type
  • How to Configure Nested ESXi 5 to Support EVC Clusters
  • How to Enable Nested vFT (virtual Fault Tolerance) in vSphere 5
  • How to Install VMware VSA in Nested ESXi 5 Host Using the GUI
  • Cool Undocumented Features in vCloud Director 1.5
  • The Missing Piece In Creating Your Own Ghetto vSEL Cloud
  • Nested Virtualization APIs For vSphere & vCloud Director 5.1
  • How To Enable Nested ESXi Using VXLAN In vSphere & vCloud Director 
  • Will Intel’s VMCS Shadowing Feature Benefit VMware’s Nested Virtualization?
  • How to run Nested RHEV Hypervisor on ESXi? 
  • How to quickly setup and test VMware VSAN (Virtual SAN) using Nested ESXi
  • How to run Nested ESXi on top of a VSAN datastore? 
  • VMware Tools for Nested ESXi 
  • Why is Promiscuous Mode & Forged Transmits required for Nested ESXi?
  • How to properly clone a Nested ESXi VM?

Filed Under: Uncategorized Tagged With: amd-v, ept, esxi, esxi 5, esxi4, esxi4.1, esxi5.1, hyper-v, intel vt, nested, rvi, vhv, virtual hardware virtualization, vSphere, vSphere 4, vSphere 5, vSphere 5.1

Having Difficulties Enabling Nested ESXi in vSphere 5.1?

09/29/2012 by William Lam 21 Comments

I noticed there were a few folks having some difficulties enabling Nested ESXi (VHV, or Virtual Hardware Virtualization) in the latest release of ESXi 5.1, so I thought I would share some additional info and tips on troubleshooting your setup in case you are running into similar problems.

*** DISCLAIMER *** This is not officially supported by VMware; do not bother asking if it is supported or calling into VMware support for details or help.

If you wish to run nested ESXi or other hypervisors on ESXi 5.1 and run 32-bit nested virtual machines, you must meet the following hardware requirement:

  • CPU supporting Intel VT-x or AMD-V

If you wish to run nested 64-bit virtual machines in your nested ESXi or other hypervisors, in addition to the requirement above, you must also meet the following hardware requirement:

  • CPU supporting Intel EPT or AMD RVI

If you only meet the first criteria, you CAN still install nested ESXi or other hypervisors on ESXi 5.1, BUT you will only be able to run 32-bit nested virtual machines. When you create your virtual machine shell using the new vSphere Web Client, in the expanded CPU view, the "Hardware Virtualization" box will be grayed out. This is expected as you do not have full support for VHV, but you can still continue with your installation of ESXi or other hypervisors.

In ESXi 5.0, you may have been able to run 64-bit nested virtual machines without EPT/RVI support but performance was extremely poor. With ESXi 5.1, VHV now requires EPT/RVI.

Note: During the installation of ESXi, you may see the message "No Hardware Virtualization Support"; you can just ignore it.

If you are using sites such as Intel's ark.intel.com to check your CPU requirements, be aware that it is COMMON even for the hardware vendors to publish incorrect information on their websites. However, there is a quick way you can validate on your ESXi host whether you have full VHV support.

In vSphere 5.1, there is a new capability property called nestedHVSupported which specifies whether your physical ESXi 5.1 host has full VHV support. This property will only be true if your CPU has both Intel VT-x+EPT or AMD-V+RVI. A quick and easy way to validate this is to use the vSphere MOB to retrieve the value.

To check the nestedHVSupported property, enter the following URL into a web browser (substitute the IP address or hostname of your ESXi host):

https://himalaya.primp-industries.com/mob/?moid=ha-host&doPath=capability

After you log in, search for the nestedHVSupported property on the page and you should see a value of either true or false. As mentioned earlier, if it is false, you might still be able to install nested ESXi or other hypervisors, but you will not be able to run nested 64-bit virtual machines. I would also recommend taking a look at your system BIOS to ensure features like Intel VT/EPT and AMD-V/RVI are enabled; sometimes the fix might be as simple as a BIOS upgrade (you can always confirm with your hardware vendor if you have further questions).
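
If you prefer the command line, here is a minimal sketch of the same check using curl from any system that can reach the host (the hostname is a placeholder, and this assumes the MOB accepts HTTP basic authentication on your build, so treat it as an approximation):

# query the MOB capability object and filter for the property (prompts for the root password)
curl -sk -u root "https://esxi-host.example.com/mob/?moid=ha-host&doPath=capability" | grep -i nestedHVSupported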

For proper network connectivity, also ensure that either your standard vSwitch or Distributed Virtual Switch has both promiscuous mode and forged transmits enabled, either globally or on the specific portgroup/distributed portgroup your nested ESXi hosts are connected to.
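
For a standard vSwitch, a minimal ESXCLI sketch might look like the following (vSwitch0 is an assumption; adjust it to the vSwitch backing your nested ESXi hosts):

# allow promiscuous mode and forged transmits on the vSwitch security policy
esxcli network vswitch standard policy security set --allow-promiscuous yes --allow-forged-transmits yes --vswitch-name vSwitch0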

Additional Resources: 

  • How to Enable Nested ESXi & Other Hypervisors in vSphere 5.1
  • How to Enable Nested ESXi & Other Hypervisors in vCloud Director 5.1

Filed Under: Uncategorized Tagged With: esxi5.1, hyper-v, nested, vcd, vcloud director 5.1, vesxi, vhv, vsel, vSphere 5.1

2gbsparse Disk Format No Longer Working On ESXi 5.1

09/26/2012 by William Lam 4 Comments

I was recently made aware of an issue with my ghettoVCB script: after upgrading to ESXi 5.1, the ability to clone (or in this case, back up) using the 2gbsparse disk format with vmkfstools was no longer working. The error users were seeing was "The system cannot find the file specified." and I confirmed this behavior by manually creating a VMDK and then trying to clone it using the 2gbsparse format.

To give you some background, the 2gbsparse disk format is not a VMFS virtual disk format; it belongs to the hosted products (VMware Fusion, Workstation, Server & Player). The format was created to prevent cross-platform file system compatibility issues, as pointed out in this VMware KB article. This issue does not exist on VMFS, and hence the extra disk format is not necessary there.

After some investigation, I found that to use the 2gbsparse format in vmkfstools, you need to load a specific VMkernel module called "multiextent". 2gbsparse was never officially supported on ESXi; you cannot run a virtual machine with a 2gbsparse disk on ESXi, which is why a conversion may be required when moving from a hosted product to ESXi. So disabling VMkernel modules that were not being used made sense, to help reduce the amount of resources needed at start-up. This is especially important with stateless deployments, where you want your ESXi host to boot as fast as possible.

Once you have loaded this VMkernel module, the 2gbsparse format will function again with vmkfstools. I also found that this change was mentioned in the vSphere 5.1 release notes (yes, you should read the release notes).

To load the multiextent VMkernel module, run the following ESXCLI command:

esxcli system module load -m multiextent

To check whether the multiextent VMkernel module has loaded, run the following ESXCLI command:

esxcli system module list | grep multiextent
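
With the module loaded, a 2gbsparse clone works again; for example (the source and destination paths are placeholders):

vmkfstools -i /vmfs/volumes/datastore1/MyVM/MyVM.vmdk -d 2gbsparse /vmfs/volumes/datastore1/MyVM/MyVM-backup.vmdk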

If you wish to persist this configuration across a system reboot, I found that you need to add the following command to a start-up script such as /etc/rc.local.d/local.sh, as just setting the module's "enabled" flag is not sufficient for this particular VMkernel module.

localcli system module load -m multiextent

Note: We are using localcli because hostd may not be completely ready when the start-up script runs; you can either add a sleep/timer or just use localcli.
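
For reference, a minimal sketch of what /etc/rc.local.d/local.sh might look like after the edit (the surrounding layout is an assumption; keep whatever else is already in the file and leave the trailing exit 0 last):

#!/bin/sh
# load multiextent at boot so 2gbsparse operations keep working
# (localcli rather than esxcli, since hostd may not be ready yet)
localcli system module load -m multiextent
exit 0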


Filed Under: Uncategorized Tagged With: 2gbsparse, esxcli, esxi5.1, localcli, multiextent, vmkernel module, vmkfstools, vSphere 5.1

A Pretty Cool Method of Upgrading to ESXi 5.1

09/18/2012 by William Lam 40 Comments

I recently came across an interesting article by Andreas Peetz which shows you how to patch an ESXi host, from within the ESXi Shell, using an image profile that is directly available on VMware's online depot. I knew that VMware had online depots for use with VUM and Auto Deploy, but I was not aware of this particular method, especially directly from the host.

Disclaimer: This method assumes you can install the default ESXi Image Profile with no additional drivers or packages; otherwise you may have connectivity issues after the upgrade. If you still need to customize the ESXi Image Profile before installation, you will need to use something like Image Builder and then upload the result to your own online depot.

Note: There are many ways to patch/upgrade your ESXi hosts; here is another article that provides more details on command-line-only methods.

Before you get started, make sure that your ESXi host has the httpClient firewall ruleset enabled, or else you will not be able to connect to VMware's online depot. To enable it, run the following ESXCLI command:

esxcli network firewall ruleset set -e true -r httpClient

Also make sure that your ESXi host can reach the following URL (you can specify a proxy if needed):

https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

To view the available ESXi Image Profiles, run the following ESXCLI command (use --proxy if you need to specify a proxy to reach VMware's online depot):

esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

If you are able to successfully connect to the online depot, you will see a list of all the ESXi Image Profiles available to you, including two recently published ESXi 5.1 Image Profiles: one with VMware Tools and one without.
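
If you only want to see the 5.1 profiles, you can filter the list; for example:

esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep ESXi-5.1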

Note: Before you begin, make sure you do not have any running VMs and put your host into maintenance mode.
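
From the ESXi Shell, entering maintenance mode is a one-liner:

esxcli system maintenanceMode set -e true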

Let's go ahead and upgrade our ESXi 5.0 Update 1 host to the latest ESXi 5.1. To install the new Image Profile, run the following command:

esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.1.0-799733-standard

This can take a few minutes to complete depending on how fast you can pull down the Image Profile. Once it is done, you will see all the new VIBs that have been updated, and you will be asked to reboot for the changes to take effect. After the reboot, you will be running ESXi 5.1. Pretty cool IMO!


Filed Under: Automation, ESXCLI, ESXi, vSphere, vSphere 5.5, vSphere 6.0, vSphere 6.5 Tagged With: esxcli, esxi5.1, firewall, image profile, upgrade, vSphere 5.1

Automating ESXi 5.1 Kickstart Tips & Tricks

09/17/2012 by William Lam 38 Comments

There are not a whole lot of changes in kickstart configuration between ESXi 5.1 and ESXi 5.0; the majority of the tips and tricks noted in the ESXi 5.0 kickstart guide are still relevant for ESXi 5.1. Below are a few new tips and tricks (and some old ones), as well as a complete working ESXi 5.1 kickstart example that can be used as a reference.

Tip #1

There are 82 new ESXCLI commands, a number of which are brand new while others are enhancements to existing commands and operations. The kickstart sample below converts many of the legacy esxcfg-* and vim-cmd/vsish commands over to ESXCLI; here are just a few:

  • esxcli network ip route [ipv4|ipv6] (VMkernel routes)
  • esxcli system snmp (SNMP)
  • esxcli system maintenanceMode (maintenance mode)
  • esxcli network ip interface tag (tag VMkernel traffic types)

Please refer to the vCLI/ESXCLI release notes for all new ESXCLI commands.
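
As a concrete example of the conversion, here is a sketch of adding a static VMkernel route the old way versus with the new ESXCLI namespace (the addresses are placeholders, and the legacy esxcfg-route syntax shown is an assumption):

# legacy command
esxcfg-route -a 10.20.183.0/24 172.30.0.1
# ESXCLI equivalent in ESXi 5.1
esxcli network ip route ipv4 add -n 10.20.183.0/24 -g 172.30.0.1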

Tip #2

In previous releases of ESXi, you could add custom commands to /etc/rc.local, which would automatically execute after all start-up scripts had finished. With the latest release of ESXi 5.1, this functionality has moved to /etc/rc.local.d/local.sh; if you try to edit the old file, you will find that it does not allow you to write any changes. This is important to keep in mind as you migrate your kickstart to ESXi 5.1 if you make use of this file for any custom start-up commands.
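
If your kickstart needs to inject its own commands into this new file, here is a minimal sketch of one way to do it from a %firstboot script while keeping the file's trailing exit 0 last (the echoed command is a placeholder):

# rebuild local.sh with a custom command inserted before the final "exit 0"
grep -v "^exit 0" /etc/rc.local.d/local.sh > /tmp/local.sh
echo '/bin/echo "custom startup command here"' >> /tmp/local.sh
echo "exit 0" >> /tmp/local.sh
cp /tmp/local.sh /etc/rc.local.d/local.sh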

Tip #3

To run nested ESXi and other hypervisors on ESXi 5.1, you need to specify the new vhv.enable parameter; please take a look at this article for more details.
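
For reference, the sample kickstart at the end of this post enables it from the %firstboot section by appending the parameter to /etc/vmware/config if it is not already present:

grep -i "vhv.enable" /etc/vmware/config || echo "vhv.enable = \"TRUE\"" >> /etc/vmware/config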

Tip #4

There is a new ESXi Advanced Setting in ESXi 5.1 that allows you to control when an interactive ESXi Shell session is automatically logged out, based on a configured idle time (in seconds). You can find more details in this blog article by Kyle Gleed.

esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 3600
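
To confirm the value afterwards, you can list just that option:

esxcli system settings advanced list -o /UserVars/ESXiShellInteractiveTimeOut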

Tip #5

By default, an ESXi host automatically grants root permission to the "ESX Admins" group when the host is joined to an Active Directory domain. You can alter the default group name to match an AD group you already have defined by using the following command:

vim-cmd hostsvc/advopt/update Config.HostAgent.plugins.hostsvc.esxAdminsGroup string "Ghetto ESXi Admins"

Tip #6

A really neat feature in ESXi 5.1 is the ability to control which local users have full admin privileges to the DCUI. This is really useful for troubleshooting when you want to provide DCUI console access without granting administrative permissions on the ESXi host itself. You can specify a list of local users by using the following command:

vim-cmd hostsvc/advopt/update DCUI.Access string root,william,tuan

Tip #7

If you wish to prevent VMs from sending out BPDU (Bridge Protocol Data Unit) packets, there is a new global configuration option on an ESXi 5.1 host which you can set. By default this setting is disabled, and you will need to configure it on every ESXi host on which you wish to block VM guests from sending BPDU packets.

esxcli system settings advanced set -o /Net/BlockGuestBPDU -i 1

Tip #8

Here's an article about enabling/disabling IPv6 using ESXCLI.

Tip #9

Here's an article about creating a custom VIB for ESXi 5.1.

Here is a complete working example of an ESXi 5.1 kickstart that can help you convert your existing ESX(i) 4.x/5.x to ESXi 5.1:

# Sample kickstart for ESXi 5.1
# William Lam
# www.virtuallyghetto.com
#########################################
accepteula
install --firstdisk --overwritevmfs
rootpw vmware123
reboot
%include /tmp/networkconfig
 
%pre --interpreter=busybox
 
# extract network info from bootup
VMK_INT="vmk0"
VMK_LINE=$(localcli network ip interface ipv4 get | grep "${VMK_INT}")
IPADDR=$(echo "${VMK_LINE}" | awk '{print $2}')
NETMASK=$(echo "${VMK_LINE}" | awk '{print $3}')
GATEWAY=$(localcli network ip route ipv4 list | grep default | awk '{print $3}')
DNS="172.30.0.100,172.30.0.200"
HOSTNAME=$(nslookup "${IPADDR}" "${DNS}" | grep Address | grep "${IPADDR}" | awk '{print $4}')
echo "network --bootproto=static --addvmportgroup=false --device=vmnic0 --ip=${IPADDR} --netmask=${NETMASK} --gateway=${GATEWAY} --nameserver=${DNS} --hostname=${HOSTNAME}" > /tmp/networkconfig
%firstboot --interpreter=busybox
 
# enable VHV (Virtual Hardware Virtualization) to run nested 64-bit guests + Hyper-V VMs
grep -i "vhv.enable" /etc/vmware/config || echo "vhv.enable = \"TRUE\"" >> /etc/vmware/config
# enable & start remote ESXi Shell  (SSH)
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
# enable & start ESXi Shell (TSM)
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
# suppress ESXi Shell warning - Thanks to Duncan (http://www.yellow-bricks.com/2011/07/21/esxi-5-suppressing-the-localremote-shell-warning/)
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1
# ESXi Shell interactive idle time logout
esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 3600
# Change the default ESXi Admins group "ESX Admins" to a custom one "Ghetto ESXi Admins" for AD
vim-cmd hostsvc/advopt/update Config.HostAgent.plugins.hostsvc.esxAdminsGroup string "Ghetto ESXi Admins"
# Users that will have full access to DCUI even if they don't have admin permissions on the ESXi host
vim-cmd hostsvc/advopt/update DCUI.Access string root,william,tuan
# Block VM guest BPDU packets, global configuration
esxcli system settings advanced set -o /Net/BlockGuestBPDU -i 1
# copy SSH authorized keys & overwrite existing
wget http://air.primp-industries.com/esxi5/id_dsa.pub -O /etc/ssh/keys-root/authorized_keys
# to disable SSH key logins, uncomment the next line
# sed -i 's/AuthorizedKeysFile*/#AuthorizedKeysFile/g' /etc/ssh/sshd_config
# rename local datastore to something more meaningful
vim-cmd hostsvc/datastore/rename datastore1 "$(hostname -s)-local-storage-1"
# assign license
vim-cmd vimsvc/license --set AAAAA-BBBBB-CCCCC-DDDDD-EEEEE
## SATP CONFIGURATIONS ##
esxcli storage nmp satp set --satp VMW_SATP_SYMM --default-psp VMW_PSP_RR
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR
###########################
## vSwitch configuration ##
###########################
#####################################################
# vSwitch0 : Active->vmnic0,vmnic1 Standby->vmnic2
#       failback: yes
#       failure detection: beacon
#       load balancing: portid
#       notify switches: yes
#       avg bw: 100000 Kbps
#       peak bw: 100000 Kbps
#       burst size: 819200 KBps
#       allow forged transmits: yes
#       allow mac change: no
#       allow promiscuous: no
#       cdp status: both
# attach vmnic1,vmnic2 to vSwitch0
esxcli network vswitch standard uplink add --uplink-name vmnic1 --vswitch-name vSwitch0
esxcli network vswitch standard uplink add --uplink-name vmnic2 --vswitch-name vSwitch0
# configure portgroup
esxcli network vswitch standard portgroup add --portgroup-name VMNetwork1 --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name VMNetwork1 --vlan-id 100
esxcli network vswitch standard portgroup add --portgroup-name VMNetwork2 --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name VMNetwork2 --vlan-id 200
esxcli network vswitch standard portgroup add --portgroup-name VMNetwork3 --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name VMNetwork3 --vlan-id 333
# configure cdp
esxcli network vswitch standard set --cdp-status both --vswitch-name vSwitch0
### FAILOVER CONFIGURATIONS ###
# configure active and standby uplinks for vSwitch0
esxcli network vswitch standard policy failover set --active-uplinks vmnic0,vmnic1 --standby-uplinks vmnic2 --vswitch-name vSwitch0
# configure failure detection + load balancing (could have appended to previous line)
esxcli network vswitch standard policy failover set --failback yes --failure-detection beacon --load-balancing portid --notify-switches yes --vswitch-name vSwitch0
### SECURITY CONFIGURATION ###
esxcli network vswitch standard policy security set --allow-forged-transmits yes --allow-mac-change no --allow-promiscuous no --vswitch-name vSwitch0
### SHAPING CONFIGURATION ###
esxcli network vswitch standard policy shaping set --enabled yes --avg-bandwidth 100000 --peak-bandwidth 100000 --burst-size 819200 --vswitch-name vSwitch0
#####################################################
# vSwitch1 : Active->vmnic3,vmnic4 Standby->vmnic5
#       failback: no
#       failure detection: link
#       load balancing: mac
#       notify switches: no
#       allow forged transmits: no
#       allow mac change: no
#       allow promiscuous: no
#       cdp status: listen
#       mtu: 9000
# add vSwitch1
esxcli network vswitch standard add --ports 256 --vswitch-name vSwitch1
# attach vmnic3,vmnic4,vmnic5 to vSwitch1
esxcli network vswitch standard uplink add --uplink-name vmnic3 --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --uplink-name vmnic4 --vswitch-name vSwitch1
esxcli network vswitch standard uplink add --uplink-name vmnic5 --vswitch-name vSwitch1
# configure mtu + cdp
esxcli network vswitch standard set --mtu 9000 --cdp-status listen --vswitch-name vSwitch1
# configure portgroup
esxcli network vswitch standard portgroup add --portgroup-name NFS --vswitch-name vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name FT_VMOTION --vswitch-name vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name VSPHERE_REPLICATION --vswitch-name vSwitch1
### FAILOVER CONFIGURATIONS ###
# configure active and standby uplinks for vSwitch1
esxcli network vswitch standard policy failover set --active-uplinks vmnic3,vmnic4 --standby-uplinks vmnic5 --vswitch-name vSwitch1
# configure failure detection + load balancing (could have appended to previous line)
esxcli network vswitch standard policy failover set --failback no --failure-detection link --load-balancing mac --notify-switches no --vswitch-name vSwitch1
### SECURITY CONFIGURATION ###
esxcli network vswitch standard policy security set --allow-forged-transmits no --allow-mac-change no --allow-promiscuous no --vswitch-name vSwitch1
# configure vmkernel interface for NFS traffic, FT_VMOTION and VSPHERE_REPLICATION traffic
VMK0_IPADDR=$(esxcli network ip interface ipv4 get | grep vmk0 | awk '{print $2}')
VMK1_IPADDR=$(echo ${VMK0_IPADDR} | awk '{print $1".51."$3"."$4}' FS=.)
VMK2_IPADDR=10.10.0.2
VMK3_IPADDR=10.20.0.2
esxcli network ip interface add --interface-name vmk1 --mtu 9000 --portgroup-name NFS
esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 ${VMK1_IPADDR} --netmask 255.255.255.0 --type static
esxcli network ip interface add --interface-name vmk2 --mtu 9000 --portgroup-name FT_VMOTION
esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 ${VMK2_IPADDR} --netmask 255.255.255.0 --type static
esxcli network ip interface add --interface-name vmk3 --mtu 9000 --portgroup-name VSPHERE_REPLICATION
esxcli network ip interface ipv4 set --interface-name vmk3 --ipv4 ${VMK3_IPADDR} --netmask 255.255.255.0 --type static
# Configure VMkernel traffic type (Management, VMotion, faultToleranceLogging, vSphereReplication)
esxcli network ip interface tag add -i vmk2 -t Management
esxcli network ip interface tag add -i vmk2 -t VMotion
esxcli network ip interface tag add -i vmk2 -t faultToleranceLogging
esxcli network ip interface tag add -i vmk3 -t vSphereReplication
# Configure VMkernel routes
esxcli network ip route ipv4 add -n 10.20.183.0/24 -g 172.30.0.1
esxcli network ip route ipv4 add -n 10.20.182.0/24 -g 172.30.0.1
# Disable IPv6 for VMkernel interfaces
esxcli system module parameters set -m tcpip3 -p ipv6=0
### MOUNT NFS DATASTORE ###
esxcli storage nfs add --host 172.51.0.200 --share /volumes/Primp/primp-6 --volume-name himalaya-NFS-primp-6
### ADV CONFIGURATIONS ###
esxcli system settings advanced set --option /Net/TcpipHeapSize --int-value 30
esxcli system settings advanced set --option /Net/TcpipHeapMax --int-value 120
esxcli system settings advanced set --option /NFS/HeartbeatMaxFailures --int-value 10
esxcli system settings advanced set --option /NFS/HeartbeatFrequency --int-value 20
esxcli system settings advanced set --option /NFS/HeartbeatTimeout --int-value 10
esxcli system settings advanced set --option /NFS/MaxVolumes --int-value 128
### SYSLOG CONFIGURATION ###
esxcli system syslog config set --default-rotate 20 --loghost vcenter50-3.primp-industries.com:514,udp://vcenter50-3.primp-industries.com:514,ssl://vcenter50-3.primp-industries.com:1514
# change the individual syslog rotation count
esxcli system syslog config logger set --id=hostd --rotate=20 --size=2048
esxcli system syslog config logger set --id=vmkernel --rotate=20 --size=2048
esxcli system syslog config logger set --id=fdm --rotate=20
esxcli system syslog config logger set --id=vpxa --rotate=20
### NTP CONFIGURATIONS ###
cat > /etc/ntp.conf << __NTP_CONFIG__
restrict default kod nomodify notrap noquery nopeer
restrict 127.0.0.1
server 0.vmware.pool.ntp.org
server 1.vmware.pool.ntp.org
__NTP_CONFIG__
/sbin/chkconfig ntpd on
### FIREWALL CONFIGURATION ###
# enable firewall
esxcli network firewall set --default-action false --enabled yes
# services to enable by default
FIREWALL_SERVICES="syslog sshClient ntpClient updateManager httpClient netdump"
for SERVICE in ${FIREWALL_SERVICES}
do
esxcli network firewall ruleset set --ruleset-id ${SERVICE} --enabled yes
done
# backup ESXi configuration to persist changes
/sbin/auto-backup.sh
# enter maintenance mode
esxcli system maintenanceMode set -e true
# copy %firstboot script logs to persistent datastore
cp /var/log/hostd.log "/vmfs/volumes/$(hostname -s)-local-storage-1/firstboot-hostd.log"
cp /var/log/esxi_install.log "/vmfs/volumes/$(hostname -s)-local-storage-1/firstboot-esxi_install.log"
# Needed for configuration changes that could not be performed in esxcli
esxcli system shutdown reboot -d 60 -r "rebooting after host configurations"


Filed Under: Uncategorized Tagged With: esxcli, esxi5.1, kickstart, ks.cfg, vSphere 5.1

Disabling IPv6 via Command-Line For ESXi 5.1 (Without Automatic Host Reboot)

09/14/2012 by William Lam 15 Comments

IPv6 for the VMkernel interface is now enabled by default in the latest release of ESXi 5.1, and you may have noticed the additional IP address in the DCUI after the host boots up.

IPv6 support has been around for a while now, and you can enable it using the old vSphere C# Client or the new vSphere Web Client. If you enable or disable IPv6, you will need to perform a system reboot for the changes to take effect. You also have the ability to enable/disable it via the DCUI, which has likewise been around for a while.

UPDATE: 07/20/15 - For ESXi 6.0, the VMkernel module name is now tcpip4 instead of tcpip3.

There is one very important thing to note if you enable or disable IPv6 via the DCUI: after you make your changes and go to apply them, a very important confirmation box is displayed.

Carefully read the last sentence, underlined in red: "In case IPv6 has been enabled or disabled this will restart your host". If you are not careful in reading the confirmation screen, you may hit yes and your host will immediately issue a reboot. If you are going to use the DCUI to enable or disable IPv6, make sure you do not have any running VMs on your host, and your host should already be in maintenance mode before making the change.

In addition to the two methods listed above (vSphere Web Client/C# Client and DCUI), you can easily enable/disable IPv6 using ESXCLI (my preferred method) and restart the ESXi host when you get a chance.

To view whether IPv6 is currently enabled, run the following ESXCLI command (as of ESXi 5.5 Update 1, the VMkernel module is called tcpip4):

esxcli system module parameters list -m tcpip3

In the output, the ipv6 property is set to 1, which means IPv6 is enabled.

To disable IPv6, you just need to set the property to 0 by running the following ESXCLI command:

esxcli system module parameters set -m tcpip3 -p ipv6=0

We can now reconfirm by re-running our list operation to ensure the changes were made successfully. All that is left is to perform a system reboot; you can either type in "reboot" or use the new ESXCLI 5.1 command:

esxcli system shutdown reboot -d 60 -r "making IPv6 config changes"

Note: You can run the ESXCLI command locally in the ESXi Shell, or you can run the same command remotely by specifying additional connection options and proxying through vCenter Server if you wish. Take a look here for additional connection options for ESXCLI.
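
As a sketch, the same list operation run remotely from a vCLI installation and proxied through vCenter Server might look like the following (the server names and account are placeholders; you will be prompted for a password if one is not supplied):

esxcli --server vcenter.example.com --vihost esxi-01.example.com --username administrator system module parameters list -m tcpip3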


Filed Under: ESXi, vSphere 5.5, vSphere 6.0 Tagged With: cli, esxcli, esxi 5, esxi5.1, ipv6, vSphere 5, vSphere 5.1

