
virtuallyGhetto


Virtual NVMe and Nested ESXi 6.5?

10/26/2016 by William Lam 4 Comments

After publishing my Nested ESXi enhancements for vSphere 6.5 article, I received a number of questions on whether the new Virtual NVMe (vNVMe) capability introduced in the upcoming vSphere 6.5 release would also work with a Nested ESXi VM. The answer is yes; similar to PVSCSI and VMXNET3, there is also an NVMe driver for ESXi running inside a VM.

Disclaimer: Nested ESXi and Nested Virtualization is not officially supported by VMware, please use at your own risk.

To consume the new vNVMe for a Nested ESXi VM, you will need to configure the VM with the latest "ESXi 6.5 and later" compatibility level (vHW 13). Once that has been done, you can add a new NVMe controller to your Nested ESXi VM and assign it to one of the virtual disks as shown in the screenshot below.
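If you build these VMs through scripts rather than the UI, the controller ultimately comes down to a handful of VMX entries. Below is a minimal sketch in Python; the nvme0.* key names are my assumption based on how other controllers are written to the VMX, so verify them against a VM you have configured through the vSphere Web Client.

```python
# Sketch: generate the VMX entries that back a virtual disk with a
# virtual NVMe controller on a vHW 13 (ESXi 6.5) VM. The nvme0.* key
# names are assumptions; confirm by inspecting a VMX configured via the UI.
def nvme_disk_entries(bus: int, unit: int, vmdk: str) -> dict:
    return {
        f"nvme{bus}.present": "TRUE",
        f"nvme{bus}:{unit}.present": "TRUE",
        f"nvme{bus}:{unit}.fileName": vmdk,
    }

entries = nvme_disk_entries(0, 0, "Nested-ESXi-6.5.vmdk")
for key, value in sorted(entries.items()):
    print(f'{key} = "{value}"')
```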

nested-esxi-65-nvme-1
Next, install ESXi 6.5 as you normally would; the NVMe controller will automatically be detected and the driver loaded. In the example below, you can see I only have a single disk, which ESXi itself is installed on, and it is backed by the NVMe controller.

nested-esxi-65-nvme-0
One of the biggest benefits of using an NVMe interface over traditional SCSI is that it significantly reduces protocol overhead, which in turn consumes fewer CPU cycles and lowers the overall IO latency for your VM workloads. Obviously, when using it inside of a Nested ESXi VM, YMMV, but hopefully you will see an improvement there as well. For those who plan to give this a try in their environment, it would be good to hear what type of use cases you have in mind, and if you have any feedback (good/bad), feel free to leave a comment.


Filed Under: ESXi, Home Lab, Not Supported, vSphere 6.5 Tagged With: nested, Nested ESXi, nested virtualization, NVMe, vSphere 6.5

Nested ESXi Enhancements in vSphere 6.5

10/19/2016 by William Lam 12 Comments

As many of you have probably heard by now, vSphere 6.5 was just announced at VMworld Barcelona this week, and it is packed with a ton of new features and capabilities which you can read more about here. One area that is near and dear to me and has not been covered yet is the set of Nested ESXi enhancements that have been made in vSphere 6.5.

To be clear, VMware has NOT changed its support stance in vSphere 6.5. Both Nested ESXi and general Nested Virtualization are still NOT officially supported. Okay, now that that is out of the way, let's see what is new:

  • Paravirtual SCSI (PVSCSI) support
  • GuestOS Customization
  • Pre-vSphere 6.5 enablement on vSphere 6.0 Update 2
  • Virtual NVMe support

Let's take a closer look at each of these enhancements.

Paravirtual SCSI (PVSCSI) support

When vSphere 6.0 Update 2 was released, I had hinted that PVSCSI support might be a possibility in the near future. I am happy to announce that this is now possible when running ESXi 6.5 as the guestOS of a Nested ESXi VM. In vSphere 6.5, a new GuestOS type called vmkernel65 has been introduced, which is optimized for running ESXi 6.5 in a VM as shown in the screenshot below.

nested-esxi-enhancements-in-vsphere-6-5-4
As you can see from the VM Virtual Hardware configuration screen below, both the PVSCSI and VMXNET3 adapter are now the recommended default when creating a Nested ESXi VM for running ESXi 6.5.

nested-esxi-enhancements-in-vsphere-6-5-1
Similar to the VMXNET3 driver, the PVSCSI driver is automatically bundled within the version of VMware Tools for Nested ESXi. That is to say, the drivers are included in the default ESXi image itself and are ONLY activated when ESXi detects that it is running inside of a VM. From a user standpoint, this means no additional configuration or installation is required. You simply select ESXi 6.5 as the GuestOS, install ESXi as you normally would, and this will automatically be enabled for you.

The only requirement to leverage this new capability is that BOTH the GuestOS type is ESXi 6.5 (vmkernel65) and the actual OS running is ESXi 6.5. The underlying physical ESXi host can be either ESXi 6.0 or ESXi 6.5. In addition to the new virtual hardware defaults, I have also found that the new ESXi 6.5 GuestOS type defaults to EFI firmware instead of the legacy BIOS used by the previous ESXi 6.x/5.x/4.x GuestOS types.

For customers who wish to push their storage IO a bit further with Nested ESXi guests, this is a great addition, especially given the lower overhead of the PVSCSI adapter.

GuestOS Customization

One of the very last capabilities that has been missing from Nested ESXi is the ability to perform a simple GuestOS customization when cloning or deploying a VM from a template running Nested ESXi. Today, you can deploy my Nested ESXi Virtual Appliance, which basically provides you with the ability to customize your deployment, but would it not be great if this was native to the platform? Well, I am pleased to say this is now possible!

When you clone or deploy from a template a VM that is a Nested ESXi VM, you will now have the option to select the Customize guest OS option. As you can see from the screenshot below, you can now create a new Customization Spec which is based on the Linux customization spec. The customization covers only the networking configuration (IP Address, Netmask, Gateway and Hostname) and applies it only to the first VMkernel interface; all others will be ignored. The thinking here is that once you have your Nested ESXi VM on the network, you can then fully manage and configure it using the vSphere API rather than re-creating the same functionality just for cloning.

nested-esxi-enhancements-in-vsphere-6-5-2
To use this new Nested ESXi GuestOS customization, there are two things you will need to do:

  • Perform two configuration changes within the Nested ESXi VM which will prepare them for cloning. You can find the configuration changes described in my blog post here
  • Ensure BOTH the GuestOS type is ESXi 6.5 (vmkernel65) and the actual OS is running ESXi 6.5. This means that your underlying physical vSphere infrastructure can be running either vSphere 6.0 Update 2 or vSphere 6.5
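Since the spec only carries four networking fields, a quick pre-flight check before cloning can catch typos early. Here is a sketch using only the Python standard library; the hostname and addresses are the sample values from this post, and nothing here talks to the vSphere API.

```python
import ipaddress
import re

# Sketch: sanity-check the only fields the Nested ESXi customization
# spec applies (IP, netmask, gateway, hostname on the first VMkernel
# interface) before kicking off a clone.
def validate_spec(hostname: str, ip: str, netmask: str, gateway: str):
    if not re.fullmatch(r"[A-Za-z0-9]([A-Za-z0-9.-]*[A-Za-z0-9])?", hostname):
        raise ValueError(f"invalid hostname: {hostname}")
    iface = ipaddress.IPv4Interface(f"{ip}/{netmask}")
    if ipaddress.IPv4Address(gateway) not in iface.network:
        raise ValueError("gateway is not on the VMkernel subnet")
    return iface

iface = validate_spec("esxi65-1.primp-industries.com",
                      "172.16.78.90", "255.255.255.0", "172.16.78.1")
print(iface.network)  # 172.16.78.0/24
```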

You can monitor the progress of the guest customization by going to the VM's Monitor->Tasks & Events view in the vSphere Web Client, or via the vSphere API if you are doing this programmatically. Below is a screenshot of a successful Nested ESXi guest customization. If there are any errors, you can take a look at /var/log/vmware-imc/toolsDeployPkg.log within the cloned Nested ESXi VM to determine what went wrong.

nested-esxi-enhancements-in-vsphere-6-5-3
I know this will be a very welcome capability for customers who extensively use the guest customization feature or if you just want to quickly clone an existing Nested ESXi VM that you have already configured.

Pre-vSphere 6.5 enablement in vSphere 6.0 Update 2

By now, you have probably figured out what this last enhancement is all about 🙂 It is exactly as it sounds: we are enabling customers to try out ESXi 6.5 by running it as a Nested ESXi VM on their existing vSphere 6.0 environment, specifically the Update 2 release (this includes both vCenter Server and ESXi). Although running newer versions on past releases of vSphere has always been possible, we are now pre-enabling ESXi 6.5-specific Nested ESXi capabilities in the latest release of vSphere 6.0 Update 2. This means that when vSphere 6.5 is generally available, you will be able to test drive some of the new Nested ESXi 6.5 capabilities I mentioned on your existing vSphere infrastructure. This is pretty darn cool if you ask me!?

Virtual NVMe support

I had a few folks ask on whether the upcoming Virtual NVMe capability in vSphere 6.5 would be possible with Nested ESXi and the answer is yes. Please have a look at this post here for more details.

For those of you who use Nested ESXi, hopefully you will enjoy these new updates! As always, if you have any feedback or requests, feel free to leave them on my blog and I will be sure to share it with the Engineering team and who knows, it might show up in the future just like some of these updates which have been requested from customers in the past 😀


Filed Under: ESXi, Nested Virtualization, vSphere 6.5 Tagged With: guest customization, nested, Nested ESXi, nested virtualization, pvscsi, vmxnet3, vSphere 6.0 Update 2, vSphere 6.5

VM serial logging to the rescue for capturing Nested ESXi PSOD

03/21/2016 by William Lam Leave a Comment

I frequently deploy pre-releases of our software to help test and provide early feedback to our Engineering teams. One piece of software that I deploy somewhat frequently is our ESXi Hypervisor, and the best way to deploy it is, of course, inside of a Virtual Machine, commonly referred to as Nested ESXi.

Most recently, while testing a new ESXi build in my lab (the screenshot below is for demo purposes, not the actual PSOD image), I encountered an ESXi purple screen of death (PSOD) during the bootup of the ESXi Installer itself. Since ESXi had not been installed yet, there was no place for ESXi to actually store the core dumps, which made it challenging to file a bug with Engineering, as screenshots may not always contain all the necessary details.

Screen Shot 2016-03-21 at 9.26.08 AM
Luckily, because we are running in a VM, we can use a really neat feature that VMware has supported for quite some time now: configuring a virtual serial port for logging purposes. In fact, one of the neatest features from a troubleshooting standpoint was the introduction of the Virtual Serial Port Concentrator (vSPC) in vSphere 5.0, which allowed a VM to log directly to a serial console server just like you would for physical servers. You of course had a few other options: logging directly to the serial port of the physical ESXi host, a named pipe, or simply a file that lived on a vSphere Datastore.

Given this was a home lab setup, the easiest method was to simply output to a file. To add a virtual serial port, you can use either the vSphere Web/C# Client or the vSphere APIs. Since this is not something I need to do often, I just used the UI. Below is a screenshot using the vSphere Web Client; once you have added the virtual serial port, you need to specify the filename and where to store the output file by clicking on the "Browse" button.

vm-serial-logging
If the GuestOS (which includes ESXi) has been configured to output to a serial port, then the next time there is an issue you can easily capture the output to a file instead of relying on just a screenshot. One additional tip which might be useful: by default, vSphere will prompt whether you want to replace or append to the configured output file. If you wish to always replace, you can add the following VM Advanced Setting and you will not get prompted in the UI.

answer.msg.serial.file.open = "Replace"
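The equivalent VMX entries can also be added directly, which is handy when scripting the VM build. A small sketch follows; the serial0.* keys mirror what the UI writes, but treat the exact names as something to verify against a VM you have configured yourself.

```python
# Sketch: VMX entries for a file-backed virtual serial port, plus the
# advanced setting that suppresses the replace/append prompt.
# Key names mirror what the UI writes; verify against your own VMX.
def serial_to_file(path: str, always_replace: bool = True) -> dict:
    entries = {
        "serial0.present": "TRUE",
        "serial0.fileType": "file",
        "serial0.fileName": path,
    }
    if always_replace:
        entries["answer.msg.serial.file.open"] = "Replace"
    return entries

for key, value in serial_to_file("[datastore1] nested-esxi/serial.log").items():
    print(f'{key} = "{value}"')
```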

Virtual serial ports are supported on both vSphere (vCenter Server + ESXi) as well as our hosted products VMware Fusion and Workstation.


Filed Under: ESXi, Fusion, Nested Virtualization, Workstation Tagged With: esxi, fusion, nested, nested virtualization, psod, serial logging, vSphere, workstation

vSphere 6.0 Update 2 hints at Nested ESXi support for Paravirtual SCSI (PVSCSI) in the future

03/14/2016 by William Lam 6 Comments

Although Nested ESXi (running ESXi in a Virtual Machine) is not officially supported today, VMware Engineering continues to enhance this widely used feature by making it faster, more reliable and easier to consume for our customers. I still remember that it was not too long ago that if you wanted to run Nested ESXi, several non-trivial and manual tweaks to the VM's VMX file were required. This made the process of consuming Nested ESXi potentially very error prone and provided a less than ideal user experience.

Things have definitely been improved since the early days and here are just some of the visible improvements over the last few years:

  • Prior to vSphere 5.1, enabling Virtual Hardware Assisted Virtualization (VHV) required manual edits to the VMX file and even earlier versions required several VMX entries. VHV can now be easily enabled using either the vSphere Web Client or the vSphere API.
  • Prior to vSphere 5.1, only the e1000{e} networking driver was supported with Nested ESXi VMs, and although it was functional, it also limited the types of use cases you might have for Nested ESXi. A Native Driver for VMXNET3 was added in vSphere 5.1, which not only increased performance with the optimized VMXNET3 driver but also enabled new use cases such as testing SMP-FT, as it was now possible to present a 10GbE interface to a Nested ESXi VM versus the traditional 1GbE with the e1000{e} driver.
  • Prior to vSphere 6.0, selection of ESXi GuestOS was not available in the "Create VM" wizard which meant you had to resort to re-editing the VM after initial creation or using the vSphere API. You can now select the specific ESXi GuestOS type directly in the vSphere Web/C# Client.
  • Prior to vSphere 6.0, the only way to cleanly shutdown or power cycle a Nested ESXi VM was to perform the operation from within the system, as there was no VMware Tools support. This changed with the development of a VMware Tools daemon specifically for Nested ESXi, which started out as a VMware Fling. With vSphere 6.0, the VMware Tools for Nested ESXi was pre-installed by default and would automatically start up when it detected that it was running as a VM. In addition to power operations provided by VMware Tools, it also enabled the use of the Guest Operations API, which was quite popular from an Automation standpoint.

Yesterday while working in my new vSphere 6.0 Update 2 home lab, I needed to create a new Nested ESXi VM and noticed something really interesting. I used the vSphere Web Client like I normally would and when I went to select the GuestOS type, I discovered an interesting new option which you can see from the screenshot below.

nested-esxi-changes-in-vsphere60u2-3
It is not uncommon to see VMware add experimental support for potentially new Guest Operating Systems in vSphere. Of course, there are no guarantees that these OSes will ever be supported or even released for that matter.

What I found even more interesting was what is recommended as the default virtual hardware configuration when you select this new ESXi GuestOS type (vmkernel65). For the network adapter, it looks like the VMXNET3 driver is now recommended over the e1000e, and for the storage adapter the VMware Paravirtual (PVSCSI) adapter is now recommended over the LSI Logic Parallel type. This is really interesting, as it is currently not possible to get the optimized, low-overhead PVSCSI adapter working with Nested ESXi, and this seems to indicate that PVSCSI might actually be possible in the future! 🙂

nested-esxi-changes-in-vsphere60u2-1
I of course tried to install the latest ESXi 6.0 Update 2 (not yet GA'ed) using this new ESXi GuestOS type and to no surprise, the ESXi installer was not able to detect any storage devices. I guess for now, we will just have to wait and see ...


Filed Under: ESXi, Nested Virtualization, Not Supported, vSphere 6.0 Tagged With: esxi, nested, nested virtualization, pvscsi, vmxnet3, vSphere 6.0 Update 2

Deploying Nested ESXi is even easier now with the ESXi Virtual Appliance

12/14/2015 by William Lam 90 Comments

Several months back I had built an ESXi Virtual Appliance that allows anyone to quickly stand up a fully functional Nested ESXi VM which includes guest customization such as networking, NTP, syslog, passwords, etc. The virtual appliance was initially built for my own personal use as I found myself constantly rebuilding my lab environment for evaluating and breaking new VMware software. I figured if this was useful for myself, it probably could benefit others at VMware and I posted the details internally on our Socialcast forum. Since then, I have received numerous stories on how helpful the ESXi Virtual Appliance has been for both our Field and Engineering for setting up demos, POCs, evaluations, etc.

Most recently, I was contacted by Massimo Re Ferre' (crazy Mainframe guy ;)) who works over in our Cloud Native Apps team and was working on a pretty cool project with Photon Controller, which was recently open sourced. He was interested in leveraging the ESXi Virtual Appliance along with VMware AppCatalyst to make it super simple for anyone to try out Photon Controller in their own environment. Over the last couple of weeks, I have been working closely with Massimo on incorporating his requirements for the Photon Controller POC back into my ESXi Virtual Appliance. My original goal for the appliance was to keep it generic so that it could cater to multiple use cases, and the Photon Controller POC was just another neat solution that could be built on top of it.

I just found out today that the new Photon Controller POC has been released, and you can find more details in the links below:

  • Photon Controller main page
  • Getting started guide for deploying Photon Controller on OS X
  • Getting started guide for deploying Photon Controller on Windows
  • Photon Controller Google Group

As part of the release, the ESXi Virtual Appliance has also been made available, which I thought was pretty cool! 😀 I highly recommend you check out the awesome work done by Massimo if you want to play with Photon Controller. This is a really easy way of getting started with Photon Controller and giving it a spin in your own environment.

Since the ESXi Virtual Appliance is now available externally, I wanted to share a few details about the appliance for those who might be interested in checking it out. As I mentioned earlier, the goal of the ESXi Virtual Appliance was to be generic, a building block that could enable different use cases, such as spinning up a quick vSphere lab with it and the VCSA, or putting together a fully functional VSAN lab in literally a couple of minutes (at the very bottom, I have a couple of PowerCLI scripts to demonstrate this). You could deploy 3 instances of the appliance to get a basic 3 Node VSAN Cluster, or you could scale up to a 64 Node VSAN Cluster, all within just minutes. The limit is truly your imagination.

The appliance contains a pre-installed GA release of ESXi 6.0 Update 1. There are 11 OVF properties that are available for customizing the Nested ESXi VM, which are shown in the table below. Once powered on, the default 60 day evaluation will start counting down as if you had manually installed ESXi yourself. In addition, the OVA also contains several optimizations for running Nested ESXi, including the Mac Learn dvFilter params as well as other configurations for quickly setting up a VSAN environment, which are also described below. I have also built the appliance to be easily consumed in all VMware based environments including vSphere, vCloud Air, Fusion, Workstation, Player & AppCatalyst.

UPDATE (05/10/17) - Updated VA to latest ESXi 6.0u3 & 6.5d (vSAN 6.6)

  • ESXi 6.0 Update 3 Virtual Appliance download link
  • ESXi 6.5d Virtual Appliance download link

UPDATE (05/09/17) - The ESXi 5.5u3 has been decommissioned due to its limited use.

  • ESXi 5.5 Virtual Appliance download link (Decommissioned)

UPDATE (11/18/16) - ESXi 6.5 Virtual Appliance has been released and you can find the details here.

UPDATE (04/07/16) - Minor bug fix with refreshing auto-generated SSL Certificates, latest version is v5 for 6.0u2 VA and v2 for 5.5u3 VA

UPDATE (03/18/16) - I updated the ESXi 6.0 VA using vSphere 6.0 Update 2. It is now back to vHW10 for backwards compat and includes an All-Flash configuration + 2 VMXNET3 adapters and ready for VSAN 6.2 😀

UPDATE (03/01/16) - I have also created a new ESXi 5.5 VA using vSphere 5.5 Update 3b

OVF Property Description Type
guestinfo.hostname FQDN of the ESXi host string
guestinfo.ipaddress IP Address string
guestinfo.vlan VLAN ID string
guestinfo.netmask Netmask string
guestinfo.gateway Gateway string
guestinfo.dns DNS Server string
guestinfo.domain DNS Domain string
guestinfo.ntp NTP Server string
guestinfo.ssh  Whether or not SSH is enabled boolean
guestinfo.syslog Syslog Server string
guestinfo.password Root password for ESXi host string
guestinfo.createvmfs Whether to automatically create a VMFS datastore (datastore1) on the largest VMDK boolean

The ESXi 6.x Virtual Appliance includes the following configuration:

  • ESXi 6.0 Update 2
  • GuestType: ESXi 5.x (backwards compat)
  • vHW 10
  • 2 vCPU
  • 6GB vMEM
  • 2 x vmxnet3 vNIC
  • 1 x 2GB HDD (ESXi Installation)
  • 1 x 4GB SSD (for use w/VSAN, empty by default)
  • 1 x 8GB SSD (for use w/VSAN, empty by default)
  • VHV added (more info here)
  • dvFilter Mac Learn VMX params added (more info here)
  • disk.enableUUID VMX param added
  • VSAN traffic tagged on vmk0
  • Disabled VSAN device monitoring for home labs (more info here)

The ESXi 5.x Virtual Appliance includes the following configuration:

  • ESXi 5.5 Update 3b
  • GuestType: ESXi 5.x
  • vHW 10
  • 2 vCPU
  • 6GB vMEM
  • 2 x vmxnet3 vNIC
  • 1 x 2GB HDD (ESXi Installation)
  • 1 x 4GB SSD (for use w/VSAN, empty by default)
  • 1 x 8GB HDD (for use w/VSAN, empty by default)
  • VHV added (more info here)
  • dvFilter Mac Learn VMX params added (more info here)
  • disk.enableUUID VMX param added
  • VSAN traffic tagged on vmk0
  • Disabled VSAN device monitoring for home labs (more info here)
  • ESXi Embedded Host Client (more info here)

If you do not wish to use VSAN, there is an OVF property that allows you to specify whether or not a default VMFS datastore is created. You can increase the capacity of any of the disks after deployment (if you wish for the automatic VMFS creation, you will need to expand the disk prior to powering on the VM). Below are the different methods in which you can deploy the ESXi Virtual Appliance, which include vSphere, vCloud Air, Fusion, Workstation, Player & AppCatalyst. The idea is that you can easily set up Nested ESXi on any VMware based hypervisor and be up and running in just minutes!

Option 1 - Deploy to vSphere environment w/vCenter Server or vCloud Air

Download the OVA (or you can just paste the URL link) and import it into your vCenter Server using the vSphere Web Client. Make sure to accept the "Extra Configuration" when prompted and then fill out the 11 OVF properties, which will allow you to customize the Nested ESXi VM for your environment. If you wish to increase the VMDK capacity and have a VMFS datastore created for you automatically, be sure to expand the VMDK prior to powering on the VM. Once powered on, the customization process will start, and in a few minutes you should have a fully functional Nested ESXi VM. Below are a couple of screenshots.

Screen Shot 2015-12-10 at 1.35.02 PM
Screen Shot 2015-12-10 at 1.36.04 PM
Screen Shot 2015-12-10 at 1.38.51 PM

Option 2 - Deploy to ESXi

Download the OVA and import it into ESXi using the vSphere C# Client, but do not power it on. Since ESXi does not support OVF properties, you will need to add the guestinfo.* properties shown in "Option 3" below using the VM Advanced Settings UI.

Screen Shot 2015-12-11 at 6.48.51 AM
If you prefer NOT to mess around with manually adding these VM Advanced Settings (which can also be automated using the vSphere API or PowerCLI), one additional method which CAN make use of the OVF properties is ovftool, a CLI that can import the OVA using the --injectOvfEnv option added in ovftool version 4.x. You can find more details in this blog post here.
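As a sketch of what that looks like, here is how such a command line might be assembled in Python; the host URL and property values are placeholders for your own lab, and you should check `ovftool --help` for the exact options supported by your version.

```python
# Sketch: assemble an ovftool 4.x command that injects the OVF
# environment when deploying the appliance directly to an ESXi host.
# Host URL and property values below are placeholders for your lab.
properties = {
    "guestinfo.hostname": "vsan-1.primp-industries.com",
    "guestinfo.ipaddress": "172.16.78.90",
    "guestinfo.netmask": "255.255.255.0",
    "guestinfo.gateway": "172.16.78.1",
}

cmd = (
    ["ovftool", "--acceptAllEulas", "--allowExtraConfig",
     "--X:injectOvfEnv", "--powerOn"]
    + [f"--prop:{key}={value}" for key, value in properties.items()]
    + ["Nested_ESXi_Appliance.ova", "vi://root@esxi-host/"]
)
print(" ".join(cmd))  # pass cmd to subprocess.run(cmd) to execute
```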

Option 3 - Deploy to VMware Workstation, Fusion or Player

Download the OVA and import it into Fusion/Workstation, but do not power it on. You will then need to edit the VMX file and add the following guestinfo.* properties as shown below, since Fusion/Workstation do not support OVF properties. If you wish to increase the VMDK capacity and have a VMFS datastore created for you automatically, be sure to expand the VMDK prior to powering on the VM. Once you have saved the changes, you can power on the Nested ESXi VM; the customization process will start, and in a few minutes you should have a fully functional Nested ESXi VM.

guestinfo.hostname = "vsan-1.primp-industries.com"
guestinfo.ipaddress = "172.16.78.90"
guestinfo.netmask = "255.255.255.0"
guestinfo.gateway = "172.16.78.1"
guestinfo.dns = "172.16.78.1"
guestinfo.domain = "primp-industries.com"
guestinfo.ntp = "172.16.78.1"
guestinfo.ssh = "True"
guestinfo.syslog = "192.168.1.100"
guestinfo.password = "VMware1!"
guestinfo.createvmfs = "False"
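If you are scripting the setup, appending these keys to the VMX can be automated; below is a small sketch. The VMX path is a placeholder, the VM must be powered off when you edit the file, and the values mirror the example above.

```python
# Sketch: append guestinfo.* customization keys to a Fusion/Workstation
# VMX file. The VM must be powered off; the path is a placeholder.
def append_guestinfo(vmx_path: str, settings: dict) -> None:
    with open(vmx_path, "a") as vmx:
        for key, value in settings.items():
            vmx.write(f'guestinfo.{key} = "{value}"\n')

append_guestinfo("Nested-ESXi.vmx", {
    "hostname": "vsan-1.primp-industries.com",
    "ipaddress": "172.16.78.90",
    "netmask": "255.255.255.0",
    "gateway": "172.16.78.1",
    "createvmfs": "False",
})
```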

Option 4 - Deploy to VMware AppCatalyst

Download the OVA and import it into AppCatalyst using ovftool, but do not power it on. You will then need to edit the VMX file and add the guestinfo.* properties shown in the Workstation/Fusion example above, in addition to the following params listed below, which are required for Nested ESXi to run in AppCatalyst. If you wish to increase the VMDK capacity and have a VMFS datastore created for you automatically, be sure to expand the VMDK prior to powering on the VM.

guestos = "vmkernel6"
virtualhw.version = "11"
svga.vgaOnly = "true"

You will also need to run the following command to allow promiscuous mode in AppCatalyst since there's no UI to prompt (this is only required once):

touch "/Library/Preferences/VMware AppCatalyst/promiscAuthorized"

Once you have saved the changes, you can then power on the Nested ESXi VM and the customization process will start and in a few minutes you should have a fully functional Nested ESXi VM.

Option 5 - Deploy to vSphere using PowerCLI

I have also created two very simple PowerCLI scripts which demonstrate how you can easily deploy N-number of these Nested ESXi VMs; in fact, you can set up a fully functional VSAN Cluster in just under 5 minutes! You can find the two scripts below:

  • deploy_esxi_vsan_appliance.ps1
  • add_esxi_vsan_appliance_to_cluster.ps1

The first script performs the deployment of the Nested ESXi VM, but you will first need to convert the OVA to an OVF because the Get-OvfConfiguration cmdlet does not support OVA. To properly convert the OVA, you will need ovftool; then run the following command:

ovftool.exe --allowAllExtraConfig --skipManifestCheck Nested_ESXi_Appliance.ova Nested_ESXi_Appliance.ovf

The second script will go ahead and add the deployed Nested ESXi VMs to a vSphere Cluster and then enable VSAN on the vSphere Cluster.


Filed Under: ESXi, Fusion, Home Lab, Nested Virtualization, Not Supported, vSphere, vSphere 6.0, vSphere 6.5, Workstation Tagged With: esxi, nested, nested virtualization, ova, vSphere 6.0 Update 1, vSphere 6.5

VMware Tools for Nested ESXi updated to v1.2

08/20/2015 by William Lam 10 Comments

I just wanted to give everyone a heads up that we have a minor update to the VMware Tools for Nested ESXi Fling (esx-tools-for-esxi-9.7.2-0.0.5911061.i386.vib) and you can find the list of changes below.

What's new in version 1.2:

  • Larger resource pool for programs started using the VIX/Guest Operations API (resolves this issue)
  • Now supports Guest Time Synchronization
  • No longer require -f flag to install the VIB

If you have any feedback, feel free to leave a comment here or on the Flings page.


Filed Under: ESXi, Nested Virtualization, vSphere 5.5, vSphere 6.0 Tagged With: esxi, nested, nested virtualization


Author

William Lam is a Senior Staff Solution Architect working in the VMware Cloud team within the Cloud Services Business Unit (CSBU) at VMware. He focuses on Automation, Integration and Operation for the VMware Cloud Software-Defined Datacenters (SDDC).