
virtuallyGhetto


Search Results for: Intel NUC

Running Nested ESXi / VSAN Home Lab on Ravello

04/14/2015 by William Lam 3 Comments

[Image: nested_esxi_on_ravello]
There are many options when it comes to building and running your own vSphere home lab. Each of these solutions has different pros and cons, and you will need to evaluate things like cost, performance, maintenance, ease of use and complexity, to name a few. Below is a list of the options currently available to you today.

Home Lab Options:


On-Premises

  • Using hardware on the VMware HCL
  • Using Apple Mac Mini, Intel NUC, etc.
  • Using whitebox or off the shelf hardware

Off-Premises (hosted)

  • VMware HOL
  • VMware vCloud Air or other vCloud Air Service Providers
  • Colocated labs

For example, you could purchase a couple of Apple Mac Minis and build out a decent size vSphere environment, but it could potentially be costly, not to mention a bit limited on memory options. Compared to other platforms, though, it is pretty energy efficient and easy to use and maintain. If you did not want to manage any hardware at all, you could look at a hosted or on-demand lab such as vCloud Air, which can run Nested ESXi unofficially, or any one of the many vCloud Air Service Providers. Heck, you could even use VMware Hands On Lab, though access will be limited as you will be constrained by the pre-built labs and would not be able to directly upload or download files to the lab. However, this could be a quick way to get access to an environment for testing and, best of all, it is 100% free. As you can see, there are many options for a home lab and it really just depends on your goals and what you are trying to accomplish.

Ravello says hello to Nested ESXi


Today, we have a new player entering the off-premises (hosted) options for running vSphere based home labs. I am pleased to announce that Ravello, a startup that uses Nested Virtualization to target dev/test workloads, has just introduced beta support for running Nested ESXi on their platform. I have written about Ravello in the past and you can find more details here. Ravello uses their own home-grown KVM-based nested hypervisor called HVX, which runs on top of a VM provisioned by either Amazon EC2 or Google Compute Engine. As you can imagine, this was not a trivial feature to add support for, especially since Intel VT/AMD-V, which is required to run ESXi, is not directly exposed to the virtual machines in EC2 or GCE. The folks over at Ravello have solved this in a very interesting way by "emulating" the capabilities of Intel VT/AMD-V using Binary Translation with direct execution.

Over the last month, I have had the privilege of getting early access to the Ravello platform with the Nested ESXi capability, and I have been providing early feedback to their R&D team to help ensure the best possible user experience for customers looking to run Nested ESXi on their platform. I have also spent quite a bit of time working out the proper workflow for getting Nested ESXi running and for quickly scaling up the number of nodes, which is especially useful when testing new features like VSAN 6.0. I have also been working with their team to develop a script that will allow users to quickly spin up as many Nested ESXi VMs as needed after a one-time initial preparation. This will greatly simplify deployments of more than a couple of Nested ESXi VMs. Hopefully I will be able to share more details about the script in the very near future.

Before jumping into the instructions for getting Nested ESXi running on the Ravello platform, I wanted to quickly highlight what is currently supported from a vSphere perspective, as well as some of the current limitations and caveats regarding Nested ESXi that you should be aware of. Lastly, I have also provided some details around pricing so the proper expectations are set if you are considering a vSphere home lab on Ravello. You can find more information in the next few sections, or you can go straight to the setup instructions.

Supports:


  • vCenter Server 5.x (Windows) & VCSA 5.x
  • vCenter Server 6.0 (Windows)
  • ESXi 5.x
  • ESXi 6.0

Caveats:


Coming from a pure vSphere background, I have enjoyed many of the simplicities that VMware has built into the core platform, such as support for OVF capabilities like Dynamic Disks and Deployment Options. While using the Ravello platform, I came across several limitations with respect to Nested ESXi and the VCSA. Below is a quick list of the caveats that I found while testing the platform; I have been told that many of these are being looked at and hopefully will be resolved in the future. Nonetheless, I wanted to make sure these were called out so that you go in with the right expectations.

  • There is currently no support for virtuallyGhetto's Nested ESXi /VSAN VM OVF Templates (though you can import the OVFs, most of the configurations are lost)
  • There is currently no support for VM Advanced Settings such as marking a VMDK as an SSD or enabling UUID for disks for example (configurations are not preserved through import)
  • There is currently no support for the VCSA 6.0 OVA due to a disk controller limitation plus the lack of OVF property support; you will need to use a Windows based vCenter Server for now (VCSA 5.5 is supported)
  • There is currently no OVF property support
  • There is currently no support for VMXNET3 for Nested ESXi VM, e1000 must be used due to a known network bug
  • Running Nested SMP-FT is not supported as 10Gbit vNICs are required and VMXNET3 is not currently supported

Pricing:


When publishing your Ravello Application, you have the option of selecting between two different deployment optimizations. The first is optimized for cost: if TCO is what you care most about, the platform will automatically select the cloud provider (EC2 or GCE) that is the cheapest to satisfy the requirements. The second option is to optimize based on performance; if selected, you can choose to place your application on either EC2 or GCE. In both cases, you will be provided with an estimated cost, broken down into compute, storage and networking, as well as a final cost (per hour). Once you agree to the terms, you can then click on the "publish" button, which will deploy your workload onto the selected cloud provider.

Here is a screenshot summary view of a Ravello Application which I built that consists of 65 VMs (1 Windows VM for vCenter Server and 64 Nested ESXi VMs), where I chose to optimize based on cost. The total price would be $17.894/hr.

[Image: ravello-vghetto-nested-esxi-vsan-6.0-64-Node-cost-optmized]
Note: Prices as of 04/05/2015

I also went through the exercise of pricing several more configurations to give you an idea of what the cost could be for varying sized environments. Below is a table for a 3 node, 32 node and 64 node VSAN setup (each including one additional VM for the vCenter Server). Note that the total price is the sum of the compute, storage and public IP costs; network transfer is billed separately per GB.

# of VMs | Optimization | Hosting Platform | Compute Cost | Storage Cost | Network Cost | Public IP Cost | Total Price
---------|--------------|------------------|--------------|--------------|--------------|----------------|------------
4        | Cost         | N/A              | $1.09/hr     | $0.0292/hr   | $0.15/GB     | $0.01/hr       | $1.1292/hr
4        | Performance  | Amazon           | $1.62/hr     | $0.0292/hr   | $0.15/GB     | $0.01/hr       | $1.6592/hr
4        | Performance  | Google           | $1.38/hr     | $0.0292/hr   | $0.15/GB     | $0.01/hr       | $1.4192/hr
33       | Cost         | N/A              | $8.92/hr     | $0.1693/hr   | $0.15/GB     | $0.01/hr       | $9.0993/hr
33       | Performance  | Amazon           | $13.22/hr    | $0.1693/hr   | $0.15/GB     | $0.01/hr       | $13.3993/hr
33       | Performance  | Google           | $11.24/hr    | $0.1693/hr   | $0.15/GB     | $0.01/hr       | $11.4193/hr
65       | Cost         | N/A              | $17.56/hr    | $0.324/hr    | $0.15/GB     | $0.01/hr       | $17.894/hr
65       | Performance  | Amazon           | $26.02/hr    | $0.324/hr    | $0.15/GB     | $0.01/hr       | $26.354/hr
65       | Performance  | Google           | $22.12/hr    | $0.324/hr    | $0.15/GB     | $0.01/hr       | $22.454/hr

How to Setup:


Here is the process for setting up Nested ESXi on the Ravello platform. The process consists of installing a single Nested ESXi VM and "preparing" it so that it can then be used later to deploy additional unique Nested ESXi instances from the Ravello Library.

Step 1 - Upload either an ESXi 5.x or 6.0 ISO to the Library using the Ravello VM Uploader tool, which you will be prompted to install.

[Screenshot]
Step 2 - Deploy the empty Ravello ESXi VM Template from the Library, which has already been prepared with the required CPU ID:

<ns1:cpuIds value="0000000768747541444d416369746e65" index="f00d"/>

Adding the above CPU ID enables the emulation of Intel VT-x/AMD-V. If you decide to create your own Ravello VM Template, you will need to perform this operation yourself, which is currently only available via their REST API; you can find more details here.
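
As a rough, hypothetical sketch of what that REST call might look like (the base URL, resource path and payload handling below are my assumptions, not Ravello's documented API; consult their API documentation for the real details), you would retrieve the VM definition, add the cpuIds element shown above, and push it back:

# Hypothetical sketch only: the URL and resource path are assumptions
curl -u user@example.com:password \
     -H "Content-Type: application/xml" \
     -X PUT \
     --data @vm-with-cpuids.xml \
     "https://cloud.ravellosystems.com/api/v1/applications/<appId>/vms/<vmId>"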

Step 3 - Add a CD-ROM device to the Nested ESXi VM by highlighting the ESXi VM and looking under the "Disks" section (yes, this was not intuitive for me either).

[Screenshot]
Once you have added the CD-ROM, you will want to mount the ESXi ISO.

Step 4 - Power on the Nested ESXi VM and perform a regular installation of ESXi as you normally would.

At this point, you have now successfully installed Nested ESXi on Ravello! The next series of steps is to "prepare" this ESXi image so that it can be duplicated (cloned) to deploy additional instances without causing conflicts; otherwise, you would have to perform the installation N more times for additional nodes, which I am sure many of you would not want to do. The steps outlined here follow the process I documented in my How to properly clone a Nested ESXi VM? article.

Step 5 - Login to the console of ESXi VM and run the following ESXCLI command:

esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
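
To confirm the setting was applied, you can list the option afterwards:

esxcli system settings advanced list -o /Net/FollowHardwareMac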

Note: If you wish to connect to the ESXi VM directly for ease of use, versus going through the remote console, you can go to the "Services" tab for the VM and enable external access as seen in the screenshot below.

[Image: ravello-networking]
Step 6 - Edit /etc/vmware/esx.conf and remove the uuid entry, then run /sbin/auto-backup.sh to ensure the changes are saved.
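
If you prefer to script this rather than hand-edit the file, a one-liner along these lines should work (this mirrors the approach from the cloning article referenced above; double-check esx.conf afterwards):

# Remove the /system/uuid entry so a fresh UUID is generated on next boot
sed -i 's@/system/uuid.*@@' /etc/vmware/esx.conf
# Persist the change across reboots
/sbin/auto-backup.sh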

At this point, you have prepared a vanilla Nested ESXi VM. You can save this image into the Ravello Library and deploy additional instances from it. By default, the Ravello platform is set up for DHCP; you can of course change this to a DHCP reservation so you get a particular IP Address, or specify a static IP Address assignment.
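
For example, a static assignment on the management VMkernel interface would look something like this (the interface name, addresses and netmask below are placeholders for your environment):

# Assign a static IPv4 address to vmk0
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.0.50 -N 255.255.255.0
# Set the default gateway
esxcli network ip route ipv4 add -n default -g 192.168.0.1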

If you wish to prepare the Nested ESXi VM for use with VSAN, then you will need to run through these additional steps:

  • Create a claim rule to mark the 4GB VMDK as SSD
  • Enable VSAN traffic type on vmk0

Step 7 - I have also enabled remote logging as well as suppressed any shell warnings; you just need to run the commands below within the ESXi Shell:

# Find the mpx device name of the 4GB (4096 MB) VMDK that will be tagged as the SSD
DEVICE=$(esxcli storage core device list | grep -iE '(   Display Name: |   Size: )' | grep -B1 4096 | grep mpx | awk -F '(' '{print $2}' | sed 's/)//g')
# Create a claim rule to mark the device as an SSD and reclaim it
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d $DEVICE -o enable_ssd
esxcli storage core claiming reclaim -d $DEVICE
# Enable the VSAN traffic type on vmk0
esxcli vsan network ipv4 add -i vmk0
# Enable remote syslog (adjust the loghost for your environment) and suppress the shell warning
esxcli system syslog config set --loghost=10.0.0.100
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

Step 8 - If you wish to set up 32 nodes with VSAN 1.0, then you will need to run this additional command:

esxcli system settings advanced set -o /CMMDS/goto11 -i 1

If you wish to set up 64 nodes with VSAN 6.0, then you will need to run this additional command:

esxcli system settings advanced set -o /VSAN/goto11 -i 1

At this point, you have completed preparing your Nested ESXi VM. You can now save your image to the Ravello Library, and once that has been done, you can easily clone additional Nested ESXi instances by simply dragging and dropping them onto your canvas from the Ravello Library. For vCenter Server, if you are setting up a vSphere 5.x environment, you will need to upload the VCSA and go through the normal configuration using the VAMI UI. For vCenter Server 6.0, you will not be able to use the VCSA 6.0 because of a platform limitation today; for now, you will need to deploy a Windows VM and run the vCenter Server 6.0 installer in it.

I of course had some fun with the Ravello platform, and below are some screenshots of running both a 32 node VSAN Cluster (vSphere 5.5) as well as a 64 node VSAN Cluster (vSphere 6.0). Overall, I thought it was a pretty good experience. There was definitely some sluggishness while installing the vCenter Server bits and navigating through the vSphere Web Client; the installation took a little over 40min, which was almost double the amount of time that I have seen in my home lab. I was told that VNC might perform better than RDP, though RDP is what the Ravello folks recommend for connecting to a Windows based desktop. It is great to see another option for running vSphere home labs; I think the performance is probably acceptable for most people and hopefully it will continue to improve in the future. I definitely recommend giving Ravello a try and who knows, it might be the platform of choice for your vSphere home lab.

Nested ESXi 5.5 running 32 Node VSAN Cluster:

[Image: vghetto-nested-esxi-5.5-32-node-cluster-ravello-1]

[Image: vghetto-nested-esxi-5.5-32-node-cluster-ravello-0]

Nested ESXi 6.0 running 64 Node VSAN Cluster:

[Image: vghetto-nested-esxi-64-node-cluster-ravello-1]

[Image: vghetto-nested-esxi-64-node-cluster-ravello-0]

Filed Under: ESXi, Home Lab, Nested Virtualization, vSphere Tagged With: homelab, intel vt, nested, nested virtualization, ravello

Home Labs made easier with VSAN 6.0 + USB Disks

03/04/2015 by William Lam 23 Comments

VSAN 6.0 includes a large number of new enhancements and capabilities that I am sure many of you are excited to try out in your lab. One of the challenges with running VSAN in a home lab environment (non-Nested ESXi) is trying to find a platform that is both functional and cost effective. Some of the most popular platforms that I have seen customers use for running VSAN in their home labs are the Intel NUC and the Apple Mac Mini. Putting aside the memory constraints of these platforms, the number of internal disk slots is usually limited to two. That is just enough to meet the minimum requirement for VSAN of at least a single SSD and one magnetic disk (MD).

If you wanted to scale up and add additional drives, either for capacity purposes or for testing out new configurations, you were pretty much out of luck, right? Well, not necessarily. During the development of VSAN 6.0, I came across a cool little nugget from one of the VSAN Engineers: USB-based disks can be claimed by VSAN, which could be quite helpful for testing in a lab environment, especially on the hardware platforms that I mentioned earlier.

For a VSAN home lab, using cheap consumer USB-based disks (you can purchase several TBs for less than a hundred dollars or so) along with USB 3.0 connectivity is a pretty cost-effective way to enhance hardware platforms like the Apple Mac Mini and Intel NUC.

Disclaimer: This is not officially supported by VMware and should not be used in production or for evaluation of VSAN, especially when it comes to performance or expected behavior, as this is not how the product is intended to work. Please use supported hardware found on the VMware VSAN HCL for official testing or evaluations.

Below are the instructions on how to enable USB-based disks to be claimable by VSAN.

Step 1 - Disable the USB Arbitrator service so that USB devices can be seen by the ESXi host by running the following two commands in the ESXi Shell:

/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off

[Image: vsan-usb-disk-1]
Step 2 - Enable the following ESXi Advanced Setting (/VSAN/AllowUsbDisks) to allow USB disks to be claimed by VSAN by running the following command in the ESXi Shell:

esxcli system settings advanced set -o /VSAN/AllowUsbDisks -i 1

[Image: vsan-usb-disk-2]
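
You can confirm the option took effect by listing it:

esxcli system settings advanced list -o /VSAN/AllowUsbDisks
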
Step 3 - Connect your USB-based disks to your ESXi host (this can actually be done prior) and you can verify that they are seen by running the following command in the ESXi Shell:

vdq -q

[Image: vsan-usb-disk-3]
Step 4 - If you are bootstrapping vCenter Server onto the VSAN Datastore, then you can create a VSAN Cluster by running "esxcli vsan cluster new" and then contribute the storage by adding the SSD device and the respective USB-based disks using the information from the previous step in the ESXi Shell:

esxcli vsan storage add -s t10.ATA_____Corsair_Force_GT________________________12136500000013420576 -d mpx.vmhba32:C0:T0:L0 -d mpx.vmhba33:C0:T0:L0 -d mpx.vmhba34:C0:T0:L0 -d mpx.vmhba40:C0:T0:L0

[Image: vsan-usb-disk-4]
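
For reference, the cluster creation mentioned in Step 4 is itself just a single command, and the get subcommand lets you verify that the host has formed the cluster:

esxcli vsan cluster new
esxcli vsan cluster get
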
If we take a look at the VSAN configuration in the vSphere Web Client, we can see that we now have 4 USB-based disks contributing storage to the VSAN Disk Group. In this particular configuration, I was using my Mac Mini, which has 4 x USB 3.0 devices connected and providing the "MD" disks, plus one of the internal drives, which is an SSD. Ideally, you would probably want to boot ESXi from a USB device and then claim one of the internal drives along with 3 other USB devices for the most optimal configuration.

[Image: vsan-usb-disk-5]
As a bonus, there is one other nugget that I discovered while testing out the USB-based disks for VSAN 6.0: another hidden option to support iSCSI-based disks with VSAN. You will need to enable the option called /VSAN/AllowISCSIDisks using the same method as the USB-based disk option. This is not something I have personally tested, so YMMV, but I suspect it will allow VSAN to claim an iSCSI device that has been connected to an ESXi host and allow it to contribute to a VSAN Disk Group, as another way of providing additional capacity on platforms with a restricted number of disk slots. Remember, neither of these solutions should be used beyond home labs and they are not officially supported by VMware, so do not bother trying to do anything fancy or running performance tests; you are just going to let yourself down and not see the full potential of VSAN 🙂
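
For completeness, enabling that hidden option would presumably mirror the USB one above (again, untested and unsupported):

esxcli system settings advanced set -o /VSAN/AllowISCSIDisks -i 1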

Filed Under: Apple, ESXCLI, ESXi, Home Lab, Not Supported, VSAN, vSphere 6.0 Tagged With: AllowISCSIDisks, AllowUsbDisks, apple, esxcli, mac mini, usb, Virtual SAN, VSAN, vSphere 6.0

VSAN

Here is a consolidated page of all VSAN related articles on virtuallyGhetto

VSAN 6.6

  • Native VCSA bootstrap installer in vSAN 6.6
  • Easily try out vSAN 6.6 Encryption feature using KMIP Docker Container
  • New vSAN Management 6.6 API / SDKs / CLIs
  • SMART drive data now available using vSAN Management 6.6 API
  • Getting started w/the new PowerCLI 6.5.1 Get-VsanView cmdlet
  • Automating the new native VCSA bootstrap “Easy Install” in vSAN 6.6
  • Managing & silencing vSAN Health Checks using PowerCLI
  • Correlating vSAN perf metrics from vSphere Web Client to both PowerCLI & vSAN Mgmt API
  • How to move vSAN Datastore into a Folder?
  • How to convert vSAN RVC commands into PowerCLI and/or other vSphere SDKs?
  • How to check when the VSAN Hardware Compatibility List (HCL) is updated?

VSAN 6.2

  • VSAN 6.2 extends vSphere API to include new VSAN Management APIs
  • Quick Tip – VSAN 6.2 (vSphere 6.0 Update 2) now supports creating all-flash diskgroup using ESXCLI
  • VSAN 6.2 (vSphere 6.0 Update 2) homelab on 6th Gen Intel NUC
  • Getting started with the new VSAN 6.2 Management API
  • VSAN Nested ESXi 6.x Virtual Appliance
  • ESXi on the new Intel NUC Skull Canyon
  • VSAN Management 6.2 API Quick Reference
  • Heads Up: OVF/OVA always deployed as Thick on VSAN when using vSphere Web Client
  • How to tell if an ESXi host is a VSAN Witness Virtual Appliance programmatically?

VSAN 6.1

  • How to deploy and run the VSAN 6.1 Witness Virtual Appliance on VMware Fusion & Workstation?
  • Override default VSAN Maintenance (decommission) Mode in VSAN 6.1
  • Automating full configuration of a VSAN Stretched Cluster using RVC
  • VSAN Nested ESXi 6.x Virtual Appliance

VSAN 6.0

  • Updated VSAN 6.0 Nested ESXi OVF Templates for 64 Nodes, All-Flash Array & Fault Domain Testing
  • How to configure an All-Flash VSAN 6.0 Configuration using Nested ESXi?
  • New vSphere 6.0 APIs for VSAN, VVOLs, NFS v4.1 & more!
  • Home Labs made easier with VSAN 6.0 + USB Disks
  • New VOBs for creating vCenter Server alarms in vSphere 6.0
  • How to download offline VSAN HCL file for VSAN Health Check Plugin?

VSAN 1.0

  • How to bootstrap vCenter Server onto a single VSAN node Part 1?
  • How to bootstrap vCenter Server onto a single VSAN node Part 2?
  • How to quickly setup and test VMware VSAN (Virtual SAN) using Nested ESXi
  • A killer custom Apple Mac Mini setup running VSAN
  • Automate VSAN Observer offline mode configurations
  • How to move a VSAN Cluster from one vCenter Server to another?
  • Does VSAN work with Free ESXi?
  • ESXi 5.5 Kickstart script for setting up VSAN
  • Does reinstalling ESXi with an existing VSAN Datastore wipe your data?
  • Quick Tip – Steps to shutdown/startup VSAN Cluster w/vCenter running on VSAN Datastore
  • Quick stats for the VSAN HCL
  • “Community” VSAN Storage Controller Queue Depth List
  • VMware VSAN APIs
  • Exploring VSAN APIs Part 10 – VSAN Disk Health
  • Exploring VSAN APIs Part 9 – VSAN Component count
  • Exploring VSAN APIs Part 8 – Maintenance Mode
  • Exploring VSAN APIs Part 7 – VSAN Datastore Folder Management
  • Exploring VSAN APIs Part 6 – Modifying Virtual Machine VM Storage Policy
  • Exploring VSAN APIs Part 5 – VSAN Host Status
  • Exploring VSAN APIs Part 4 – VSAN Disk Mappings
  • Exploring VSAN APIs Part 3 – Enable VSAN Traffic Type
  • Exploring VSAN APIs Part 2 – Query available SSDs
  • Exploring VSAN APIs Part 1 – Enable VSAN Cluster
  • Restoring VSAN VM Storage Policies without vCenter Part 1: Using cmmds-tool
  • Restoring VSAN VM Storage Policy without vCenter Part 2: Using vSphere API
  • Extending VSAN capabilities in the vSphere Web Client using vCO
  • How to run the VSAN Observer in “collection” mode in the background?
  • VSAN Flash/MD capacity reporting
  • Handy VSAN VOBs for creating vCenter Alarms
  • OVF template for creating Nested ESXi 3 or 32 node VSAN Cluster
  • How to automatically monitor VSAN Component threshold using a vCenter Alarm?
  • VSAN vCheck Plugins
  • VSAN Configuration Maximum Query Script
  • Quick Tip – Increasing capacity on a Nested VSAN Datastore
  • Re: Host is in a VSAN enabled cluster but does not have VSAN service enabled
  • How to bootstrap Horizon View 5.3.1 onto a VSAN Datastore using VCT
  • Required ESXi advanced setting to support 16+ node VSAN Cluster
  • Why you should rename the default VSAN Datastore name
  • vdq – A useful little VSAN utility
  • How to upgrade to the latest VSAN Beta Refresh of RVC on Windows?
  • How to run Nested ESXi on top of a VSAN datastore?
  • Additional steps required to completely disable VSAN on ESXi host
  • Why is my VSAN Component maximum showing less than 3000?

Homelab considerations for vSphere 7

03/30/2020 by William Lam 98 Comments

With the vSphere 7 Launch Event just a few days away, I know many of you are eager to get your hands on this latest release of vSphere and start playing with it in your homelab. A number of folks in the VMware community have already started covering some of the amazing capabilities that will be introduced in vSphere and vSAN 7, and I expect to see that ramp up even more in the coming weeks.

One area that I have not seen much coverage on is homelab usage with vSphere 7. Given that this is a pretty significant release, I think there are some things you should be aware of before you rush out and immediately upgrade your existing homelab environment. As with any vSphere release, you should always carefully review the release notes when they are made available and verify that the hardware and its underlying components are officially on the VMware HCL; this is the only way to ensure that you will have a good and working experience.

Having said that, here are just a few of the observations that I have made while running pre-GA builds of vSphere 7 in my own personal homelab. This is not an exhaustive list and I will try to update this article as more information is made available.

Disclaimer: The following considerations are based on my own personal homelab experience using a pre-GA build of vSphere 7 and do not reflect any official support or guidance from VMware. Please use these recommendations at your own risk.

[Read more...] about Homelab considerations for vSphere 7

Filed Under: Home Lab, vSphere 7.0 Tagged With: ESXi 7.0, homelab, Intel NUC, Supermicro, usb network adapter, vmklinux, vSphere 7

USB Native Driver Fling for ESXi adds support for Multi-Gig (1G/2.5G/5G) Adapter

09/27/2019 by William Lam 9 Comments

Today, we have an exciting update on our USB Network Native Driver for ESXi Fling, which has had two updates since its release earlier this year and has been extremely well received by the VMware community. As many of you know, I am always on the lookout for new and innovative tech that can help enable our customers, especially when it comes to building home labs to learn about the latest and greatest VMware software.

UPDATE (06/08/20) - QNAP has just published updated firmware for their QNA-UC5G1T USB NIC which resolves some of the performance issues observed with the initial release.

Several months back, I came to learn about a really cool USB-based Multi-Gigabit Network Adapter (QNA-UC5G1T) from QNAP which can negotiate speeds of 1Gbps, 2.5Gbps and 5Gbps. I was not familiar with the multi-gig specification, but it looks like it was created as a standard back in 2016 as IEEE 802.3bz. It initially evolved from advancements in wireless technology, but more recently it has started to make its way into ethernet-based devices.

Although this particular device is from QNAP, the underlying chipset is actually from Aquantia, now part of Marvell. If the name sounds familiar, it should: Aquantia is also Apple's vendor for the 10GbE NICs in both the 2018 Mac Mini and the new iMac Pro. In fact, their chipsets are also used in a number of Thunderbolt 3 to 10GbE NICs which also work with ESXi. Access to 10GbE is more common these days, but it is certainly not for everyone, and not all platforms can be expanded to support it.


The QNA-UC5G1T device is not only small, but because it is USB-based, you are more likely to have spare USB ports on your system than, say, a traditional PCIe slot or Thunderbolt 3 port. From a cost standpoint, this device is about half the cost of the 10GbE Thunderbolt adapter, coming in at $79 USD, and can be ordered from Amazon. As far as I know, QNAP is the only vendor who has produced a multi-gig USB adapter, but perhaps in the future there will be other vendors.

[Read more...] about USB Native Driver Fling for ESXi adds support for Multi-Gig (1G/2.5G/5G) Adapter

Filed Under: ESXi, Home Lab, Not Supported, vSphere Tagged With: 2.5GbE, 5GbE, Aquantia, esxi 6.5, esxi 6.7, multi-gig, native device driver, QNAP, usb ethernet adapter, usb network adapter

vSphere 6.0 Update 2 hints at Nested ESXi support for Paravirtual SCSI (PVSCSI) in the future

03/14/2016 by William Lam 6 Comments

Although Nested ESXi (running ESXi in a Virtual Machine) is not officially supported today, VMware Engineering continues to enhance this widely used feature by making it faster, more reliable and easier to consume for our customers. I still remember that it was not too long ago that if you wanted to run Nested ESXi, several non-trivial and manual tweaks to the VM's VMX file were required. This made the process of consuming Nested ESXi potentially very error prone and provided a less than ideal user experience.

Things have definitely improved since the early days, and here are just some of the visible improvements over the last few years:

  • Prior to vSphere 5.1, enabling Virtual Hardware Assisted Virtualization (VHV) required manual edits to the VMX file, and even earlier versions required several VMX entries (see the sketch after this list). VHV can now be easily enabled using either the vSphere Web Client or the vSphere API.
  • Prior to vSphere 5.1, only the e1000{e} network driver was supported with Nested ESXi VMs, and although it was functional, it also limited the types of use cases you might have for Nested ESXi. A native driver for VMXNET3 was added in vSphere 5.1, which not only brought the performance of the optimized VMXNET3 driver but also enabled new use cases, such as testing SMP-FT, since it was now possible to present a 10GbE interface to a Nested ESXi VM versus the traditional 1GbE with the e1000{e} driver.
  • Prior to vSphere 6.0, selecting an ESXi GuestOS was not available in the "Create VM" wizard, which meant you had to resort to re-editing the VM after initial creation or using the vSphere API. You can now select the specific ESXi GuestOS type directly in the vSphere Web/C# Client.
  • Prior to vSphere 6.0, the only way to cleanly shutdown or power cycle a Nested ESXi VM was to perform the operation from within the system, as there was no VMware Tools support. This changed with the development of a VMware Tools daemon specifically for Nested ESXi, which started out as a VMware Fling. With vSphere 6.0, VMware Tools for Nested ESXi is pre-installed by default and automatically starts up when it detects that it is running as a VM. In addition to the power operations provided by VMware Tools, it also enabled the use of the Guest Operations API, which is quite popular from an Automation standpoint.
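
For historical context, here is roughly what those manual VHV tweaks looked like; the exact entries varied by release, so treat this as a sketch rather than a reference:

# vSphere 5.0: host-wide setting added to /etc/vmware/config on the physical ESXi host
vhv.allow = "TRUE"

# vSphere 5.1 and later: per-VM entry in the Nested ESXi VM's VMX file
# (the equivalent of the "Expose hardware assisted virtualization to the guest OS" checkbox)
vhv.enable = "TRUE"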

Yesterday, while working in my new vSphere 6.0 Update 2 home lab, I needed to create a new Nested ESXi VM and noticed something really interesting. I used the vSphere Web Client like I normally would, and when I went to select the GuestOS type, I discovered an interesting new option, which you can see in the screenshot below.

[Image: nested-esxi-changes-in-vsphere60u2-3]
It is not uncommon to see VMware add experimental support for potentially new Guest Operating Systems in vSphere. Of course, there are no guarantees that these OSes will ever be supported, or even released for that matter.

What I found even more interesting was the default virtual hardware configuration that was recommended when selecting this new ESXi GuestOS type (vmkernel65). For the network adapter, it looks like the VMXNET3 driver is now recommended over the e1000e, and for the storage adapter, the VMware Paravirtual (PVSCSI) adapter is now recommended over the LSI Logic Parallel type. This is really interesting, as it is currently not possible to get the optimized, low-overhead PVSCSI adapter working with Nested ESXi, and this seems to indicate that PVSCSI support might actually be possible in the future! 🙂

[Image: nested-esxi-changes-in-vsphere60u2-1]
I of course tried to install the latest ESXi 6.0 Update 2 (not yet GA'ed) using this new ESXi GuestOS type and to no surprise, the ESXi installer was not able to detect any storage devices. I guess for now, we will just have to wait and see ...

Filed Under: ESXi, Nested Virtualization, Not Supported, vSphere 6.0 Tagged With: esxi, nested, nested virtualization, pvscsi, vmxnet3, vSphere 6.0 Update 2

