
virtuallyGhetto


vhv

Will Intel’s VMCS Shadowing Feature Benefit VMware’s Nested Virtualization?

06/18/2013 by William Lam 1 Comment

For many years now, VMware customers have been using Nested Virtualization, which is the ability to run a hypervisor such as vSphere ESXi within a virtual machine. Even though Nested Virtualization is not officially supported by VMware, customers have come to rely upon this technology for their lab environments and sometimes even production environments. VMware also heavily relies on this technology for its own internal development as well as its Hands On Lab for VMworld, which is now offered as an online SaaS (Software-as-a-Service) solution called Hands On Lab Online.

Performance of Nested Virtualization has come a long way since its first introduction and it continues to get better with advancements made in hardware from both Intel and AMD. A couple of months back, I came across an article discussing a new feature of the upcoming Intel Haswell processors called VMCS Shadowing, which aims to improve the performance of Nested Virtualization. This got me thinking about whether VMCS Shadowing could benefit VMware's Nested Virtualization.

VMCS (Virtual Machine Control Structure) Shadowing works by reducing the frequency with which the guest VMM (the hypervisor running inside a virtual machine) requires assistance from the parent VMM. Its goal is to eliminate the VM-exits caused by VMREAD and VMWRITE instructions executed by the guest hypervisor, though this comes at a slight expense.

I reached out to one of the core engineers who helped develop VMware's Nested Virtualization technology, Jim Mattson, and asked whether VMware would benefit from the VMCS Shadowing feature. Well, it turns out that VMCS Shadowing can help, but VMware has also done research in this area and developed technology that eliminates about 75% of the VM-exits due to VMREAD and VMWRITE when running guest VMware hypervisors, using some interesting software techniques. The details of these software techniques are published in a research paper called Software Techniques for Avoiding Hardware Virtualization Exits, available on VMware's Academic Program site, which is part of VMware Labs. Jim is one of the authors of the research paper and I would highly recommend you check it out if you are interested in more details.

To summarize, because of the techniques described in the paper, VMCS Shadowing will provide only a small benefit when running a VMware hypervisor as a virtual machine. However, it will greatly benefit other non-VMware hypervisors running as virtual machines; this is particularly true for hypervisors that perform an egregious number of VMREAD and VMWRITE operations that do not cluster well, such as VirtualBox.

The coolest part about the research and software techniques developed by Jim and team is that the technology has already been incorporated into the existing VMware vSphere ESXi, Workstation and Fusion products. I oftentimes forget that a lot of the awesome-sauce technology being developed by VMware starts out as academic research, and you can learn about other research topics by visiting VMware's Academic Program, which includes publications, research papers and the popular VMware Technical Journal.


Filed Under: Uncategorized Tagged With: AMD, Intel, nested, nested virtualization, vhv, VMCS, vmware

How To Enable Nested ESXi Using VXLAN In vSphere & vCloud Director

05/06/2013 by William Lam 9 Comments

Recently I received several inquiries asking how to configure nested ESXi (Nested Virtualization) to function in a VXLAN environment. I have written several articles in the past on configuring nested ESXi in a regular vSphere and vCloud Director environment, but with the use of a VXLAN-backed network, there are a few additional steps that are required. These steps include additional configuration of the vCloud Networking and Security (vCNS) Manager (previously known as vShield Manager), which ensures that both the required promiscuous mode and forged transmits are automatically enabled for the VXLAN virtual wires (vWires), as they are managed exclusively by the vCNS Manager.

In this article, I will walk you through the configuration that is required when using VXLAN in both a vSphere-only environment as well as a vCloud Director environment. If you would like to learn more about how VXLAN works, be sure to check out the multi-part VXLAN series (Part 1/Part 2) by Venky Deshpande.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Configurations for VXLAN in vSphere Environment

Step 1 - Deploy the vCNS Manager and configure it to point to your vCenter Server (do not enable or prepare VXLAN yet; this must be done after the following configurations).

Step 2 - You will need to identify the VDS MoRef ID in your vCenter Server which will be used in the next step. Since the configuration is applied at the VDS level, you may want to consider having a separate VDS serving Nested Virtualization traffic since both promiscuous mode & forged transmits will automatically be enabled for all vWires. To locate the VDS MoRef ID, login to the vSphere Web Client and select the summary view for the VDS.

The VDS MoRef ID will be towards the end of the URL and it should start with dvs-X, where X is some arbitrary number. Record this value for the next step.

Step 3 - Download the enablePromForVDS.sh shell script, which will be used to prepare the VDS within the vCNS Manager. The script basically performs a POST to the vCNS Manager's REST API using cURL and accepts three input parameters: the vCNS Manager IP Address/Hostname, the VDS MoRef ID and the VDS MTU. The username/password is hard-coded in the script to the default, which is admin/default. If you have modified the default password like any good admin, you will want to change the password before running the script. If you take a look at the request body, you will notice only promiscuous mode is set to true, but this will automatically enable forged transmits as well. A sketch of the script's core request is shown below.
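For reference, here is a minimal sketch of what the script's core request might look like. The vdn/switches endpoint and the XML request body are assumptions based on the vCNS 5.1 REST API, so treat this as illustrative rather than a verbatim copy of the actual script:

    #!/bin/bash
    # Sketch of enablePromForVDS.sh - prepares a VDS in the vCNS Manager
    # Usage: ./enablePromForVDS.sh <vCNS IP/hostname> <VDS MoRef ID> <VDS MTU>
    VCNS_HOST=$1
    VDS_MOREF=$2
    VDS_MTU=$3

    # Default credentials are admin/default; update if you have changed them
    curl -i -k -u admin:default -H "Content-Type: application/xml" \
        -X POST "https://${VCNS_HOST}/api/2.0/vdn/switches" \
        -d "<vdsContext><switch><objectId>${VDS_MOREF}</objectId></switch><mtu>${VDS_MTU}</mtu><promiscuousMode>true</promiscuousMode></vdsContext>"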

In my lab environment, the vCNS Manager IP is 172.30.0.196, the VDS MoRef ID is dvs-13 and the VDS MTU is 9000. So the syntax to run the script would be:

./enablePromForVDS.sh 172.30.0.196 dvs-13 9000

Here is a screenshot of executing the script; you should see a response of 200 to indicate successful execution.

Step 4 - Now we will proceed with the VXLAN preparation. Start off by logging into the vCNS Manager and selecting the vSphere Datacenter for which you wish to enable VXLAN. On the right you should see a tab called "Network Virtualization"; go ahead and click on that, and then click on the sub-tab called "Preparation". Click on edit, then select the vSphere Cluster and proceed through the wizard based on your environment configuration.

Step 5 - Once the VXLAN preparation has completed, click on "Segment ID" and configure it based on your environment.

Step 6 - Next, click on "Network Scopes", where you will create a network scope and specify the set of vSphere Clusters the VXLAN network will span.

Step 7 - Lastly, click on "Networks"; this is where you will create your vWires and ensure the proper network scope is selected.

Step 8 - To confirm that everything has been configured properly, log back into the vSphere Web Client and head over to the VDS settings page. You should now see a new vWire portgroup; if you take a look at its settings, you should see that both promiscuous mode and forged transmits are enabled.

You are now done with the VXLAN configurations in the vCNS Manager and can proceed to the regular instructions for enabling Nested ESXi for vSphere.

Note: If you have already prepared VXLAN in your environment, you can still apply the above configuration without having to un-prepare your VXLAN setup. You just need to log in to the vCNS Manager via the REST API and perform a DELETE on the VDS switch (please refer to page 153 of the vCNS API Programming Guide), which will just delete the mapping from vCNS but will not destroy any of your VDS configuration. Once that is done, you will be able to use the script to configure the VDS with the proper settings.
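Assuming the same vdn/switches endpoint as the script sketch above (the IP address and VDS MoRef ID below are the example values from my lab), the DELETE call would look something like this:

    # Delete only the vCNS mapping for the VDS; the VDS configuration itself is untouched
    curl -i -k -u admin:default -X DELETE "https://172.30.0.196/api/2.0/vdn/switches/dvs-13"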

Configurations for VXLAN in vCloud Director Environment

A VXLAN network pool is automatically created for you when using vCloud Director 5.1, so the steps for preparing Nested Virtualization for vCloud Director are extremely simple compared to the vSphere-only environment.

Note: VXLAN is only supported in vCloud Director 5.1; for previous versions you have the choice of using a VCD-NI or vSphere-backed network, and the configurations for that can be found here.

Step 1 - Please follow steps 1-5 from the vSphere-only environment above and then you are done. If you would like a more detailed walkthrough of configuring VXLAN for a vCloud Director environment, check out this article by Rawlinson Rivera, who takes you through the process step by step.

Step 2 - Proceed to the regular instructions for enabling Nested ESXi for vCloud Director.

Step 3 - Lastly, you will go through the vCloud Director setup: attach your vCenter Server & vCNS Manager, create a Provider VDC, create an Organization, assign resources to your Organization VDC, and ensure that the Org VDC is consuming the VXLAN network pool that is automatically created when you create the Provider VDC. Once that is done, when you deploy your vApp, you will see a vWire that is automatically created for you. If you log in to the vSphere Web Client and go to the VDS settings, you will see the vWire has both promiscuous mode and forged transmits automatically enabled.

Additional Resources:

  • Nested Virtualization Resources

Filed Under: Automation, Nested Virtualization, NSX Tagged With: nested, vcloud director 5.1, vcloud networking and security, vcns, vhv, vSphere 5.1, VXLAN

Nested Virtualization APIs For vSphere & vCloud Director 5.1

10/18/2012 by William Lam 2 Comments

Did you know that with the release of vSphere 5.1 and vCloud Director 5.1, there are now APIs that allow you to enable/disable Nested Virtualization, aka Virtual Hardware Virtualization (VHV), on a Virtual Machine?

Disclaimer: Nested Virtualization is not officially supported by VMware.

In the vSphere 5.1 API, there are two new properties related to Nested Virtualization:

  • nestedHVSupported is a new capability property which indicates whether your physical ESXi 5.1 host supports Nested Virtualization. This property is only true IF your physical CPU supports BOTH Intel VT-x and EPT, or BOTH AMD-V and RVI. For more details, please refer to this article.
  • nestedHVEnabled is a new property on a Virtual Machine which allows you to enable or disable Nested Virtualization. You will need to ensure you are running a Virtual Machine with ESXi 5.1 compatibility (e.g. virtual hardware 9) and that your physical ESXi 5.1 host supports Nested Virtualization (see the sketch after this list).
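As an aside, the nestedHVEnabled property corresponds to the vhv.enable setting in the Virtual Machine's .vmx configuration file. Here is a quick sketch of setting it from the ESXi Shell; the datastore path and VM name are hypothetical, and the VM must be powered off:

    # Append the VHV setting to the VM's configuration file (virtual hardware 9 required)
    echo 'vhv.enable = "TRUE"' >> /vmfs/volumes/datastore1/NestedESXi/NestedESXi.vmx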

Here is a screenshot of performing the same operation manually using the new vSphere Web Client:

In the vCloud Director 5.1 REST API, there are two new operations related to Nested Virtualization:

  • /action/enableNestedHypervisor is a new operation on a Virtual Machine that enables Nested Virtualization. Here is an example using the curl utility to enable VHV:

    curl -i -k -H "Accept:application/*+xml;version=5.1" -H "x-vcloud-authorization: xX4IrWWi+Ofq77zOqPJaMEYHHJt4jxrwP+ntkO2tecQ=" -X POST https://vcd.primp-industries.com/api/vApp/vm-d9870545-a29a-4175-bff4-ae075f1c1bc0/action/enableNestedHypervisor

  •  /action/disableNestedHypervisor is a new operation on a Virtual Machine that disables Nested Virtualization. Here is an example using the curl utility to disable VHV:

    curl -i -k -H "Accept:application/*+xml;version=5.1" -H "x-vcloud-authorization: xX4IrWWi+Ofq77zOqPJaMEYHHJt4jxrwP+ntkO2tecQ=" -X POST https://vcd.primp-industries.com/api/vApp/vm-d9870545-a29a-4175-bff4-ae075f1c1bc0/action/disableNestedHypervisor

Here is a screenshot of performing the same operation manually using the vCloud Director UI:

If you plan to leverage Nested Virtualization in your environment, you now have a simple way of automating this feature for the Virtual Machines on which you wish to support Nested Virtualization.
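For example, building on the cURL calls above, a simple shell loop can enable VHV across a set of VMs. The vm-hrefs.txt file and VCD_TOKEN variable are hypothetical placeholders for your own list of VM URLs and your vCloud session token:

    # Enable VHV on each vCloud Director VM URL listed (one per line) in vm-hrefs.txt
    while read VM_HREF; do
        curl -i -k -H "Accept:application/*+xml;version=5.1" \
            -H "x-vcloud-authorization: ${VCD_TOKEN}" \
            -X POST "${VM_HREF}/action/enableNestedHypervisor"
    done < vm-hrefs.txt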


Filed Under: Uncategorized Tagged With: api, esxi5.1, nested, REST API, vcloud director 5.1, vesxi, vhv, vSphere 5.1

Nested Virtualization Resources

10/04/2012 by William Lam 7 Comments

Here is a consolidated page of all the articles that I have written about Nested Virtualization (nested ESXi, Hyper-V, etc.) and all the goodies that are "Not Supported".

vSphere / vCloud 5.1

  • Having Difficulties Enabling Nested ESXi in vSphere 5.1?
  • How to Enable Nested ESXi & Other Hypervisors in vSphere 5.1
  • How to Enable Nested ESXi & Other Hypervisors in vCloud Director 5.1

vSphere / vCloud 5.0

  • How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5
  • The Missing Piece In Creating Your Own Ghetto vSEL Cloud

Additional Info/Tips/Tricks

  • Nested ESXi 5.1 Supports VMXNET3 Network Adapter Type
  • How to Configure Nested ESXi 5 to Support EVC Clusters
  • How to Enable Nested vFT (virtual Fault Tolerance) in vSphere 5
  • How to Install VMware VSA in Nested ESXi 5 Host Using the GUI
  • Cool Undocumented Features in vCloud Director 1.5
  • The Missing Piece In Creating Your Own Ghetto vSEL Cloud
  • Nested Virtualization APIs For vSphere & vCloud Director 5.1
  • How To Enable Nested ESXi Using VXLAN In vSphere & vCloud Director 
  • Will Intel’s VMCS Shadowing Feature Benefit VMware’s Nested Virtualization?
  • How to run Nested RHEV Hypervisor on ESXi? 
  • How to quickly setup and test VMware VSAN (Virtual SAN) using Nested ESXi
  • How to run Nested ESXi on top of a VSAN datastore? 
  • VMware Tools for Nested ESXi 
  • Why is Promiscuous Mode & Forged Transmits required for Nested ESXi?
  • How to properly clone a Nested ESXi VM?

Filed Under: Uncategorized Tagged With: amd-v, ept, esxi, esxi 5, esxi4, esxi4.1, esxi5.1, hyper-v, intel vt, nested, rvi, vhv, virtual hardware virtualization, vSphere, vSphere 4, vSphere 5, vSphere 5.1

Having Difficulties Enabling Nested ESXi in vSphere 5.1?

09/29/2012 by William Lam 21 Comments

I noticed there were a few folks having some difficulties enabling Nested ESXi (VHV, Virtual Hardware Virtualization) in the latest release of ESXi 5.1, so I thought I would share some additional info and tips on troubleshooting your setup in case you are running into similar problems.

*** DISCLAIMER *** This is not officially supported by VMware; do not bother asking if it is supported or calling VMware support for details or help.

If you wish to run nested ESXi or other hypervisors on ESXi 5.1 and run 32-bit nested virtual machines, you must meet the following hardware requirement:

  • CPU supporting Intel VT-x or AMD-V

If you wish to run nested 64-bit virtual machines in your nested ESXi or other hypervisors, in addition to the requirement above, you must also meet the following hardware requirement:

  • CPU supporting Intel EPT or AMD RVI

If you only meet the first requirement, you CAN still install nested ESXi or other hypervisors on ESXi 5.1, BUT you will only be able to run 32-bit nested virtual machines. When you create your virtual machine shell using the new vSphere Web Client, in the expanded CPU view, the "Hardware Virtualization" box will be grayed out. This is expected as you do not have full support for VHV, but you can still continue with your installation of ESXi or other hypervisors.

In ESXi 5.0, you may have been able to run 64-bit nested virtual machines without EPT/RVI support but performance was extremely poor. With ESXi 5.1, VHV now requires EPT/RVI.

Note: During the installation of ESXi, you may see the following message "No Hardware Virtualization Support", you can just ignore it.

If you are using sites such as Intel's ark.intel.com to check your CPU requirements, be aware that it is COMMON even for the hardware vendors to publish incorrect information on their websites. However, there is a quick way to validate on your ESXi host whether you have full VHV support.

In vSphere 5.1, there is a new capability property called nestedHVSupported which specifies whether your physical ESXi 5.1 host has full VHV support. This property will only be true IF your CPU supports both Intel VT-x and EPT, or both AMD-V and RVI. A quick and easy way to validate this is to use the vSphere MOB to retrieve the value.

To check nestedHVSupported property, please enter the following into a web browser (substitute the IP Address/hostname of your ESXi host):

https://himalaya.primp-industries.com/mob/?moid=ha-host&doPath=capability

After you log in, search for the nestedHVSupported property on the page and you should see a value of either true or false. As mentioned earlier, if it is false, you might still be able to install nested ESXi or other hypervisors, but you will not be able to run nested 64-bit virtual machines. I would also recommend taking a look at your system BIOS to ensure features like Intel VT/EPT and AMD-V/RVI are enabled; sometimes the fix might be as simple as a BIOS upgrade (you can always confirm with the hardware vendor if you have further questions).
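If you prefer the command line, the same check can be scripted against the host MOB using cURL with basic authentication (the hostname below is the example from above; you will be prompted for the root password):

    # Query the host capability object and filter for the nested HV property
    curl -k -u root 'https://himalaya.primp-industries.com/mob/?moid=ha-host&doPath=capability' | grep -i nestedHVSupported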

For proper network connectivity, also ensure that either your standard vSwitch or Distributed Virtual Switch has both promiscuous mode and forged transmits enabled, either globally or on the specific portgroup or distributed portgroup your nested ESXi hosts are connected to.
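For a standard vSwitch, this can also be done from the ESXi Shell using esxcli; the example below assumes a vSwitch named vSwitch0. For a Distributed Virtual Switch, edit the security policy on the distributed portgroup in the vSphere Web Client instead:

    # Allow promiscuous mode and forged transmits on the standard vSwitch
    esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
        --allow-promiscuous=true --allow-forged-transmits=true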

Additional Resources: 

  • How to Enable Nested ESXi & Other Hypervisors in vSphere 5.1
  • How to Enable Nested ESXi & Other Hypervisors in vCloud Director 5.1

Filed Under: Uncategorized Tagged With: esxi5.1, hyper-v, nested, vcd, vcloud director 5.1, vesxi, vhv, vsel, vSphere 5.1

vInception #NotSupported Slides Posted

09/10/2012 by William Lam 4 Comments

I was pinged by a few folks asking if my #NotSupported session that I presented at VMworld US would be available online, so here is the slide deck to my vInception presentation.

I would also like to thank everyone that attended my session! I had a lot of fun and hopefully you did too!  

UPDATE: I just realized the livestream recording videos are online, but they are not very clear; apologies for that. I heard that better recordings from the vBrownbag crew should be up shortly, so once those are available, I will replace them on the site.

Part 1: (live stream recording from vmwarecommunitytv on livestream.com)

Part 2: (live stream recording from vmwarecommunitytv on livestream.com)

Filed Under: Uncategorized Tagged With: esxi, nested, nested ft, notsupported, vcloud director, vhv, vinception, vSphere

