
virtuallyGhetto


hyper-v

vCenter Host Gateway … more than meets the eye

04/10/2015 by William Lam

While going through the download motions like many of you when vSphere 6.0 became generally available, something that caught my eye in the vCenter Server download area was the vCenter Host Gateway (vCHG) virtual appliance. At first I did not know what it was; only after speaking to a few colleagues did I realize that vCHG is the evolution of the Multi-Hypervisor Management (MHM) Plugin, which provides vSphere Administrators a way to natively manage Hyper-V hosts within the vCenter Server UI. MHM was originally released as a Fling and later productized within the vCenter Server product. At the time, it made sense for the plugin to be Windows-based, as it needed to connect to Hyper-V, which obviously ran on Microsoft Windows.

It looks like the folks over on the MHM team have been quite busy: they have gotten rid of the Windows installer and now provide a virtual appliance which uses winrm to communicate directly with the Hyper-V hosts. In addition, you can now manage Hyper-V hosts within the vSphere Web Client, whereas previously this was only possible with the vSphere C# Client. vCHG works with both vCenter Server for Windows and the vCenter Server Appliance; no additional Windows hosts are required for this new solution. Deploying and configuring vCHG is relatively straightforward and you can find all the instructions here. There were a few minor gotchas that I ran into which I thought would be worth sharing, especially around figuring out what was needed on the Hyper-V hosts, mainly due to my lack of familiarity with winrm.

You have the option of configuring winrm to go over standard HTTP (port 5985) or HTTPS (port 5986) on the Hyper-V host, but the latter requires you to set up SSL certificates, which you can find more details on here. For that reason, I just went with the default HTTP method to quickly get going. To configure winrm, you will need to run the following command and accept the defaults with "y":

winrm quickconfig

Next, you will want to confirm that the winrm listener is active, as shown in the screenshot below, by enumerating the listeners with the following command:

winrm e winrm/config/listener

[Screenshot: vcenter-host-gateway-1]
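
If you would like to sanity-check the winrm configuration before heading over to vCenter Server, here is a small PowerShell sketch (the remote hostname is a placeholder of mine):

# Verify the local winrm service is answering and list the active listeners
Test-WSMan
winrm enumerate winrm/config/listener

# From another Windows machine, confirm the HTTP listener is reachable on port 5985
Test-WSMan -ComputerName hyperv-01.example.com -Port 5985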
At this point, you can log in to your vSphere Web Client to add a Hyper-V host, which must be added at the vSphere Datacenter level. This was another thing that I missed, as I thought it could be added into its own vSphere Cluster. As you can see from the screenshot below, we have extended our "Add Host" workflow to natively support Hyper-V hosts, so that it is intuitive and familiar for our vSphere Administrators.

[Screenshot: vcenter-host-gateway-0]
You will need to specify the hostname/IP Address of the Hyper-V host followed by the winrm port (e.g. hostname:5985) and then select the Type "Hyper-V". In just a few seconds, you will be able to see your Hyper-V host within vCenter Server and perform basic power operations as well as create and manage VMs running on Hyper-V. Below is a screenshot of my Hyper-V host right after I finished creating a new VM using the vSphere Web Client; you can see it seamlessly integrated into a single view.

[Screenshot: vcenter-host-gateway-2]

This is a great enhancement for customers who have to run mixed workloads across vSphere and Hyper-V (I do apologize to those of you in advance ;)), but at least you now truly have a single integrated pane of glass to manage all your workloads. I also want to stress the word "integrated" beyond just the UI component that vCHG provides. I have found that all the operations available in the vSphere Web Client are also exposed through our rich vSphere API; for example, the AddHost_Task() method now includes a new hostGateway spec. This also means you will be able to use all the existing power operation and VM creation methods against your Hyper-V hosts, again tightly integrated into the existing tools you are familiar with, such as PowerCLI for Automation. How freaking cool is that!?
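
To give you an idea of what this looks like from an Automation standpoint, below is a rough PowerCLI sketch of calling AddHost_Task() with a hostGateway spec. Treat the property and type names (HostGatewaySpec, GatewayType, the "hyperv" string) plus the hostnames and credentials as my own assumptions to verify against the vSphere 6.0 API Reference, not the definitive syntax:

Connect-VIServer -Server vcenter.example.com

# Build a HostConnectSpec pointing at the Hyper-V host and its winrm port
$spec = New-Object VMware.Vim.HostConnectSpec
$spec.HostName = "hyperv-01.example.com:5985"
$spec.UserName = "Administrator"
$spec.Password = "VMware1!"

# Assumption: the new vSphere 6.0 hostGateway spec with a Hyper-V gateway type
$spec.HostGateway = New-Object VMware.Vim.HostGatewaySpec
$spec.HostGateway.GatewayType = "hyperv"

# Hosts are added at the Datacenter level, mirroring the Web Client workflow
$hostFolder = Get-View (Get-Datacenter -Name "Datacenter").ExtensionData.HostFolder
$task = $hostFolder.AddHost_Task($spec, $true, $null, $null)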

but wait ... there's more! 😀

While going through the exercise of deploying vCHG and adding a Hyper-V host, I could not help but wonder why we named this feature "Host Gateway"; given that we only supported a single third-party hypervisor, the name did not really make sense to me. Well, it turns out there is a lot more coming! When you select the "Type" from the drop-down menu, I noticed there were a few more options: KVM and vCloud Air!

[Screenshot: vcenter-host-gateway-4]
Of course I tried to add a KVM host as well as my vCloud Air account, but it looks like those providers are not available just yet. I am actually quite excited to see support for vCloud Air. This has always been something I felt should be available natively within the vSphere inventory, so that an administrator could deploy workloads either locally on-premises or hosted on vCloud Air without having to jump around. It should align with the existing vSphere Administrator workflows, and I am glad to see this change. This is definitely an area that I recommend keeping an eye on, and hopefully we will see vCloud Air support soon!


Filed Under: vCloud Air, vSphere 6.0, vSphere Web Client Tagged With: hyper-v, kvm, mhm, multi-hypervisor, vcenter host gateway, vchg, vcloud air

VMware has the best platform to run the latest Windows 10 Desktop, Server & Hyper-V Tech Preview!

10/08/2014 by William Lam

I am constantly amazed at the number of guest operating systems that are supported on VMware products: VMware vSphere, our enterprise hypervisor; vCloud Air, our public cloud offering which runs on vSphere; and our desktop products such as VMware Fusion and Workstation. If we look at vSphere alone, it currently "lists" 101 supported guest operating systems (full list below)! However, this is actually a tiny subset of what vSphere can actually run, as new guest OSes are constantly being added to the support matrix. This also does not include pre-released operating systems like the recent Apple OS X Yosemite (10.10) Tech Preview. Heck, you can even run Windows 3.11 if you really want to, as shown by my fellow VMware colleague Chris Colotti.

To get the complete list of currently supported operating systems for vSphere or any other VMware product, you will want to check the VMware HCL for Guest Operating Systems. Running a filter on the latest ESXi 5.5 Update 2 release for all guest OSes, we can see that the total number of supported guest OSes is an astounding 231! I know this number is even greater, as we probably cannot capture every single x86 guest OS out there today that can run on VMware.

Getting back to the topic of this post: Microsoft recently released a new Tech Preview of their upcoming Windows platform, dubbed Windows 10 (not a typo, they decided to skip Windows 9), and I know some of you may be interested in trying out their latest release. What better way than to run it on VMware? There were a blog post or two about running Windows 10 on vSphere, however they contained some incorrect information about not being able to install VMware Tools or get the optimized VMXNET3 driver working. I decided to run all three flavors (Windows 10 Desktop, Server and Hyper-V) on the latest vSphere 5.5 release (it should work on previous 5.5 releases as well) and will share the Virtual Machine configurations below.

Note: You can also run the Windows 10 Tech Preview on both VMware Fusion and Workstation; take a look at this article for more details. These are great options in addition to vSphere and vCloud Air.

Windows 10 Desktop:

  • GuestOS: Windows 8 64-bit
  • Virtual HW: vHW10
  • Network Driver: VMXNET3
  • Storage Controller: LSI Logic SAS

[Screenshot: windows10-desktop]
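
If you prefer to script the build, here is a minimal PowerCLI sketch applying the configuration above (the VM name, host and sizing values are placeholders of my own choosing):

# Create the VM shell with the Windows 8 64-bit guestOS identifier and vHW10
$vm = New-VM -Name "Windows10-Desktop" -VMHost "esxi-01.example.com" -GuestId "windows8_64Guest" -Version v10 -MemoryGB 4 -DiskGB 40

# Swap the default network adapter type for VMXNET3
Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false

# Ensure the storage controller is LSI Logic SAS
Get-ScsiController -VM $vm | Set-ScsiController -Type VirtualLsiLogicSAS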

Windows 10 Server:

  • GuestOS: Windows 2012 64-bit
  • Virtual HW: vHW10
  • Network Driver: VMXNET3
  • Storage Controller: LSI Logic SAS

[Screenshot: windows10-server]

Windows 10 Hyper-V:

  • GuestOS: Windows 2012 64-bit
  • Virtual HW: vHW10
  • Network Driver: VMXNET3
  • Storage Controller: LSI Logic SAS
  • CPU Advanced Setting: Enable VHV
  • VM Advanced Setting: hypervisor.cpuid.v0

For more details about running Hyper-V and the last two advanced settings, please take a look at this article on running other Hypervisors.
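
For reference, those last two settings map to the well-known vhv.enable and hypervisor.cpuid.v0 .vmx entries. Here is a quick PowerCLI sketch of applying them (the VM name is a placeholder; apply with the VM powered off):

$vm = Get-VM -Name "Windows10-HyperV"

# Expose hardware-assisted virtualization (VHV) to the guest
New-AdvancedSetting -Entity $vm -Name "vhv.enable" -Value "TRUE" -Confirm:$false

# Hide the hypervisor CPUID bit so Hyper-V will agree to install inside a VM
New-AdvancedSetting -Entity $vm -Name "hypervisor.cpuid.v0" -Value "FALSE" -Confirm:$false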

[Screenshot: windows10-hyper-v]
If you look closely at this last screenshot, you will see that I am not only running Windows 10 Hyper-V within a VM on ESXi, but I am also running a nested Windows 10 VM within that Hyper-V VM! How cool is that!? I am not sure there are good use cases for this, but if you wanted to, you could! In my opinion (and I may be biased because I work for VMware, but the results speak for themselves), VMware truly provides the best platform for the widest variety of x86 guest operating systems that exist.

Here are the guest operating systems that are currently "listed" in vSphere and can be selected:

Apple Mac OS X 10.5 (32-bit)
Apple Mac OS X 10.5 (64-bit)
Apple Mac OS X 10.6 (32-bit)
Apple Mac OS X 10.6 (64-bit)
Apple Mac OS X 10.7 (32-bit)
Apple Mac OS X 10.7 (64-bit)
Apple Mac OS X 10.8 (64-bit)
Apple Mac OS X 10.9 (64-bit)
Asianux 3 (32-bit)
Asianux 3 (64-bit)
Asianux 4 (32-bit)
Asianux 4 (64-bit)
CentOS 4/5/6 (32-bit)
CentOS 4/5/6/7 (64-bit)
Debian GNU/Linux 4 (32-bit)
Debian GNU/Linux 4 (64-bit)
Debian GNU/Linux 5 (32-bit)
Debian GNU/Linux 5 (64-bit)
Debian GNU/Linux 6 (32-bit)
Debian GNU/Linux 6 (64-bit)
Debian GNU/Linux 7 (32-bit)
Debian GNU/Linux 7 (64-bit)
FreeBSD (32-bit)
FreeBSD (64-bit)
IBM OS/2
Microsoft MS-DOS
Microsoft Small Business Server 2003
Microsoft Windows 2000
Microsoft Windows 2000 Professional
Microsoft Windows 2000 Server
Microsoft Windows 3.1
Microsoft Windows 7 (32-bit)
Microsoft Windows 7 (64-bit)
Microsoft Windows 8 (32-bit)
Microsoft Windows 8 (64-bit)
Microsoft Windows 95
Microsoft Windows 98
Microsoft Windows NT
Microsoft Windows Server 2003 (32-bit)
Microsoft Windows Server 2003 (64-bit)
Microsoft Windows Server 2003 Datacenter (32-bit)
Microsoft Windows Server 2003 Datacenter (64-bit)
Microsoft Windows Server 2003 Standard (32-bit)
Microsoft Windows Server 2003 Standard (64-bit)
Microsoft Windows Server 2003 Web Edition (32-bit)
Microsoft Windows Server 2008 (32-bit)
Microsoft Windows Server 2008 (64-bit)
Microsoft Windows Server 2008 R2 (64-bit)
Microsoft Windows Server 2012 (64-bit)
Microsoft Windows Vista (32-bit)
Microsoft Windows Vista (64-bit)
Microsoft Windows XP Professional (32-bit)
Microsoft Windows XP Professional (64-bit)
Novell NetWare 5.1
Novell NetWare 6.x
Novell Open Enterprise Server
Oracle Linux 4/5/6 (32-bit)
Oracle Linux 4/5/6/7 (64-bit)
Oracle Solaris 10 (32-bit)
Oracle Solaris 10 (64-bit)
Oracle Solaris 11 (64-bit)
Other (32-bit)
Other (64-bit)
Other 2.4.x Linux (32-bit)
Other 2.4.x Linux (64-bit)
Other 2.6.x Linux (32-bit)
Other 2.6.x Linux (64-bit)
Other 3.x Linux (32-bit)
Other 3.x Linux (64-bit)
Other Linux (32-bit)
Other Linux (64-bit)
Red Hat Enterprise Linux 2.1
Red Hat Enterprise Linux 3 (32-bit)
Red Hat Enterprise Linux 3 (64-bit)
Red Hat Enterprise Linux 4 (32-bit)
Red Hat Enterprise Linux 4 (64-bit)
Red Hat Enterprise Linux 5 (32-bit)
Red Hat Enterprise Linux 5 (64-bit)
Red Hat Enterprise Linux 6 (32-bit)
Red Hat Enterprise Linux 6 (64-bit)
Red Hat Enterprise Linux 7 (32-bit)
Red Hat Enterprise Linux 7 (64-bit)
SCO OpenServer 5
SCO OpenServer 6
SCO UnixWare 7
SUSE Linux Enterprise 10 (32-bit)
SUSE Linux Enterprise 10 (64-bit)
SUSE Linux Enterprise 11 (32-bit)
SUSE Linux Enterprise 11 (64-bit)
SUSE Linux Enterprise 12 (32-bit)
SUSE Linux Enterprise 12 (64-bit)
SUSE Linux Enterprise 8/9 (32-bit)
SUSE Linux Enterprise 8/9 (64-bit)
Serenity Systems eComStation 1
Serenity Systems eComStation 2
Sun Microsystems Solaris 8
Sun Microsystems Solaris 9
Ubuntu Linux (32-bit)
Ubuntu Linux (64-bit)
VMware ESX 4.x
VMware ESXi 5.x


Filed Under: ESXi, Nested Virtualization, vSphere Tagged With: esxi, guest os, hyper-v, Microsoft, vSphere, windows 10

Quick Tip: New Hyper-V guestOS identifier in vSphere 5.5

09/26/2013 by William Lam

For those of you who are so inclined to run Hyper-V as a nested VM on ESXi 5.5 (not sure why anyone would want to do such a thing), you should be aware that there is a new guestOS identifier that you can configure your VM with for the most optimal configuration. The main reason to use this setting is that by default Windows Enlightenment is enabled, which will prevent Hyper-V from running since it will detect that it is running inside of a VM. This configuration disables Windows Enlightenment to allow you to run Hyper-V.

I noticed a new guestOS identifier called "windowsHyperVGuest" while browsing through the vSphere 5.5 API Reference guide, but when I checked the vSphere Web Client, I did not see this guestOS type as an available option. Perhaps it is not a supported guestOS; after all, Nested Virtualization is not officially supported by VMware. In any case, you can still configure your VM by leveraging the vSphere API.

Here is a quick vSphere SDK for Perl script called changeGuestOSID.pl which allows you to reconfigure a VM with any valid guestOS identifier from the vSphere API Reference guide. You can of course easily do this using PowerCLI or any other language that can speak to the vSphere API.
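
For example, here is a hedged PowerCLI equivalent (the VM name is a placeholder, and the VM should be powered off before changing its guestOS identifier):

# Reconfigure the VM with the new Hyper-V guestOS identifier
Get-VM -Name "Nested-HyperV" | Set-VM -GuestId "windowsHyperVGuest" -Confirm:$false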

Once updated, you should now see it reflected in both the vSphere Web Client and the C# Client:

Note: I did not do any extensive testing other than a basic installation of the latest Hyper-V Server, and I do not believe you need any additional settings. If you wish to run nested 64-bit VMs, then you will need to enable VHV.


Filed Under: vSphere 5.5 Tagged With: esxi 5.5, hyper-v, nested, nested virtualization, vSphere 5.5

Nested Virtualization Resources

10/04/2012 by William Lam

Here is a consolidated page of all the articles that I have written about Nested Virtualization (nested ESXi, Hyper-V, etc.) and all the goodies that are "Not Supported".

vSphere / vCloud 5.1

  • Having Difficulties Enabling Nested ESXi in vSphere 5.1?
  • How to Enable Nested ESXi & Other Hypervisors in vSphere 5.1
  • How to Enable Nested ESXi & Other Hypervisors in vCloud Director 5.1

vSphere / vCloud 5.0

  • How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5
  • The Missing Piece In Creating Your Own Ghetto vSEL Cloud

Additional Info/Tips/Tricks

  • Nested ESXi 5.1 Supports VMXNET3 Network Adapter Type
  • How to Configure Nested ESXi 5 to Support EVC Clusters
  • How to Enable Nested vFT (virtual Fault Tolerance) in vSphere 5
  • How to Install VMware VSA in Nested ESXi 5 Host Using the GUI
  • Cool Undocumented Features in vCloud Director 1.5
  • The Missing Piece In Creating Your Own Ghetto vSEL Cloud
  • Nested Virtualization APIs For vSphere & vCloud Director 5.1
  • How To Enable Nested ESXi Using VXLAN In vSphere & vCloud Director 
  • Will Intel’s VMCS Shadowing Feature Benefit VMware’s Nested Virtualization?
  • How to run Nested RHEV Hypervisor on ESXi? 
  • How to quickly setup and test VMware VSAN (Virtual SAN) using Nested ESXi
  • How to run Nested ESXi on top of a VSAN datastore? 
  • VMware Tools for Nested ESXi 
  • Why is Promiscuous Mode & Forged Transmits required for Nested ESXi?
  • How to properly clone a Nested ESXi VM?

Filed Under: Uncategorized Tagged With: amd-v, ept, esxi, esxi 5, esxi4, esxi4.1, esxi5.1, hyper-v, intel vt, nested, rvi, vhv, virtual hardware virtualization, vSphere, vSphere 4, vSphere 5, vSphere 5.1

Having Difficulties Enabling Nested ESXi in vSphere 5.1?

09/29/2012 by William Lam

I noticed there were a few folks having some difficulties enabling Nested ESXi (VHV, Virtual Hardware Virtualization) in the latest release of ESXi 5.1, so I thought I would share some additional info and tips on troubleshooting your setup in case you are running into similar problems.

*** DISCLAIMER *** This is not officially supported by VMware; do not bother asking if it is supported or calling into VMware support for details or help.

If you wish to run nested ESXi or other hypervisors on ESXi 5.1 and run 32-bit nested virtual machines, you must meet the following hardware requirement:

  • CPU supporting Intel VT-x or AMD-V

If you wish to run nested 64-bit virtual machines in your nested ESXi or other hypervisors, in addition to the requirement above, you must also meet the following hardware requirement:

  • CPU supporting Intel EPT or AMD RVI

If you only meet the first criterion, you CAN still install nested ESXi or other hypervisors on ESXi 5.1, BUT you will only be able to run 32-bit nested virtual machines. When you create your virtual machine shell using the new vSphere Web Client, the "Hardware Virtualization" box in the expanded CPU view will be grayed out. This is expected, as you do not have full support for VHV, but you can still continue with your installation of ESXi or other hypervisors.

In ESXi 5.0, you may have been able to run 64-bit nested virtual machines without EPT/RVI support but performance was extremely poor. With ESXi 5.1, VHV now requires EPT/RVI.

Note: During the installation of ESXi, you may see the following message "No Hardware Virtualization Support", you can just ignore it.

If you are using sites such as Intel's ark.intel.com to check your CPU requirements, be aware that it is COMMON even for the hardware vendors to publish incorrect information on their websites. However, there is a quick way to validate on your ESXi host whether you have full VHV support.

In vSphere 5.1, there is a new capability property called nestedHVSupported which specifies whether your physical ESXi 5.1 host has full VHV support. This property will only be true IF your CPU has both Intel VT-x and EPT, or both AMD-V and RVI. A quick and easy way to validate this is to use the vSphere MOB to retrieve the value.

To check the nestedHVSupported property, enter the following into a web browser (substituting the IP Address/hostname of your ESXi host):

https://himalaya.primp-industries.com/mob/?moid=ha-host&doPath=capability

After you log in, search for the nestedHVSupported property on the page and you should see a value of either true or false. As mentioned earlier, if it is false, you might still be able to install nested ESXi or other hypervisors, but you will not be able to run nested 64-bit virtual machines. I would also recommend taking a look at your system BIOS to ensure features like Intel VT/EPT and AMD-V/RVI are enabled; sometimes the fix is as simple as a BIOS upgrade (you can always confirm by contacting the hardware vendor if you have further questions).
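
If you prefer not to click through the MOB, a quick PowerCLI equivalent (the hostname is a placeholder) would be:

# Query the host's capability object for full VHV support
(Get-VMHost -Name "esxi-01.example.com" | Get-View).Capability.NestedHVSupported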

For proper network connectivity, also ensure that your standard vSwitch or Distributed Virtual Switch has both promiscuous mode and forged transmits enabled, either globally on the vSwitch or on the specific portgroup or distributed portgroup your nested ESXi hosts are connected to.
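
Here is a minimal PowerCLI sketch for a standard vSwitch portgroup (the host and portgroup names are placeholders of mine):

# Enable promiscuous mode and forged transmits on the nested ESXi portgroup
Get-VirtualPortGroup -VMHost "esxi-01.example.com" -Name "NestedESXi" | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true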

Additional Resources: 

  • How to Enable Nested ESXi & Other Hypervisors in vSphere 5.1
  • How to Enable Nested ESXi & Other Hypervisors in vCloud Director 5.1

Filed Under: Uncategorized Tagged With: esxi5.1, hyper-v, nested, vcd, vcloud director 5.1, vesxi, vhv, vsel, vSphere 5.1

How to Enable Nested ESXi & Other Hypervisors in vCloud Director 5.1

08/29/2012 by William Lam

The process to enable "Nested Virtualization" in the latest release of vCloud Director 5.1 and create your own virtual lab, similar to VMware's vSEL (Virtual Sales Enablement Cloud), is very similar to the steps previously outlined for the vCloud Director 1.5 release. The only change is how VHV (Virtual Hardware-Assisted Virtualization), aka "Nested Virtualization", is enabled in vCloud Director 5.1 and ESXi 5.1.

In vCloud Director 1.5, to enable VHV you needed to run a special SQL statement that enabled VHV for the underlying ESXi 5.0 hosts. With the latest release of vCloud Director 5.1, that is no longer necessary; you now enable it on a per-VM basis within the vCloud Director 5.1 UI.

Here are the steps for enabling VHV in vCloud Director 5.1:

  • Insert SQL statements into VCD Database that perform the following:
    • Enable new "VMware" guestOS Family
    • Enable new guestOS Type ESXi 4.x and 5.x
    • Enable host preparation to enable VHV (vSphere 5.0 & vCloud 1.5 only)
  • Enable promiscuous mode
    • Insert SQL statement into VCD Database for Network Pool that is being used for your ESXi VMs
    • Enable both Promiscuous Mode and Forged Transmit for vSphere Backed Portgroup within vCenter Server or ESXi host

The SQL statements can be found in this article and have not changed for vCloud Director 5.1.

Here is a screenshot of what you should see in the vCloud Director 5.1 UI when creating a new VM; you should now have the ability to select the new guestOS Family called "VMware" and either an ESXi 4.x or ESXi 5.x guestOS Version.

To enable VHV for the VM, you will also need to check the box "Expose hardware-assisted CPU virtualization to guest OS", which will allow you to run a nested ESXi VM as well as 64-bit nested VMs, assuming your physical CPUs support it. To learn more about running VHV on ESXi 5.1, take a look at this article for more details.
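Once the VM has been deployed, a quick way to confirm the flag made it through is to check the nestedHVEnabled property on the backing vSphere VM, for example with this PowerCLI one-liner (the VM name is a placeholder):

# nestedHVEnabled reflects the exposed hardware-assisted CPU virtualization flag
(Get-VM -Name "nested-esxi-01").ExtensionData.Config.NestedHVEnabled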


Filed Under: Uncategorized Tagged With: esxi5.1, hyper-v, nested, vcloud director 5.1, vesxi, vhv, vsel, vSphere 5.1

