Bulk VM Migration using new Cross vCenter vMotion Utility Fling

Over the last few years, I have spoken to a number of customers who have greatly benefited from the ability to live migrate Virtual Machines across different vCenter Servers that are NOT part of the same vCenter Single Sign-On (SSO) Domain, which I first shared back in 2015 here and here. This extended capability of the Cross vCenter vMotion feature enabled customers to solve use cases that had previously been challenging, especially scenarios such as datacenter migration, consolidation, or migrating existing workloads from their current environment into new SDDC deployments such as VMware Cloud Foundation (VCF).

Although customers could initiate Cross vCenter vMotions using the vSphere API, including PowerCLI (the Move-VM cmdlet was enhanced in 6.5, more details here), the overall experience was still not very user-friendly. This was especially true for customers who may only have a small number of VMs to migrate and prefer a UI-based interface rather than an API/CLI-only option. In addition, for large numbers of VMs, there was no easy way to perform "batch" migrations that was easily consumable for folks who may not have a strong background in automation or the vSphere APIs.
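For reference on the API/CLI route, below is a minimal PowerCLI sketch of what a scripted Cross vCenter vMotion looks like; the server names, credentials, VM name, port group and datastore are placeholders for illustration:

# Connect to both the source and destination vCenter Servers
$sourceVC = Connect-VIServer -Server vcenter01.example.com -User administrator@vsphere.local -Password VMware1!
$destVC = Connect-VIServer -Server vcenter02.example.com -User administrator@vsphere.local -Password VMware1!

# Look up the VM plus the destination host, port group and datastore
$vm = Get-VM -Server $sourceVC -Name "DemoVM"
$destHost = Get-VMHost -Server $destVC -Name "esxi01.example.com"
$netAdapter = Get-NetworkAdapter -VM $vm
$destPortGroup = Get-VDPortgroup -Server $destVC -Name "VM-Network"
$destDatastore = Get-Datastore -Server $destVC -Name "vsanDatastore"

# Initiate the Cross vCenter vMotion
Move-VM -VM $vm -Destination $destHost -NetworkAdapter $netAdapter -PortGroup $destPortGroup -Datastore $destDatastore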

Today, I am pleased to share a new VMware Fling called the Cross vCenter Migration Utility that will help simplify initiating VM migration(s) across different vCenter Servers, especially between disparate SSO Domains where a graphical interface was not available. This solution was developed out of our VMware Cloud Foundation (VCF) Engineering group, which is part of the Integrated Systems Business Unit at VMware. I had spoken to a number of folks within the group about the extended Cross vCenter vMotion capability, and I was super excited when I heard they were planning to release this tool as a Fling and make it available to all customers. I was fortunate to have been involved in the project alongside the Engineering lead Vishal Gupta, and we are excited that we can finally talk about this project and see how customers will be using this new tool.

Cross vCenter Migration Utility Fling

Cross vCenter vMotion Requirements: KB 2106952

Download Fling here


Features

  • Completely UI-driven workflow for VM migration
  • Provides REST API for managing migration operations
  • Works with vCenter Servers that are not part of the same SSO domain
  • Supports both live and cold migration of VMs
  • Batch migration of multiple VMs in parallel
  • Flexible network mappings between source and destination sites


Deploying NSX-T VIBs and/or creating custom NSX-T Image Profile

Similar to its predecessor, NSX-T provides complete lifecycle management (LCM) of the underlying NSX components (Controllers, Edges and Managers), including the Fabric Nodes (e.g. ESXi and/or KVM hosts). Additionally, a new Upgrade Coordinator is now part of NSX-T, which greatly simplifies the patching and updating of the network virtualization platform. However, for existing vSphere customers who already have a process for distributing VMware VIBs using vSphere Update Manager (VUM) and/or custom Image Profiles, being able to leverage those existing methods is quite important. This is especially true for customers or system integrators who wish to slipstream all the necessary VIBs into their base ESXi image for initial deployment, whether that is an automated installation via Kickstart or even a manual install using an ISO image.

The good news is that, like NSX-V, NSX-T supports the same set of deployment methods that customers are already familiar with. I recently looked into this due to a few questions that I and a few other folks had during our NSX-T Bootcamp training a couple of weeks back. I also did not see anything in the existing NSX-T documentation, so I figured it would be useful to outline the specific steps for each of the installation methods, especially when creating a custom ESXi Image Profile using PowerCLI, which requires the VIBs to be added in a particular order.
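To give a sense of the Image Profile route, the general PowerCLI Image Builder workflow looks like the sketch below; the depot paths, profile names and the simple name-match loop are illustrative only, and the exact NSX-T VIB names and the specific order in which they must be added are covered later in this post:

# Load the ESXi offline bundle and the NSX-T kit into Image Builder
Add-EsxSoftwareDepot C:\depot\ESXi-6.5-offline-bundle.zip
Add-EsxSoftwareDepot C:\depot\nsx-lcp-esx-bundle.zip

# Clone the stock image profile so it can be modified
$imageProfile = New-EsxImageProfile -CloneProfile "ESXi-6.5.0-standard" -Name "ESXi-6.5.0-NSX-T" -Vendor "CustomVendor"

# Add the NSX-T VIBs to the cloned profile
Get-EsxSoftwarePackage | Where-Object {$_.Name -match "nsx"} | ForEach-Object {
    Add-EsxSoftwarePackage -ImageProfile $imageProfile -SoftwarePackage $_
}

# Export the custom profile as a bootable ISO
Export-EsxImageProfile -ImageProfile $imageProfile -ExportToIso -FilePath C:\depot\ESXi-6.5.0-NSX-T.iso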

Note: Auto Deploy is currently not supported with NSX-T.


VMware Tools 10.2.0 enables Virtual Machine vNIC exclusion and priority re-ordering

VMware Tools 10.2.0 just GA'ed (release notes / download and open-vm-tools release notes / open-vm-tools download) and this release includes a number of new features, such as an offline bundle of the VMware Tools VIB for ESXi and support for deploying VMware Tools using Microsoft System Center Configuration Manager (SCCM), to name just a few. There are also two new capabilities around how Virtual Machine vNICs are displayed that I wanted to share, as I think customers can benefit from and take advantage of them immediately. One of the challenges with having the broadest Guest Operating System (GOS) support in vSphere is dealing with the different behaviors of each GOS. One such example is the various ways in which both physical and logical network interfaces are enumerated by an OS.

Take the example below: I have a PhotonOS VM which has eth0 as its primary interface, configured with an IP Address of 192.168.30.101. However, as you can see from the screenshot below, I am actually getting back a different address and interface. In addition, we also see other logical interfaces showing up in the IP Address list, such as Docker interfaces as well as virtual and other pseudo interfaces that may or may not be useful to VI Admins.


Historically, there was no way to control what would show up in the network interface list, which is propagated from VMware Tools up to both the vSphere API and the vSphere UI. With this new release of VMware Tools, which can be applied asynchronously to a given vSphere release, customers now have the ability to filter, on a per-VM basis, which interfaces actually show up, as well as assign a relative priority to the interfaces they care most about.
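Both settings live in the VMware Tools configuration file inside the guest (/etc/vmware-tools/tools.conf on Linux). As a rough sketch using the PhotonOS example above, the entries look along these lines; the exact patterns to exclude will vary by environment, and the VMware Tools 10.2.0 documentation covers the full syntax:

[guestinfo]
# Hide Docker and virtual ethernet interfaces from the reported IP Address list
exclude-nics=docker*,veth*
# Report eth0 as the primary interface so its address is listed first
primary-nics=eth0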


Getting started with Hybrid Cloud Extension (HCX) on VMware Cloud on AWS

I had been hearing a lot of cool things about VMware's Hybrid Cloud Extension (HCX) but had never tried the solution myself, nor did I have a good understanding of what it actually provided. With the recently announced Hybrid Cloud Extension (HCX) on VMware Cloud on AWS (VMWonAWS) offering now available, I thought this was a great way to get hands-on with HCX and take advantage of my VMWonAWS infrastructure. Having only spent a couple of days with the solution, I can see why customers are excited about HCX, and the new offering on VMWonAWS makes it super easy to consume.

There are a number of impressive capabilities that HCX offers, but two really stood out to me as quite unique and interesting compared to other VM-based "migration" options. The first is that HCX can perform live VM migrations (vMotion) or replicated migrations (vSphere Replication), including scheduled switchovers, across different versions of vSphere (vSphere 5.x to/from vSphere 6.x). This is great for customers who may not be able to upgrade their underlying vSphere environment to 6.0 or later to take advantage of things like the Cross vCenter vMotion feature, which only supports VM migrations between vSphere 6.0u3 and 6.x.

The second capability is that HCX can abstract and protect the underlying ESXi hosts by not requiring direct connectivity between the source and destination ESXi hosts. Traditionally, for vMotion and vSphere Replication traffic, you either had to stretch the VLAN or ensure the VMkernel interface was routable so that it could communicate with the destination ESXi hosts for data transfers. This was not always possible and added networking requirements that can be challenging to implement depending on how your network infrastructure is configured. HCX solves this problem by using a special HCX Cloud Gateway which securely proxies vMotion and vSphere Replication traffic from the on-premises environment out to its HCX Cloud Gateway peer, which then transfers it to the destination vSphere environment. Below is a diagram to help illustrate this:


Note: HCX also supports WAN optimization (compression and de-duplication) out of the box, which the diagram includes as that is what I had deployed in my environment. This is an optional virtual appliance that can be deployed at each location to ensure efficient data transfer between the source and destination vSphere environments.

While getting HCX configured on both my VMWonAWS and on-premises environments, I ran into a few minor gotchas. To help others avoid those issues, I figured I would outline the process and include some additional tips that can help.


Can the VCSA 6.5 forward to multiple syslog targets?

I had a couple of folks ping me recently asking whether the latest vCenter Server Appliance (VCSA) 6.5 release supports forwarding to multiple syslog targets. Today, only a single syslog target is officially supported, which can be configured using the VAMI UI. I know this is something our customers have been asking about, and it is something the VC Engineering team is considering.

Having said that, it is possible to configure additional syslog targets on the VCSA, but please be aware this is not officially supported. A couple of these customers understood the support impact and were still interested in a solution as some of their environments mandated multiple redundant syslog targets and using a syslog forwarder/relay was not an option for them.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

When configuring syslog forwarding from the VAMI UI, the configurations are all written to /etc/vmware-syslog/syslog.conf on the VCSA.

With this information, if we want to add additional targets (which can use the same configuration or a different one), we simply append them to the syslog configuration file. For example, if I have two syslog targets, 192.168.30.110 and 192.168.30.111, and I wish to use the default log level, TCP and port 514, I would use the following:
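Assuming the file uses standard rsyslog forwarding syntax (where @@ denotes TCP), the two entries would look along these lines; the safest approach is to duplicate the line the VAMI already wrote and only change the target address:

*.* @@192.168.30.110:514
*.* @@192.168.30.111:514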

Once you have saved your changes, you will need to restart the rsyslog service for the change to go into effect. To do so, run the following two commands on the VCSA:

systemctl stop rsyslog
systemctl start rsyslog

One additional thing to note is that the VAMI UI will only show the very last syslog target in the configuration file, but if you monitor the syslog servers, you will see that logs are indeed being forwarded to all of the servers configured in the syslog configuration file.
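One quick way to verify the forwarding, assuming the standard logger utility is available on the appliance, is to generate a test message on the VCSA and then confirm it shows up on each configured target:

logger -t syslog-test "Testing multiple syslog targets from the VCSA"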