One of my favorite, but also most stressful, parts of preparing for a VMworld session is creating the demos. Even with a "virtual" VMworld this year, I personally found it even more stressful than a physical VMworld.
I have been presenting with Emad Younis for a number of years now, and every year we end up with crazy ideas without thinking through all of the feasibility aspects. This year was certainly no different, and while working on our demo, I was seriously questioning my sanity and even the actual return on investment (ROI), if such a thing exists!? 😂
In case you have not watched our session, HCP132: Planes, Trains and Workload Mobility, you can watch it for free and see the full demo.
Best session I have seen so far👏😁
— Wesley Geelhoed (@wessieloerus) September 30, 2020
We really appreciate all the feedback and it definitely made up for some of the late nights where I was about to give up. I know a few of you were asking for more details about the demo and so this blog post will be focusing on some of the information I was not able to get to during the VMworld session.
I will assume you have watched our VMworld session already. If you have not, definitely go check it out!
At a high level, each voice assistant device is connected to its respective cloud service, which then exposes the ability to communicate with a custom HTTP(S) endpoint.
- Amazon - Echo Dot with Alexa Skills Service
- Microsoft - Cortana Desktop with Azure Cognitive Service
- Google - Google Home Mini with Dialogflow
Note: The other really cool thing is that you do not need to own an Amazon Echo Dot or Google Home Mini to use the voice assistant services; each cloud provider actually has a simulator that works with your actual code, which is how I initially tested the solution.
To communicate with each voice assistant service, I am using Flask, an easy-to-use Python framework for building custom web applications, along with ngrok to serve up the Flask application over the internet, even when it is running behind a NAT/firewall, which is very useful for testing purposes.
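To illustrate the idea, here is a minimal sketch of what such a Flask webhook endpoint could look like. The route name, request field, and response shape are assumptions for illustration only; each voice service (Alexa Skills, Dialogflow, etc.) defines its own JSON request/response schema.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical webhook endpoint; each voice service POSTs its own JSON
# payload here and expects a service-specific JSON reply.
@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json(silent=True) or {}
    # Pull the recognized intent out of the request
    # (the "intent" field name is an assumption, not a real schema)
    intent = payload.get("intent", "unknown")
    return jsonify({"speech": f"Received intent: {intent}"})

if __name__ == "__main__":
    # ngrok can then expose this local port over the internet,
    # e.g. "ngrok http 5000"
    app.run(port=5000)
```

With the app running locally, `ngrok http 5000` returns a public HTTPS URL that can be registered as the fulfillment endpoint in each voice service's developer console.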
Each SDDC (VMware Cloud on AWS, Azure VMware Solution and Google Cloud VMware Engine) is paired with our on-premises vSphere environment using a single VMware Hybrid Cloud Extension (HCX) instance. To interact with and perform operations within VMware HCX, and ultimately move workloads to our respective SDDCs, the Python Flask application uses both the vSphere and HCX REST APIs.
Since HCX was common to all three VMware Cloud SDDCs, I created a single HCX Python helper library that takes care of processing the list of VMs to be migrated based on a vSphere Tag, constructing the HCX Mobility Group and initiating the migration. Each SDDC would have a tiny VMware Photon OS VM that runs the Flask application and is able to communicate with the SDDC and HCX APIs.
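The tag-based grouping step can be sketched in plain Python. The VM records, the `tags` field, and the Mobility Group payload shape below are assumptions for illustration and do not reflect the actual HCX REST API schema; the real helper library in the repo does the equivalent filtering against live vSphere inventory.

```python
def build_mobility_group(name, vms, tag):
    """Filter VMs by a vSphere Tag and build a simple Mobility Group payload.

    `vms` is a list of dicts with "name" and "tags" keys; the payload
    structure here is illustrative, not the real HCX REST schema.
    """
    members = [vm["name"] for vm in vms if tag in vm.get("tags", [])]
    return {"groupName": name, "members": members}

# Example: only VMs carrying the "Demo-AWS" tag end up in the group
inventory = [
    {"name": "web-01", "tags": ["Demo-AWS"]},
    {"name": "db-01", "tags": ["Demo-Azure"]},
    {"name": "app-01", "tags": ["Demo-AWS"]},
]

group = build_mobility_group("vmworld-demo", inventory, "Demo-AWS")
print(group)  # {'groupName': 'vmworld-demo', 'members': ['web-01', 'app-01']}
```

The resulting payload would then be handed to the HCX API to create the Mobility Group and kick off the migration.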
You can find all the code in my GitHub repo: https://github.com/lamw/vmworld-2020-vmware-cloud-demo
Although I only had a couple of weeks to build these demos, I was really impressed with what you could do in such a short amount of time. I was also surprised at how far natural language processing has evolved, and there were so many capabilities that I did not get a chance to use in these prototypes to make the solution even more robust. For those interested, definitely spend some time with the respective voice services, as there is a ton of content available online and, best of all, you can get started for free across all three public cloud providers.
Here are the references I had used:
Although voice can be used to do some really cool things, a more practical way of automating workload migration is through the use of PowerCLI. In this blog post, I share a simple PowerCLI script that leverages the HCX cmdlets to construct HCX Mobility Groups comprised of any VMs with a specific vSphere Tag.
Lastly, for those interested, here are the demo recordings of me interacting with each of the voice assistant devices.