I have been spending quite a bit of time in the lab lately working with some of our "future" software, and one of the fun tasks I get to do is perform frequent rebuilds of my lab environment. Depending on the issues I encounter, I may even need to rebuild it on a daily basis. Of course, I have the majority of this automated, so it is not nearly as painful as it would be if I had to go through it manually.

The output of this build is a complete working vSphere environment: several ESXi hosts connected to a vCenter Server with all the networking and storage configured. On the networking front, the ESXi hosts were all running on a regular Virtual Standard Switch (VSS), and I needed to migrate them over to a Virtual Distributed Switch (VDS). This particular environment includes some Windows infrastructure, and while thinking through the different ways I could accomplish this, I remembered hearing about the new VDS cmdlets that came out of the PowerCLI 5.5 release.

Since I already had some scripts being kicked off on this Windows system, I thought I would give the new PowerCLI cmdlets a try for the VSS->VDS migration, as I had heard good things about them. I performed my prototyping on a vSphere 5.5 environment, but I believe you might even be able to use this on older releases of vSphere.

Here is a list of the new VDS cmdlets that I used for the script:

Here are additional vSphere networking cmdlets that were required for the script:
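For reference, a migration like this leans on the following cmdlets (assuming the standard PowerCLI 5.5 set; names shown here for orientation, not as the exhaustive list):

```powershell
# VDS cmdlets introduced in PowerCLI 5.5
New-VDSwitch                          # create the Distributed Switch
New-VDPortgroup                       # create the Distributed Portgroups
Add-VDSwitchVMHost                    # attach an ESXi host to the VDS
Add-VDSwitchPhysicalNetworkAdapter    # move a physical NIC (vmnic) onto the VDS

# Pre-existing networking cmdlets
Get-VMHostNetworkAdapter              # enumerate VMkernel and physical interfaces
Set-VMHostNetworkAdapter              # migrate a VMkernel interface to a portgroup
Remove-VirtualPortGroup               # clean up the old VSS portgroups
```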

Even as a PowerCLI beginner, I was able to quickly knock out a script that performed the migration from VSS to VDS and migrated ALL VMkernel interfaces and physical interfaces without any downtime. These new cmdlets definitely make it very easy for administrators to move from the old Virtual Standard Switch over to the vSphere Distributed Switch.

Here is an overview of what my environment looks like: three ESXi hosts, each with four physical NICs and three VMkernel interfaces.

The script below creates a brand new VDS and its associated Distributed Portgroups, attaches a configurable list of ESXi hosts, and performs the migration of the VMkernel and physical interfaces. It does this by first moving two of the four physical NICs to the new VDS to ensure connectivity, then migrating all the VMkernel interfaces. Once that is complete, it moves the remaining physical NICs and then deletes the Virtual Standard Switch portgroups.
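The core flow described above can be sketched as follows. This is a minimal sketch, not the full script: the VDS name, portgroup names, vmnic/vmk numbering, and host names are placeholder assumptions for my particular environment, and error handling is omitted.

```powershell
# Sketch of the VSS->VDS migration flow (names and NIC numbering are placeholders)
$vmhost_array = @("esxi-01.lab.local","esxi-02.lab.local","esxi-03.lab.local")

$vds = New-VDSwitch -Name "VDS-01" -Location (Get-Datacenter -Name "Datacenter")
$mgmtPg    = New-VDPortgroup -VDSwitch $vds -Name "Management"
$vmotionPg = New-VDPortgroup -VDSwitch $vds -Name "vMotion"

foreach ($vmhostName in $vmhost_array) {
    $vmhost = Get-VMHost -Name $vmhostName

    # Attach the host and move two of the four pNICs first to preserve connectivity
    Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost
    $freeNics = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic2,vmnic3
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $freeNics -Confirm:$false

    # Migrate the VMkernel interfaces onto their Distributed Portgroups
    $vmk0 = Get-VMHostNetworkAdapter -VMHost $vmhost -Name vmk0
    Set-VMHostNetworkAdapter -VirtualNic $vmk0 -PortGroup $mgmtPg -Confirm:$false
    $vmk1 = Get-VMHostNetworkAdapter -VMHost $vmhost -Name vmk1
    Set-VMHostNetworkAdapter -VirtualNic $vmk1 -PortGroup $vmotionPg -Confirm:$false

    # Move the remaining pNICs, then remove the old VSS portgroups
    $lastNics = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic0,vmnic1
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $lastNics -Confirm:$false
    Get-VirtualSwitch -VMHost $vmhost -Standard | Get-VirtualPortGroup | Remove-VirtualPortGroup -Confirm:$false
}
```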

Disclaimer: Please ensure you test this script in a development/test lab before using it in a production environment.

Here is a screenshot of running through the script:

If we now take a look at our environment, we can see all three ESXi hosts have been migrated over to the VDS.

UPDATE (11/4/13) -  Thanks to one of the PowerCLI engineers, it looks like there is a PowerCLI cmdlet that can be used to migrate from VDS->VSS. I will be sharing that script in another blog post for those that may want to perform the reverse.

One caveat I hit during the development of this script was needing the ability to easily migrate in both directions, VSS->VDS and VDS->VSS. I was hoping it would simply be a matter of reversing the set of operations and moving the VMkernel interfaces back to the Virtual Standard Switch, but what I found is that the Set-VMHostNetworkAdapter cmdlet only accepts a Distributed Virtual Portgroup. This meant I could migrate to a VDS but not back to a VSS. Though this will probably fit the majority of customer use cases, for me it was a problem, and it means I will need to dig into the vSphere APIs to seamlessly perform a VDS->VSS migration. Given that PowerCLI is an abstraction, we should be able to easily add this feature, and I will be filing an FR with Engineering to see if we can get it added, as I think it would be a useful feature to have.
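For the VDS->VSS direction, dropping down to the vSphere API would look roughly like this. This is only a sketch under assumptions: the host name and the standard portgroup name "Management Network" are placeholders, and the target VSS portgroup must already exist on the host before the call.

```powershell
# Sketch: migrating vmk0 back to a VSS portgroup via HostNetworkSystem.UpdateVirtualNic()
# (the VSS portgroup "Management Network" is assumed to already exist on the host)
$vmhost = Get-VMHost -Name "esxi-01.lab.local"
$netSys = Get-View $vmhost.ExtensionData.ConfigManager.NetworkSystem

$nicSpec = New-Object VMware.Vim.HostVirtualNicSpec
$nicSpec.Portgroup = "Management Network"
$netSys.UpdateVirtualNic("vmk0", $nicSpec)
```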

18 thoughts on “Automate the migration from Virtual Standard Switch to vSphere Distributed Switch using PowerCLI 5.5”

  1. Hi, can you tell or show a diagram of your complete setup for using this script?
    I have tried for weeks to get the Distributed Switch working with the vCenter Appliance, but fail every time.

    I want to start out with 1 server, then add the other 2 servers if this works.
    I don’t use AD outside my virtual network, because I am going to build one inside my virtual network, to be used by XenDesktop 7.1 & Horizon Suite, Exchange 2013, SQL 2014, Server 2012 R2, Windows XP/7/8/8.1.

    How do I set up the NICs on the VSS so that I can migrate from VSS to VDS without downtime? Can this be demoed using VMware Workstation?

    1) On this 1 server, I have installed ESXi 5.5 and deployed the vCSA 5.5 OVA
    2) I then modified both the ESXi & vCSA hosts files to find each other:
    127.0.0.1 localhost
    ::1 localhost
    192.168.163.253 esxi01.example.lan esxi01
    192.168.163.252 vcenter.example.lan vcenter
    3) Configured esxi01 time and the normal stuff.
    4) Configured vcenter with the default stuff using https://vcenter:5480
    4a) After that I used https://vcenter:9443/ to access the new web client,
    then created a Datacenter, then a cluster (did not enable anything), then added the esxi01 host,
    then created a Distributed Switch, 8 uplinks, unticked the create-portgroup option,
    then went into the VDS and created these port groups:
    Management, LAN, WAN, iSCSI1, iSCSI2, vMotion, FT
    I have 8 x 1 Gb NICs (well, 9 if you count the iLO port); I want to use 1 for Management, 2 for LAN,
    1 for WAN, 1 for iSCSI1, 1 for iSCSI2, 2 for vMotion, 1 for FT.

    After this point all hell broke loose.

    Questions:
    1) How do I assign a real network card (pNIC) to the uplinks?
    2) Can I add the host to the VDS without moving the VSS, then migrate the VSS pNICs & VMs to the VDS?
    3) Can you do a write-up or video on how to go about doing this?

    Thanks

    • Before looking at any automation, you should get yourself a bit more familiar with how the VDS works as well as the migration process. Once you have a good handle on that, the script should make a lot more sense. Also, the script is just an example; you can easily modify it to fit your needs.

  2. Question2:

    Every ESXi server you install auto-creates a VSS with a Management & VM Network portgroup. If I set up an ESXi server with a VDS as my first server, then add a second server, will the second server's VSS be moved to the VDS, or do I have to run a script to move it to the VDS?

    Thanks

  3. Question 3: Can you put all the starting variables at the top of your script, e.g.:

    max_esxi=3
    max_pgroup=4
    max_pNics=4
    root_username="root"
    root_password="password"
    esxi_servers=("esxi1.example.lan", "esxi2.example.lan", "esxi3.example.lan")
    vds_name="VDS-01"
    port_groups=("Management-v5", "VM-Network-v10", "vMotion-v7", "FT-v6")

    Question 4:

    1) What machine is this script run on?
    2) Can it be run from the vCSA directly?
    3) You said you have everything scripted; does that include the installation of ESXi 5.5 & vCSA plus config?
    4) Can you do a write-up for scripting the setup below:
    PXE-boot install of ESXi 5.5 (from an XP or Win7 PC), config & setup with iSCSI from 192.168.163.3
    with auto-install of vCSA 5.5 + auto-update to 5.5.0a
    with auto-create of “DataCenter” + “Cluster” + add host
    and auto-migrate of VSS to VDS
    and set VM Network as the default for VMs on the VDS and not the VSS.

    Thanks.

  4. Very nice, thanks for sharing. I just got done migrating the VSS to a VDS on 15 hosts; after trying to do it w/ PS for about 15 minutes I had to go to manual mode due to time. This one goes in the toolbox.

  5. William: Thanks for sharing this. I am able to run the script successfully only if I omit migrating the management virtual adapter, vmk0. If I do not omit that, the ESXi host loses network connectivity at that step.

    My setup has 2 NICs for each of the 2 vSS switches (1 for Management and 1 for VM data), and the management vSS allows both vMotion & management traffic. I can migrate the vMotion virtual adapter, vmk1, first and all is good. But when I try to do the same for the management virtual adapter, vmk0, the host loses connectivity and I have to restore the settings via the DCUI.

    Let me know if there is something I am missing, as I have looked and looked again but my config seems proper and it should really work.
    Many thanks

      • Yeah, so that is the interesting part. A manual migration has no problems; I can migrate all the NICs and virtual adapters without an issue. I am just curious about the script that you provided; did that actually complete without any errors?
        Thanks

  6. How would you perform this when you have only a single physical adapter on the VSS and want to migrate to the DVS without a huge outage?

  7. I had a problem with the script where, if another host in the cluster does not have the same portgroup, it fails to migrate the VMs.

    Actually, I was wondering how you could modify the script to allow migration on one host only vs. all 3 hosts at the same time.

    • Joey,

      To add/remove hosts, you just need to modify the vmhost_array variable 🙂 I didn't think it could be missed, as it's at the very top of the script.

  8. Hello William,
    I have a special situation. Our 3 ESXi 5.5 hosts are running on 9 x 1 Gbit NICs. Each host also has 4 x 10 Gbit NICs (a dual-port PCI NIC and a dual-port onboard NIC), which are not all in use; only the two ports on the PCI card are in use, at 1 Gbit.
    Now we have installed a 10 Gbit Cisco 3850 switch.
    I have to move the networking on all 3 hosts from the 1 Gbit NICs to the 10 Gbit NICs.
    Currently there are 4 Standard vSwitches.
    I intend to move to at most 2 Standard vSwitches with 4 x 10 Gbit NICs plus one 1 Gbit NIC for Management.

    How can I get this done? I can take the hosts into Maintenance Mode one by one.
    Which steps would be best?

    Many Thanks
    Best Regards
    Nebi

  9. This is a great little script; thanks for taking the time to put it together. I'm new to PowerCLI, so I can't figure out how to do something a little more advanced with this script. The only thing I'd like to figure out is how to have the script automatically extract the names of all the hosts in the cluster and then apply all changes to what it finds, rather than manually typing in the name of each host as an array.

    I found a script to get the name of each host and display it in a column, but I can't figure out how to instead just get the names and then apply your script to each host found in the cluster. I would prefer not to have to reference a CSV of hostnames, because I would have to make many scripts that each reference a different CSV file.

    I am asking because I’m doing a huge deployment and need to put many clusters onto distributed switches.

    Here is the one I found to retrieve host names:

    $myCol = @()
    ForEach ($cluster in Get-Cluster) {
        ForEach ($vmhost in ($cluster | Get-VMHost)) {
            $VMView = $vmhost | Get-View
            $VMSummary = "" | Select HostName
            $VMSummary.HostName = $vmhost.Name
            $myCol += $VMSummary
        }
    }
    $myCol #| Out-GridView

    Thanks!
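    For what it's worth, the per-cluster host list can also be pulled directly into an array, which could replace the hard-coded list at the top of the migration script. A sketch; "Cluster-01" is a placeholder cluster name:

    ```powershell
    # Build the host-name array for one cluster instead of hard-coding it
    $vmhost_array = Get-Cluster -Name "Cluster-01" | Get-VMHost | Select-Object -ExpandProperty Name
    ```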

  10. With the advent of vSphere 6.0 it is now possible, via the Web Client only, to do cross-cluster migrations; the extra step to modify the network of the VM to the destination cluster's networks is great. What I would like to know is whether there is a PowerCLI cmdlet, or combination of cmdlets, that would utilize this new function. Currently the only way I can work it in PowerCLI is to change the VM network to a vSwitch portgroup and then disconnect the NIC on the VM. This then allows me to migrate the VM to the new cluster. Then I assign the VM the appropriate network and reconnect the NIC. This method of migration takes the VM off the network for the duration of the migration. It would be great to be able to utilize the cross-cluster migration without network loss. Any guidance would be greatly appreciated.

    Thanks
    Neil
