Support your Virtualization Bloggers by voting for Top vBlog 2016

It is that time of the year again: Eric Siebert, who runs the popular vSphere-land.com website, has just opened up the voting polls for the Top 25 Virtualization Blogs of 2016. There are over 300 bloggers this year and it is a very impressive list! Here is your chance to show your support for your favorite bloggers by casting a vote, which only takes a few minutes. Before voting, be sure to check out Eric's blog post on the criteria you should consider when voting, such as Longevity, Length, Frequency & Quality.

Lastly, I want to thank Eric for all of his hard work in putting this together year after year. I know he spends an enormous amount of time and energy to make this happen. Be sure to support Eric and his sponsors by visiting their sites, as this would not be possible without them. Happy voting!

Vote now!

Generating vCenter Server & Platform Services Controller deployment topology diagrams

A really useful capability that vCenter Server used to provide was a feature called vCenter Maps. I say "used to" because this feature was only available when using the vSphere C# Client and was not available in the vSphere Web Client. vCenter Maps provided a visual representation of your vCenter Server inventory along with the different relationships between your Virtual Machines, Hosts, Networks and Datastores. There were a variety of use cases for this feature, but it was especially useful when it came to troubleshooting storage or networking connectivity. With just a few clicks, an administrator could quickly identify, for example, an ESXi host that was not connected to the right datastore.

[Image: vCenter Server and Platform Services Controller topology diagram]

Although much of this information can be obtained either manually or programmatically using the vSphere API, this data is often easier to consume when it is visualized.

I was recently reminded of the vCenter Maps feature as I have seen an increase in discussions around the different vSphere 6.0 deployment topology options. This is an area where I think we could have leveraged visualizations to provide a better user experience and help our customers understand what they have deployed as it relates to installing, upgrading and expanding their vSphere environment. Today, this information is spread across a variety of interfaces, from the vSphere Web Client (here and here) to different CLIs (here and here), and there is nothing that aggregates all of this disparate information in an easy-to-consume manner. Collecting this information can also be challenging as you scale up the number of environments you are managing or deal with complex deployments that span multiple sites.

Would it not be cool if you could easily extract and visualize your vSphere 6.0 deployment topology? 🙂

Well, this was a little side project I recently took up. I have created a small Python script called extract_vsphere_deployment_topology.py that can run on either a Windows Platform Services Controller (PSC) or a vCenter Server Appliance (VCSA) PSC and, from that system, extract the current vSphere deployment topology, including details about the individual vCenter Servers, SSO Sites as well as the PSC replication agreements. The script's output is in the DOT format, a popular graph description language, which can then be used to generate a diagram like the example shown below.

[Image: vSphere deployment topology diagram generated from the script output]

Requirements:

  • vSphere 6.0 environment
  • Access to either a Windows or VCSA PSC as a System Administrator
  • SSO Administrator credentials

Step 1 - Download the extract_vsphere_deployment_topology.py python script to either your Windows vCenter Server PSC or vCenter Server Appliance (VCSA) PSC.

Step 2 - To run on a vCenter Server Appliance (VCSA) PSC, you will first need to make the script executable by running the following command:

chmod +x extract_vsphere_deployment_topology.py

To run on a vCenter Server for Windows PSC, you will first need to update your PATH environment variable to include the Python interpreter. Follow the directions here if you have never done this before and add C:\Program Files\VMware\vCenter Server\python
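One quick (if blunt) way to do this from a command prompt is with the setx command; the path below assumes a default installation, setx writes to your user-level PATH, and you will need to open a new command prompt for the change to take effect:

setx PATH "%PATH%;C:\Program Files\VMware\vCenter Server\python"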

Step 3 - The script requires that you provide an SSO Administrator username and password. You can specify everything on the command line, or you can omit the password, in which case you will be prompted to enter it.

To run the script on a VCSA PSC, run the following command specifying your credentials:

./extract_vsphere_deployment_topology.py  -u administrator@vghetto.local -p VMware1!

To run the script on a Windows VC PSC, run the following command specifying your credentials:

python C:\Users\primp\Desktop\extract_vsphere_deployment_topology.py  -u administrator@vsphere.local -p VMware1!
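If you would rather not have the password end up in your shell history, simply omit the -p option as mentioned earlier and the script will prompt you for it. For example, on the VCSA:

./extract_vsphere_deployment_topology.py -u administrator@vghetto.local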

Here is an example of the output from one of my environments.
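To give you an idea of the shape of that output, here is a hypothetical sketch in DOT format; the node names are invented and the colors follow the conventions described in Step 4 below (Blue for a vCenter Server, Green for a Platform Services Controller):

digraph vghetto_vsphere_topology {
  // Hypothetical example: two PSCs replicating within an SSO site
  // and a vCenter Server pointed at the first PSC
  "psc-01.vghetto.local" [style=filled, color=green];
  "psc-02.vghetto.local" [style=filled, color=green];
  "vc-01.vghetto.local" [style=filled, color=blue];
  "psc-01.vghetto.local" -> "psc-02.vghetto.local" [dir=both];
  "vc-01.vghetto.local" -> "psc-01.vghetto.local";
}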

Step 4 - Save the output from the script, then open a browser that has internet access and go to the following URL: http://www.webgraphviz.com. Paste in the output and click on the "Generate Graph" button, which will generate a visual diagram of your vSphere deployment. Hopefully it is pretty straightforward to understand, and I have also colorized the nodes to represent the different functionality, such as Blue for a vCenter Server and Green for a Platform Services Controller.
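Alternatively, if you have Graphviz installed locally, you can render the same DOT output offline instead of using the website above; for example, assuming you saved the script output to a file called topology.dot:

dot -Tpng topology.dot -o topology.png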

[Image: vSphere deployment topology diagram rendered by WebGraphviz]

In addition, if you have deployed an Embedded vCenter Server which is replicating with an External PSC (a deprecated topology that will not be supported in the future), you will notice the node is colored Orange instead, as seen in the example below.

[Image: topology diagram with an Embedded vCenter Server node colored Orange]

This is pretty cool if you ask me! 😀 Just imagine the possibilities if you could use such an interface to also manage operations across a given vSphere deployment when it comes to the installation, upgrade and expansion of your existing environment. What do you think, would this be useful?

I have done a limited amount of testing across Windows and the VCSA using a couple of deployment scenarios. It is very possible that I have missed something, so if you run into issues, it would help to provide some details about your topology so I can troubleshoot further. I have not done any testing with load balancers, so the diagram may well be inaccurate for those scenarios, but I would love to hear from folks who have tried running the script in such environments.

Test driving VMware Photon Controller Part 3c: Deploying Docker Swarm

In this final article, we will now take a look at deploying a Docker Swarm Cluster running on top of Photon Controller.

[Image: Docker Swarm Cluster running on Photon Controller]

A minimal deployment for a Docker Swarm Cluster consists of 3 Virtual Machines: 1 Master, 1 etcd and 1 Slave. If you only have 16GB of memory on your ESXi host, then you will need to override the default VM Flavor used, which is outlined in Step 1. If you have more than 16GB of memory, then you can skip Step 1 and move directly to Step 2.

Deploying Docker Swarm Cluster

Step 1 - If you have not already created the cluster-tiny-vm VM Flavor from the previous article, which consists of 1 vCPU/1GB memory, please run the following command:

./photon -n flavor create --name cluster-tiny-vm --kind "vm" --cost "vm 1 COUNT,vm.flavor.cluster-other-vm 1 COUNT,vm.cpu 1 COUNT,vm.memory 1 GB,vm.cost 1 COUNT"

Step 2 - Download the Swarm VMDK from here

Step 3 - We will now upload our Swarm image and make a note of the ID that is generated after the upload completes by running the following command:

./photon -n image create photon-swarm-vm-disk1.vmdk -n photon-swarm-vm.vmdk -i EAGER

Step 4 - Next, we will need the ID of our Photon Controller deployment, which is required in the next step. Retrieve it by running the following command:

./photon deployment list

Step 5 - We will now enable the Docker Swarm Cluster Orchestration on our Photon Controller instance by running the following command and specifying the ID of your deployment as well as the ID of the Swarm image from the previous two steps:

./photon -n deployment enable-cluster-type cc49d7f7-b6c4-43dd-b8f3-fe17e6648d0f -k SWARM -i 13ae437d-3fd1-48a3-9d14-287b9259cbad

[Screenshot: enabling the SWARM cluster type on the deployment]

Step 6 - We are now ready to spin up our Docker Swarm Cluster by simply running the following command and substituting the network information from your environment. We are only going to deploy a single Swarm Slave (if you have additional resources, you can spin up more, or you can always re-size the cluster after it has been deployed). Do not forget to override the default VM Flavor by specifying the -v option with the name of the VM Flavor we created earlier, cluster-tiny-vm. You can just hit enter when prompted for the additional etcd IP Addresses.

./photon cluster create -n swarm-cluster -k SWARM --dns 192.168.1.1 --gateway 192.168.1.1 --netmask 255.255.255.0 --etcd1 192.168.1.45 -s 1 -v cluster-tiny-vm

[Screenshot: photon cluster create output for the Swarm Cluster]

Step 7 - The process can take a few minutes and you should see a message like the one shown above, prompting you to run the cluster show command to get more details about the state of the cluster.

./photon cluster show 276b6934-6eb5-42fd-9fb1-031e311b3c45

[Screenshot: photon cluster show output for the Swarm Cluster]

At this point, you have successfully deployed a Docker Swarm Cluster running on Photon Controller. What you will be looking for in this screen is the IP Address of the Master VM, which we will need in the next section if you plan to explore Docker Swarm a bit more.

Exploring Docker Swarm

To interact with your newly deployed Docker Swarm Cluster, you will need to ensure that you have a Docker client that matches the Docker version running in the Docker Swarm Cluster, which is currently 1.20. The easiest way is to deploy PhotonOS 1.0 TP2 using either the ISO or OVA.

To verify that you have the correct Docker client version, you can just run the following command:

docker version

[Screenshot: docker version output]

Once you have verified that your Docker client matches the version, we will go ahead and set the DOCKER_HOST variable to point to the IP Address of our Master VM, which you can find in Step 7 above. Once you have identified the IP Address, go ahead and run the following command to set the variable:

export DOCKER_HOST=tcp://192.168.1.105:8333
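As a quick sanity check, with DOCKER_HOST now pointing at the Swarm Master, a plain docker info should report on the Swarm nodes rather than on a local Docker daemon:

docker info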

We can run the following command to list the Docker Containers running for our Docker Swarm Cluster:

docker ps -a

[Screenshot: docker ps -a output against the Swarm Cluster]

Let's go ahead and download a Docker Container which we can then run on our Docker Swarm Cluster. We will download the VMware PhotonOS Docker Container by running the following command:

docker pull vmware/photon

Once the Docker Container has been downloaded, we can then run it with the following command:

docker run --rm -it vmware/photon

[Screenshot: running the vmware/photon Docker Container]

For those familiar with Docker, you can see how easy it is to interact with the Docker interface you already know. Underneath the hood, Photon Controller is automatically provisioning the necessary infrastructure to run your applications. This concludes our series on test driving VMware's Photon Controller. If you have made it this far, I hope you have enjoyed the series, and if you have any feedback or feature requests for Photon Controller, be sure to file an issue on the Photon Controller Github page.

Test driving VMware Photon Controller Part 3b: Deploying Mesos

In the previous article, we demonstrated the first Cluster Orchestration solution supported by Photon Controller by deploying a fully functional Kubernetes Cluster using Photon Controller. In this article, we will now look at deploying a Mesos Cluster using Photon Controller.

[Image: Mesos Cluster running on Photon Controller]

The minimal deployment for a Mesos Cluster in Photon Controller consists of 6 Virtual Machines: 3 Masters, 1 Zookeeper, 1 Marathon & 1 Slave. If you only have 16GB of memory on your ESXi host, then you will need to override the default VM Flavor when deploying a Mesos Cluster. If you have more than 16GB of available memory, then you can skip Step 1 and move to Step 2 directly.

Deploying Mesos Cluster

Step 1 - If you have not already created the cluster-tiny-vm VM Flavor from the previous article, which consists of 1 vCPU/1GB memory, please run the following command:

./photon -n flavor create --name cluster-tiny-vm --kind "vm" --cost "vm 1 COUNT,vm.flavor.cluster-other-vm 1 COUNT,vm.cpu 1 COUNT,vm.memory 1 GB,vm.cost 1 COUNT"

Step 2 - Download the Mesos VMDK from here

Step 3 - We will now upload our Mesos image and make a note of the ID that is generated after the upload completes by running the following command:

./photon -n image create photon-mesos-vm-disk1.vmdk -n photon-mesos-vm.vmdk -i EAGER

Step 4 - Next, we will need the ID of our Photon Controller deployment, which is required in the next step. Retrieve it by running the following command:

./photon deployment list

Step 5 - We will now enable the Mesos Cluster Orchestration on our Photon Controller instance by running the following command and specifying the ID of your deployment as well as the ID of the Mesos image from the previous two steps:

./photon -n deployment enable-cluster-type 569c3963-2519-4893-969c-aed768d12623 -k MESOS -i 51c331ea-d313-499c-9d8f-f97532dd6954

[Screenshot: enabling the MESOS cluster type on the deployment]

Step 6 - We are now ready to spin up our Mesos Cluster by simply running the following command and substituting the network information from your environment. We are only going to deploy a single Mesos Slave (if you have additional resources, you can spin up more, or you can always re-size the cluster after it has been deployed). Do not forget to override the default VM Flavor by specifying the -v option with the name of the VM Flavor we created earlier, cluster-tiny-vm. You can just hit enter when prompted for the two additional zookeeper IP Addresses.

./photon cluster create -n mesos-cluster -k MESOS --dns 192.168.1.1 --gateway 192.168.1.1 --netmask 255.255.255.0 --zookeeper1 192.168.1.45 -s 1 -v cluster-tiny-vm

[Screenshot: photon cluster create output for the Mesos Cluster]

Step 7 - The process can take a few minutes and you should see a message like the one shown above, prompting you to run the cluster show command to get more details about the state of the cluster.

./photon cluster show bf962c3a-28a2-435d-bd96-0313ca254667

[Screenshot: photon cluster show output for the Mesos Cluster]

At this point, you have successfully deployed a Mesos cluster running on Photon Controller. What you will be looking for in this screen is the IP Address of the Marathon VM, which is the management interface to Mesos. We will need this IP Address in the next section if you plan to explore Mesos a bit more.

Exploring Mesos

Using the IP Address obtained in the previous step, you can now open a web browser and go to http://[MARATHON-IP]:8080, which should launch the Marathon UI as shown in the screenshot below. If you wish to deploy a simple application using Marathon, you can follow the workflow here. Since we deployed Mesos using the tiny VM Flavor, we would not be able to exercise the final step of deploying an application running on Mesos. If you have more resources, I definitely recommend you give the workflow a try.

[Screenshot: Marathon UI]

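If you prefer an API over the UI, Marathon also exposes a REST endpoint for deploying applications. The following is just a sketch using Marathon's standard /v2/apps endpoint with a trivial shell-loop application; the app definition is made up for illustration and, as noted above, running it would need more resources than our tiny VM Flavor provides:

curl -X POST http://[MARATHON-IP]:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
    "id": "basic-0",
    "cmd": "while [ true ]; do echo Hello Marathon; sleep 5; done",
    "cpus": 0.1,
    "mem": 16,
    "instances": 1
  }'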
In our last and final article of the series, we will be covering the last Cluster Orchestration solution supported on Photon Controller, which is Docker Swarm.

How to override the default CPU/Memory when deploying Photon Controller Management VM?

When installing Photon Controller, the resource configuration of the Management VM is sized dynamically, as mentioned here, based on the total available CPU, Memory and Storage of the physical ESXi host it is being provisioned to. This is generally not a problem when deploying Photon Controller in production with larger hosts, but if you are trying to play with it in a home lab or a resource-constrained environment, then this can be a challenge.

Currently, the minimal requirement to play with Photon Controller is a single physical or Nested ESXi VM configured with at least 4 vCPU, 16GB of memory and 50GB of storage. The biggest constraint for most home labs is usually memory. As an example, using the configuration above, the default size used for the Photon Controller Management VM is 2 vCPU and 4GB of memory, which is quite hefty for such a small environment. It could potentially get worse with slightly larger hosts, and ultimately this impacts the amount of workload you can run on the ESXi host, especially if you only have one.

In talking to one of the Engineers on the Photon Controller team, I learned about a neat little capability, currently only available in the Photon CLI, that allows you to override the default CPU, Memory and Storage settings for the Photon Controller Management VM. The following three variables can be added to a deployment configuration YAML file to override the default behavior.

  • MANAGEMENT_VM_CPU_COUNT_OVERWRITE - Number of vCPUs for the Management VM
  • MANAGEMENT_VM_MEMORY_GB_OVERWRITE - Amount of memory for the Management VM (it is actually in MB even though the variable name says GB)
  • MANAGEMENT_VM_DISK_GB_OVERWRITE - Amount of storage for the Management VM (there seems to be a bug where this property does not actually override the default storage configuration)

Note: One thing I found while testing this capability is that you MUST specify all three variables, regardless of whether you wish to override only one of the resources. If you do not, you will see a strange 500 error code when running the CLI. I assume this is probably a bug and have already reported it to the Engineering team.

Below are the recommended instructions if you plan to override the default configuration for the Photon Controller Management VM.

Step 1 - Open a browser to the IP Address of your Photon Controller Installer VM and go through the wizard as you normally would, but DO NOT click on the Deploy button once you are done. Instead, click on the "Export Configuration" option and save the configuration to your desktop. You can then close the Photon Controller Installer UI window, as we will not be using the UI to deploy.

[Screenshot: Export Configuration option in the Photon Controller Installer UI]

Step 2 - Open the Photon Controller deployment configuration YAML file that you saved in the previous step using a text editor of your choice. There are two modifications that we need to make. The first is adding the following three variables under the "metadata" section towards the top, replacing the values with the ones you wish to use. I recommend 2 vCPU/2GB of memory. For storage, there seems to be a bug in which the override does not work, but you STILL MUST specify it in the configuration file or else the deployment will fail. Go ahead and leave it at the default of 80.

MANAGEMENT_VM_CPU_COUNT_OVERWRITE: 2
MANAGEMENT_VM_MEMORY_GB_OVERWRITE: 2048
MANAGEMENT_VM_DISK_GB_OVERWRITE: 80

Step 3 - The second modification we need to make to the YAML file is to how the datastores are listed under the image_datastores property. In the UI, this property is stored as a collection; the Photon CLI, however, expects it as a string. The fix is quite simple (illustrated below with a hypothetical datastore named datastore1; use whatever datastore name appears in your file). You just need to change the following

from

image_datastores:
- datastore1

to

image_datastores: "datastore1"

At this point, we are done modifying our YAML configuration file and we can save our changes and get ready to deploy.

Step 4 - You will need the Photon CLI for the remainder of the steps. If you have not downloaded it yet, take a look here for the details. Point the Photon CLI at the IP Address of your Photon Controller Installer VM by running the following command:

./photon target set http://192.168.1.250

Step 5 - We will now deploy Photon Controller using the CLI, overriding the default algorithm for how the Photon Controller Management VM is configured, by running the following command and specifying the full path to your YAML file:

./photon system deploy esxcloud-installation-export-config-vghetto-sample.yaml

[Screenshot: photon system deploy progress]

Once the deployment has started, you will be presented with a progress bar. If everything is successful, you should be able to log in to your ESXi host using either the ESXi Embedded Host Client or the vSphere C# Client, and you should see that your Photon Controller Management VM has been deployed with the overrides you specified earlier.

[Screenshot: Photon Controller Management VM deployed with the overridden configuration]

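If you happen to have SSH access to the host, another quick way to double-check the configured resources is with vim-cmd from the ESXi Shell; the VM ID below is whatever getallvms reports for your Management VM:

# List all registered VMs and note the ID of the Management VM
vim-cmd vmsvc/getallvms
# Display the VM summary, including configured vCPU and memory
vim-cmd vmsvc/get.summary <vmid>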
If you are new to Photon Controller, be sure to check out my blog series on test driving Photon Controller: