In the previous article, we walked through installing govmomi, the vSphere SDK for Go, and govc, a command-line interface built on the SDK that exposes the vSphere functionality used by the Kubernetes vSphere Provider. Now that we have all the prerequisites installed, we are ready to deploy a Kubernetes Cluster onto vSphere-based infrastructure.

UPDATE (10/26/15): It looks like the instructions for setting up a Kubernetes Cluster have since changed, and I have updated the instructions below. One of the main changes is that instead of building from source, we now simply download the Kubernetes binaries.

Step 1 - You will need to download the latest Kubernetes binary release (kubernetes.tar.gz), which can be found here. At the time of updating this article, the latest is v1.2.0-alpha2.

Step 2 - Go ahead and extract the contents of the kubernetes.tar.gz file by running the following command:

tar -zxvf kubernetes.tar.gz

Step 3 - Download the Kubernetes VMDK using either "wget" or "curl", depending on what is available on your system. Since I am on a Mac, which only has curl by default, here are the two commands depending on which download utility you have access to:

wget https://storage.googleapis.com/govmomi/vmdk/kube.vmdk.gz{,.md5}
curl -O https://storage.googleapis.com/govmomi/vmdk/kube.vmdk.gz{,.md5}

Once the download has completed, you should see two files in your working directory: kube.vmdk.gz and kube.vmdk.gz.md5.
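Since a .md5 file is downloaded alongside the VMDK, it is worth verifying the checksum before continuing. Here is a minimal sketch of that workflow, demonstrated against a small throwaway file rather than the real kube.vmdk.gz (note that the exact format of the published .md5 file may differ, and on OS X you may need the `md5` utility instead of `md5sum`):

```shell
# Create a stand-in file plus a checksum file in md5sum's "HASH  FILENAME" format
printf 'stand-in for kube.vmdk.gz' > sample.vmdk.gz
md5sum sample.vmdk.gz > sample.vmdk.gz.md5

# -c re-reads the checksum file and reports OK when the hash matches
md5sum -c sample.vmdk.gz.md5
# prints: sample.vmdk.gz: OK
```

If the checksum fails, re-download the VMDK before moving on; an incomplete download will only surface later as a broken upload or boot failure.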

Step 4 - Next, we need to decompress the VMDK by running the following command:

gzip -d kube.vmdk.gz
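
If you want to sanity-check the archive before decompressing it, gzip's -t flag tests integrity without writing anything to disk. A small self-contained sketch using a throwaway file (the filenames here are illustrative, not part of the original workflow):

```shell
# Make a demo gzip archive to illustrate the workflow
printf 'pretend vmdk bytes' > demo.vmdk
gzip demo.vmdk                  # produces demo.vmdk.gz and removes demo.vmdk

# -t tests integrity without extracting; a non-zero exit means corruption
gzip -t demo.vmdk.gz && echo "archive OK"

# -d decompresses in place, restoring demo.vmdk
gzip -d demo.vmdk.gz
```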

Step 5 - Once the VMDK has been extracted, we need to upload it to a vSphere datastore. Before doing so, we must set a few environment variables that provide the connection details for your vSphere environment. Below are the commands to set them; replace the bracketed placeholders with values from your own environment.

export GOVC_URL='https://[USERNAME]:[PASSWORD]@[ESXI-HOSTNAME-IP]/sdk'
export GOVC_DATASTORE='[DATASTORE-NAME]'
export GOVC_DATACENTER='[DATACENTER-NAME]'
export GOVC_RESOURCE_POOL='*/Resources'
export GOVC_GUEST_LOGIN='kube:kube'
export GOVC_INSECURE=true

You can leave the last three variables as-is. GOVC_RESOURCE_POOL defines the full path to the root Resource Pool on an ESXi host, which always exists; for vCenter Server, it is the name of the vSphere Cluster or Resource Pool. GOVC_GUEST_LOGIN contains the credentials for the Kubernetes Master/Node VMs, which are the defaults baked into the VMDK that was downloaded. Finally, if your ESXi or vCenter Server uses a self-signed SSL Certificate, you will need to ensure GOVC_INSECURE is set.
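A missing variable tends to surface later as a cryptic govc error, so a quick sanity check after exporting can save time. This is just an illustrative snippet of my own (the variable names are the real govc ones; the loop itself is not part of the official workflow):

```shell
# Warn about any required govc variable that is unset or empty
for var in GOVC_URL GOVC_DATASTORE GOVC_DATACENTER GOVC_RESOURCE_POOL GOVC_GUEST_LOGIN; do
  if [ -z "$(eval echo "\$${var}")" ]; then
    echo "WARNING: ${var} is not set" >&2
  fi
done
```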

To upload kube.vmdk to the vSphere Datastore under a kube directory (which will be created for you), run the following command:

govc datastore.import kube.vmdk kube

Step 6 - We now have our base kube.vmdk uploaded to our ESXi host. Before we are ready to deploy our Kubernetes Cluster, we need to set the provider by running the following command:

export KUBERNETES_PROVIDER=vsphere

Step 7 - We are now ready to deploy the Kubernetes Cluster, which consists of a Kubernetes Master and 4 Kubernetes Minions, all derived from the kube.vmdk we just uploaded. To do so, run the following command:

kubernetes/cluster/kube-up.sh

Note: If you see a message about "Docker failed to install on kubernetes-minion-N", it may be related to a timing issue in which the Minion is not yet up when the Master checks. You can verify this by running the next command; otherwise, follow the instructions to bring down the Kubernetes Cluster and re-create it.

Step 8 - In the previous step, we deployed the Kubernetes Cluster, and you will see the assigned IP Addresses for the Master/Minions along with the auto-generated credentials for the Docker Containers. We can confirm that everything was created successfully by checking the number of running Minions with the following command:

cluster/kubecfg.sh list minions

Step 9 - Once we have confirmed we have 4 running Minions, we can deploy a Docker Container onto our Kubernetes Cluster. Here is an example of deploying 2 nginx instances, mapping port 8080 to port 80, by running the following command:

cluster/kubecfg.sh -p 8080:80 run dockerfile/nginx 2 myNginx

Step 10 - We should expect to see two "Pods" for the nginx instances we have instantiated, and we can verify this by running the following command:

cluster/kubecfg.sh list pods

Here are some additional commands to stop and remove the Pods:

cluster/kubecfg.sh stop myNginx
cluster/kubecfg.sh rm myNginx

You can also bring down the Kubernetes Cluster (which will destroy the Master/Minion VMs) by running the following command:

cluster/kube-down.sh

Hopefully this gave you a good introduction to the new Kubernetes vSphere Provider. I would like to reiterate that it is still under active development and the current build is an Alpha release. If you have any feedback/requests or would like to contribute, be sure to check out the Kubernetes and govmomi GitHub repositories and post your issues/pull requests.

15 thoughts on “How to deploy a Kubernetes Cluster on vSphere?”

  1. Thanks for the excellent Blog. I followed all the steps mentioned above but I am getting the error “Specified Parameter was not correct”. Checking the vpxd.log I got the following message;

    2014-11-05T15:19:41.438+05:30 [00236 error 'Default' opID=8aeb3241] Section for VMware VirtualCenter, pid=2640, version=5.1.0, build=1235232, option=Release
    -->
    2014-11-05T15:19:41.459+05:30 [00236 info 'commonvpxLro' opID=8aeb3241] [VpxLRO] -- FINISH task-126400 -- group-v3 -- vim.Folder.createVm --
    2014-11-05T15:19:41.459+05:30 [00236 info 'Default' opID=8aeb3241] [VpxLRO] -- ERROR task-126400 -- group-v3 -- vim.Folder.createVm: vmodl.fault.InvalidArgument:
    --> Result:
    --> (vmodl.fault.InvalidArgument) {
    -->   dynamicType = ,
    -->   faultCause = (vmodl.MethodFault) null,
    -->   invalidProperty = ,
    -->   msg = "A specified parameter was not correct.
    --> ",
    --> }
    --> Args:
    -->

    It will be very helpful if you can provide the necessary help.

    • Hm, I’m not sure why you might be seeing that error. Usually this has to do with something not being passed correctly into the CreateVM spec. Though I can see you’re on 5.1, I don’t believe the plugin requires 5.5. You may want to post in the Issues section of the govmomi and see if one of the developers might be able to help you debug or confirm you need vSphere 5.5

      • I’ve just run into this same problem, also on 5.1 and with no easy way to get it upgraded. I found the problem is simply that the Kubernetes scripts are trying to create VMs as type ‘debian7_64Guest’ and this version of vSphere doesn’t like that. I just edited the ‘cluster/vsphere/config-default.sh’ script to use ‘debian6_64Guest’ instead and I don’t get the error anymore.

  2. Thanks for the great writeup! Unfortunately, I don’t get past “cluster/kube-up.sh”
    cluster/kube-up.sh
    Starting cluster using provider: vsphere
    … calling verify-prereqs
    … calling kube-up
    Starting master VM (this can take a minute)…

    at which point the master server is created in vsphere, with only an IPv6 address, and the process exits. No errors anywhere…

  3. Hi, thanks for taking the time to write up this tutorial.
    I have been able to get up to executing the release.sh script. However my version of kubernetes didn’t have the script in release/, but in build/. I ran it, but needed to do it with sudo on my machine, because my user could not connect to the docker machine. When I ran it with sudo, the build succeeded.

    However now I am stuck at kube-up.sh. Here’s the result of launching kube-up.sh:

    $cluster/kube-up.sh
    … Starting cluster using provider: vsphere
    … calling verify-prereqs
    … calling kube-up
    Identity added: /home/rc/.ssh/id_rsa (/home/rc/.ssh/id_rsa)
    Starting master VM (this can take a minute)…
    Error: datastore file does not exist

    I have imported kube.vmdk in my datastore.
    govc about is giving me a similar result as yours:
    ~$ govc about
    Name: VMware ESXi
    Vendor: VMware, Inc.
    Version: 5.0.0
    Build: 469512
    OS type: vmnix-x86
    API type: HostAgent
    API version: 5.0
    Product ID: embeddedEsx
    UUID:

    I have
    GOVC_DATASTORE=datastore1

    and when I run:
    $ govc ls /ha-datacenter/datastore
    /ha-datacenter/datastore/datastore1

    I have not been able to find any hints by searching online.
    Do you have an idea I could try?

  4. I moved the vmdk to the ./kube folder in the datastore, and now I get another error:

    cluster/kube-up.sh
    … Starting cluster using provider: vsphere
    … calling verify-prereqs
    … calling kube-up
    Identity added: /home/rc/.ssh/id_rsa (/home/rc/.ssh/id_rsa)
    Starting master VM (this can take a minute)…
    Error: An error occurred during host configuration.

  5. I get to the point after uploading the kube.vmdk and building the binaries. After that I have a bad time.

    shell> hack/build-go.sh
    cmd/genconversion
    cmd/gendeepcopy
    cmd/genswaggertypedocs
    examples/k8petstore/web-server/src
    github.com/onsi/ginkgo/ginkgo
    test/e2e/e2e.test
    +++ [1025 16:23:14] Placing binaries

    shell> release/build-release.sh kubernetes
    -bash: release/build-release.sh: No such file or directory

  6. After kicking off the kube-up.sh script, everything seems OK until “Salt installed!” completes successfully – then it gets stuck waiting for curl https:///healthz to become valid, but it seems the health service never started or listened on any port. salt-master and salt-minion are running as daemons on the master and minions, but I didn’t see any kubernetes files or scripts copied anywhere. Anyone have the same issue? thanks

  7. I’m not using resource pools with VMware. I tried various strings for GOVC_RESOURCE_POOL and finally unset GOVC_RESOURCE_POOL. Nothing seems to work. I keep getting the error: default resource pool resolves to multiple instances, please specify

    Suggestions?

  8. Hi!
    Thanks for the great post William! However, my deployment fails… any ideas why?

    root@kube:/home/kube/kubernetes/kubernetes# KUBERNETES_PROVIDER=vsphere cluster/kube-up.sh
    … Starting cluster using provider: vsphere
    … calling verify-prereqs
    … calling kube-up
    Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
    Starting master VM (this can take a minute)…
    kubernetes-server-linux-amd64.tar.gz 100% 351MB 873.0KB/s 06:52

    kubernetes-salt.tar.gz 100% 44KB 44.2KB/s 00:00

    uploaded /tmp/kubernetes.fgu5oh/master-start.sh to /tmp/master-start.sh
    Using master: kubernetes-master (external IP: 10.0.30.150)

    Starting node VMs (this can take a minute)…
    uploaded /tmp/kubernetes.fgu5oh/node-start-1.sh to /tmp/node-start-1.sh
    uploaded /tmp/kubernetes.fgu5oh/node-start-0.sh to /tmp/node-start-0.sh
    uploaded /tmp/kubernetes.fgu5oh/node-start-3.sh to /tmp/node-start-3.sh
    uploaded /tmp/kubernetes.fgu5oh/node-start-2.sh to /tmp/node-start-2.sh
    Found kubernetes-minion-1 at 10.0.30.151
    Found kubernetes-minion-2 at 10.0.30.154
    Found kubernetes-minion-3 at 10.0.30.155
    Found kubernetes-minion-4 at 10.0.30.152
    Waiting for salt-master to be up on kubernetes-master …
    This may take several minutes. Bound to 60 attempts….
    [salt-master running]

    Waiting for all packages to be installed on kubernetes-master …

    This may take several minutes. Bound to 60 attempts……………………………………………………
    (Failed) rc: 1 Output:

    sudo salt “kubernetes-master” state.highstate -t 30 | grep -E “Failed:[[:space:]]+0” failed to start on 10.0.30.150. Your cluster is unlikely to work correctly. You may have to debug it by logging in.

Thanks for the comment!