virtuallyGhetto

Test driving ContainerX on VMware vSphere

06/20/2016 by William Lam 2 Comments

Over the weekend I was catching up on some of my internet reading, including Timo Sugliani's excellent weekly Tech Links (highly recommend a follow). In one of his non-VMware related links (which, funnily enough, is related to VMware), I noticed that the container startup ContainerX has just made a free version of their software available for non-production use. Given that part of the company's DNA includes VMware, I was curious to learn more about their solution and how it works, especially as it relates to VMware vSphere, which is one of the platforms it supports.

For those not familiar with ContainerX, it is described as the following:

ContainerX offers a single pane of glass for all your containers. Whether you are running on Bare Metal or VM, Linux or Windows, Private or Public cloud, you can view your entire infrastructure in one simple management console.

In this article, I will walk you through how to deploy, configure and start using ContainerX in a vSphere environment. Although an installation guide is included with the installer, I personally found the document a little difficult to follow, especially for someone who is only interested in a pure vSphere environment. The mention of bare metal at the beginning was confusing, as I was not sure what the actual requirements were; it would have been nice to have a section that covered each platform from start to end.

In any case, here are the high-level steps required to set up ContainerX for your vSphere environment:

  1. Deploy an Ubuntu (14.10/14.04) VM and install the CX Management Host software
  2. Deploy the CX Ubuntu OVA Template into the vSphere environment that will be used by the CX Management Host
  3. Configure a vSphere Elastic Cluster using the CX Management Host UI
  4. Deploy your Container/Application to your vSphere Elastic Cluster

Pre-Requisites:

  • Sign up for the free ContainerX offering here (email will contain a download link to CX Management Host Installer)
  • Access to a vSphere environment w/vCenter Server
  • An already deployed Ubuntu 14.10 or 14.04 VM (4 vCPU, 8GB vMEM & 40GB vDISK) that will be used for the CX Management Host

CX Management Host Deployment:

Step 1 - Download the CX Management Host installer for your desktop OS of choice. If you are using the Mac OS X installer, you will find that the cX.app fails to launch because it is not signed by an identified developer. You will need to change your security settings to allow applications downloaded from "anywhere" to be opened, which is a shame.
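
If you would rather not relax the global Gatekeeper setting, one workaround (my own suggestion, not from the ContainerX documentation) is to clear the quarantine attribute on just the downloaded app; the path below is only an example:

xattr -d com.apple.quarantine ~/Downloads/cX.app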

Step 2 - Accept the EULA and then select the "On Preconfigured Host" option, which expects you to have a pre-installed Ubuntu VM on which to install the CX Management Host software. If you have not pre-deployed the Ubuntu VM, stop here, go perform that step and then come back.

test-driving-containerx-on-vsphere-1
Step 3 - Next, provide the IP Address/hostname and credentials for the Ubuntu VM that you have already pre-installed. You can use the "Test" option to verify that the SSH password or private key you have provided works before proceeding further in the installer.

test-driving-containerx-on-vsphere-2
Step 4 - After you click "Continue", the installer will remotely connect to your Ubuntu VM and start the installation of the CX Management Host software. This takes a few minutes with progress being displayed at the bottom of the screen. If the install is successful, you should see the "Install FINISHED" message.

test-driving-containerx-on-vsphere-3

Step 5 - Once the installer completes, it will automatically open a browser and take you to the login screen of the CX Management Host UI (https://IP:8085). The default credentials are admin/admin.

test-driving-containerx-on-vsphere-4
At this point, you have successfully deployed the CX Management Host. The next section will walk you through setting up the CX Ubuntu Template, which the CX Management Host will use to deploy your Containers and Applications.

Preparing the CX Ubuntu Template Deployment:

Before we can create a vSphere Elastic Cluster (EC), you will need to deploy the CX Ubuntu OVA Template, which the CX Management Host will then use to deploy the CX Docker Hosts that run your Containers/Applications. When I originally went through the documentation, there was a reference to the CX Ubuntu OVA, but I was not able to find a download URL anywhere, including on ContainerX's website. I reached out to the ContainerX folks and they updated KB article 20960087 to provide a download link (appreciate the assistance over the weekend). However, it looks like their installation documentation is still missing the URL reference. In any case, you can find the download URL below for your convenience.

Step 1 - Download the CX Ubuntu OVA Template (http://update.containerx.io:8080/cx-ubuntu.ova) and deploy (but do NOT power it on) using the vSphere Web/C# Client to the vCenter Server environment that ContainerX will be consuming.

Note: I left the default VM name, which is cx-ubuntu, as I am not sure if changing it would mess up the initial vSphere environment discovery later in the process. It would be good to know whether the name can be changed.
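
If you prefer the command line, the OVA can also be deployed with OVFTOOL. Here is a rough sketch, where the datastore, network and vCenter inventory path are placeholders for your own environment and ovftool will prompt you for vCenter credentials (by default ovftool leaves the VM powered off, which is what we want here):

ovftool --acceptAllEulas --name=cx-ubuntu \
  --datastore=vsanDatastore --network="VM Network" \
  cx-ubuntu.ova \
  "vi://vcenter.example.com/Datacenter/host/Cluster"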

Step 2 - Take a VM snapshot of the powered off CX Ubuntu VM before powering it on.

test-driving-containerx-on-vsphere-7

Creating a vSphere Elastic Cluster (EC) in ContainerX:

Step 1 - Click on the "Quick Wizard" button at the top and select the "vSphere Cluster" start button. Nice touch on the old school VMware logo 🙂

test-driving-containerx-on-vsphere-5
Step 2 - Enter your vCenter Server credentials and then click on the "Login to VC" button to continue.

test-driving-containerx-on-vsphere-6
Step 3 - Here you will specify the number of CX Docker Hosts and the compute, storage, and networking resources that they will consume. The CX Docker Hosts are provisioned as VMware Linked Clones based off of the CX Ubuntu VM Template that we uploaded earlier. If you skipped that step, you will find that the drop-down box is not populated, and you will need to perform that step first before you can proceed further.

test-driving-containerx-on-vsphere-8

Note: It would have been nice if, when the CX Ubuntu VM is not detected, the wizard automatically prompted you to deploy it without having to go back. I did not even realize this particular template was required, since I was not able to find the original download link in any of the instructions.

Step 4 - This step is optional, but you also have the ability to create what are known as Container Pools, which allow you to set both CPU and Memory limits (supporting over-commitment) within your EC. It is not exactly clear how Container Pools work, but it sounds like these limits are applied within the CX Docker Host VMs?

test-driving-containerx-on-vsphere-9
Step 5 - Once you have confirmed the settings to be used for your vSphere EC, you can then click Next to begin the creation. This process should not take too long, and once everything has successfully been deployed, you should see a success message and a "Done" button which you can click to close the wizard.

test-driving-containerx-on-vsphere-10
Step 6 - If we go back to the CX Management UI home page, we should now see our new vSphere EC, which in my example is called "vSphere-VSAN-Cluster". There is some basic summary information about the EC, including the number of Container Pools and Hosts and their utilization. You may also have noticed that there are 12 Containers displayed in the UI, which I found a bit strange given I had not deployed anything yet. I later realized that these are actually CX Docker Containers running within the CX Docker Hosts, which I assume provide communication back to the CX Management Host. I think it would be nice to separate these numbers into "Management" and actual "Application" Containers; the same goes for the resource utilization information.

test-driving-containerx-on-vsphere-11

Deploying a Container on ContainerX:

Under the "Applications" tab of your vSphere EC, you can deploy either a standalone Docker Container or some of the pre-defined Applications that have been bundled as part of the CX Management Host.

test-driving-containerx-on-vsphere-12
We will start off by deploying a very simple Docker Container. In this example, I will select my first Container Pool, ContainerPool-1, and then select the "A Container" button. Since we do not have a repository from which to select a Container to deploy, click on the "Launch a Container" button towards the top.

Note: I think I may have found a UI bug in which the Container Pool that you select in the drop-down is not properly displayed when you go to deploy the Container or Application. For example, if you pick Container Pool 1, it will say that you are about to deploy to Container Pool 2. I found that you have to re-select the same drop-down a second time for it to display properly, and it is not clear whether this is merely a cosmetic bug or whether it is actually using a Container Pool that I did not specify.

Step 1 - Specify the Docker Image you wish to launch; if you do not have one off hand, you can use the PhotonOS Docker Container (vmware/photon). Specify a Container name, and optionally add additional options using the advanced settings button, such as environment variables, network ports, Docker Volumes, etc. For this example, we will keep it simple; go ahead and click on the "Launch App" button to deploy the Container.

test-driving-containerx-on-vsphere-13
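
Behind the scenes, I assume the UI is driving something roughly equivalent to a plain docker run against one of the CX Docker Hosts. A minimal sketch of what that would look like (the container name, environment variable and port mapping below are purely illustrative):

docker run -d --name photon-test \
  -e MY_VAR=somevalue \
  -p 8080:80 \
  vmware/photon
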
Step 2 - You should see that our PhotonOS Docker Container started and then shortly afterwards exited. Not a very interesting demo, but you get the idea.

test-driving-containerx-on-vsphere-14
Note: It would be really nice to be able to get the output from the Docker Container; even running a command like "uname -a" did not return any visible output that I could see from the UI.

Deploying an Application on ContainerX:

The other option is to deploy a sample application that is pre-bundled within the CX Management Host (I assume you can add your own application, as it looks to be just a Docker Compose file). From the drop-down, select the Container Pool into which you wish to deploy the application and then click on the "An Application" button. In our example, we will deploy the WordPress application.

Step 1 - Select the application you wish to deploy by clicking on the "Power" icon.

test-driving-containerx-on-vsphere-21
Step 2 - Give the application a name and then click on the "Launch App" button to deploy the application.

test-driving-containerx-on-vsphere-16
Step 3 - The deployment of the application can take several minutes, but once completed, you should see a summary view like the one shown below. You can also find the details of how to reach the WordPress application we just deployed by looking for the IP Address and the external port, as highlighted below.

test-driving-containerx-on-vsphere-17
Step 4 - To verify that our WordPress application is working, go ahead and open a new browser window, specify the IP Address and the port shown in the previous step, and you should be taken to the initial WordPress setup screen.

test-driving-containerx-on-vsphere-18
If you need to access the CX Docker Hosts, whether for publishing Containers/Applications by your end users or for troubleshooting purposes, you can easily find the environment information under the "Pools" tab. There is a "Download access credentials" option which provides a zip file containing platform-specific snippets of the CX Docker Host connection information.

test-driving-containerx-on-vsphere-22
Since I use a Mac, I just need to run the env.sh script and then run my normal docker commands (this assumes you have the Docker Beta Client for Mac OS X; otherwise you will need a Docker Client). You can see from the screenshot below the three Docker Containers we had deployed earlier.
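
I have not dug into the contents of the zip file, but I assume env.sh simply exports the usual Docker client environment variables (DOCKER_HOST and, if TLS is enabled, DOCKER_TLS_VERIFY/DOCKER_CERT_PATH) pointing at a CX Docker Host, so the workflow would look something like this:

source env.sh
docker ps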

test-driving-containerx-on-vsphere-23

Summary:

Having only spent a short amount of time playing with ContainerX, I thought it was a neat solution. The installation of the CX Management Host was pretty quick and straightforward, and I was glad to see a multi-desktop OS installer. It did take me a bit of time to realize what the actual requirements were for a pure vSphere environment, as mentioned earlier; perhaps an end-to-end document for vSphere would clear all this up. The UI was pretty easy to use and intuitive for the most part. I did find not being able to edit any of the configurations a bit annoying and ended up deleting and re-creating some of them. I would have liked an easier way to map between the Container Pools (Pools tab) and their respective CX Docker Hosts without having to download the credentials or navigate to another tab. I also found in certain places that selection or navigation of objects was not very clear due to the subtle transitions in the UI, which made me think there was a display bug.

I am still trying to wrap my head around the Container Pool concept. I am not sure I understand the benefits of it, or rather how the underlying resource management actually works. It seems like today it is only capable of setting CPU and Memory limits, which are applied within the CX Docker Host VMs? I am not sure if customers are supposed to create different-sized CX Docker Host VMs? I was pretty surprised that I did not see more use of the underlying vSphere Resource Management capabilities in this particular area.

The overall architecture of ContainerX for vSphere looks very similar to VMware's vSphere Integrated Containers (VIC) solution. Instead of a CX Docker Host VM, VIC has the concept of a Virtual Container Host (VCH), which is backed by a vSphere Resource Pool. VIC creates what is known as a Container VM that contains only the Container/Application, running as a VM rather than in a VM. These Container VMs are instantiated using vSphere's Instant Clone capability from a tiny PhotonOS Template. Perhaps I am a bit biased here, but in addition to providing an integrated and familiar interface to each of the respective consumers, namely vSphere Administrators (the familiar VM construct, leveraging the same set of tools with extended Docker Container info) and Developers (simply accessing the Docker endpoint with the tools they are already using), the other huge benefit of the VIC architecture is that it allows the Container VMs to benefit from all of the underlying vSphere platform capabilities. vSphere Administrators can apply granular resource and policy-based management on a per-Container/Application basis if needed, which is a pretty powerful capability if you ask me. It will be interesting to see if there will be deeper integration from a management and operational standpoint in the future for ContainerX.

All in all, very cool stuff from the ContainerX folks; I am looking forward to what comes next. DockerCon is also this week, and if you happen to be at the event, be sure to drop by the VMware booth, as I hear they will be showing off some pretty cool stuff. I believe the ContainerX folks will also be at DockerCon, so be sure to drop by their booth and say hello.

Filed Under: Automation, Cloud Native, vSphere Tagged With: cloud native apps, container, ContainerX, Docker, VIC, vSphere, vSphere Integrated Containers

Getting Started with Tech Preview of Docker Volume Driver for vSphere

05/31/2016 by William Lam 8 Comments

A couple of weeks ago, I got an early sneak peek at some of the work being done in VMware's Storage and Availability Business Unit (SABU) on providing storage persistence for Docker Containers in a vSphere-based environment. Today, VMware has open sourced a new Docker Volume Driver for vSphere (Tech Preview) that will enable customers to easily take advantage of their existing vSphere Storage (VSAN, VMFS and NFS) and provide persistent storage access to Docker Containers running on top of the vSphere platform. Both Developers and vSphere Administrators will have familiar interfaces for managing and interacting with these Docker Volumes from vSphere, which we will explore further below.

The new Docker Volume Driver for vSphere is comprised of two components. The first is the vSphere Docker Volume Plugin, which is installed inside of a Docker Host (VM) and allows you to instantiate new Docker Volumes. The second is the vSphere Data Volume Driver, which is installed on the ESXi Hypervisor host and handles the VMDK creation and the mapping of Docker Volume requests back to the Docker Hosts. If you have shared storage across your ESXi hosts, you can have a VM on one ESXi host create a Docker Volume and have a completely different VM on another ESXi host mount the exact same Docker Volume. Below is a diagram to help illustrate the different components that make up the Docker Volume Driver for vSphere.
docker-volume-driver-for-vsphere-00
Below is a quick tutorial on how to get started with the new Docker Volume Driver for vSphere.

Pre-Requisites

  • vSphere ESXi 6.0+
  • vSphere Storage (VSAN, VMFS or NFS) for ESXi host (shared storage required for multi-ESXi host support)
  • Docker Host (VM) running Docker 1.9+ (recommend using the VMware Photon 1.0 RC OVA, but Ubuntu 14.04 works as well)

Getting Started

Step 1 - Download the vSphere Docker Volume Plugin (RPM or DEB) and vSphere Docker Volume Driver VIB for ESXi

Step 2 - Install the vSphere Docker Volume Driver VIB on ESXi by SCP'ing the VIB to the ESXi host and then running the following command, specifying the full path to the VIB:

esxcli software vib install -v /vmware-esx-vmdkops-0.1.0.tp.vib -f

docker-volume-driver-for-vsphere-1
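
To confirm the VIB installed correctly, you can list the installed VIBs and filter on the name (I am assuming the VIB name contains "vmdkops", matching the file name above):

esxcli software vib list | grep vmdkops
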
Step 3 - Install the vSphere Docker Volume Plugin by SCP'ing the RPM or DEB file to your Docker Host (VM) and then run one of the following commands:

rpm -ivh docker-volume-vsphere-0.1.0.tp-1.x86_64.rpm
dpkg -i docker-volume-vsphere-0.1.0.tp-1.x86_64.deb

docker-volume-driver-for-vsphere-2

Creating Docker Volumes on vSphere (Developer)

To create your first Docker Volume on vSphere, a Developer only needs access to a Container Host (VM), like PhotonOS for example, that has the vSphere Docker Volume Plugin installed. They then use the familiar Docker CLI to create a Docker Volume like they normally would; there is nothing they need to know about the underlying infrastructure.

Run the following command to create a new Docker Volume called vol1 with the capacity of 10GB using the new vmdk driver:

docker volume create --driver=vmdk --name=vol1 -o size=10gb

We can list all the Docker Volumes that are available by running the following command:

docker volume ls

We can also inspect a specific Docker Volume by running the following command and specifying the name of the volume:

docker volume inspect vol1

docker-volume-driver-for-vsphere-3
Let's actually do something with this volume now by attaching it to a simple Busybox Docker Container, by running the following command:

docker run --rm -it -v vol1:/mnt/volume1 busybox

docker-volume-driver-for-vsphere-4
As you can see from the screenshot above, I have now successfully accessed the Docker Volume that we created earlier and I am able to write to it. If you have another VM that resides on the same underlying shared storage, you can also mount the Docker Volume that you just created from a different system.
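
For example, assuming a second Docker Host VM on the same shared datastore with the plugin installed, something like the following should show the data written from the first VM:

docker run --rm -it -v vol1:/mnt/volume1 busybox ls -l /mnt/volume1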

Pretty straightforward and easy, right? Happy Developers 🙂

Managing Docker Volumes on vSphere (vSphere Administrator)

For the vSphere Administrators, you must be wondering: did I just give my Developers full access to the underlying vSphere Storage to consume as much storage as possible? Of course not, we have not forgotten about our VI Admins, and we have some tools to help. Today, there is a CLI utility located at /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py which runs directly in the ESXi Shell (hopefully this will turn into an API in the future) and provides visibility into how much storage is being consumed (provisioned and used) by the individual Docker Volumes, as well as who is creating them and their respective Virtual Machine mappings.

Let's take a look at a quick example by logging into the ESXi Shell. To view the list of Docker Volumes that have been created, run the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls

You should see the name of the Docker Volume that we created earlier and the respective vSphere Datastore to which it was provisioned. At the time of writing, these were the only two default properties displayed out of the box. You can add additional columns by simply using the -c option, for example by running the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls -c volume,datastore,created-by,policy,attached-to,capacity,used

docker-volume-driver-for-vsphere-5
Now we get a bunch more information, like which VM created the Docker Volume, the BIOS UUID of the VM that the Docker Volume is currently attached to, the VSAN VM Storage Policy that was used (applicable to VSAN environments only), and the provisioned and used capacity. In my opinion, this should be the default set of columns, and this is something I have fed back to the team, so perhaps this will be the default when the Tech Preview is released.

One thing to be aware of is that the Docker Volumes (VMDKs) will automatically be provisioned onto the same underlying vSphere Datastore as the Docker Host VM (which makes sense, given that it needs to be able to access it). In the future, it may be possible to specify where you want your Docker Volumes to be provisioned. If you have any feedback on this, be sure to leave a comment on the Issues page of the Github project.

Docker Volume Role Management

Although not yet implemented in the Tech Preview, it looks like VI Admins will also have the ability to create Roles that restrict the types of Docker Volume operations that a given set of VM(s) can perform as well as the maximum amount of storage that can be provisioned.

Here is an example of what the command would look like:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py role create --name DevLead-Role --volume-maxsize 100GB --rights create,delete,mount --matches-vm photon-docker-host-*

Docker Volume VSAN VM Storage Policy Management

Since VSAN is one of the supported vSphere Storage backends with the new Docker Volume Driver, VI Admins will also have the ability to create custom VSAN VM Storage Policies that can then be specified during Docker Volume creation. Let's take a look at how this works.

To create a new VSAN Policy, you will need to specify the name of the policy and provide the set of VSAN capabilities formatted using the same syntax found in the esxcli vsan policy getdefault command. Here is a mapping of the VSAN capabilities to the attribute names:

VSAN Capability Description          VSAN Capability Key
Number of failures to tolerate       hostFailuresToTolerate
Number of disk stripes per object    stripeWidth
Force provisioning                   forceProvisioning
Object space reservation             proportionalCapacity
Flash read cache reservation         cacheReservation

Run the following command to create a new VSAN Policy called FTT=0, which sets Failures to Tolerate to 0 and Force Provisioning to true:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy create --name FTT=0 --content '(("hostFailuresToTolerate" i0) ("forceProvisioning" i1))'

docker-volume-driver-for-vsphere-6
If we now go back to our Docker Host, we can create a second Docker Volume called vol2 with a capacity of 20GB, this time also specifying our new VSAN Policy called FTT=0, by running the following command:

docker volume create --driver=vmdk --name=vol2 -o size=20gb -o vsan-policy-name=FTT=0

We can also easily see which VSAN Policies are in use by simply listing all of the policies (the command below is my assumption of the admin CLI's policy listing subcommand, as only its output is shown in the screenshot):
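
/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy ls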

docker-volume-driver-for-vsphere-7
All VSAN Policies and Docker Volumes (VMDK) that are created are stored under a folder called dockvols in the root of the vSphere Datastore as shown in the screenshot below.

docker-volume-driver-for-vsphere-8
Hopefully this gave you a nice overview of what the Docker Volume Driver for vSphere can do in its first release. Remember, this is still a Tech Preview and our Engineers would love to get your feedback on the things you like, new features you would like to see, or things that we can improve on. You can visit the project page on Github here, and if you have any questions or run into bugs, be sure to submit an issue here or contribute back!

Filed Under: Automation, Cloud Native, Docker, ESXi, VSAN, vSphere Tagged With: cloud native apps, container, Docker, docker volume, esxi, nfs, vmdkops_admin.py, vmfs, VSAN

Docker Container for the Ruby vSphere Console (RVC)

11/08/2015 by William Lam 2 Comments

The Ruby vSphere Console (RVC) is an extremely useful tool for vSphere Administrators and has been bundled as part of vCenter Server (Windows and the vCenter Server Appliance) since vSphere 6.0. One feature that is only available in the VCSA's version of RVC is the VSAN Observer which is used to capture and analyze performance statistics for a VSAN environment for troubleshooting purposes.

For customers who are still using the Windows version of vCenter Server and wish to leverage this tool, it is generally recommended that you deploy a standalone VCSA just for the VSAN Observer capability, which does not require any additional licensing. Although it only takes 10 minutes or so to set up, having to download and deploy a full-blown VCSA just to use the VSAN Observer is definitely not ideal, especially if you are resource constrained in your environment. You also may only need the VSAN Observer for a short amount of time, but it could take you longer to deploy, and in a troubleshooting situation, time is of the essence.

I recently came across an internal Socialcast thread in which one of the suggestions was: why not build a tiny Photon OS VM that already contains RVC? Instead of building a Photon OS image specific to RVC, why not just create a Docker Container for RVC? That way, you could pull down the Docker Container onto Photon OS or any other system that has Docker installed. In fact, I had already built a Docker Container for some handy VMware Utilities, so it was simple enough to add an RVC Docker Container as well.

The one challenge I had was that the current RVC Github repo does not contain the latest vSphere 6.x changes. The fix was simple: I just copied the latest RVC files from a vSphere 6.0 Update 1 deployment of the VCSA (/opt/vmware/rvc and /usr/bin/rvc) and used those to build my RVC Docker Container, which is now hosted on Docker Hub here and includes the Dockerfile in case someone is interested in how I built it.

To use the RVC Docker Container, you just need access to a Linux Container Host, for example VMware Photon OS, which can be deployed using an ISO or OVA. For instructions on setting that up, please take a look here; it should only take a minute or so. Once logged in, you just need to run the following commands to pull down the RVC Docker Container and start the container:

docker pull lamw/rvc
docker run --rm -it lamw/rvc

ruby-vsphere-console-docker-container-1
As seen in the screenshot above, once the Docker Container has started, you can access RVC like you normally would. Below is a quick example of logging into one of my VSAN environments and using RVC to run the VSAN Health Check command.

ruby-vsphere-console-docker-container-0
If you wish to run the VSAN Observer with the live web server, you will need to map a port on the Linux Container Host to the VSAN Observer port (8010 by default) when starting the RVC Docker Container. To keep things simple, I would recommend mapping 80->8010, which you would do by running the following command:

docker run --rm -it -p 80:8010 lamw/rvc

Once the RVC Docker Container has started, you can then start the VSAN Observer with the --run-webserver option, and if you connect to the IP Address of your Linux Container Host using a browser, you should see the VSAN Observer Stats UI.
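
Within RVC, the command looks something like the following (a sketch; the inventory path and cluster name are placeholders for your own environment):

vsan.observer ~/computers/VSAN-Cluster --run-webserver --force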

Hopefully this will come in handy for anyone who needs to quickly access RVC.

Filed Under: Docker, VSAN, vSphere 6.0 Tagged With: container, Docker, Photon, ruby vsphere console, rvc, vcenter server appliance, vcsa, vcva, VSAN, VSAN 6.1, vSphere 6.0 Update 1

ghettoVCB VIB & offline bundle for ESXi

05/28/2015 by William Lam 53 Comments

It is still amazing to see that the number of contributions and suggestions from the community continues to grow for my free and simple VM backup solution called ghettoVCB. I created ghettoVCB almost 8 years ago and it now has over 1.2 million views, which is pretty insane if you ask me! Although I am quite busy these days, which includes a newborn, I still try to find time to update the script as time permits. A couple of weeks back I received an email from one of my readers who came across ghettoVCB and was quite happy with the free solution. He also had some feedback, asking why I did not provide an installable VIB for ghettoVCB.

A totally valid question, and the answer was quite simple. When I first created ghettoVCB back in the classic ESX 3.x days, the concept of a VIB did not exist yet. The idea of the VIB was introduced with the release of ESXi 5.0, but it was only in 2012 that VMware published a method for customers to create custom VIBs for ESXi using the VIB Author Fling. I do have to admit that at one point I did think about providing a VIB for ghettoVCB, but I guess I never went through with it for whatever reason. Looking back now, this was a no-brainer for providing a simplified user experience, not to mention that a benefit of having ghettoVCB installed as a VIB is that it automatically persists on ESXi after reboots, which was a challenge for users new to ESXi.

So without further ado, here is ghettoVCB provided in either a VIB or offline bundle form:

  • vghetto-ghettoVCB.vib
  • vghetto-ghettoVCB-offline-bundle.zip

To install the ghettoVCB VIB, you just need to download the VIB and run the following ESXCLI command, specifying the full path to the VIB:

esxcli software vib install -v /vghetto-ghettoVCB.vib -f

Once installed, you will find all ghettoVCB configuration files located in:

/etc/ghettovcb/ghettoVCB.conf
/etc/ghettovcb/ghettoVCB-restore_vm_restore_configuration_template
/etc/ghettovcb/ghettoVCB-vm_backup_configuration_template

Both ghettoVCB and ghettoVCB-restore scripts are located in:

/opt/ghettovcb/bin/ghettoVCB.sh
/opt/ghettovcb/bin/ghettoVCB-restore.sh
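
As a quick sanity check that the VIB-installed script works, here is a minimal sketch of running a backup, assuming you have created a file (/tmp/vms_to_backup in this example) containing the display names of the VMs to back up and have edited /etc/ghettovcb/ghettoVCB.conf for your environment:

/opt/ghettovcb/bin/ghettoVCB.sh -f /tmp/vms_to_backup -g /etc/ghettovcb/ghettoVCB.conf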

One additional thing I would like to point out is that you can quickly tell which version of ghettoVCB is running by inspecting the installed VIB using the following ESXCLI command:

esxcli software vib get -n ghettoVCB

If you look at the screenshot above, I have highlighted two important pieces of information in green. The first is the "Description" property, which includes the Github commit hash of the particular revision of ghettoVCB, and the second is the "Creation Date" property, which contains the date of that commit. This can be handy if you want to compare it to the latest ghettoVCB repository found on Github here. Thanks again Markus for the suggestion!

For those of you who are interested in the details of creating your own ghettoVCB VIB, the next section is specifically for you. Earlier this week I blogged about a Docker Container that I created to help build custom ESXi VIBs, and as you can see now, that was the basis for being able to quickly create a ghettoVCB VIB based on the latest revision of the script.

Step 1 - Create a new Docker Machine following the steps outlined here.

Step 2 - Login to the Docker Machine and create a new Dockerfile which contains the following:

FROM lamw/vibauthor
 
# Due to https://stackoverflow.com/a/49026601
RUN rpm --rebuilddb
RUN yum clean all
RUN yum update -y nss curl libcurl;yum clean all
 
# Download ghettoVCB VIB build script
RUN curl -O https://raw.githubusercontent.com/lamw/vghetto-scripts/master/shell/create_ghettoVCB_vib.sh && chmod +x create_ghettoVCB_vib.sh
 
# Run ghettoVCB VIB build script
RUN /root/create_ghettoVCB_vib.sh
 
CMD ["/bin/bash"]

Step 3 - Next we need to build our new Docker Container, which uses the VIB Author Container, by running the following command:

docker build -t lamw/ghettovcb .

Screen Shot 2015-05-26 at 2.14.52 PM
The output will be quite verbose, but what you are looking for is the text highlighted in green, as shown in the screenshot above: a successful build of both the VIB and the offline bundle, as well as a successful build of the Docker Container itself.

Step 4 - After a successful build of our Docker Container, we can now launch the container by running the following command:

docker run --rm -it lamw/ghettovcb

Screen Shot 2015-05-26 at 2.16.58 PM
Once logged into the Docker Container, you will see the generated VIB and the offline bundle for ghettoVCB as shown in the screenshot above.

If you wish to copy the VIB and offline bundle out of the Docker Container onto the Docker Host, you can use Docker Volumes. I found this useful thread over on Stack Overflow, which I have modified to copy the ghettoVCB VIB and offline bundle out to the Docker Host by running the following command:

docker run -i -v ${PWD}/artifacts:/artifacts lamw/ghettovcb sh << COMMANDS
cp vghetto-ghettoVCB* /artifacts
COMMANDS

Finally, to copy the ghettoVCB VIB from the Docker Host to your desktop, we first need to identify the IP Address given to our Docker Machine by running the following command:

docker-machine ip osxdock

Currently, Docker Machine does not include a simple "scp" command, so we will need to use the regular scp command and specify the private SSH key, which you can find by running "docker-machine inspect [NAME-OF-DOCKER-HOST]". We then connect to our Docker Host and copy the ghettoVCB VIB by running the following command:

scp -i /Users/lamw/.docker/machine/machines/osxdock/id_rsa docker@[DOCKER-MACHINE-IP]:artifacts/vghetto-ghettoVCB.vib .

Filed Under: Automation, Docker, ESXi, Fusion Tagged With: container, Docker, docker-machine, esxi, ghettoVCB, ghettovcb-restore, vib, vib author

A Docker Container for building custom ESXi VIBs

05/26/2015 by William Lam 8 Comments

I recently had a need to create a custom ESXi VIB using the VIB Author Fling for a project I was working on. As part of the project's deliverables, I wanted to also provide an ESXi VIB, which would need to be rebuilt against any new updates to the project. Given that this would be an infrequent operation, I thought: why not use a Docker Container for it? I could just spin up a Docker Container on demand and not have to worry about managing a local VM just for running this particular task.

With that, I have created a VIB Author Docker Container which can be used to author custom ESXi VIBs. I have also made this container available on the Docker Registry for others to use; you can find more details here: https://registry.hub.docker.com/u/lamw/vibauthor/

If you already have a Docker host running, you can pull down the VIB Author Docker Container by jumping to Step 5 in the instructions below. If you do not and you are running Mac OS X like I am, you can follow the instructions below using Docker Machine and VMware Fusion to try out my VIB Author Docker Container.

Step 1 - Install the Docker client by running the following command:

brew install docker

Step 2 - Download and install Docker Machine by running the following commands:

curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_darwin-amd64 > /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine

Step 3 - Create a Docker Machine using the VMware Fusion driver by running the following commands:

docker-machine create --driver vmwarefusion osxdock --vmwarefusion-memory-size 1024
eval "$(docker-machine env osxdock)"

docker-container-vib-author-esxi-vib-0
Note: Thanks to Omer Kushmaro for his blog post here on how to quickly get started with Docker Machine with VMware Fusion

Step 4 - Once the Docker Machine is booted up, we can now connect to it using SSH by running the following command:

docker-machine ssh osxdock

docker-container-vib-author-esxi-vib-3
At this point, we are logged into our Docker Machine, which has both the Docker client and server running, and we are ready to pull down the VIB Author container from the Docker Registry.

Step 5 - To pull down the VIB Author Docker Container that I have built, run the following command within the Docker Machine:

docker pull lamw/vibauthor

docker-container-vib-author-esxi-vib-1
Step 6 - Once the Docker Container has been successfully downloaded, you can now run the VIB Author Container by running the following command:

docker run --rm -it lamw/vibauthor

docker-container-vib-author-esxi-vib-2
Once logged into the VIB Author Container, you can confirm that the VIB Author Fling has been installed by running the "vibauthor" command, as shown in the screenshot above. In the next blog post, I will go through an example of building a custom ESXi VIB using the VIB Author Container, as well as transferring the output files from the Docker Host back onto your desktop. Stay tuned!

Filed Under: Apple, Docker, ESXi, Fusion Tagged With: container, Docker, docker-machine, esxi, vib, vib author

Cloud Native Apps, Containers & Docker sessions at VMworld

05/19/2015 by William Lam Leave a Comment

I just saw a nice list of VMworld sessions that was shared internally by Ben Corrie, an Engineer working on our Cloud Native Apps team at VMware, covering some of the VMworld sessions related to Cloud Native Apps, Containers & Docker. I figured I would share his list with a wider audience for those interested, and I have also included session proposals by both VMware Employees and our partners, which you can find below.

I am personally excited for session 5229, Docker and Fargo: Exploding the Linux Container Host, which will be presented by both Ben & George Hicken if it gets accepted. I was fortunate enough to catch a demo of this internal project at our R&D Innovation Offsite (RADIO) last week, and I think folks will be blown away by some of the work that has been done in this area. I cannot say any more other than to vote for this session and any others that you might be interested in! If there are other sessions, please let me know and I will update the list.

I also had to throw in a shameless plug at the very bottom of this post for the three sessions that I have submitted for VMworld this year. I hope you find them interesting enough to vote for, and I hope to see you all at VMworld!

VMware Submitted Sessions (14)

Session # Session Title
4590 Hypervisors vs. Containers - Wait That's the Wrong Question!
4600 An Interesting Application Software Based on Docker
4725 Scalable On-Demand Cloud Native Apps with Docker and Mesos
4767 Seven Things vSphere Admins Need to Know About Photon Lightwave and Containers
4853 Docker Containers on vSphere: Performance Deep Dive
4940 CIS Benchmark Compliance for Docker - Automated with VMware
5006 Monitoring Software Containers with vRealize Operations
5121 Containers as a Service with vRealize Automation and VMware Project Photon
5229 Docker and Fargo: Exploding the Linux Container Host
5266 Docker in the Real World: Tales Round the Campfire
5343 Rapid and Continuous Delivery Through Docker Container and VMware Cloud Technologies
5409 Migration of Docker Based Applications across Clouds
5627 How Do You Manage and Monetize the Docker Deployments as Containers Not VMs to Groups in Your Organization Using Your Existing vRealize Suite?
5860 Containers without Compromise: Providing Persistent Storage for Docker Containers using vSphere

Partner Submitted Sessions (8)

Session # Session Title
4742 Understanding Databases for Distributed Containerized Applications
5078 Back to the Future: What Current Container Trends Mean for the Future of Application Deployment
5321 Building Container Infrastructure for Enterprise Applications with Docker VMware Photon and vSphere
5494 TOSCA: Containers Microservices OpenStack and Orchestrating the Whole Symphony
5520 Containers on VMware Infrastructure
5907 Taming Containerized Workloads with Photon and Tectonic
6081 Are You Prepared to Contain the Container? Understand the Security and Compliance Considerations for Application Containers
6126 Containers VMs and Microservices Walk into a Bar.....

William Lam Submitted Sessions (3)

Session # Session Title
4528 vCenter Server Appliance (VCSA) Best Practices & Tips/Tricks
5106 Content Library
5278 VC Windows to VCSA Migration Fling Deep Dive

Filed Under: Automation, Cloud Native, vRealize Suite, vSphere Tagged With: cloud native apps, container, Docker, LightWave, Photon
