
virtuallyGhetto


Docker

How to update AppCatalyst’s default PhotonOS VM template w/Docker 1.9?

12/19/2015 by William Lam 9 Comments

For those of you using VMware's free AppCatalyst Hypervisor, you may have noticed that there is currently no way to update to the latest Docker 1.9 Engine/Client using Photon's package manager, Tiny Dandified YUM (tdnf). Although you can manually install Docker 1.9 on any PhotonOS VM deployed from AppCatalyst, the issue is that all VMs based off of the default PhotonOS template will still be running version 1.8, which means you would need to repeat the process on every new VM you create.

Not ideal at the moment, but there is a simple fix: update the default PhotonOS template with Docker 1.9. We can do this easily by leveraging AppCatalyst to update itself 😉 or rather its default PhotonOS VMDK. Below are the manual steps if you wish to walk through this yourself, but if you would rather just run a simple script that completely automates the entire process, jump straight to the bottom of this article 🙂

Step 1 - Create a new temp PhotonOS VM (in my example, I'm calling it PHOTON-DOCKER-1.9) which we will use to install Docker 1.9. You can do so by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst vm create PHOTON-DOCKER-1.9

Step 2 - Power on the temp PhotonOS VM that you just created by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst vmpower on PHOTON-DOCKER-1.9

Step 3 - Retrieve the IP Address of the temp PhotonOS VM by running the following command (this could take up to a minute or so to display):

/opt/vmware/appcatalyst/bin/appcatalyst guest getip PHOTON-DOCKER-1.9

Step 4 - We can now SSH to the temp PhotonOS VM using the IP Address from the previous step. You will need to specify the private key to login to the PhotonOS VM by running the following command:

ssh -i /opt/vmware/appcatalyst/etc/appcatalyst_insecure_key photon@[IP-ADDRESS]

Step 5 - Now we will download and install Docker 1.9, but before doing so we also need to grab the "tar" utility, which does not seem to be part of the default PhotonOS. Here is a one-liner that installs the tar utility and then performs the download and install of Docker (thanks to Massimo Re Ferrè for his original install steps, just polishing up his awesomeness). Copy and paste the snippet below and run it inside of the PhotonOS VM:

sudo tdnf -y install tar;curl -O https://get.docker.com/builds/Linux/x86_64/docker-1.9.1.tgz;tar -zxvf docker-1.9.1.tgz;sudo systemctl stop docker;sudo cp usr/local/bin/docker /usr/bin/docker;sudo systemctl start docker;rm -rf usr/;rm -f docker-1.9.1.tgz;exit

Step 6 - Now that we have our PhotonOS VM updated with Docker 1.9, we just need to shut it down and replace AppCatalyst's default PhotonOS VMDK with our updated disk. First, power off the temp VM; assuming the vmpower command accepts "off" as the counterpart to the "on" operation used in Step 2, that would look like this:
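# assumed counterpart to the "vmpower on" operation used in Step 2
/opt/vmware/appcatalyst/bin/appcatalyst vmpower off PHOTON-DOCKER-1.9

Next, run the following command to find the name of your temp PhotonOS VMDK: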

find "${HOME}/Documents/AppCatalyst/PHOTON-DOCKER-1.9" -name '*.vmdk'

Once you have the VMDK name, you should backup the original AppCatalyst's PhotonOS VMDK by running the following command:

sudo mv /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk.bak

Next, we will copy our temp PhotonOS VMDK to AppCatalyst's default PhotonOS directory and replace its original VMDK by running the following command:

sudo cp ${HOME}/Documents/AppCatalyst/PHOTON-DOCKER-1.9/photon-disk1-cl2.vmdk /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk

Finally, we need to make sure the new VMDK has the correct permissions by running the following command:

sudo chmod 644 /opt/vmware/appcatalyst/photonvm/photon-disk1-cl1.vmdk

Step 7 - At this point, you can verify that your changes were successful by creating a new PhotonOS VM; once logged into the VM, run "docker version" to verify that you are now running Docker 1.9.
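A quick end-to-end check, reusing the CLI operations from the steps above (the VM name PHOTON-TEST is just a hypothetical example):

# PHOTON-TEST is a hypothetical VM name for this check
/opt/vmware/appcatalyst/bin/appcatalyst vm create PHOTON-TEST
/opt/vmware/appcatalyst/bin/appcatalyst vmpower on PHOTON-TEST
/opt/vmware/appcatalyst/bin/appcatalyst guest getip PHOTON-TEST
ssh -i /opt/vmware/appcatalyst/etc/appcatalyst_insecure_key photon@[IP-ADDRESS] docker version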

For those of you who do not wish to go through the manual steps, here is a quick script called update_docker_client_in_appcatalyst.sh which automates the entire process described in the instructions above. I am hoping the Photon team will update the tdnf online repo so that you can simply run tdnf -y update docker in the future.



Filed Under: Apple, Automation, Cloud Native, Docker Tagged With: appcatalyst, Docker, Photon, tdnf

Docker Container for the Ruby vSphere Console (RVC)

11/08/2015 by William Lam 2 Comments

The Ruby vSphere Console (RVC) is an extremely useful tool for vSphere Administrators and has been bundled as part of vCenter Server (Windows and the vCenter Server Appliance) since vSphere 6.0. One feature that is only available in the VCSA's version of RVC is the VSAN Observer which is used to capture and analyze performance statistics for a VSAN environment for troubleshooting purposes.

For customers who are still using the Windows version of vCenter Server and wish to leverage this tool, it is generally recommended that you deploy a standalone VCSA just for the VSAN Observer capability, which does not require any additional licensing. Although it only takes 10 minutes or so to set up, having to download and deploy a full-blown VCSA just to use the VSAN Observer is definitely not ideal, especially if you are resource constrained in your environment. You may also only need the VSAN Observer for a short amount of time, but it could take you longer to deploy, and in a troubleshooting situation time is of the essence.

I recently came across an internal Socialcast thread in which one of the suggestions was to build a tiny Photon OS VM that already contained RVC. Instead of building a Photon OS image specific to RVC, why not just create a Docker Container for RVC? This also means you could pull down the Docker Container from Photon OS or any other system that has Docker installed. In fact, I had already built a Docker Container for some handy VMware utilities, so it was simple enough to build an RVC Docker Container as well.

The one challenge that I had was that the current RVC Github repo does not contain the latest vSphere 6.x changes. The fix was simple: I just copied the latest RVC files from a vSphere 6.0 Update 1 deployment of the VCSA (/opt/vmware/rvc and /usr/bin/rvc) and used them to build my RVC Docker Container, which is now hosted on Docker Hub here and includes the Dockerfile in case anyone is interested in how I built it.

To use the RVC Docker Container, you just need access to a Linux Container Host, for example VMware Photon OS, which can be deployed using an ISO or OVA. For instructions on setting that up, take a look here; it should only take a minute or so. Once logged in, run the following commands to pull down the RVC Docker Container and start the container:

docker pull lamw/rvc
docker run --rm -it lamw/rvc

Once the Docker Container has started, you can access RVC like you normally would. Below is a quick example of logging into one of my VSAN environments and running the VSAN health check.
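A hedged sketch of what such a session might look like; the credentials and cluster path are placeholders, and the health command name is assumed from the RVC build bundled with VCSA 6.0 Update 1:

rvc administrator@vsphere.local@[VCENTER-IP]
# inside RVC; assumes ~ is a mark pointing at your datacenter
vsan.health.health_summary ~/computers/[VSAN-CLUSTER]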

If you wish to run the VSAN Observer with the live web server, you will need to map a port on the Linux Container Host to the VSAN Observer port, which defaults to 8010, when starting the RVC Docker Container. To keep things simple, I would recommend mapping 80->8010 by running the following command:

docker run --rm -it -p 80:8010 lamw/rvc

Once the RVC Docker Container has started, you can then start the VSAN Observer with the --run-webserver option, and if you connect to the IP Address of your Linux Container Host using a browser, you should see the VSAN Observer Stats UI.
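For reference, the commonly documented observer invocation inside RVC looks along these lines (the cluster path is a placeholder):

vsan.observer ~/computers/[VSAN-CLUSTER] --run-webserver --force

With the 80->8010 mapping above, pointing a browser at http://[CONTAINER-HOST-IP]/ should then bring up the VSAN Observer Stats UI.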

Hopefully this will come in handy for anyone who needs to quickly access RVC.


Filed Under: Docker, VSAN, vSphere 6.0 Tagged With: container, Docker, Photon, ruby vsphere console, rvc, vcenter server appliance, vcsa, vcva, VSAN, VSAN 6.1, vSphere 6.0 Update 1

How Wrecking Crew Inc. leveraged vSphere’s Instant Clone to instantly provision hundreds of VMs

09/18/2015 by William Lam 9 Comments

Okay, Wrecking Crew Inc. is not a real company but a fictitious one that was used during Alan Renouf and Li Zheng's VMworld session, #INF5803 - Deploy Hundreds of VMs Instantly via Forking (aka vSphere Instant Clone). During the session, Wrecking Crew Inc. was described as a modern online gaming company who was about to launch a new product. They found out that their game was going to be more popular than they had originally anticipated and needed to be able to quickly spin up a large number of instances of their game to handle the load.

Though this was a fictional company to help set the stage for the demo, the underlying use case is an actual challenge for many of our customers. One of the examples that Alan used was the traditional Black Friday sale, which occurs on the Friday after Thanksgiving in the US. As you can imagine, being able to quickly scale up your online application based on customer demand can give retailers a real edge over their competition. In today's world, waiting a few additional seconds for a site to load can mean the difference between a sale and a customer taking their business elsewhere. Online retail is just one of the many industries and verticals that can easily benefit from the Instant Clone capability.

To help further demonstrate the use case, a recorded demo was shown utilizing the new PowerCLI Extensions, which provide access to the Instant Clone feature from an Automation standpoint. I actually built the demo for Alan and Li's session, and I thought I would share some more details along with the video in case you were not able to attend the session in person (which I was not able to do myself).

Below is the set of technologies that were used in the demo:

  • VMware vSphere + Virtual SAN
  • Intel Hardware
  • VMware Photon
  • HashiCorp Consul
  • Nginx
  • Registrator
  • Docker
  • Node.js

I wanted to also give a quick shoutout to my buddy Rawlinson Rivera for his help with Intel and getting us access to their 64 Node NVMe VSAN Cluster. Our friends over at Intel were kind enough to allow us to quickly record a demo before they had to pack and ship the hardware to San Francisco for VMworld.

To provide some additional context on what you will see in the video: the environment initially consists of two Photon VMs, one running the Consul Cluster, which provides service discovery and configuration capabilities, and the other running the Nginx Load Balancer. Each of those VMs runs its respective application inside of a Docker Container. On each ESXi host, we have a Photon VM which contains the Node.js application that we will be Instant Cloning from. This Photon VM is also referred to as the Parent VM and is where all the new forked children VMs will be spawned from.

This Photon VM also runs an instance of Consul, Registrator and the Node.js application inside of a Docker Container. As new instances are brought online, they automatically register themselves with the Consul Cluster. As the Docker Containers start up, Registrator is notified and automatically adds the new instances to the Nginx Load Balancer, making our application servers available for use immediately. When an application server powers down, Registrator automatically detects that the Docker Container for our Node.js application is no longer running and unregisters it from the Nginx Load Balancer. To scale the application up or down, we simply Instant Clone the Photon (Parent) VM and power the clones on or off; there are no additional steps required.


Without further ado, here is the video of the demo. If you want additional commentary and you attended VMworld US but could not make it to Alan and Li's session, you can now watch it online here. The session will also be repeated at VMworld EMEA for those attending in a couple of weeks. I would highly recommend you check it out, as there is a lot of awesomeness along with a technical deep dive of the Instant Clone technology.

VMworld 2015 Demo of how Wrecking Crew Inc. leverages vSphere's Instant Clone feature from lamw on Vimeo.

For more details about Instant Clone, be sure to check out the resources below, which include an Instant Clone script repository for several guest OSes, including Photon, which was used in the demo:

  • Project Fargo aka VMFork – What is it?
  • Project Fargo aka VMFork and TPS?
  • Instant Clone PowerCLI cmdlets Best Practices & Troubleshooting
  • Instant Clone community customization script repository
  • How to VMFork aka Instant Clone Nested ESXi?
  • VMware Instant Clone is now at your fingertips with the updated PowerCLI Extensions fling!
  • Using VMware Instant Clone via PowerCLI Extensions Fling

Filed Under: Automation, Cloud Native, Docker, VSAN, vSphere 6.0 Tagged With: consul, Docker, instant clone, nginx, node.js, Photon, PowerCLI, registrator, Virtual SAN, vmfork, VSAN, vSphere 6.0

Docker Container for testing vSphere 3rd Party Content Library

09/09/2015 by William Lam

Last week at VMworld, I co-presented a Technical Deep Dive on vSphere's Content Library feature (INF5106), which was first introduced in our vSphere 6.0 release. One of the demos that I showcased was the 3rd Party Content Library capability, which allows you to publish your own Content Library without the need for a vCenter Server. For those of you who attended in person, you may recall that I used a Docker Container to quickly stand up an Nginx endpoint for hosting my 3rd Party Content Library.

I have just published my Docker Container, called lamw/tp-content-library-demo, on Docker Hub. If you wish to build the Docker Container yourself, you can take a look at my Github project vmworld2015-3rd-party-content-library for more details. You can also subscribe to my other 3rd Party Content Library which includes a variety of Nested ESXi OVF Templates; for more details, take a look at the blog post here.

Requirements:

  • A Linux Container Host for running the Docker Container, such as VMware Photon
  • A vSphere 6.0 environment

Instructions:

Step 1 - Download the 3rd Party Content Library Docker Container by running the following command:

docker pull lamw/tp-content-library-demo

Step 2 - Start the Docker Container by running the following command:

docker run -d -p 80:80 lamw/tp-content-library-demo

Step 3 - Verify that the Nginx webserver is properly running by visiting the following URL (replacing the address with the IP Address/Hostname of your Container Host): http://192.168.1.143/vghetto/

If everything was set up correctly, you should see a variety of files from our sample 3rd Party Content Library along with the various JSON metadata files describing the library itself.
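You can also verify from the command line; lib.json below is the same library endpoint that we will subscribe to in Step 5:

curl http://[CONTAINER-HOST-IP]/vghetto/lib.json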

Step 4 - To subscribe to the 3rd Party Content Library, go ahead and create a new Content Library using the vSphere Web Client and start off by specifying the name of the Content Library you wish to create.

Step 5 - Next, select the "Subscribed content library" option and paste in the following URL: http://[CONTAINER-HOST-IP]/vghetto/lib.json, which is the 3rd Party Content Library endpoint running in our Docker Container.

Step 6 - Lastly, go ahead and select a storage backing for the library. In this case, I have selected a vSphere Datastore; then click Finish to create the library.

Once the Content Library has been successfully created, we can click into it and see that we are now subscribed to the 3rd Party Content Library that we just hosted on our Nginx Docker Container.

If you are interested in learning more about the Content Library feature, we will be repeating the Content Library Technical Deep Dive session at VMworld EMEA for those of you who will be attending. Hope to see you there!


Filed Under: Docker, VMworld, vSphere 6.0 Tagged With: content library, Docker, nginx, vmworld

Quickly getting started with VMware AppCatalyst & AppCatalyst Vagrant Plugin

06/24/2015 by William Lam 5 Comments

At Dockercon this week, the Cloud-Native Apps team at VMware introduced two new Tech Previews: VMware AppCatalyst and Project Bonneville. In addition, I also found out today that Fabio Rapposelli, who works over in the CNA team, has released a Vagrant Plugin for VMware AppCatalyst as well. Having spent a couple of days playing with AppCatalyst before it was released, I thought this would be a good opportunity to show how to quickly get started with AppCatalyst and to try out the new Vagrant Plugin that had just been released.

Before jumping in, what exactly is VMware AppCatalyst? It is a slimmed-down desktop Hypervisor based on VMware Fusion that has been optimized for Developers who want to be able to quickly spin up Virtual Machines to run Docker Containers. Included with AppCatalyst is also VMware Photon, an open-source lightweight Linux container host that is used to quickly instantiate new VMs that are ready for running and building Docker Containers. Instead of a traditional GUI like VMware Fusion, AppCatalyst is consumed completely via a REST API or CLI, which also makes it ideal for integrating with 3rd Party Automation tools like Vagrant. Best of all, both VMware AppCatalyst and the AppCatalyst Vagrant Plugin are completely free!

Requirements:

  • Mac OS X 10.9.4 or 10.10
  • VMware Fusion must not be running

Quick INFO:

  • The AppCatalyst configuration file is located in ~/.appcatalyst.conf (if you wish to change system defaults)
  • The CLI to AppCatalyst is located at /opt/vmware/appcatalyst/bin/appcatalyst
  • The daemon for the AppCatalyst REST API is located at /opt/vmware/appcatalyst/bin/appcatalyst-daemon
  • The SSH key for the Photon VM is located at /opt/vmware/appcatalyst/etc/appcatalyst_insecure_ssh_key
  • Additional AppCatalyst Documentation can be found here

Exploring the AppCatalyst CLI:

Step 1 - Download and install VMware AppCatalyst from here

Step 2 - Once AppCatalyst has been installed, you can explore the CLI and view the list of operations by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst

Step 3 - To create our first VM using AppCatalyst, run the following command:

/opt/vmware/appcatalyst/bin/appcatalyst vm create vGhetto1

Note: You will see that all VMs created by AppCatalyst are stored in /Users/[USER]/Documents/AppCatalyst; you can change this by editing DEFAULT_VM_PATH in the AppCatalyst configuration file.
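For example, a hypothetical override in ~/.appcatalyst.conf (both the key=value format and the path shown are illustrative assumptions, not verified against the shipped file):

DEFAULT_VM_PATH=/Users/[USER]/VMs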

Step 4 - To power on our VM, run the following command:

/opt/vmware/appcatalyst/bin/appcatalyst vmpower on vGhetto1

Note: If you run into problems powering on your VM, there is a good chance that you may still have a VM running in VMware Fusion (also check docker-machine, if you are using it, and power off any VMs provisioned with that tool)

Step 5 - Once the VM is powered on, we will probably want to retrieve its IP Address, which could take a few seconds to be displayed, by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst guest getip vGhetto1

Step 6 - Once we have the IP Address of our VM, we can then SSH to it using the SSH keys included with AppCatalyst by running the following command:

ssh -i /opt/vmware/appcatalyst/etc/appcatalyst_insecure_ssh_key photon@[IP-ADDRESS]

At this point, you have successfully deployed a VM based on the VMware Photon image using the AppCatalyst CLI. You can now login and start running and building Docker Containers, as Docker is already pre-installed on Photon; a quick sanity check is shown below.
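For instance, once SSH'd into the Photon VM, something along these lines should confirm the Docker Engine is up (busybox is just an arbitrary small test image, and sudo is assumed for the photon user):

sudo docker run --rm busybox echo "hello from Photon"

Next, we will take a look at using the AppCatalyst API.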

Exploring the AppCatalyst API:

UPDATE (01/22/16) - For a complete cheatsheet of using the AppCatalyst API with cURL, check out this article here for more examples.

Step 1 - To use the AppCatalyst REST API, you will need to start the AppCatalyst daemon by running the following command:

/opt/vmware/appcatalyst/bin/appcatalyst-daemon

Note: You can also run the AppCatalyst daemon in the background by adding '&' to the command

Step 2 - You can easily view the AppCatalyst API by opening a browser to http://localhost:8080 (assuming you did not change the port); the interface is built using Swagger. If you have not worked with a Swagger interface before, you can explore and even test the API directly from this page.

Step 3 - Let's quickly test the GET /vms operation, which will list the VMs that are being managed by AppCatalyst. We will be using cURL by running the following command:

curl http://localhost:8080/api/vms

As long as the AppCatalyst daemon is running, you will be able to interact with the REST API using a variety of methods, including the examples I have shown above.
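One small convenience: the API returns JSON, so you can pretty-print the output by piping it through Python's built-in json.tool module (Python ships with OS X):

curl -s http://localhost:8080/api/vms | python -m json.tool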

Note: A known issue is that VMs powered on using the AppCatalyst API cannot be managed by the CLI until they have been powered down using the API. Hopefully this issue will be resolved in a future update.

Exploring the AppCatalyst Vagrant Plugin:

Step 1 - Ensure you have Vagrant already installed on your system; if not, you can download it here.

Step 2 - Install the Vagrant Plugin by running the following command:

vagrant plugin install vagrant-vmware-appcatalyst

Step 3 - You can either run "vagrant init" or manually create a file named Vagrantfile that contains the following:

# Set our default provider for this Vagrantfile to 'vmware_appcatalyst'
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'vmware_appcatalyst'
 
nodes = [
  { hostname: 'gantry-test-1', box: 'vmware/photon' },
  { hostname: 'gantry-test-2', box: 'vmware/photon' }
]
 
Vagrant.configure('2') do |config|
 
  # Configure our boxes with 1 CPU and 384MB of RAM
  config.vm.provider 'vmware_appcatalyst' do |v|
    v.vmx['numvcpus'] = '1'
    v.vmx['memsize'] = '384'
  end
 
  # Go through nodes and configure each of them.
  nodes.each do |node|
    config.vm.define node[:hostname] do |node_config|
      node_config.vm.box = node[:box]
      node_config.vm.hostname = node[:hostname]
    end
  end
end

The above Vagrantfile will create two VMs called gantry-test-{1,2} using the VMware Photon Box hosted on Atlas by HashiCorp.

Step 4 (Optional) - If you decide to use the example above with the VMware Photon Box, you will need to install an additional Vagrant Plugin for managing the Photon guest by running the following command:

vagrant plugin install vagrant-guests-photon

Step 5 - Ensure that the AppCatalyst daemon is running before proceeding with the next step as the Vagrant Plugin uses the AppCatalyst REST API.

Step 6 - To start the deployment, run the following command:

vagrant up --provider=vmware_appcatalyst

Step 7 - To log in to one of the VMs that have been created by Vagrant, simply run the following command, specifying the name of the VM to SSH to:

vagrant ssh gantry-test-1

Step 8 - Finally, we can confirm that the VMs created by the AppCatalyst Vagrant Plugin are also visible via the AppCatalyst REST API by performing another "GET" operation to verify the two VMs that we just created.
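This is the same cURL call from earlier:

curl http://localhost:8080/api/vms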

Hopefully this quick primer has been useful in introducing you to and getting you started with VMware AppCatalyst as well as the new AppCatalyst Vagrant Plugin. If you have any feedback or questions, feel free to leave a comment in the VMware AppCatalyst Forum; for feedback or questions on the AppCatalyst Vagrant Plugin, you can file issues here.


Filed Under: Apple, Automation, Cloud Native, Docker Tagged With: appcatalyst, cloud native apps, DevOps, Docker, Photon, Vagrant

ghettoVCB VIB & offline bundle for ESXi

05/28/2015 by William Lam 53 Comments

It is still amazing to see that the number of contributions and suggestions from the community continues to grow for my free and simple VM backup solution called ghettoVCB. I created ghettoVCB almost 8 years ago and it now has over 1.2 million views, pretty insane if you ask me! Although I am quite busy these days, which includes a newborn, I still try to find time to update the script as time permits. A couple of weeks back I received an email from one of my readers who came across ghettoVCB and was quite happy with the free solution. He also had some feedback, asking why I did not provide an installable VIB for ghettoVCB.

A totally valid question, and the answer was quite simple. When I first created ghettoVCB back in the classic ESX 3.x days, the concept of a VIB did not exist yet. The VIB was introduced with the release of ESXi 5.0, but it was only in 2012 that VMware published a method for customers to create custom VIBs for ESXi using the VIB Author Fling. I do have to admit at one point I did think about providing a VIB for ghettoVCB, but I guess I never went through with it for whatever reason. Looking back now, this was a no-brainer for a simplified user experience, not to mention that installing ghettoVCB as a VIB means it will automatically persist on ESXi after reboots, which was a challenge for users new to ESXi.

So without further ado, here is ghettoVCB provided in either a VIB or offline bundle form:

  • vghetto-ghettoVCB.vib
  • vghetto-ghettoVCB-offline-bundle.zip

To install the ghettoVCB VIB, you just need to download the VIB and run the following ESXCLI command, specifying the full path to the VIB:

esxcli software vib install -v /vghetto-ghettoVCB.vib -f

Once installed, you will find all ghettoVCB configuration files located in:

/etc/ghettovcb/ghettoVCB.conf
/etc/ghettovcb/ghettoVCB-restore_vm_restore_configuration_template
/etc/ghettovcb/ghettoVCB-vm_backup_configuration_template

Both the ghettoVCB and ghettoVCB-restore scripts are located in:

/opt/ghettovcb/bin/ghettoVCB.sh
/opt/ghettovcb/bin/ghettoVCB-restore.sh
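With the VIB installed, the script can be run directly from that path. As a minimal sketch, per the ghettoVCB documentation, the -a flag backs up all VMs on the host and -g points at a global configuration file (you would still edit the conf for your backup datastore first):

/opt/ghettovcb/bin/ghettoVCB.sh -a -g /etc/ghettovcb/ghettoVCB.conf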

One additional thing I would like to point out is that you can quickly tell which version of ghettoVCB is running by inspecting the installed VIB using the following ESXCLI command:

esxcli software vib get -n ghettoVCB

Two pieces of information in the output are worth highlighting. The first is the "Description" property, which includes the Github commit hash of the particular revision of ghettoVCB, and the second is the "Creation Date" property, which contains the date of that commit. This can be handy if you want to compare it to the latest ghettoVCB repository found on Github here. Thanks again Markus for the suggestion!
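If you want to do that comparison from the command line, standard git can fetch the latest commit hash of the repository without cloning it:

git ls-remote https://github.com/lamw/ghettoVCB.git HEAD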

For those of you who are interested in the details of creating your own ghettoVCB VIB, the next section is for you. Earlier this week I blogged about a Docker Container that I created to help build custom ESXi VIBs, and as you can see now, that was the basis for being able to quickly create a ghettoVCB VIB based on the latest revision of the script.

Step 1 - Create a new Docker Machine following the steps outlined here.

Step 2 - Log in to the Docker Machine and create a new Dockerfile which contains the following:

FROM lamw/vibauthor
 
# Due to https://stackoverflow.com/a/49026601
RUN rpm --rebuilddb
RUN yum clean all
RUN yum update -y nss curl libcurl;yum clean all
 
# Download ghettoVCB VIB build script
RUN curl -O https://raw.githubusercontent.com/lamw/vghetto-scripts/master/shell/create_ghettoVCB_vib.sh && chmod +x create_ghettoVCB_vib.sh
 
# Run ghettoVCB VIB build script
RUN /root/create_ghettoVCB_vib.sh
 
CMD ["/bin/bash"]

Step 3 - Next we need to build our new Docker Container, which is based on the VIB Author Container, by running the following command:

docker build -t lamw/ghettovcb .

The output will be quite verbose, but what you are looking for is confirmation that both the VIB and the offline bundle were generated, followed by a successful build of the Docker Container itself.

Step 4 - After a successful build of our Docker Container, we can now launch the container by running the following command:

docker run --rm -it lamw/ghettovcb

Once logged into the Docker Container, you will see the generated VIB and offline bundle for ghettoVCB.

If you wish to copy the VIB and offline bundle out of the Docker Container to the Docker Host, you can use Docker Volumes. I found a useful thread over on Stack Overflow, which I have modified to copy the ghettoVCB VIB and offline bundle out to the Docker Host by running the following command:

docker run -i -v ${PWD}/artifacts:/artifacts lamw/ghettovcb sh << COMMANDS
cp vghetto-ghettoVCB* /artifacts
COMMANDS

Finally, to copy the ghettoVCB VIB from the Docker Host to your desktop, we first need to identify the IP Address given to our Docker Machine by running the following command:

docker-machine ip osxdock

Currently, Docker Machine does not include a simple "scp" command, so we will need to use the regular scp command and specify the private SSH key, which you can find by running "docker-machine inspect [NAME-OF-DOCKER-HOST]". We then connect to our Docker Host and copy the ghettoVCB VIB by running the following command:

scp -i /Users/lamw/.docker/machine/machines/osxdock/id_rsa docker@[DOCKER-HOST-IP]:artifacts/vghetto-ghettoVCB.vib .


Filed Under: Automation, Docker, ESXi, Fusion Tagged With: container, Docker, docker-machine, esxi, ghettoVCB, ghettovcb-restore, vib, vib author
