Community stories of VMware & Apple OS X in Production: Part 10

Company: Fitstar
Software: VMware vSphere
Hardware: Apple Mac Mini

[William] - Hi Clay, thanks for taking some time out of your schedule this afternoon to talk with us regarding one of the projects you are currently working on. Before we get started, can you quickly introduce yourself and your current role within VMware?

[Clay] - Good morning and thanks for having me! My name is Clay Alvord, and I am a Senior Prototype Engineer here at VMware. I work with hardware vendors as they develop new equipment and get it in the hands of the developers here. It allows our engineers to get early access to pre-release gear, in the hope that as the equipment comes to market, it's on our HCL at the same time. It also allows us to help debug the hardware as it's developed, so we don't hit any critical surprises after release.

[William] - Thanks Clay, very cool role! So, I hear you have been working closely with a new startup that has built a really interesting design involving VMware & Mac Minis? Could you provide us some more details about the design and the type of application/workload the customer has planned for this infrastructure?

[Clay] - That's exactly right. FitStar deals with a lot of high-resolution video, so their storage requirements are above average for a company their size. Most of their servers live in Amazon's EC2 cloud, so they are already heavy users of Amazon's services. Amazon has a product called Amazon Storage Gateway (ASG). ASG allows local storage to be mirrored to EC2, or your most commonly accessed EC2 files to be cached locally.

What I have designed is a local storage array, with an Apple Mac Mini running ESXi 5.5 and Amazon's Gateway (local) storage. This gives the users the speed of local storage, with the safety net of having their data in EC2 at the same time.

[William] - How many Mac Minis are they currently running on-premises, and what hardware configuration did the customer choose for their specific application requirements? Were there any constraints you faced due to the limited resources the Mac Minis provide?

[Clay] - They have 1 Mac Mini and 1 Dell PowerEdge. The Mac was a hard requirement, because the original design required us to run OS X Server.

We opted for the Mac Mini as it fit the budget better when compared to a Mac Pro. The Mac Mini is a Late 2012 model with a 3GHz CPU and 16GB of RAM. Our biggest constraint is the memory in the system. We run 2 Storage Gateway VMs on the Dell, each requiring 8GB of memory. We could not have it all on the Mac Mini, as the Mini only supports 16GB in total and does not have room for future growth.

The Mac Mini has 3 Mac OS X VMs. 2 of them are OS X 10.10, each running OS X Server: one for a dedicated Xcode buildbot and app caching, the other for Time Machine services. The 3rd VM is running Mac OS X 10.9 Server and is purely for file sharing.

Here is a picture of Fitstar's setup:

Here are some additional physical and logical diagrams of the setup:

[William] - How much storage is currently being managed today and how is that presented to the VMs? Do they have plans on increasing either the storage or compute platforms as they grow?

[Clay] - The storage array has 2 RAID-6 LUNs, serving a total of 20TB to the Dell host over iSCSI. The host then breaks up the storage into 1TB disks that are then attached to the two ASG VMs. The VMs mirror the data to Amazon and then present new iSCSI targets to the Mac Mini host. From there we use Raw Device Mappings to attach the file server and backup server.

[William] - This looks like a really cool solution that you’ve architected with the customer. For a startup, I was kind of surprised to hear they went with vSphere versus going down an open source route and potentially using some type of Cloud Services? Do you know what the motivation was that led the customer to choose vSphere and run an on-premises solution?

[Clay] - The motivation for going with ESXi over an alternative solution had several factors. The first was FitStar's familiarity with VMware, as well as my own. The second was that this solution is the backbone of their company, and they needed a world-class solution that has not only a strong support system but a HUGE community behind it. Lastly, there was the hard requirement to use ASG. Using ASG allows the volumes to be directly mounted in an EC2 instance in case of an emergency. Amazon also states that the ASG VMs are optimized for ESX and Hyper-V.

[William] - That is great to hear that even for startups, having an enterprise and highly available platform such as vSphere is critical to their business. Were there any challenges while designing and deploying this infrastructure, either from a deployment or operational point of view?

[Clay] - Definitely. This project was originally designed with just file services in mind. The original POC was a local storage array and the Mac Mini. The Mini would run an ASG VM and 1 OS X VM.

When it was decided that we needed Xcode, Caching and Time Machine services, we opted for a dedicated VM for each of these. The reason is that if there were issues or heavy load with any of them, it would not affect the others.

One of the other challenges we had was getting iSCSI to play well with Mac OS X. We were planning on having the iSCSI connections go directly to the VMs and bypass ESXi, but 3rd-party drivers don't work with Amazon's version of iSCSI. As a result, we now connect to the hypervisor and use raw mappings to the VMs. We opted for raw mappings so that if we mount a volume in EC2, it sees an HFS+ disk, not a VMFS one with HFS inside.
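As a rough illustration of the raw-mapping approach, here is a hypothetical sketch of creating a physical-mode RDM pointer file from the ESXi shell. The device ID and datastore paths are placeholders, not FitStar's actual values, and the command is echoed rather than executed so the sketch is safe to run anywhere.

```shell
#!/bin/sh
# Hypothetical sketch: create a physical-mode RDM pointer so the guest
# sees the raw HFS+ LUN rather than a VMFS-backed virtual disk.
# The device ID and paths below are placeholders; list real devices with:
#   esxcli storage core device list
DEVICE="/vmfs/devices/disks/naa.60a9800000000000000000000000beef"
RDM_VMDK="/vmfs/volumes/datastore1/fileserver/fileserver_rdm.vmdk"

DRY_RUN="echo"   # remove to actually create the mapping on the host
$DRY_RUN vmkfstools -z "$DEVICE" "$RDM_VMDK"
```

The -z flag requests a physical-compatibility mapping, which passes SCSI commands through to the LUN; -r would create a virtual-compatibility mapping instead.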

We also had trouble getting the OS X Server services to work on virtualized hardware. Ultimately we adjusted the VM parameters to expose the hardware IDs to the VM, so OS X thinks it is running on physical hardware.
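Clay doesn't spell out which parameters were used, but one commonly cited .vmx setting for this kind of hardware-identity passthrough is sketched below, purely as an assumption of what "exposing the hardware IDs" might look like:

```
# Hypothetical .vmx entry (assumed, not confirmed by the interview):
# reflect the physical host's SMBIOS identity into the guest so OS X
# believes it is running on real Apple hardware.
smbios.reflectHost = "TRUE"
```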

We are still working on plenty of tweaks to the system. I have seen an OS X panic, and kernel logs point at VMware Tools as the culprit. We have filed a bug for this. We also have an issue in that the NICs in the Mac Mini are e1000, not e1000e. This occasionally leads to a PSOD. The workaround we plan on introducing is Thunderbolt to Ethernet adapters.

The last ESXi-related hurdle is that in order for the VMs on the Mac Mini to auto-start, the Dell and ASG VMs must be online, and the Mini has to have already scanned its storage adapters. So in the event of a power outage, when everything powers up, you must rescan storage on the Mini after the Dell is online, then power up the Mini's VMs. We have installed a battery backup unit and are in the middle of automating the scan and power-up of the Mini's VMs.
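The manual recovery steps described above could, in principle, be scripted from the ESXi shell along these lines. The VM IDs are placeholders (list the real ones with vim-cmd vmsvc/getallvms), and this is a sketch rather than the automation Clay's team is building; commands are echoed instead of executed so it is safe to run outside an ESXi host.

```shell
#!/bin/sh
# Hypothetical recovery sketch for the Mac Mini host: rescan the storage
# adapters (so the ASG-backed iSCSI targets reappear), then power on the
# Mini's VMs. VM IDs 1-3 are placeholders.
DRY_RUN="echo"   # remove to actually execute on the ESXi host

$DRY_RUN esxcli storage core adapter rescan --all
for vmid in 1 2 3; do
  $DRY_RUN vim-cmd vmsvc/power.on "$vmid"
done
```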

[William] - Clay, thank you very much for taking the time to share with us some of the innovative things our customers are doing with Apple and our vSphere platform. I really enjoy hearing about how our customers push our software to its limits and find new use cases we had never thought about. Thanks again for sharing. Finally, before I let you go, do you have any words of advice or tips for other customers with similar requirements, especially those coming from a startup? Any particular resources you recommend they check out before getting started?

[Clay] - It was my pleasure. virtuallyGhetto has been a great resource for me in standing up the project. I have some tips and tricks related to this and some other things on my site as well.

If you are interested in sharing your story with the community (can be completely anonymous) on how you use VMware and Mac OS X in Production, you can reach out to me here.

Automating Deployment & Configuration of vRealize Operations Manager 6.0 Part 3

In Part 2 of this series, I showed how you can automate the initial vROps configuration using a couple of Python command-line scripts found within the vROps Virtual Appliance, which requires shell access. However, if you do not wish to configure vROps using the UI or command-line, or to enable SSH for remote execution, you have the ability to configure it remotely using the new vROps CaSA (Cluster and Slice Administration) REST API. Though this new API interface exists, it is still being developed and refined, and is currently not publicly documented.

Disclaimer: The CaSA API is currently a private API and is not documented publicly; please use it at your own risk, as the interfaces and parameters can change in future releases.

To exercise the CaSA API and demonstrate the initial vROps configuration, I have created a simple shell script that uses cURL to perform the REST requests. The only prerequisite to running the script is a stock vROps instance deployed on the network with nothing configured. Within the script, there are 6 variables that will need to be edited based on your environment, such as the vROps IP Address, the credentials you wish to set, the Cluster Name and Slice Name, etc.


For those of you who are interested, here are the 7 CaSA REST API calls and the order in which they are executed to properly configure a vROps instance running the Admin, UI and Data roles.

  1. POST https://${VROPS_IP_ADDRESS}/casa/security/adminpassword/initial
  2. POST https://${VROPS_IP_ADDRESS}/casa/sysadmin/cluster/ntp
  3. POST https://${VROPS_IP_ADDRESS}/casa/deployment/slice/role
  4. PUT   https://${VROPS_IP_ADDRESS}/casa/deployment/cluster/info
  5. PUT   https://${VROPS_IP_ADDRESS}/casa/deployment/slice/${VROPS_IP_ADDRESS}
  6. POST https://${VROPS_IP_ADDRESS}/casa/deployment/cluster/initialization?async=true
  7. POST https://${VROPS_IP_ADDRESS}/casa/sysadmin/cluster/online_state?async=true
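As an illustration, the first call above might look like the following cURL sketch. The endpoint comes from the list, but the JSON body shape is an assumption and may differ between vROps builds; the request is echoed rather than sent so the sketch can be inspected safely.

```shell
#!/bin/sh
# Hypothetical sketch of CaSA call #1: set the initial admin password.
# The JSON payload shape is assumed; the endpoint is from the list above.
VROPS_IP_ADDRESS="192.168.1.50"
VROPS_ADMIN_PASSWORD="VMware1!"
URL="https://${VROPS_IP_ADDRESS}/casa/security/adminpassword/initial"

DRY_RUN="echo"   # remove to actually issue the request
$DRY_RUN curl -k -s -o /dev/null -w '%{http_code}\n' \
  -X POST "$URL" \
  -H 'Content-Type: application/json' \
  -d "{\"password\": \"${VROPS_ADMIN_PASSWORD}\"}"
```

The -k flag skips certificate validation, which is typical against a freshly deployed appliance's self-signed certificate, and -w prints the HTTP response code that the script checks after each call.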

Here is an example execution of the script:

As the script executes each REST call, an HTTP response code is displayed on the screen. You should see 200/202 responses; a response code of 404/500 means that something has gone wrong. The only place I have seen issues is in the window between NTP configuration finishing and the vROps Cluster Name being configured, as the Cluster operations are very time-sensitive. Right now, the script sleeps for 5 minutes to ensure NTP is properly configured, but this may require some adjustment if you see a response code of 500. Ideally, each request would be made and a separate GET operation would then be used to validate the changes prior to moving to the next step.
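The GET-and-validate approach mentioned above could be sketched as a small polling helper that replaces the fixed 5-minute sleep. The probe command is injected so that the appropriate CaSA GET for each step can be substituted; which exact endpoint to poll is left as an assumption here.

```shell
#!/bin/sh
# Hypothetical polling helper: retry a probe command until it prints
# HTTP 200, instead of sleeping a fixed 5 minutes between steps.
# "$probe" is any command that prints a status code, for example:
#   curl -k -s -o /dev/null -w '%{http_code}' <some CaSA GET endpoint>
wait_for_200() {
  probe="$1"; retries="${2:-30}"; delay="${3:-10}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    code=$(eval "$probe")
    [ "$code" = "200" ] && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1   # gave up; the caller should treat this like a 500
}
```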

The last thing I want to mention is that the script goes through its execution fairly quickly, but even when the script has finished, it will still take a couple more minutes for vROps to finalize its initialization. This is important because if you open your browser to your vROps instance, you will most likely see the "Get Started" page instead of the login page. The reason the page does not automatically redirect to https://vrops-ip/vcops-web-ent is that the initialization has not completed. You can either wait for the webpage to auto-redirect or manually visit the URL above; once it has finished, you will be able to log in using the credentials you set in the script, as seen in the screenshot below.

Hopefully this has been a useful three part series on how to automate both the deployment and initial configurations for the new vRealize Operations Manager 6.0. Depending on interests from my readers and my free time, I may also look into automated multi-node/HA deployments of vROps. If you are interested in such a deployment, please leave a comment and what you might like to see. I will see what I can do :)

I would also like to give a big thanks to Geremy Gibson who works over in our Management BU for helping me out and answering questions regarding the CaSA API. I would not have been able to figure it out if it was not for his assistance.

Automating Deployment & Configuration of vRealize Operations Manager 6.0 Part 2

Continuing from Part 1 of this three part series, you should now have a fully deployed vRealize Operations Manager connected on the network. You should see the following "Get Started" page when connecting to vROps via a web browser.

In this article, I will demonstrate how you can perform the initial configuration of your new vROps instance, which includes configuring a password for the "admin" account that will be used to access the UI afterwards. You will also have the opportunity to configure basic things like NTP settings as well as the role of your vROps instance. If this is your first deployment of vROps 6.0, you will need to create a new Cluster that other "Slices" (vROps instances) can join to contribute different functionality, such as the Admin, UI, Data, Data Collector and Replica roles. In this example, we will assume the installation contains all roles within this single instance. In the future, you can easily expand and add other instances that provide specific roles, and in a future post I can show how that can be accomplished using the CLI/API if there is interest.

To perform the initial configurations, I have created a simple shell script that requires SSH connectivity to the vROps instance. Ensure that during your initial setup you have either enabled SSH or have gone into the VM Console and enabled SSH access. The script uses the following four commands found within the appliance:

  • /usr/lib/vmware-casa/bin/
  • /usr/lib/vmware-vcopssuite/utilities/sliceConfiguration/bin/
  • /usr/lib/vmware-vcopssuite/utilities/sliceConfiguration/bin/
  • /usr/lib/vmware-vcopssuite/utilities/sliceConfiguration/bin/

There is only one mandatory variable, VROPS_ADMIN_PASSWORD, that needs to be edited prior to running the script; it specifies the password for the "admin" account. There are also CONFIGURE_NTP and NTP_SERVERS variables that can be edited to configure NTP. By default, I have this disabled because the system will need to validate the NTP Servers. If you do not have valid NTP Servers, or cannot reach the ones specified in the script, you may run into an error.

Once you have saved your changes, you can simply run the script using the following command (please replace the IP Address with the IP of your vROps instance):

ssh root@<vROps-IP-Address> <

Note: If you would like to see more verbose details for each of these steps, you can remove the redirect to /dev/null for each of the commands, which can be useful in case something is not running correctly.

If everything was successfully configured, you should now be able to open a browser to your vROps instance, and you should see the following screen asking you to log in:

Please log in with the username "admin" and the password that you set within the script. Once you have successfully logged in, you should see the following wizard, which will take you through the final steps of setting up your new vROps instance. Unfortunately, these last couple of steps could not be automated and will require some manual interaction before you are ready to start using your new vRealize Operations Manager.


If you do not wish to enable SSH by default and prefer a more programmatic approach to performing the initial configurations, stay tuned for Part 3, where I will show you how to use the new vRealize Operations Manager Cluster Mgmt API, also known as the CaSA API, to perform this exact same configuration.

Automate Deployment & Configuration of vRealize Operations Manager 6.0 Part 1

Yesterday was a huge day for VMware's Management BU, which released several updates to their product offerings within the vRealize Suite 6.x, including some new products like the new vRealize Code Stream mentioned during this year's VMware Europe Conference. Prior to GA, I had already received several automation questions regarding the upcoming vRealize Operations Manager 6.0 (vROps). Luckily, I had a couple of days to play around with the new release before it was made public, and I must say, I am quite impressed at how easy and intuitive it is to deploy and configure the new vRealize Operations Manager 6.0.

To make it even easier for customers to evaluate the new release, I wanted to take a look at how you can easily automate both the deployment and configuration of the new vRealize Operations Manager. I have broken the process down into three parts: deployment using ovftool, which will include both a non-Windows and a Windows solution for my PowerCLI buddies; initial configuration using the command-line via a shell script; and finally the same identical initial configuration using the new vRealize Operations Manager Cluster Mgmt API (also known as the CaSA API, which stands for Cluster and Slice Administration).

As mentioned already, this first article will focus on deploying the new vRealize Operations Manager OVA using ovftool. Previously, the vCOps VA was deployed as a vApp that contained two Virtual Machines. The new architecture provides a more dynamic approach and a new capability has been brought into the application that allows you to easily scale out the various vROps "roles" such as the Admin, UI, Data, Data Collector and Replica. This greatly simplifies the initial deployment which is always a plus in my book!

Disclaimer: These scripts are provided for informational and educational purposes only. They should be thoroughly tested before attempting to use them in a production environment.

I have created a simple shell script, and there are several variables that need to be edited based on your environment, including the path to the OVA. Please take a look at the script prior to executing it.

To execute the script, you simply run the following:


You will be prompted to confirm the configurations you have specified before the OVA is deployed. If everything was successfully deployed, you should see your new vROps VM power up. Next, open a browser to either the IP Address or hostname of your vROps VM, and you should see the landing page shown in the screenshot below. At this point, you have completed the deployment of vROps 6.0. As for next steps, you can either manually proceed to configure your new vROps instance or stay tuned for Part 2, where I will demonstrate how you can easily automate the initial vROps configurations.

Note: There is a hidden OVF property called guestinfo.cis.appliance.ssh.enabled that allows SSH to be enabled upon deployment. To be able to configure this property, you must add an advanced ovftool option called --X:enableHiddenProperties, which the shell script already takes care of. Unfortunately, for PowerCLI's Get-OvfConfiguration cmdlet, these custom options have not been implemented, and hence you will not be able to turn on SSH when using the PowerCLI method. I have already filed an FR internally for this, and hopefully we will see it in a future release of PowerCLI.
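To illustrate, a deployment command along the lines of the one in the shell script might look like this. The VM name, host, credentials and datastore are placeholders for your environment, while --X:enableHiddenProperties and the guestinfo SSH property are the options discussed above; the command is echoed rather than executed so the sketch is safe to run without an ESXi host.

```shell
#!/bin/sh
# Hypothetical ovftool invocation; names, host and datastore are
# placeholders. --X:enableHiddenProperties is required so the hidden
# SSH property can be set at deployment time.
OVA_PATH="vRealize-Operations-Manager-Appliance.ova"
ESXI_TARGET="vi://root:password@esxi-host.domain/"

DRY_RUN="echo"   # remove to actually deploy the OVA
$DRY_RUN ovftool --acceptAllEulas --skipManifestCheck \
  --X:enableHiddenProperties \
  --name=vrops-01 \
  --datastore=datastore1 \
  --network="VM Network" \
  --prop:guestinfo.cis.appliance.ssh.enabled=True \
  "$OVA_PATH" "$ESXI_TARGET"
```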

Here is a Windows solution for deploying vRealize Operations Manager: a script called Deployment.ps1 that uses PowerCLI's Get-OvfConfiguration cmdlet, which I have contributed as a new sample to Alan Renouf's PowerCLI Deployment Repository. Before running the Deployment.ps1 script, you will also need to edit the variables in the script to match your environment.

Here is a screenshot using the Deployment.ps1 script:

Now that you have your new vRealize Operations Manager deployed, you can manually go through the guided wizard for the initial configuration or stay tuned for Part 2, where I will demonstrate how you can easily automate the initial vROps configurations using the command-line.

Handy VCSA (vCenter Server Appliance) Operational KB Resources

As anyone who knows me can attest, I am a huge fan of the VCSA (vCenter Server Appliance). From time to time, I see interesting VMware KB articles that contain what I think are valuable tidbits of "operational" information that could come in handy in the future. I normally bookmark these in my browser, since you never know when you might need them. I figured that for customers who are currently using the VCSA, having some of these operational tidbits would definitely be helpful, especially during troubleshooting, or to help them build out a list of resources they can reference when they need to update, increase capacity or change the configuration of the VCSA. Instead of just keeping this list for myself, I thought I would share what I have for the latest VCSA 5.5.x, as well as comb through our VMware KB site looking for other handy operational KBs to include.

I have categorized the VCSA KBs into four categories that I felt made the most sense; I am sure you could break it down further, but I thought this would make it easier to process. In addition, I have also included articles from virtuallyGhetto (a subset from this page) that may also apply to these areas, which I have listed at the very bottom in case you are interested in those as well. Hopefully this will be helpful for anyone managing VCSAs, and if there are any that I have missed or you would like to see added, feel free to leave a comment.

  • Minimum Requirements for the VMware vCenter Server 5.x Appliance (2005086)
  • Downloading and deploying the vCenter Server Appliance 5.x (2007619)
Logging & Troubleshooting:
Backups & Recovery: 

virtuallyGhetto VCSA Operational Resources

Logging & Troubleshooting:
Backups & Recovery: