virtuallyGhetto


VSAN 6.2 (vSphere 6.0 Update 2) homelab on 6th Gen Intel NUC

03/03/2016 by William Lam 33 Comments

As many of you know, I have been happily using an Apple Mac Mini for my personal vSphere home lab for the past few years now. I absolutely love the simplicity and versatility of the platform, from easily running a basic vSphere lab to consuming advanced capabilities of the vSphere platform like VMware VSAN or NSX. The Mac Mini also supports more complex networking configurations by allowing you to add an additional network adapter via the built-in Thunderbolt port, something many other similar form factors lack. Having said all that, one major limitation of the Mac Mini platform has always been the limited amount of memory it supports: a maximum of 16GB (the same limitation as other form factors in this space). Although it is definitely possible to run a vSphere lab with only 16GB of memory, it does limit what you can deploy, which is challenging if you want to explore other solutions like VSAN, NSX and vRealize.

I was really hoping that Apple would release an update to their Mac Mini platform last year with support for 32GB of memory, but instead it was a very minor update and mostly a letdown, which you can read more about here. Earlier this year, I found out from fellow blogger Florian Grehl that Intel had just released the 6th generation of the Intel NUC, which officially adds support for 32GB of memory. I have been keeping an eye on the Intel NUC for some time now, but due to the same memory limitation as the Mac Mini, I had never considered it a viable option, especially given that I already own a Mac Mini. With the added support for 32GB of memory and the ability to house two disk drives (M.2 and 2.5"), this was the update I had been waiting for to pull the trigger and refresh my home lab, given that 16GB was just not cutting it for the work I was doing anymore.

There has been quite a bit of interest in what I ended up purchasing for running VSAN 6.2 (vSphere 6.0 Update 2), which has not GA'ed ... yet, so I figured I would put together a post with all the details in case others are looking to build a similar lab. This article is broken down into the following sections:

  • Bill of Materials (BOM)
  • Installation
  • VSAN Configuration
  • Final Word

Disclaimer: The Intel NUC is not on VMware's official Hardware Compatibility List (HCL) and therefore is not officially supported by VMware. Please use this platform at your own risk.

Bill of Materials (BOM)

Below are the components, with links, that I used for my configuration, based partly on budget and partly on recommendations from others with a similar setup. If you think you will need more CPU horsepower, you can look at the Core i5 (NUC6i5SYH) model, which is slightly more expensive than the i3. I opted for an all-flash configuration because I not only wanted the performance but also wanted to take advantage of the much anticipated Deduplication and Compression feature in VSAN 6.2, which is only supported with an all-flash VSAN setup. I also did not need a large amount of storage capacity, but you could pay a bit more for the exact same drive and get a full 1TB if needed. If you do not care for an all-flash setup, you can definitely look at spinning rust, which can give you several TBs of storage at a very reasonable cost. The overall cost of the system for me was ~$700 USD (before taxes), partly because some of the components were slightly discounted through a preferred retailer that my employer provides. I would highly recommend you check whether your employer offers similar benefits, as that can help with the cost if that is important to you. The SSDs actually ended up being cheaper on Amazon, so I purchased them there.

  • 1 x Intel NUC 6th Gen NUC6i3SYH (supports 2 drives: M.2 & 2.5)
  • 2 x Crucial 16GB DDR4
  • 1 x Samsung 850 EVO 250GB M.2 for “Caching” Tier (thanks to my readers, I decided to upgrade to 1 x Samsung SM951 NVMe 128GB M.2 for the “Caching” Tier)
  • 1 x Samsung 850 EVO 500GB 2.5 SATA3 for “Capacity” Tier

Installation

Installing the memory and the SSDs in the NUC was super simple. You just need a regular Phillips screwdriver to remove the four screws at the bottom of the NUC. Once they are loosened, flip the NUC back over while holding the bottom and slowly lift the top off. The M.2 SSD is held in by a smaller Phillips screw, which you will need to remove before you can plug in the device. The memory just plugs right in, and you should hear a click confirming it is inserted all the way. The 2.5" SSD plugs into the drive bay attached to the top of the NUC casing. If you are interested in more details, you can find various unboxing and installation videos online like this one.

UPDATE (05/25/16): Intel has just released BIOS v44, which fully unleashes the power of your NVMe devices. One thing to note from the article is that you do NOT need to unplug the security device; you can update the BIOS by simply downloading the BIOS file and loading it onto a USB key (FAT32).

UPDATE (03/06/16): Intel has just released BIOS v36, which resolves the M.2 SSD issue. If you updated using an earlier version, you can resolve the problem by going into the BIOS and re-enabling the M.2 device, as mentioned in this blog here.

One very important thing to note, which I was warned about by a fellow user, is NOT to update/flash to a newer version of the BIOS. It turns out that if you do, the M.2 SSD will fail to be detected by the system, which sounds like a serious bug if you ask me. The stock BIOS version that came with my Intel NUC is SYSKLi35.86A.0024.2015.1027.2142, in case anyone is interested. I am not sure if you can flash back to the original version, but another user just informed me that he accidentally updated the BIOS and can no longer see the M.2 device 🙁

For the ESXi installation, I just used a regular USB key that I had lying around and the unetbootin tool to create a bootable USB key. I am using the upcoming ESXi 6.0 Update 2 (which has not been released ... yet) and you will be able to use the out-of-the-box ISO that is shipped from VMware; no additional custom drivers are required. Once the ESXi installer loads up, you can install ESXi back onto the same USB key from which it initially booted. I know this is not always common knowledge, as some may think you need an additional USB device to install ESXi onto. Ensure you do not install anything on the two SSDs if you plan to use VSAN, as it requires at least (2 x SSD) or (1 x SSD and 1 x MD).

If you are interested in adding a bit of personalization to your Intel NUC setup and replace the default Intel BIOS splash screen like I have, take a look at this article here for more details.

If you are interested in adding additional network adapters to your Intel NUC via USB Ethernet Adapter, have a look at this article here.

VSAN Configuration

Bootstrapping VSAN Datastore:

  • If you plan to run VSAN on the NUC and you do not have additional external storage on which to deploy and set up things like vCenter Server, you have the option to "bootstrap" VSAN using a single ESXi node to start with, which I have written about in more detail here and here (see the command sketch just below this list). This option allows you to set up VSAN so that you can deploy vCenter Server and then configure the remaining nodes of your VSAN cluster, which will require at least 3 nodes unless you plan on doing a 2-Node VSAN Cluster with the VSAN Witness Appliance. For more detailed instructions on bootstrapping an all-flash VSAN datastore, please take a look at my blog article here.
  • If you plan to *ONLY* run a single VSAN node, that is possible but NOT recommended, given you need a minimum of 3 nodes for VSAN to function properly. After the vCenter Server is deployed, you will need to update the default VSAN VM Storage Policy to either allow "Forced Provisioning" or change the FTT (Failures To Tolerate) from 1 to 0 (i.e. no protection, given you only have a single node). This is required, or else you will run into provisioning issues, as VSAN will prevent you from deploying VMs while it expects two additional VSAN nodes. When logged into the home page of the vSphere Web Client, click on the "VM Storage Policies" icon, edit the "Virtual SAN Default Storage Policy" and change the following values as shown in the screenshot below:

[Screenshot: Virtual SAN Default Storage Policy with the Force provisioning / FTT rules adjusted]
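For reference, below is a minimal sketch of what the single-node all-flash bootstrap looks like from the ESXi Shell. This is an assumption-laden summary of the bootstrap articles referenced above, not a substitute for them: the device identifiers naa.XXX and naa.YYY are hypothetical placeholders, so verify the policy string and your actual device IDs against those articles before running anything.

# Relax the default policy so a single node can provision objects
# (hostFailuresToTolerate=0, forceProvisioning=1)
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i0) (\"forceProvisioning\" i1))"

# Create a new single-node VSAN cluster on this host
esxcli vsan cluster new

# All-flash only: tag the larger SSD as the capacity tier (naa.YYY is a hypothetical device ID)
esxcli vsan storage tag add -d naa.YYY -t capacityFlash

# Claim the disks: -s is the caching-tier SSD, -d is the capacity-tier device
esxcli vsan storage add -s naa.XXX -d naa.YYY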

Installing vCenter Server:

  • If you are new to deploying the vCenter Server, VMware has a deployment guide which you can follow here.

Optimizations:

  • In addition, because this is for a home lab, my buddy Cormac Hogan has a great tip on disabling device monitoring, as the SSD devices may not be on VMware's official HCL and the monitoring can potentially have a negative impact on your lab environment. The following ESXCLI command needs to be run once on each of the ESXi hosts, either in the ESXi Shell or remotely:

esxcli system settings advanced set -o /LSOM/VSANDeviceMonitoring -i 0

  • I also recently learned from reading Cormac's blog that there is a new ESXi Advanced Setting in VSAN 6.2 which allows VSAN to provision the VM swap object as "thin" versus "thick", which has historically been the default. To disable "thick" provisioning, you will need to run the following ESXCLI command on each ESXi host:

esxcli system settings advanced set -o /VSAN/SwapThickProvisionDisabled -i 1
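To confirm that either of these advanced settings took effect, you can read the current value back with the standard esxcli list syntax; a quick verification sketch:

# Display the current value of each advanced setting
esxcli system settings advanced list -o /LSOM/VSANDeviceMonitoring
esxcli system settings advanced list -o /VSAN/SwapThickProvisionDisabled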

  • Lastly, if you plan to run Nested ESXi VMs on top of your physical VSAN cluster, be sure to add the configuration change outlined in this article here, or else you may see some strangeness when trying to create VMFS volumes.
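If memory serves, the change covered in that article is the VSAN.FakeSCSIReservations advanced setting, which fakes SCSI reservation support so that VMFS creation on top of a VSAN datastore can succeed; treat the setting name here as an assumption and confirm it against the referenced article:

# Assumed setting based on the referenced article -- verify before use
esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1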


Final Word

I have only had the NUC for a couple of days, but so far I have been pretty impressed with the ease of setup and the super tiny form factor. I thought the Mac Mini was small and portable, but the NUC really blows it out of the water. I was super happy with the decision to go with an all-flash setup; the deployment of the VCSA was super fast, as you would expect. Compare this to my Mac Mini with spinning rust: for a portion of the VCSA deployment, the fan would go a bit psycho and you could feel the heat if you put your face close to it. I could barely feel any heat from the NUC, and it was dead silent, which is great as it sits in our living room. Like the Mac Mini, the NUC has a regular HDMI port, which is great as I can connect it directly to our TV, and it has plenty of USB ports, which could come in handy if you wanted to play with VSAN using USB-based disks 😉

One neat idea that Duncan Epping brought up in a recent chat was to run a 2-Node VSAN Cluster and have the VSAN Witness Appliance running on a desktop or laptop. This would make for a very simple and affordable VSAN home lab without requiring a 3rd physical ESXi node. I had also thought about doing the same, but instead of 2 NUCs, I would combine my Mac Mini and NUC to form the 2-Node VSAN Cluster and then run the VSAN Witness on my iMac desktop, which has 16GB of memory. This is just another slick way you can leverage this new and powerful platform to run a full-blown VSAN setup. For those of you following my blog, I am also looking to see if there is a way to add a secondary network adapter to the NUC by way of a USB 3.0 based Ethernet adapter. I have already shown that it is definitely possible with older releases of ESXi, and if this works, it could make the NUC even more viable.

Lastly, for those looking for a beefier setup, there are rumors that Intel may be close to releasing another update to the Intel NUC platform, code named "Skull Canyon", which could include a Quad-Core i7 (Broadwell based) along with support for the new USB-C interface, which would be able to run Thunderbolt 3. If true, this could be another option for those looking for a bit more power for their home lab.

A few folks have been asking what I plan to do with my Mac Mini now that I have the NUC. I will probably sell it; it is still a great platform and has a Core i7, which definitely helps with CPU intensive tasks. It also supports two drives, so it is quite inexpensive to purchase another SSD (it already comes with one) to build an all-flash VSAN 6.2 setup. Below are the specs, and if you are interested in the setup, feel free to drop me an email at info.virtuallyghetto [at] gmail [dot] com.

  • Mac Mini 5,3 (Late 2011)
  • Quad-Core i7 (2635QM)
  • 16GB memory
  • 1 x SSD (120GB) Corsair Force GT
  • 1 x MD (750 GB) Seagate Momentus XT
  • 1 x built-in 1GbE Ethernet port
  • 1 x Thunderbolt port
  • 4 x USB ports
  • 1 x HDMI
  • Original packaging available
  • VSAN capable
  • ESXi will install OOTB w/o any issues

Additional Useful Resources:

  • http://www.virten.net/2016/01/vmware-homeserver-esxi-on-6th-gen-intel-nuc/
  • http://www.ivobeerens.nl/2016/02/24/intel-nuc-6th-generation-as-home-server/
  • http://www.sindalschmidt.me/how-to-run-vmware-esxi-on-intel-nuc-part-1-installation/

Filed Under: ESXi, Home Lab, Not Supported, VSAN, vSphere 6.0 Tagged With: esxi 6.0, homelab, Intel NUC, notsupported, Virtual SAN, VSAN, VSAN 6.2, vSphere 6.0 Update 2

Migrating ESXi to a Distributed Virtual Switch with a single NIC running vCenter Server

11/18/2015 by William Lam 25 Comments

Earlier this week I needed to test something which required a VMware Distributed Virtual Switch (VDS), and it had to be a physical setup, so Nested ESXi was out of the question. I could have used my remote lab, but given that what I was testing was a bit "experimental", I preferred using my home lab in case I needed direct console access. At home, I run ESXi on a single Apple Mac Mini, and one of the challenges with this and other similar platforms (e.g. Intel NUC) is that they only have a single network interface. As you might have guessed, this is a problem when looking to migrate from a Virtual Standard Switch (VSS) to a VDS, as the migration normally requires at least two NICs.

Unfortunately, I had no other choice and needed to find a solution. After a couple of minutes of searching the web, I stumbled across this serverfault thread here, which provided a partial solution to my problem. In vSphere 5.1, we introduced a new feature which automatically rolls back a network configuration change if it negatively impacts network connectivity to your vCenter Server. This feature can be disabled temporarily by editing the vCenter Server Advanced Setting (config.vpxd.network.rollback), which allows us to bypass the single-NIC issue; however, this does not solve the problem entirely. What ends up happening is that the single pNIC is now associated with the VDS, but the VM portgroups are not migrated. This is problematic because the vCenter Server is running on the very ESXi host it is managing and has now lost network connectivity 🙂

I lost access to my vCenter Server, and even though I could connect directly to the ESXi host, I was not able to change the VM network to the Distributed Virtual Portgroup (DVPG). This is actually expected behavior, and there is an easy workaround; let me explain. When you create a DVPG, there are three different bindings that can be configured: Static, Dynamic, and Ephemeral, with Static binding used by default. Both Static and Dynamic DVPGs can only be managed through vCenter Server, and because of this, you cannot change the VM network to a non-Ephemeral DVPG; in fact, it is not even listed when connecting with the vSphere C# Client. The simple workaround is to create a DVPG using the Ephemeral binding, which will allow you to change the VM network of your vCenter Server, and that is the last piece to solving this puzzle.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Here are the exact steps to take if you wish to migrate an ESXi host with a single NIC from a VSS to a VDS while that host is also running your vCenter Server:

Step 1 - Change the following vCenter Server Advanced Setting config.vpxd.network.rollback to false:

[Screenshot: vCenter Server Advanced Settings with config.vpxd.network.rollback set to false]
Note: Remember to re-enable this feature once you have completed the migration.

Step 2 - Create a new VDS and the associated Portgroups for both your VMkernel interfaces and VM Networks. For the DVPG which will be used for the vCenter Server's VM network, be sure to change the binding to Ephemeral before proceeding with the VDS migration.

Step 3 - Proceed with the normal VDS migration wizard using the vSphere Web/C# Client and ensure that you perform the correct mappings. Once completed, you should be able to connect directly to the ESXi host using either the vSphere C# Client or the ESXi Embedded Host Client to confirm that the VDS migration was successful, as seen in the screenshot below.

[Screenshot: ESXi host networking view confirming the VDS migration]
Note: If you forget to perform Step 2 (which I initially did), you will need to log in to the DCUI of your ESXi host and restore the networking configuration.

Step 4 - The last and final step is to change the VM network for your vCenter Server. In my case, I am using the VCSA, and due to a bug I found in the Embedded Host Client, you will need to use the vSphere C# Client to perform this change if you are running VCSA 6.x. If you are running Windows VC or VCSA 5.x, you can use the Embedded Host Client to modify the VM network to use the new DVPG.

Once you have completed the VM reconfiguration, you should be able to log in to your vCenter Server, which is now connected to a DVPG running on a VDS backed by a single NIC on your ESXi host 😀

There is probably no good use case for this outside of home labs, but I was happy to find a solution, and hopefully it comes in handy for others in a similar situation who would like to use and learn more about VMware VDS.


Filed Under: ESXi, Not Supported, vSphere Tagged With: distributed portgroup, distributed virtual switch, dvs, esxi, notsupported, vds

ESXi 6.0 on Apple Xserve 2,1

08/20/2015 by William Lam 7 Comments

I really enjoy hearing from my readers, especially when they share some of the unique challenges they have come across or boundaries they have pushed with our VMware software. Several weeks back I received an interesting email from a reader named John Clendenen, who was able to get ESXi 6.0 running on both his Apple Xserve 3,1 and his Xserve 2,1! To be honest, I was pretty surprised to hear that this worked, and not only that, there was not a whole lot that John had to do to get it working. I thought this was a pretty cool setup and asked if John was interested in sharing more details with the VMware/Mac OS X community in case others were interested.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

*** This is a guest blog post from John Clendenen ***

For the past 5 years, I have lived in New York, where I work at various print and post-production studios. The IT situation in many of these locations is often home-grown and sub-adequate, if not barely functional, so I took it upon myself to learn OS X Server administration. I have a background in computer science and networking, so it wasn't a huge leap to step into administration. I was able to gradually build one studio's IT infrastructure from a single AFP share with incessant permissions problems to an Open Directory driven, single sign-on infrastructure with mobile homes, messaging, etc. However, with each OS X Server update, things broke, and I was spending a lot of time putting out fires.

This led me to pursue virtualization in hopes of isolating services on separate machines, without telling the studio to go buy 8 Apple Mac Minis. However, I needed to learn how to do this before pitching it to them, so I looked on Craigslist for some cheap hardware. I found an Xserve 2,1, which I was able to talk down to $100. I had also found a brief post in a thread saying that the Xserve 2,1 ran ESXi 5.5 without issue. I figured I'd take the plunge and just resell it on eBay if it didn't work out.

Since then, my home lab has grown considerably, and I have learned that the best way to provide Mac services is simply to use Linux (I've had a great experience with Netatalk). That said, Open Directory, Software Update, Caching and a few other services still need to run on a Mac, so it's still necessary to have the hardware. For a while, I had 3 Xserves, all purchased very cheaply, running VMs. I just sold one, and will sell another in the next month or two in favor of some Supermicro hardware (I really am mostly running Linux VMs at this point). I'll keep the last one to run a few Mac OS X VMs. I'm still working on getting Samba 4 to work as the PDC, but once that is running smoothly, I'll have a functional environment that I can take to the studios where I work (with approved hardware, of course). Then I'll have an improved work experience, while also pulling in some extra income on installation/maintenance.

Anyway, you’ve come here to read about running ESXi 6.0 on an Xserve 2,1. Well, it works. It’s 7 years old and not officially supported, but you already know that. So, if you were considering it, I doubt anything here will dissuade you. On top of that, there’s not much for me to say other than: it works. No tricks to it; just install and you’re off.

That said, I do have some recommendations and tips to help you get the most out of this hardware. Two months ago, I swapped out the RAID card for the standard backplane/interconnect and upgraded the RAM, and it hasn’t skipped a beat since.

My System

This is my custom 13U home rack with sound insulation and active airflow. Networking is in the back (Ubiquiti). It sits in a lofted storage area with an A/C and vents into the bathroom.

Here you see an Xserve 3,1 on top of an Xserve 2,1. There’s a blank unit because I just sold an Xserve 2,1 on eBay, and the other 2,1 will be for sale soon as well to make room for a 4-node 2U Supermicro server. The NAS comprises a 4U head unit and a 4U JBOD running CentOS. Last, of course, is a 2U CyberPower UPS, which is really just there to condition the power and keep brownouts from taking down the system.

I have about a dozen VMs running between the 2 Xserves. I have each Mac OS X service separated onto its own installation. This way I can run updates per service. It’s especially nice to have Open Directory separate from less important services. I also have Debian, CentOS and OpenBSD VMs running various services. Getting the Xserve 3,1 running ESXi 6 is possible, but more problematic. Now that I have it working, I’m dropping the 2,1s simply because their processors aren’t multithreaded. I am currently working on a companion article detailing my experience with the Xserve 3,1, so that information will be available soon.

Storage

1. Don’t use the RAID backplane/interconnect or count on using it. ESXi 6 does not recognize it, and RDM appears to work at first, then crashes your VM and never shows up again. You can have it installed in the Xserve without any issue, but you’ll get a lot more mileage out of the hardware with the standard backplane/interconnect.

The backplane you want appears periodically on eBay. The part number is: 661-4648

2. If/once you have the non-RAID backplane/interconnect, keep in mind that it is SATA II and will only support 3Gb/s. I am using 3 old 500GB WD RE3s, but I’d recommend using some older SSDs that will max out the SATA II interface without paying for speed you can’t use. Be sure to consult the Apple Drive Module compatibility page to make sure you have the right drive caddies. They all fit, but they don’t all work.

Apple Drive Module Compatibility Chart: https://support.apple.com/en-us/HT1219

3. PCIe flash is a good idea; whether you use it for cache, VMs or as the ESXi boot disk, it is by far the fastest storage option. I have not invested in it, but the good people at Mushkin have told me their Scorpion PCIe flash will work with ESXi 6. Please contact them yourself before investing in one of their cards, as I have not tested them. While this will give you the best performance out of your Xserve 2,1, it seems like overkill to me; but hey, if you want to push this thing, you might find some meaningful performance gains here.

Mushkin Scorpion PCIE Flash: http://www.poweredbymushkin.com/index.php/products/solid-state-drives.html

4. It might occur to you that replacing the optical drive with an SSD would be a good idea. While MCE Technologies makes an “Optibay” for the Xserve 2,1, the connection is IDE, so this is not recommended. I also don’t know if ESXi would recognize it; my gut says probably, but again, it’s too slow to be useful. It isn’t that cheap either.

MCE Technologies Optibay: http://store.mcetech.com/mm/merchant.mvc?Store_code=MTOS&Screen=PROD&Product_Code=OBSXGB-XS

Network

There are a lot of options here. You can plug in any network card you want, really, as long as you can at least find an anecdotal account of it working on ESXi 6.

I have one such anecdote for you, and it is also a strong suggestion. The company Silicom makes/made several models of gigabit NICs which are all incredibly inexpensive, are all over eBay and work in ESXi 6. Buy the 6-port model: it’s cheap, and you’ll get great scaling with ESXi load balancing across 8 gigabit ports (the card’s 6 plus the Xserve’s 2 built-in).

The Xserve 2,1 has the added advantage here of an optional PCI-X riser. If your model has one, or you find one for cheap, you can save even more on the Silicom NIC: the PCIe models go for $60-$80 on eBay, while the PCI-X models go for $40.


Memory

ECC DDR2 is pretty cheap and easy to find used. I recommend Memory4Less, though; I had to return some RAM that was mislabeled from a random eBay distributor, and Memory4Less will get it right. http://www.memory4less.com/


Processors

One great perk of the Xserve 2,1 is that you can upgrade the single-processor hardware to dual processors. You can pick up an additional processor on eBay for $40 or so, but you’ll need to get a heat sink as well. The single-processor units come with a fake aluminum heat sink; do not use it. You want a copper one. I believe the heat sinks in the Xserve 1,1 are the same. Don’t forget the thermal paste.


Minor Issues

1. The Performance tab throws an error.

2. I’m not sure about the hardware sensors, but it looks like not everything is working, even though it shows up. I did not do any testing here.

Stay tuned for Part II of John's guest blog post on running ESXi 6.0 on the Xserve 3,1.


Filed Under: Apple, ESXi, Not Supported, vSphere 6.0 Tagged With: apple, esxi 6.0, notsupported, osx, xserve

VMworld Barcelona #NotSupported Tips/Tricks for vSphere 5.5 Slides Posted

10/18/2013 by William Lam 3 Comments

I hope everyone got back home quickly and safely from VMworld Barcelona (unlike myself; for those of you who follow me on Twitter, it took a bit longer than I had hoped). For those of you who attended my #NotSupported Tips/Tricks for vSphere 5.5 session at VMworld Barcelona earlier in the week, I have posted my SlideRocket presentation below, as well as the recording, thanks to the vBrown Bag team. If you could not attend the session or did not go to VMworld, you can view all vBrown Bag Tech Talk recordings by checking out this link here. I highly recommend you check out the ENTIRE presentation, as there was an exciting announcement that I made at the VERY END.

Disclaimer: I think it should be pretty obvious, but the things discussed in the presentation are not officially supported by VMware. Use at your own risk.

I would like to thank everyone that attended my session; I really enjoyed the crowd and the questions/discussions afterwards. I know you could have been elsewhere, such as the Solution Exchange with a nice beverage, so thank you all for attending, and I hope everyone enjoyed it. I would also like to give a big shout-out to the vBrown Bag team of Jon Harris, Kyle Murray, Damian Karlson and Gregg Robertson for putting on such an awesome event for the VMware community and for their assistance in quickly getting the AV set up for my presentation.

Here is the presentation:

Here is the video recording:

Here are a couple of pictures from the audience:


Filed Under: Uncategorized Tagged With: nested, nested virtualization, notsupported, vmware tools, vmworld

Presenting NotSupported vBrownBag Tech Talk at VMworld Barcelona 2013

10/07/2013 by William Lam Leave a Comment

VMworld Barcelona 2013 is only one week away, and if you are going to be attending, I would highly recommend you check out the awesome vBrownBag Tech Talks, which have been running for the second year in a row now at both VMworld US and Europe. There is a variety of topics covered, from deep dives into new technologies to building home labs for your VMware environment; there is literally something for everyone. The talks take the form of quick lightning talks, usually running anywhere from 20-30 minutes. If you get a chance, you should definitely drop by the community lounge and check out a few of the talks.

Last year I really enjoyed taking part in the NotSupported themed sessions (a concept by Randy Keener) and presented on vInception (Nested Virtualization) at VMworld US and the vGhetto Lab at VMworld Europe. Both sessions were very well attended, and the audience seemed to enjoy the content. Due to a hectic schedule for VMworld US, I was not able to submit a session, but for VMworld Barcelona I will be presenting another NotSupported session called NotSupported Tips/Tricks for vSphere 5.5 on Tuesday, Oct. 15th at 16:30-17:00.

If you have attended one of my previous sessions, you know it will be technical, and hopefully you will leave knowing even more NotSupported goodies that you can use in your vSphere environment. For those of you who attend and stay until the end, I will have a vBrownBag exclusive in which I will show off some super cool vAwesomeness that you will not want to miss! I hope to see you all there.


Filed Under: Uncategorized Tagged With: esxi 5.5, notsupported, vmworld, vSphere 5.5

New vCenter Server Simulator 2.0 enhancements in VCSA 5.5

09/04/2013 by William Lam 46 Comments

Last year I wrote about a very interesting tool called vCenter Server Simulator (VCSIM), which allows a user to quickly simulate a VMware environment comprising thousands of ESXi hosts and virtual machines. VCSIM can benefit a variety of use cases, from learning about the vSphere API and creating reports for vSphere or vCloud Director to building vSphere Web Client plugins that help visualize large inventories. There was overwhelming interest in VCSIM last year, and I received some great feedback and feature requests which I fed back to the VMware engineers who developed this internal tool.

With the upcoming version of vSphere 5.5 to be released very soon, I was wondering whether there would be any new features for VCSIM in VCSA 5.5. I reached out to one of the engineers, Haiping Yang, who works in the Performance Engineering team and is currently taking over some of the development of VCSIM. Some of you might be familiar with her work, such as the recent visualEsxtop, esxtop and resxtop, to name a few. In talking to Haiping, I found that she has been quite busy adding cool new features to VCSIM, and this is on top of her regular day job!

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Here is a quick summary of the new features of VCSIM 2.0:

Distributed Virtual Switch (VDS) Support:

  • Add / Remove ESXi hosts from VDS
  • Create / Delete Distributed Virtual Portgroup
  • Reconfigure Distributed Virtual Portgroup
    • Add / Remove VM from Distributed Portgroup

vCloud Networking & Security (vCNS) Support:

  • Create / Delete vCNS Gateway
  • Create / Delete Isolated/Routed Org Networks
  • Create / Delete vApp Networks
  • Deploy / Undeploy vApp with DHCP service enabled

Persistent Inventory Configuration upon restart:

  • Folder, Cluster, Resource Pool, Host, Datastore, Virtual Machine, Network and VDS

Custom Configuration Support:

  • ESXi version template
  • ESXi configuration template
  • Datastore configuration
  • Virtual Machine datastore

Easy startup commands:

  • vmware-vcsim-start
  • vmware-vcsim-stop [true|false] - Determines whether the inventory is cleared after stopping VCSIM

Note: Before you can use VCSIM, you will need to configure the VCSA as you normally would by going through the VAMI interface or running through the SSH commands noted in this article.

I will not go over every single feature mentioned above, but I did want to take a look at a few noteworthy features such as the new VCSIM start/stop command, datastore configuration and ESXi host configuration templates.

VCSIM Start/Stop Commands:

With the previous version of VCSIM, you had to manually edit the vCenter Server configuration file (vpxd.conf) and append the necessary VCSIM configurations. In this release, we now have an easy-to-use command-line utility to start and stop VCSIM. The vmware-vcsim-start command supports several startup options.

To view the list of supported options, just run the following command:

vmware-vcsim-start help

Option 1 - You can specify a VCSIM configuration file and you can find several examples located in /etc/vmware-vpx/vcsim/model

Option 2 - You can specify either the keyword "empty" for a blank vSphere inventory or "default" which will automatically use /etc/vmware-vpx/vcsim/model/vcsim-default.cfg inventory configuration

Option 3 - You can just specify the inventory layout on the command-line. An example would be "custom:dc=1,cluster=1,rp=1,host=1,vm=1,vm_on=1,latency=true"
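Putting Option 3 together, the full invocation for that example inventory specification would look like the line below; the counts in the spec string are just an example, and any of the keys shown above can be adjusted:

vmware-vcsim-start "custom:dc=1,cluster=1,rp=1,host=1,vm=1,vm_on=1,latency=true"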

To get a list of all the available VCSIM configurations, take a look at /etc/vmware-vpx/vcsim/model/vcsim.cfg.template

Here is an example of starting VCSIM using the "default" mode:

vmware-vcsim-start default


Datastore Configuration:

Custom datastore configuration was something that was much sought after in VCSIM 1.0; unfortunately, that version offered only a single global datastore that was automatically "connected" to all simulated ESXi hosts. The new version of VCSIM supports custom datastore configurations that can be defined globally, at the cluster level, and as local storage, along with a string prefix which can help you separate out different VCSIM instances.

Here is an example of the configuration that would need to be added to the VCSIM configuration file:

<datastore>
   <global>1</global>
   <cluster>4</cluster>
   <local>5</local>
   <prefix>vghetto</prefix>
</datastore>

Here is what one of the simulated ESXi hosts would show for its datastores:

[Screenshot: datastore list of a simulated ESXi host]

ESXi Configuration Template:

Another useful feature, which I personally asked for, is the ability to customize an individual simulated ESXi host. Though this is still a work in progress, what you can do with VCSIM 2.0 is customize the ESXi host version as well as the datastores on a per-host basis. If you take a look at vcsim.cfg.template, you will find a configuration line that looks like:

vcsim/model/hostConfig

This specifies a directory that contains custom simulated ESXi host templates and their configurations. A sample host template is provided at /etc/vmware-vpx/vcsim/model/hostConfig.xml.template; currently, the file must be named after the default simulated hostname (e.g. DC0_C0_H0.xml).

Here is an example of what that host template can look like:

<hostConfig>
  <datastores>
     <ds id="virtuallyGhetto-datastore-1"/>
     <ds id="virtuallyGhetto-datastore-2"/>
     <ds id="virtuallyGhetto-datastore-3"/>
  </datastores>
</hostConfig>

Now if we go back to our DC0_C0_H0 ESXi host, we can see that the host template overrides the global configuration.

For the two examples above, here is the custom VCSIM configuration file that I used, which I called vcsim-virtuallyghetto.cfg:

<simulator>
  <enabled>true</enabled>
  <initInventory>vcsim/model/initInventory-default.cfg</initInventory>
  <hostConfigLocation>vcsim/model/hostConfig</hostConfigLocation>
  <datastore>
     <global>1</global>
     <cluster>4</cluster>
     <local>5</local>
     <prefix>vghetto</prefix>
  </datastore>
</simulator>

I have already asked for the ability to fully customize the simulated ESXi host display name and have been told that this is something they would consider for a future release. VCSIM 2.0 has also been improved to operate better with vCloud Networking & Security and vCloud Director. I was able to quickly test VCSIM 2.0 with the latest version of vCloud Director 5.5, and everything seems to be working fine. You can follow the existing instructions here for vCloud Director setup with VCSIM.

As you can see, VCSIM 2.0 contains many new features, and I highly encourage you to give it a spin when vSphere 5.5 is made generally available. There are definitely some additional fit-and-finish features that Haiping just could not get into this release; hopefully we will get those in a future release of VCSIM, along with additional ESXi template versions. If you have any feedback, comments or feature requests, feel free to leave a comment and I will make sure it reaches Haiping and the development team. I do not want to spoil the surprise, but I will just say that one of the features coming in VCSIM 3.0 will be quite AWESOME! 😀 (sorry for the tease)


Filed Under: VCSA, vSphere 5.5 Tagged With: notsupported, simulator, vcenter, vcsa, vcsim, vcva, vSphere 5.5

