virtuallyGhetto


Home Lab

Home Labs made easier with VSAN 6.0 + USB Disks

03/04/2015 by William Lam 23 Comments

VSAN 6.0 includes a large number of new enhancements and capabilities that I am sure many of you are excited to try out in your lab. One of the challenges with running VSAN in a home lab environment (non-Nested ESXi) is finding a platform that is both functional and cost effective. Some of the most popular platforms I have seen customers use for running VSAN in their home labs are the Intel NUC and the Apple Mac Mini. Putting aside the memory constraints of these platforms, the number of internal disk slots is usually limited to two. That is just enough to meet the minimum requirement for VSAN: a single SSD and a single magnetic disk (MD).

If you wanted to scale up and add additional drives, either for capacity purposes or for testing out a new configuration, you are pretty much out of luck, right? Well, not necessarily. During the development of VSAN 6.0, I came across a cool little nugget from one of the VSAN Engineers: USB-based disks can be claimed by VSAN, which can be quite helpful for testing in a lab environment, especially on the hardware platforms mentioned earlier.

For a VSAN home lab, cheap consumer USB-based disks are a pretty cost-effective way to enhance platforms like the Apple Mac Mini and Intel NUC: you can purchase several TBs of capacity for less than a hundred dollars or so, and USB 3.0 connectivity provides reasonable throughput.

Disclaimer: This is not officially supported by VMware and should not be used in production or for evaluating VSAN, especially when it comes to performance or expected behavior, as this is not how the product is intended to work. Please use supported hardware found on the VMware VSAN HCL for official testing or evaluations.

Below are the instructions on how to enable USB-based disks to be claimable by VSAN.

Step 1 - Disable the USB Arbitrator service so that USB devices can be seen by the ESXi host by running the following two commands in the ESXi Shell:

/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off
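
If you want to double-check the state of the service before moving on, the init script also supports a status action (a quick sanity check; the exact output wording can vary between ESXi releases):

/etc/init.d/usbarbitrator status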

Step 2 - Enable the following ESXi Advanced Setting (/VSAN/AllowUsbDisks) to allow USB disks to be claimed by VSAN by running the following command in the ESXi Shell:

esxcli system settings advanced set -o /VSAN/AllowUsbDisks -i 1
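
To confirm the setting took effect, you can read it back; the Int Value field in the output should now show 1:

esxcli system settings advanced list -o /VSAN/AllowUsbDisks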

Step 3 - Connect your USB-based disks to your ESXi host (this can actually be done beforehand) and verify that they are seen by running the following command in the ESXi Shell:

vdq -q
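
If you want more detail on any device that vdq reports as eligible, you can also look it up individually (the device name below is taken from the example in the next step; substitute your own):

esxcli storage core device list -d mpx.vmhba32:C0:T0:L0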

Step 4 - If you are bootstrapping vCenter Server onto the VSAN Datastore, first create a VSAN Cluster by running "esxcli vsan cluster new", and then contribute the storage by adding the SSD device and the respective USB-based disks, using the device names from the previous step, in the ESXi Shell:

esxcli vsan storage add -s t10.ATA_____Corsair_Force_GT________________________12136500000013420576 -d mpx.vmhba32:C0:T0:L0 -d mpx.vmhba33:C0:T0:L0 -d mpx.vmhba34:C0:T0:L0 -d mpx.vmhba40:C0:T0:L0
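
For reference, the VSAN cluster has to exist before you can contribute storage to it, and you can verify the result once the disks have been added. A minimal sketch of the sequence on the host being bootstrapped (the SSD and USB device names are placeholders; use the names reported by vdq -q):

esxcli vsan cluster new
esxcli vsan storage add -s <SSD-device> -d <USB-disk-1> -d <USB-disk-2>
esxcli vsan cluster get
esxcli vsan storage list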

If we take a look at the VSAN configuration in the vSphere Web Client, we can see that we now have 4 USB-based disks contributing storage to the VSAN Disk Group. In this particular configuration, I was using my Mac Mini, which has 4 x USB 3.0 devices connected to provide the "MD" disks and one internal drive with an SSD. Ideally, you would probably want to boot ESXi from a USB device and then claim one of the internal drives along with three other USB devices for the most optimal configuration.

As a bonus, there is one other nugget that I discovered while testing USB-based disks for VSAN 6.0: another hidden option that supports iSCSI-based disks with VSAN. You will need to enable the option called /VSAN/AllowISCSIDisks using the same method as the USB-based disk option. This is not something I have personally tested, so YMMV, but I suspect it will allow VSAN to claim an iSCSI device that has been connected to an ESXi host and let it contribute to a VSAN Disk Group, which is another way of providing additional capacity to VSAN on platforms with a restricted number of disk slots. Remember, neither of these solutions should be used beyond home labs and they are not officially supported by VMware, so do not bother trying to do anything fancy or running performance tests; you are just going to let yourself down and not see the full potential of VSAN 🙂
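
If you want to experiment with it, the iSCSI option is enabled the same way as the USB one (again, untested on my end and just as unsupported):

esxcli system settings advanced set -o /VSAN/AllowISCSIDisks -i 1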


Filed Under: Apple, ESXCLI, ESXi, Home Lab, Not Supported, VSAN, vSphere 6.0 Tagged With: AllowISCSIDisks, AllowUsbDisks, apple, esxcli, mac mini, usb, Virtual SAN, VSAN, vSphere 6.0

Thunderbolt Storage for ESXi

01/21/2015 by William Lam 45 Comments

A question that I frequently receive is whether ESXi supports Thunderbolt-based storage devices. This is especially interesting for folks running ESXi on an Apple Mac Mini due to the limited number of IO connections the Mac Minis have. If you look on VMware's HCL, you will not find any supported Thunderbolt storage devices, nor are there any being actively tested with ESXi, at least as far as I know.

Having said that, generally speaking from an ESXi host point of view, the Thunderbolt interface is just seen as an extended PCIe bus. This means that whatever storage device is connected on the other end can work with ESXi as long as there is a driver in ESXi that can communicate with that device. This is analogous to having a RAID card and needing the proper device driver on ESXi to see its storage.
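
A quick way to check whether ESXi even sees the controller sitting behind a Thunderbolt device is to look at the PCI device and storage adapter lists from the ESXi Shell (a sanity check only; seeing the PCI device does not guarantee that a driver will claim it):

esxcli hardware pci list
esxcli storage core adapter list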

Even though VMware is not actively testing Thunderbolt-based storage devices, there are a few folks out in the community who have and have been successful. I wanted to share these stories with the community for those that might be interested in this topic and hopefully others who have had similar success can also share more details about their setup.

UPDATE (09/12/16) - ESXi Thunderbolt Driver to Fibre Channel Storage from ATTO

Disclaimer: All solutions listed below are from the community and decisions to purchase based on these solutions will be at your own risk. I hold no responsibility if the listed solutions do not work for whatever reason.

Solution #1 - Pegasus R6 Thunderbolt Storage Enclosure

This was the first Thunderbolt storage device that I had ever seen confirmed publicly to work with ESXi after installing a STEX driver VIB. You can find more details here.

Solution #2 - Sonnet Echo Express III-R Rackmount Thunderbolt 2 Expansion Chassis & RackMac Mini Enclosure

This next solution was recently shared with me by Marc Huppert, who recently expanded his home lab. Marc combined a Thunderbolt expansion chassis with a Mac Mini enclosure to expose Fibre Channel storage to his Mac Minis. You can find more details here.

Solution #3 - xMac Mini Server Enclosure

I came across this solution while searching online which also uses another Mac Mini Thunderbolt expansion chassis connected to Fibre Channel based storage. You can find more details here.

Solution #4 - Sonnet xMac Pro Server Enclosure

Thanks to Joshua for sharing his solution. You can find more details in the comments here.

Solution #5 - LaCie Rugged Thunderbolt drives

Thanks to Philip for sharing his solution. You can find more details in the comments here.

Solution #6 - ARC-8050T2 Thunderbolt 2 RAID

Thanks to Jason for sharing his solution. You can find more details in the comments here.

Solution #7 - Another Sonnet xMac Pro Server Enclosure + EMC VNX

Thanks to Johann for sharing his solution. You can find more details here.

Solution #8 - LaCie Little Big Disk Thunderbolt 2 with 2013 Mac Pro w/ESXi 6.0

Thanks to Thomas for sharing his solution. You can find more details here.

Solution #9 - Sonnet Echo Express III with Mac Pro 6,1 and ATTO ExpressSAS H680 w/ESXi 6.0

Thanks to Grasshopper for sharing details here and here.

Solution #10 - OWC ThunderBay 4 RAID 4-Bay External Drive w/Dual Thunderbolt 2

Thanks to Gregg Green for sharing his use of the ThunderBay with his 2012 Mac Mini.

If there are other Thunderbolt-based storage devices that you or others have had success with under ESXi, feel free to leave a comment with details and I will add it to the post. If any Thunderbolt storage device vendors would like to send me a demo unit, I would be more than happy to test whether it works with ESXi 🙂


Filed Under: Apple, ESXi, Home Lab Tagged With: apple, mac mini, mac pro, thunderbolt

A killer custom Apple Mac Mini setup running VSAN

10/21/2014 by William Lam 12 Comments

*** This is a guest blog post from Peter Bjork ***

The first time I was briefed on VMware VSAN, I fell in love. I finally knew how I would build my Home Lab.

Let me first introduce myself: my name is Peter Björk and I work at VMware as Lead Specialist within the EMEA EUC Practice. I fortunately have the opportunity to limit my focus to a very few products and truly specialize in these. I cover two products, VMware ThinApp and VMware Workspace Portal, and one feature: the Application Publishing feature of VMware Horizon 6. I'm an end-user application kind of guy. That said, you should understand that I'm far from your ESXi and vSphere expert. If you want to keep up with the latest news in the VMware End-User Computing space, make sure to follow me on Twitter; my handle is @thepeb. When I'm not a guest blogger, I frequently blog on the official ThinApp and Horizon Tech blogs.

In my role I produce a lot of blog posts and internal enablement material. I perform many tests using early code drops and on a daily basis I run my home lab to deliver live demos. I need a Home Lab that I can trust and that supports all my work tasks. I started building my lab many years ago. It all started with a single mid tower white box, but pretty soon I ran into resource constraints. I started to investigate what my next upgrade would look like.

I had a few requirements:

  • Keep the noise down
  • Shouldn’t occupy that much space
  • Should be affordable
  • Modular; I do not have the money to buy everything upfront, so it should be something I can build on over time.
  • Should be able to run VMware ESXi/vSphere
  • Should be cool

Being an Apple junkie for many years, my eyes soon fell on the Apple Mac Minis, and I stumbled upon this great blog by William Lam that you are reading right now. At the same time I started to hear about VSAN, and my design was pretty much decided: I was going to build a Mac Mini cluster using VSAN as storage. While I do have a Synology NAS, I only use it for my private files. It is not used in my home lab; for reasons I cannot really explain, I wanted to keep it separate and use a dedicated storage solution for the lab.

Now that I had decided to build my home lab, I went and bought my first Mac Mini. To keep costs down I found two used late 2012 models with i7 CPUs. Since VSAN requires one SSD and one HDD, I had to upgrade them using the OWC Data Doubler mounting kit. I also upgraded the memory to 16GB of RAM in each Mac Mini. This setup gave me some extra resources, and together with my old Tower Server I could start building my VSAN cluster. I started with the VSAN beta and quickly realized that VSAN didn't support my setup. I waited for the GA release of VSAN, and on the release date I decided to go for a pure Mac Mini VSAN setup, so I stole my family's HTPC, a late 2012 Mac Mini model with an i5 CPU. (I managed to get away with it because I replaced it with an Apple TV.) I took one HDD and the SSD from my old Tower Server and put them into the i5 Mac Mini. While I managed to get VSAN up and running, it ran for only an hour or so before I lost one disk in my VSAN setup. I recovered the disk through a simple reboot, but then the next disk went down. The reason for the instability was that the GA release of VSAN did not support the AHCI controller. Hugely disappointed, I had to run my home lab on locally attached storage, and my dreams of VSAN were just that, dreams. In my eagerness I had already migrated the majority of my VMs onto the VSAN Datastore, so I pretty much lost my entire home lab.

After complaining to my colleagues, I found out that AHCI controller support for VSAN was coming in vSphere 5.5 U2, and I heard it was likely to solve my problems. So October 9th came and vSphere 5.5 U2 was finally here. To my joy, my three Mac Minis were finally able to run VSAN, and it was completely stable.

Let's take a closer look at my setup. Below is an overview of the setup and how things are tied together.

[Image: overview of the home lab setup]
My VSAN Datastore houses most of my VMs. My old Tower Server is connected to the VSAN Datastore but does not currently contribute any storage. On the Tower Server I host my management VMs. Since I got burned losing all my VMs, I make sure to keep my management VMs on a local disk in the Tower Server. Since my environment has been running quite stably for nearly two weeks now, I'm considering migrating all of my VMs onto the VSAN Datastore.

I have noticed one issue so far, and it is with my i5-based Mac Mini. One day it was reported as not connected in vCenter Server. The machine was running, but I got a lot of timeouts when I pinged it. While I was thinking about rebooting the host, it showed up as connected again, and since then I've not noticed any other issues. I suspect the i5 CPU isn't powerful enough to host a couple of VMs while also being part of the VSAN cluster; when I saw it disappear it might have been running some heavy workloads. So with this in mind, I would recommend running i7 Mac Minis and leaving the i5 models for HTPC workloads :).

Another thing I've noticed is that the Mac Minis run quite hot. No power-saving functionality is active, and my small server room doesn't have cooling. The room is constantly around 30-35 degrees Celsius (86-95F), but the gear just keeps on running. The only time I got a little worried was when the room's temperature peaked at 45 degrees Celsius (113F); for Sweden, that is an exceptionally warm summer day. Leaving the door to the room open for a while helps cool things down. I'm quite impressed by how durable the Mac Minis are; my first two have been running like this for well over a year now.

Here's a picture of my server room. While I do have a UPS, there is no cooling and there are no windows, so the room tends to be quite warm. Stacking the Mac Minis on top of each other doesn't really help with cooling either. When I started stacking them, I realized how silly it would be to have three separate power cords, so I ended up creating a custom Y-Y-Y cable (the last Y is for future expansion).



[Images: Y-cable inside and Y connector]
The power cord is a simple lamp cable (0.75 mm²) onto which the three original Apple power cables have been butchered together. The Y-connector was found in a local Swedish hardware store. Since the Mac Mini's maximum continuous power consumption is 85W, a 0.75 mm² cable works perfectly: a 2 meter (6.56 foot) 0.75 mm² cable can carry at least 3A, and my three Mac Minis only draw about 1.1A (3 x 85W / 230V ≈ 1.1A). In 120V countries you would have 2.5-3A running through the cable, but this still wouldn't be a problem.

Since the Mac Minis only have a single onboard NIC and I wanted two physically separated networks, I had to get a Thunderbolt Ethernet NIC. As shown in the overview picture, I am running both VSAN traffic and VM traffic over the same NIC. This is probably not ideal from a performance point of view, but for my EUC-related workloads I've not noticed any bottlenecks. In fact, I'm very pleased with the performance and with the benefits of having shared storage, so features like DRS and vMotion can handle the balance between my hosts, and I'm super happy with this setup.

I found that the easiest way to install ESXi was to use VMware Fusion to install it onto a USB key. Then I simply plug the USB key into my Mac Mini and I'm up and running. I do need an external monitor and keyboard to configure ESXi initially.

As for next steps, I'm planning on getting an SSD and an extra HDD for my Tower Server. This would allow the Tower Server to participate in the VSAN cluster and contribute additional capacity. If the opportunity arises and I can find another Mac Mini with an i7 CPU for a decent price, I would also like to replace the i5. Other than that, I don't think I need much else. Well, I could always use a little more RAM of course (who couldn't?), but disk and CPU utilization stay very low all the time.

Technical details:

  • All Mac Minis are late 2012 models
    • All SSD disks are different models and vendors. Their capacity ranges from 120 to 250GB. Since I've had a couple of SSD crashes, I made sure to purchase the more heavy-duty models offered by the vendors, but none of them are designed for constant use in servers.
    • All Mac Minis have 16GB RAM (2x8GB)
    • I have a 1TB HDD in my two i7 Mac Minis and a 500GB HDD in the i5 one.
  • ESXi installed on USB key
  • My Tower Server specs are:
    • Supermicro Xeon E3 motherboard, uATX (X9SCM-F-B)
    • Intel Xeon E3-1230 3.2GHz QuadCore HT, 8MB
    • 4x8GB 1333MHz DDR3 ECC
    • Barracuda 500GB, 7200rpm, 16MB, SATA 6Gb/s

To wrap up, I'm very pleased with the setup I've built. It works perfectly for my needs. Lastly, I do recommend having a separate management host, as I found it extremely useful when I had to move VMs back and forth to test earlier releases of VSAN. I also recommend going for the i7 CPU models of Mac Mini for better performance.

Download the VMware ESXi 5.5u2 Mac Mini ISO from virtuallyGhetto:

  • https://mega.nz/#!EJNSFJyb!hm-AWAiqEisDnMV9XpZphSLn_puJLu9RTep9R83N6rY

Apple Thunderbolt Ethernet Adapter VIB:

  • https://s3.amazonaws.com/virtuallyghetto-download/vghetto-apple-thunderbolder-ethernet.vib

UPDATE (01/15/15) - Peter just shared with me a new custom Mac Mini rack that he built and welded together; check out the pictures below to see what it looks like.

[Images: custom Mac Mini rack]


Filed Under: Apple, ESXi, Home Lab, VSAN, vSphere Tagged With: apple, esxi 5.5, mac mini, VSAN, vSphere 5.5

New VMware Fling to improve Network/CPU performance when using Promiscuous Mode for Nested ESXi

08/28/2014 by William Lam 44 Comments

I wrote an article a while back, Why is Promiscuous Mode & Forged Transmits required for Nested ESXi?, and the primary motivation behind it was an observation a customer made while using Nested ESXi. The customer was performing some networking benchmarks on their physical ESXi hosts, which happened to be hosting a couple of Nested ESXi VMs as well as regular VMs. The customer concluded in his blog that running Nested ESXi VMs on their physical ESXi hosts actually reduced overall network throughput.

UPDATE (04/24/17) - Please have a look at the new ESXi Learnswitch which is an enhancement to the existing ESXi dvFilter MAC Learn module.

UPDATE (11/30/16) - A new version of the ESXi MAC Learning dvFilter has just been released to support ESXi 6.5; please download v2 for that ESXi release. If you have ESXi 5.x or 6.0, you will need to use the v1 version of the Fling, as v2 is not backwards compatible. You can find all the details on the Fling page here.

This did not initially click until I started to think a bit more about the implications of enabling Promiscuous Mode, which I think is something not many of us are aware of. At a very high level, Promiscuous Mode allows for proper network connectivity for the nested VMs running on top of a Nested ESXi VM (for the full details, please refer to the blog article above). So why is this a problem, and how does it lead to reduced network performance as well as increased CPU load?

The diagram below will hopefully help explain why. Here, I have a single physical ESXi host that is connected to either a VSS (Virtual Standard Switch) or VDS (vSphere Distributed Switch), and a portgroup with Promiscuous Mode enabled that contains both Nested ESXi VMs and regular VMs. Let's say we have 1000 network packets destined for our regular VM (highlighted in blue); one would expect that the red boxes (representing the packets) will be forwarded only to our regular VM, right?

[Diagram: 1000 packets destined for a single regular VM on a promiscuous-mode portgroup]
What actually happens is shown in the next diagram: every Nested ESXi VM, as well as every other regular VM within the portgroup that has Promiscuous Mode enabled, receives a copy of those 1000 network packets on each of its vNICs, even though the packets were not intended for them. Performing these shadow copies of the network packets and forwarding them down to the VMs is a very expensive operation. This is why the customer was seeing reduced network performance as well as increased CPU utilization to process all the additional packets that would eventually be discarded by the Nested ESXi VMs.

[Diagram: every VM on the promiscuous-mode portgroup receives a copy of the packets]
This really solidified in my head when I logged into my own home lab system, on which I run anywhere from 15-20 Nested ESXi VMs at any given time in addition to several dozen regular VMs, just like any home/development/test lab would. I launched esxtop, set the refresh interval to 2 seconds, and switched to the networking view. At the time I was transferring a couple of ESXi ISOs for my kickstart server and realized that ALL my Nested ESXi VMs got a copy of those packets.
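
For anyone who wants to reproduce this view, the esxtop keystrokes are (interactive commands typed inside esxtop):

esxtop
# press 'n' to switch to the network view
# type 's 2' and press Enter to set a 2-second refresh interval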

[Screenshot: esxtop network view showing all Nested ESXi VMs receiving the traffic]
As you can see from the screenshot above, every single one of my Nested ESXi VMs was receiving ALL traffic from the virtual switch. This adds up to a lot of wasted resources on my physical ESXi host that could otherwise be used for running other workloads.

I decided at this point to reach out to engineering to see if there was anything we could do to help reduce this impact. I initially thought about using NIOC, but then realized it was primarily designed for managing outbound traffic, whereas the Promiscuous Mode traffic is all inbound, so it would not actually get rid of the traffic. After speaking to a couple of engineers, it turned out this issue had been seen in our R&D cloud (Nimbus), which provides IaaS capabilities to the R&D organization for quickly spinning up both virtual and physical instances for development and testing.

Christian Dickmann was my go-to guy for Nimbus, and it turns out this particular issue had been seen before. Not only had he seen this behavior, he also had a nice solution to fix the problem in the form of an ESXi dvFilter that implements MAC learning! As many of you know, our VSS/VDS does not implement MAC learning because we already know which MAC addresses are assigned to a particular VM.

I got in touch with Christian and was able to validate his solution in my home lab using the latest ESXi 5.5 release. At this point, I knew I had to get this out to the larger VMware Community and started to work with Christian and our VMware Flings team to see how we can get this released as a Fling.

Today, I am excited to announce the ESXi Mac Learning dvFilter Fling, which is distributed as an installable VIB for your physical ESXi host and supports ESXi 5.x & ESXi 6.x.

Note: You will need to enable Promiscuous Mode either on the VSS/VDS or specific portgroup/distributed portgroup for this solution to work.
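
If the Nested ESXi portgroup lives on a standard vSwitch, this can also be done from the ESXi Shell rather than the UI. A sketch, assuming a portgroup named "NestedESXi" (on a VDS you would make the equivalent change on the distributed portgroup in the vSphere Web Client); per the original article, Forged Transmits is needed as well:

esxcli network vswitch standard portgroup policy security set -p NestedESXi --allow-promiscuous=true --allow-forged-transmits=true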

You can download the MAC Learning dvFilter VIB here or you can install directly from the URL shown below:

To install the VIB, run the following ESXCLI command if you have the VIB uploaded to your ESXi datastore:

esxcli software vib install -v /vmfs/volumes/<DATASTORE>/vmware-esx-dvfilter-maclearn-0.1-ESX-5.0.vib -f

To install the VIB from the URL directly, run the following ESXCLI command:

esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxi-mac-learning-dvfilter/vmware-esx-dvfilter-maclearn-1.0.vib -f
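
Either way, you can quickly confirm the VIB is present before checking the dvFilter itself:

esxcli software vib list | grep -i maclearn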

A system reboot is not necessary and you can confirm the dvFilter was successfully installed by running the following command:

/sbin/summarize-dvfilter

You should be able to see the new MAC Learning dvFilter listed at the very top of the output.

For the new dvFilter to work, you will need to add two Advanced Virtual Machine Settings to each of your Nested ESXi VMs. This is on a per-vNIC basis, which means you will need to add N entries if you have N vNICs on your Nested ESXi VM.

    ethernet#.filter4.name = dvfilter-maclearn
    ethernet#.filter4.onFailure = failOpen

This can be done online without rebooting the Nested ESXi VMs if you leverage the vSphere API. Another way is to shut down your Nested ESXi VM and use either the "legacy" vSphere C# Client or the vSphere Web Client, or, for those who know how, to append the entries to the .VMX file and reload it, as that is where the configuration is persisted on disk.
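
As a concrete illustration, a Nested ESXi VM with four vNICs (which is how I provision mine, as mentioned below) would carry the pattern expanded out like this in its .VMX file:

    ethernet0.filter4.name = dvfilter-maclearn
    ethernet0.filter4.onFailure = failOpen
    ethernet1.filter4.name = dvfilter-maclearn
    ethernet1.filter4.onFailure = failOpen
    ethernet2.filter4.name = dvfilter-maclearn
    ethernet2.filter4.onFailure = failOpen
    ethernet3.filter4.name = dvfilter-maclearn
    ethernet3.filter4.onFailure = failOpen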

I normally provision my Nested ESXi VMs with 4 vNICs, so I have four corresponding entries. To confirm the settings are loaded, we can re-run the summarize-dvfilter command and we should now see our Virtual Machine listed in the output along with each vNIC instance.

Once I applied this change across all my Nested ESXi VMs using a script I had written for setting Advanced VM Settings, I immediately saw the decrease in network traffic on ALL my Nested ESXi VMs. For those of you who wish to automate this configuration change, take a look at this blog article, which includes both a PowerCLI and a vSphere SDK for Perl script that can help.

I highly recommend that anyone using Nested ESXi install this VIB on all of their ESXi hosts! As a best practice, you should also isolate your other workloads from your Nested ESXi VMs, which lets you limit the portgroups on which Promiscuous Mode must be enabled.


Filed Under: ESXi, Home Lab, Nested Virtualization, vSphere, vSphere 6.0 Tagged With: dvFilter, esxi, fling, mac learning, nested, nested virtualization, promiscuous mode, vib

VMworld vBrownBag Tech Talk : Nested Virtualization & Dev/Test/Home Lab Panel

08/12/2014 by William Lam 4 Comments

VMworld is only a couple of weeks away and I cannot believe this will be my 7th VMworld! My, how time has flown by so quickly. I have been pretty busy these last couple of months finishing up some internal projects as well as starting up a couple of new ones. I had been thinking about submitting a vBrownBag Tech Talk as I have done in past years, but there has just been too much going on. Giving it some more thought, I decided it would be cool to put together a panel of community folks to discuss some of my favorite topics: Nested Virtualization as well as Development/Test and Home Labs.

I am pleased to announce the VMworld vBrownBag Tech Talk: Nested Virtualization & Dev/Test/Home Lab Panel, which will include Sean Crookston, Doug Baer, Nick Marshall and myself as the panelists. I was originally hoping to have a few more folks from the community, but due to the late submission we ran into scheduling conflicts. I am very excited for this session, which will take place on Wednesday, August 27th from 11:45am to 12:15pm (30 minutes). I want to give a huge shout-out to Sean Massey, who was originally scheduled to present right after us but decided to offer us his time slot, as 15 minutes was going to be tough for a panel discussion. Much appreciated, Sean!

Due to the short amount of time, we really want to make the most of this session and, most importantly, make it as interactive as possible with the audience. We would like to collect any questions or topics that folks might be interested in, and we will pick a couple for the panelists to answer or discuss. We also have topics we may raise ourselves, but it would be much more interesting to hear from you! Please leave a comment if you wish to ask a question; those that get selected may even win a prize.

We hope to see you at the Tech Talk and lastly, this is going to be a MUST attend session ... that's all I can really say 🙂

BTW - I would also like to give a shout-out to Doug Baer, who will be running a VMware Knowledge Expert discussion related to the HOL lab environment on Tuesday, August 26 at 1pm PST. Though his focus will primarily be HOL, as many of you know the underlying technology is Nested Virtualization. A couple of us will also be attending that session, so if there are any questions you would like to ask but did not get a chance to during the Tech Talk, you can also find us there.


Filed Under: Home Lab, Nested Virtualization Tagged With: nested, nested virtualization, vBrownBag, vmworld

Quick Tip – Minimum amount of memory to run the vCenter Server Appliance

08/19/2013 by William Lam 15 Comments

I thought this might have been common knowledge, but after chatting with a VMware colleague who recently rebuilt his home lab, I realized it may not be the case. The vCenter Server Appliance (VCSA) is distributed as a virtual appliance and by default it is configured for 8GB of memory. However, this is definitely NOT the "minimum" amount of memory required to have a fully functional vCenter Server.

It looks like some people are downloading the vCenter Server Appliance and just sticking with the default of 8GB of memory, which for a home lab is quite a large footprint, especially given that you will probably want to run other virtual machines. The actual minimum for vCenter Server (Windows or Linux) is just 4GB, and technically speaking you can even get away with just 3GB for the vCenter Server Appliance (with anything less, the system is extremely slow and unusable).

Here is a quick screenshot showing vCenter Server Appliance running with only 3GB of memory:

VMware also has a KB article detailing the minimum requirements for the vCenter Server Appliance based on the number of virtual machines and hosts you plan on running. For my home lab, I normally stick with 4GB of memory and I have not had any issues. Hopefully this tip will help you save some memory for other workloads, whether in your lab or even in a production environment.


Filed Under: Home Lab, VCSA, vSphere Tagged With: memory, vcenter server appliance, vcsa, vcva
