homelab

Supermicro Home Lab Group Buy

01/02/2019 by William Lam 25 Comments

Happy New Year, everyone! I was supposed to get this out right before the holidays, but #babylam got really sick and I had to put this on hold.

Back in November, I threw out an idea on Twitter to see whether the #vCommunity would be interested in doing a group buy for some Supermicro kits, especially for those looking to upgrade their personal home labs to take advantage of all the new VMware goodies such as vSAN, NSX and PKS.

Just thinking out loud … but would the #VMware Home Lab Community be interested in a potential Group Buy for Supermicro gear? Could be bare-bones chassis or some package configuration with memory + storage?

— William Lam (@lamw) November 14, 2018

Within minutes, I had several dozen replies, and it was clear that folks were definitely interested in refreshing their labs, especially with a smaller and more modern platform. Over the last few weeks, I have been working with MITXPC (whom I have worked with before) on putting together some packages that would appeal to the majority of the community. Initially, I was thinking about three options: system only (no memory/storage), system with memory (no storage) and system with memory and storage. To be clear, "system" means a complete chassis with CPU and motherboard included. Please see the product links below for more details.

Disclaimer: I am not affiliated with MITXPC nor am I receiving any referral bonus/compensation for the discounts listed below.



Filed Under: VSAN, vSphere Tagged With: E200-8D, E300-9D, homelab, Supermicro

Supermicro E300-9D (SYS-E300-9D-8CN8TP) is a nice ESXi & vSAN kit

11/23/2018 by William Lam 18 Comments

Supermicro kits such as the E200-8D are very popular platforms amongst the VMware community, and with powerful Xeon-based CPUs and support for up to 128GB of memory, they are perfect for running a killer vSphere/vSAN setup!

Earlier this Fall, Supermicro released a "big daddy" version of the E200-8D, dubbed the E300-9D. Specifically, I want to focus on the 8-Core model (SYS-E300-9D-8CN8TP), as this system is actually listed on the VMware HCL for ESXi! The E300-9D can support up to half a terabyte of memory, and with the 8-Core model you have access to 16 threads. The E200-8D is also a platform supported by VMware; you can find its VMware HCL listing here.


I was very fortunate to get my hands on a loaner E300-9D (8-Core) unit, thanks to Eric and his team at MITXPC, a local Bay Area shop specializing in embedded solutions. In fact, they even provided a nice vGhetto promo discount code for my readers a while back, so definitely check it out if you are in the market for a new lab. As an aside, a quick search online suggests they are also the only ones actually selling the E300-9D (8-Core) system, which you can find here, and in general their prices seem fairly competitive. This is not an endorsement of MITXPC, but I recommend folks compare prices when shopping online, especially as today is Black Friday in the US and Cyber Monday is just a few days away.



Filed Under: ESXi, VSAN, vSphere Tagged With: E200-8D, E300-9D, esxi, homelab, Supermicro, VSAN, vSphere

VMworld Hackathon Hardware/Software BOM

10/03/2017 by William Lam 7 Comments

I know many of you have been asking about the hardware setup we used in this year's VMworld Hackathon. I finally got a chance to document the details, and you can find the complete hardware and software BOM below. For VMworld US, we had two different hardware configurations: one for the primary Hackathon, which was re-used for VMworld Europe, and another for the Hackathon Training sessions, which were new this year. For VMworld Europe, we re-used the primary Hackathon hardware, but we also took advantage of the new VMware Cloud on AWS offering and built a similar configuration that teams could connect to remotely. The only difference between the on-premises hardware and VMWonAWS is that the latter required users to RDP to a Windows jump host. Both options were provided, and teams could select either environment to use.

Note: Internally, CDW is one of our vendors for purchasing hardware/software, which is why there are links directly to their site. However, you may find better pricing by looking online, especially on Amazon, where the majority of the components are cheaper, except for the server, for which you can get an exclusive vGhetto discount at MITXPC. I have added links to both CDW and Amazon where applicable, and I recommend doing some research to find the best pricing if you are on a budget.

Here is a picture of the setup at VMworld US:


Here is a picture of the setup at VMworld EU:



Filed Under: VMware Cloud on AWS, VMworld, VSAN, vSphere 6.5 Tagged With: Hackathon, homelab, Supermicro, VMC, VMware Cloud on AWS, VMWonAWS, vmworld

Exclusive vGhetto discount on homelab hardware from MITXPC

04/12/2017 by William Lam 3 Comments

I already receive inquiries on a regular basis from both internal VMware folks and external partners and customers about VMware home labs and the type of hardware that can be used. After demoing our recent USB to SDDC project, the requests have literally tripled! Most folks are asking for BOM details and/or where to purchase the Intel NUC or the Supermicro E200-8D.

In particular, the Supermicro E200-8D has probably received the most interest lately. In fact, I am also interested in one after having had an opportunity to play with one during the Melbourne VMUG. One thing I noticed while talking to several colleagues who have purchased this system, both locally within the Bay Area and overseas in Australia, was that one particular reseller kept coming up over and over again. That vendor was MITXPC, a local Bay Area company located in Fremont that specializes in Mini-ITX systems.

The reason the majority of these folks used MITXPC was simple: they had the best price for the Supermicro E200-8D, significantly cheaper than other vendors, including Amazon.

Vendor Price
E200-8D on MITXPC $799 USD ($783.02 w/discount code)
E200-8D on Amazon $849 USD

Having heard good things about MITXPC, I decided to reach out to them and see if there was anything special they could do for the VMware community. I was able to get a special discount code that offers folks an additional 2% off their entire purchase at MITXPC. For those of you who have been holding off on refreshing your home lab or are itching to build your own, this is a great time! If you would like to take advantage of this offer, simply use the discount code VIRTUALLYGHETTO2OFF when you check out. I would like to give a huge thanks to Eric Yui of MITXPC for working with me on this and helping out the VMware community.

Disclaimer: I am not affiliated with MITXPC.


Filed Under: Uncategorized Tagged With: homelab, Intel NUC, Supermicro, VSAN

vGhetto Automated vSphere Lab Deployment for vSphere 6.0u2 & vSphere 6.5

11/21/2016 by William Lam 77 Comments

For those of you who follow me on Twitter, you may have seen a few tweets hinting at a vSphere deployment script I have been working on. This was something I had initially built for my own lab use, but I figured it could probably benefit the larger VMware community, especially for testing and evaluation purposes. Today, I am pleased to announce the release of my vGhetto vSphere Lab Deployment (VVLD) scripts, which leverage the new PowerCLI 6.5 release; this is partly why I needed to wait until it was available before publishing.

There are literally hundreds, if not more, ways to build and configure a vSphere lab environment. Over the years, I have noticed that some of these methods can be quite complex simply due to their requirements, incomplete because they only handle a specific portion of the deployment, or hard to manage because they are composed of several other tools and scripts that add constraints and complexity. One of my primary goals for the project was to be able to stand up a fully functional vSphere environment, not just a vCenter Server Appliance (VCSA) or a couple of Nested ESXi VMs, but the entire vSphere stack, fully configured and ready for use. I also wanted to develop the scripts using a single scripting language that was not only easy to use, so that others could enhance or extend them further, but that also had the broadest support for the various vSphere APIs. Lastly, as a stretch goal, I would love to be able to run the scripts across different OS platforms.

With these goals in mind, I decided to build the scripts using the latest PowerCLI 6.5 release. Not only is PowerCLI super easy to use, but I was able to immediately benefit from some of the new functionality added in this release, such as the native vSAN cmdlets, a subset of which can also be used against prior releases of vSphere like 6.0 Update 2. Although not all functionality in PowerCLI has been ported over to PowerCLI Core, you can see where VMware is going with it, and my hope is that in the very near future what I have created can be executed across all OS platforms, whether that is Windows, Linux or Mac OS X, and potentially even ARM-based platforms 🙂
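
To give a sense of the kind of PowerCLI building blocks the scripts automate, here is a minimal sketch of deploying a single Nested ESXi OVA to a physical host and powering it on. The host name, credentials, paths and the guestinfo.* OVF property names are illustrative assumptions, not necessarily what the actual scripts use.

# Minimal sketch only - host name, credentials, paths and OVF property names are assumptions
Connect-VIServer -Server esxi01.lab.local -User root -Password 'VMware1!'

$vmhost    = Get-VMHost -Name esxi01.lab.local
$datastore = Get-Datastore -Name datastore1
$ova       = 'C:\Temp\Nested_ESXi6.5_Appliance_Template_v1.ova'

# Read the OVF properties exposed by the Nested ESXi appliance and fill in the basics
$ovfConfig = Get-OvfConfiguration -Ovf $ova
$ovfConfig.NetworkMapping.VM_Network.Value  = 'VM Network'
$ovfConfig.Common.guestinfo.hostname.Value  = 'vesxi65-1.lab.local'
$ovfConfig.Common.guestinfo.ipaddress.Value = '192.168.1.101'
# (netmask, gateway, DNS, root password, etc. would be set the same way)

# Deploy the Nested ESXi VM thin-provisioned and power it on
$vm = Import-VApp -Source $ova -OvfConfiguration $ovfConfig -Name 'vesxi65-1' -VMHost $vmhost -Datastore $datastore -DiskStorageFormat Thin
Start-VM -VM $vm -Confirm:$false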

Changelog:

  • 11/22/16
    • Automatically handle Nested ESXi on vSAN
  • 01/20/17
    • Resolved "Another task in progress" thanks to Jason M
  • 02/12/17
    • Support for deploying to VC Target
    • Support for enabling SSH on VCSA
    • Added option to auto-create vApp Container for VMs
    • Added pre-check for required files
  • 02/17/17
    • Added missing dvFilter param to eth1 (missing in Nested ESXi OVA)
  • 02/21/17 (All new features added only to the vSphere 6.5 Std deployment)
    • Support for deploying NSX 6.3 & registering with vCenter Server
    • Support for updating Nested ESXi VM to ESXi 6.5a (required for NSX 6.3)
    • Support for VDS + VXLAN VMkernel configuration (required for NSX 6.3)
    • Support for "Private" Portgroup on eth1 for Nested ESXi VM used for VXLAN traffic (required for NSX 6.3)
    • Support for both Virtual & Distributed Portgroup on $VMNetwork
    • Support for adding ESXi hosts into VC using DNS name (disabled by default)
    • Added CPU/MEM/Storage resource requirements in confirmation screen
  • 04/18/18
    • New version of the script vsphere-6.7-vghetto-standard-lab-deployment.ps1 to support vSphere 6.7
    • Added support for vCenter Server 6.7; some of the JSON params have changed for consistency purposes and needed to be updated
    • Added support for new Nested ESXi 6.7 Virtual Appliance (will need to download that first)
    • vMotion is now enabled by default on vmk0 for all Nested ESXi hosts
    • Added new $enableVervoseLoggingToNewShell option which spawns new PowerShell session to provide more console output during VCSA deploy. FR by Christian Mohn
    • Removed dvFilter code, since that's now part of the Nested ESXi VA

Requirements:

  • 1 x Physical ESXi host OR vCenter Server, running at least ESXi 6.0 Update 2
  • PowerCLI 6.5 R1 installed on a Windows system
  • Nested ESXi 6.0 or 6.5 Virtual Appliance OVA
  • vCenter Server Appliance (VCSA) 6.0 or 6.5 extracted ISO
  • NSX 6.3 OVA (optional)
    • ESXi 6.5a offline patch bundle

Supported Deployments:

The scripts support deploying both vSphere 6.0 Update 2 and vSphere 6.5 environments, and there are two types of deployments for each:

  • Standard - All VMs are deployed directly to the physical ESXi host
  • Self Managed - Only the Nested ESXi VMs are deployed to the physical ESXi host. The VCSA is then bootstrapped onto the first Nested ESXi VM

Below is a quick diagram to help illustrate the two deployment scenarios. The pESXi in gray is what you already have deployed, which must be running at least ESXi 6.0 Update 2; the rest of the boxes are what the scripts will deploy. In the "Standard" deployment, three Nested ESXi VMs are deployed to the pESXi host and configured with vSAN. The VCSA is also deployed directly to the pESXi host, and the vCenter Server is configured to add the three Nested ESXi VMs to its inventory. This is a pretty straightforward and basic deployment; it should not surprise anyone. The "Self Managed" deployment is similar, but the biggest difference is that rather than the VCSA being deployed directly to the pESXi host as in the "Standard" deployment, it actually runs within a Nested ESXi VM. In this scenario, we still deploy three Nested ESXi VMs onto the pESXi host, but the first Nested ESXi VM is selected as a "Bootstrap" node on which we construct a single-node vSAN to then deploy the VCSA. Once the vCenter Server is set up, we add the remaining Nested ESXi VMs to its inventory.

For most users, I expect the "Standard" deployment to be the more common choice, but for advanced workflows such as evaluating the new vCenter Server High Availability feature in vSphere 6.5, you may want to use the "Self Managed" option. Obviously, if you select the latter, provisioning will take longer since you are now "double nested"; depending on your underlying physical resources, it can take quite a bit more time to deploy and consume more physical resources, as your Nested ESXi VMs must now be larger to accommodate the VCSA. In both scenarios, there is no reliance on additional shared storage; both create a three-node vSAN cluster, which you can of course expand by simply editing the script.
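
For those curious what the "Bootstrap" step conceptually involves, here is a rough sketch, not the actual script logic, of forming a single-node vSAN datastore on the first Nested ESXi host using PowerCLI and Get-EsxCli so the VCSA has somewhere to land. The host name, credentials and disk device names are assumptions, and the real script may sequence this differently.

# Rough sketch only - host name, credentials and device names are assumptions
Connect-VIServer -Server vesxi65-1.lab.local -User root -Password 'VMware1!'
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name vesxi65-1.lab.local) -V2

# Relax the default vSAN policy so objects can be provisioned with only one host
$esxcli.vsan.policy.setdefault.Invoke(@{policyclass='vdisk'; policy='(("hostFailuresToTolerate" i1) ("forceProvisioning" i1))'})
$esxcli.vsan.policy.setdefault.Invoke(@{policyclass='vmnamespace'; policy='(("hostFailuresToTolerate" i1) ("forceProvisioning" i1))'})

# Form a new single-node vSAN cluster and claim one cache and one capacity disk
$esxcli.vsan.cluster.new.Invoke()
$esxcli.vsan.storage.add.Invoke(@{ssd='mpx.vmhba1:C0:T1:L0'; disks=@('mpx.vmhba1:C0:T2:L0')})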

Deployment Time:

Here is a table breaking down the deployment time for each scenario and vSphere version:

Deployment Type Duration
vSphere 6.5 Standard 36 min
vSphere 6.0 Standard 26 min
vSphere 6.5 Self Managed 47 min
vSphere 6.0 Self Managed 34 min

Obviously, your mileage will vary based on your hardware configuration and the size of your deployment.

Scripts:

There are five different scripts, covering the scenarios discussed above:

  • vsphere-6.0-vghetto-self-manage-lab-deployment.ps1
  • vsphere-6.0-vghetto-standard-lab-deployment.ps1
  • vsphere-6.5-vghetto-self-manage-lab-deployment.ps1
  • vsphere-6.5-vghetto-standard-lab-deployment.ps1
  • vsphere-6.7-vghetto-standard-lab-deployment.ps1

Instructions:

Please refer to the Github project here for detailed instructions.
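
As a quick usage illustration (the local path below is an assumption; the GitHub README is the authoritative reference), running a deployment boils down to editing the variables at the top of the chosen script and executing it from a PowerCLI session on Windows:

# Illustrative only - edit the variables at the top of the script first
# (target ESXi/vCenter endpoint, VCSA ISO path, Nested ESXi OVA path, networking details)
cd C:\Scripts\vghetto-vsphere-lab-deployment
.\vsphere-6.5-vghetto-standard-lab-deployment.ps1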

Verification:

Once you have saved all your changes, you can then run the script. You will be provided with a summary of what will be deployed, and you can verify that everything is correct before attempting the deployment. Below is a screenshot of what this looks like:

Sample Execution:

Here is an example of running a vSphere 6.5 "Standard" deployment:


Here is an example of running a vSphere 6.5 "Self Managed" deployment:

If everything is successful, you can now log in to your new vCenter Server, and you should see either the following for a "Standard" deployment:

or the following for a "Self Managed" deployment:

I hope you find these scripts as useful as I do. Feel free to enhance them with additional functionality or extend them to cover other VMware product deployments, such as NSX or the vRealize products. Enjoy!


Filed Under: Automation, PowerCLI, VCSA, vSphere 6.0, vSphere 6.5 Tagged With: homelab, Nested ESXi, nested virtualization, powercli, VCSA 6.5, vSphere 6.5

Functional USB 3.0 Ethernet Adapter (NIC) driver for ESXi 5.5 & 6.0

03/28/2016 by William Lam 79 Comments

Earlier this month I wrote an article demonstrating a functional USB ethernet adapter for ESXi 5.1. This was made possible by a custom-built ESXi driver created over three years ago by a user named Trickstarter. After re-discovering the thread several years later, I tried reaching out to the user but concluded that they have probably moved on, given the lack of forum activity in recent years. Over the last few weeks I have been investigating whether it is possible to compile a new version of the driver that would function with newer versions of ESXi, such as our 5.5 and 6.0 releases.

UPDATE (02/12/19) - A new VMware Native Driver for USB-based NICs has just been released for ESXi 6.5/6.7; please use that driver going forward. If you are still on ESXi 5.5/6.0, you can continue using the existing driver, but please note there will be no additional development of the existing vmklinux-based driver.

UPDATE (01/22/17) - For details on using a USB-C / Thunderbolt 3 Ethernet Adapter, please see this post here.

UPDATE (11/17/16) - New driver has been updated for ESXi 6.5, please find the details here.

After reaching out to a few folks internally, I was introduced to Songtao Zheng, a VMware engineer who works on some of our USB code base. Songtao was kind enough to provide some assistance in his spare time to help with this non-sanctioned effort I was embarking on. Today, I am pleased to announce that we now have a functional USB ethernet adapter driver based on the ASIX AX88179 that works for both ESXi 5.5 and 6.0. This effort would not have been possible without Songtao, and I just want to say thank you very much for all of your help and contributions. I think it is safe to say that the overall VMware community also thanks you for your efforts. This new capability will definitely enable new use cases for vSphere home labs that were never possible before on platforms such as the Intel NUC or Apple Mac Mini. Thank you, Songtao! I would also like to extend an additional thank you to Jose Gomes, one of my readers, who has been extremely helpful with his feedback as well as assistance testing the new drivers.

Now, before jumping into the goods, I do want to mention that there are a few caveats to be aware of, and I think it is important to understand them before making any purchasing decisions.

  • First and foremost, this is NOT officially supported by VMware, use at your own risk.
  • Secondly, we have observed a substantial difference in transfer speeds between Transmit (Egress) and Receive (Ingress) traffic, which may or may not be acceptable depending on your workload. On Receive, the USB network adapter performs close to a native gigabit interface. However, on Transmit, the bandwidth mysteriously drops by ~50%, with very inconsistent transfer speeds. We are not exactly sure why this is the case, but given that ESXi does not officially support USB-based ethernet adapters, it is possible that the underlying infrastructure was never optimized for such devices. YMMV.
  • Lastly, for the USB ethernet adapter to function properly, you will need a system that supports USB 3.0, which makes sense for this type of solution to be beneficial in a home lab. If you have a system with only USB 2.0, the device will probably not work, at least based on the testing we have done.

Note: For those interested in the required source code changes to build the AX88179 driver, I have published all of the details on my Github repo here.

Disclaimer: In case you some how missed it, this is not officially supported by VMware. Use at your own risk.

Without further ado, here are the USB 3.0 gigabit ethernet adapters that are supported with the two drivers:

  • StarTech USB 3.0 to Gigabit Ethernet NIC Adapter
  • StarTech USB 3.0 to Dual Port Gigabit Ethernet Adapter NIC with USB Port
  • j5create USB 3.0 to Gigabit Ethernet NIC Adapter (verified by reader Sean Hatfield 03/29/16)
  • Vantec CB-U300GNA USB 3.0 Ethernet Adapter (verified by VMware employee 05/19/16)
  • DUB-1312 USB 3.0 Gigabit Ethernet Adapter (verified by twitter user George Markou 07/29/16)

Note: There may be other USB ethernet adapters that use the same chipset and could also leverage this driver, but the adapters above are the only ones that have been verified.

Here are the ESXi driver VIB downloads:

  • ESXi 5.5 Update 3 USB Ethernet Adapter Driver VIB or ESXi 5.5 Update 3 USB Ethernet Adapter Driver Offline Bundle
  • ESXi 6.0 Update 2 USB Ethernet Adapter Driver VIB or ESXi 6.0 Update 2 USB Ethernet Adapter Driver Offline Bundle
  • ESXi 6.5 USB Ethernet Adapter Driver VIB or ESXi 6.5 USB Ethernet Adapter Driver Offline Bundle

Note: Although the drivers were compiled against a specific version of ESXi, they should also work on other releases of the same major version of ESXi, but I have not done that level of testing, so YMMV.

Verify USB 3.0 Support

As mentioned earlier, you will need a system that is USB 3.0 capable to be able to use the USB ethernet adapter. If you are unsure, you can plug in a USB 3.0 device and run the following command to check:

lsusb

What you are looking for is an entry stating "Linux Foundation 3.0 root hub", which shows that ESXi was able to detect a USB 3.0 port on your system. Secondly, look for the USB device you just plugged in and ensure its "Bus" ID matches that of the USB 3.0 bus. This tells you whether your device is being claimed as a USB 3.0 device. If not, you may need to update your BIOS, as some systems have USB 2.0 enabled by default, like earlier versions of the Intel NUC as described here. You may also be running a pre-5.5 release of ESXi, which did not support USB 3.0 as mentioned here, so you may need to upgrade your ESXi host to at least 5.5.

Install Driver

You can either install the VIB directly onto your ESXi host or create a custom ESXi ISO that includes the driver using a popular tool like ESXi Customizer by Andreas Peetz.

To install the VIB, upload the VIB to your ESXi host and then run the following ESXCLI command specifying the full path to the VIB:

esxcli software vib install -v /vghetto-ax88179-esxi60u2.vib -f

Lastly, you will need to disable the USB native driver to be able to use this driver. To do so, run the following command:

esxcli system module set -m=vmkusb -e=FALSE

You will need to reboot for the change to go into effect.
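
If you prefer to perform both of these steps remotely from PowerCLI rather than an SSH session, a hedged equivalent using Get-EsxCli -V2 might look like the following; the host name, credentials and the VIB path on the datastore are assumptions:

# Hedged PowerCLI equivalent of the two ESXCLI commands above
# (host name, credentials and VIB path are assumptions)
Connect-VIServer -Server esxi01.lab.local -User root -Password 'VMware1!'
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name esxi01.lab.local) -V2

# Install the community USB NIC driver VIB (force, since it is unsigned)
$vibArgs = $esxcli.software.vib.install.CreateArgs()
$vibArgs.viburl = @('/vmfs/volumes/datastore1/vghetto-ax88179-esxi60u2.vib')
$vibArgs.force  = $true
$esxcli.software.vib.install.Invoke($vibArgs)

# Disable the native USB module so the new driver can claim the adapter
$esxcli.system.module.set.Invoke(@{module='vmkusb'; enabled=$false})

# Reboot for the change to take effect
Restart-VMHost -VMHost esxi01.lab.local -Force -Confirm:$false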

To verify that the USB network adapter has been successfully claimed, run either of the following commands to list your physical NICs:

esxcli network nic list
esxcfg-nics -l

To add the USB uplink, you will need to use either the vSphere Web Client or ESXCLI to add the uplink to either a standard Virtual Switch or a Distributed Virtual Switch.

To do so using ESXCLI, run the following command and specify the name of your vSwitch:

esxcli network vswitch standard uplink add -u vusb0 -v vSwitch0
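
A hedged PowerCLI alternative to the ESXCLI command above, which locates the claimed vusb0 adapter and attaches it to a standard vSwitch (the host and vSwitch names are assumptions):

# Hedged PowerCLI alternative (host and vSwitch names are assumptions)
$vmhost  = Get-VMHost -Name esxi01.lab.local
$usbNic  = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vusb0
$vSwitch = Get-VirtualSwitch -VMHost $vmhost -Name vSwitch0 -Standard

# Attach the USB NIC as an additional uplink to vSwitch0
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vSwitch -VMHostPhysicalNic $usbNic -Confirm:$false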

Uninstall Driver

To uninstall the VIB, first make sure to completely unplug the USB network adapter from the ESXi host. Next, run the following ESXCLI command, which will automatically unload the driver and remove the VIB from your ESXi host:

esxcli software vib remove -n vghetto-ax88179-esxi60u2

Note: If you try to remove the VIB while the USB network adapter is still plugged in, you may hang the system or cause a PSOD. Simply reboot the system if you accidentally get into this situation.

Troubleshooting

If you are not receiving link on the USB ethernet adapter, it is most likely that your system does not support USB 3.0. If you find a message similar to the one below in /var/log/vmkernel.log, then you are probably running USB 1.0 or 2.0.

2016-03-21T23:30:49.195Z cpu6:33307)WARNING: LinDMA: Linux_DMACheckConstraints:138: Cannot map machine address = 0x10f5b6b44, length = 2 for device 0000:00:1d.7; reason = address exceeds dma_mask (0xffffffff))

Persisting USB NIC Configurations after reboot

ESXi does not natively support USB NICs, and upon a reboot, the USB NICs are not picked up until much later in the boot process, which prevents them from being associated with VSS/VDS and their respective portgroups. To ensure things are connected properly after a reboot, you will need to add something like the following to /etc/rc.local.d/local.sh, which re-links the USB NIC along with the individual portgroups as shown in the example below.

esxcfg-vswitch -L vusb0 vSwitch0
esxcfg-vswitch -M vusb0 -p "Management Network" vSwitch0
esxcfg-vswitch -M vusb0 -p "VM Network" vSwitch0

You will also need to run /sbin/auto-backup.sh to ensure the configuration changes are saved and then you can issue a reboot to verify that everything is working as expected.

Summary

For platforms with limited built-in networking capabilities, such as the Intel NUC and Apple Mac Mini, customers now have the ability to add additional network interfaces to these systems. This opens up a whole new class of use cases for vSphere-based home labs that were never possible before, especially with solutions such as vSAN and NSX. I look forward to seeing what our customers can now do with these new networking capabilities.

Additional Info

Here are some additional screenshots testing the dual USB 3.0 ethernet adapter as well as a basic iPerf benchmark for the single USB ethernet adapter. I was not really impressed with the speeds of the dual ethernet adapter, which I shared some more info about here. Unless you are limited on the number of USB 3.0 ports, I would probably recommend sticking with the single-port ethernet adapter.

iPerf benchmark for Ingress traffic (single port USB ethernet adapter):

iPerf benchmark for Egress traffic (single port USB ethernet adapter):


Filed Under: ESXi, Not Supported, vSphere 5.5, vSphere 6.0 Tagged With: esxi 5.5, esxi 6.0, homelab, lsusb, usb, usb ethernet adapter, usb network adapter

