
VCSA 6.5

Can the VCSA 6.5 forward to multiple syslog targets?

12/11/2017 by William Lam 2 Comments

I had a couple of folks ping me recently asking whether the latest vCenter Server Appliance (VCSA) 6.5 release supports forwarding to multiple syslog targets. Today, only a single syslog target is officially supported, which can be configured using the VAMI UI. I know this is something our customers have been asking about, and I know it is something the VC Engineering team is considering.

Having said that, it is possible to configure additional syslog targets on the VCSA, but please be aware that this is not officially supported. A couple of these customers understood the support impact and were still interested in a solution, as some of their environments mandated multiple redundant syslog targets and using a syslog forwarder/relay was not an option for them.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

When you configure syslog forwarding from the VAMI UI, the configuration is written to /etc/vmware-syslog/syslog.conf on the VCSA.

With this information, if you want to add additional targets (which can use the same configuration or a different one), you simply append them to the syslog configuration file. For example, if I have two syslog targets, 192.168.30.110 and 192.168.30.111, and I wish to use the default log level, TCP and port 514, I would use the following:

*.* @@192.168.30.110:514;RSYSLOG_SyslogProtocol23Format
*.* @@192.168.30.111:514;RSYSLOG_SyslogProtocol23Format

Once you have saved your changes, you will need to restart the rsyslog service for the change to take effect. To do so, run the following two commands on the VCSA:

systemctl stop rsyslog
systemctl start rsyslog

One additional thing to note is that the VAMI UI will only show the very last syslog target within the configuration file, but if you monitor your syslog servers, you will see that logs are indeed being forwarded to all servers configured in the syslog configuration file.
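
If you want to sanity-check the fan-out without waiting for organic log traffic, you can emit a test message to each target over TCP and watch for it on both syslog servers. Here is a minimal Python sketch using only the standard library; the target addresses match the example configuration above, and it can be run from any machine that can reach the syslog servers:

import logging
import logging.handlers
import socket

# Send one test message to each syslog target over TCP/514 (the same
# targets configured in /etc/vmware-syslog/syslog.conf above)
for target in ["192.168.30.110", "192.168.30.111"]:
    handler = logging.handlers.SysLogHandler(
        address=(target, 514), socktype=socket.SOCK_STREAM)
    logger = logging.getLogger("syslog-test-%s" % target)
    logger.addHandler(handler)
    logger.warning("test message: verifying multi-target forwarding")
    handler.close()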


Filed Under: Automation, Not Supported, VCSA Tagged With: rsyslog, syslog, VCSA 6.5

vGhetto Automated NSX-T 2.0 Lab Deployment

10/24/2017 by William Lam 21 Comments

Last week, I spent some time exploring and getting more familiar with NSX-T, which is the next-generation release of the NSX platform from VMware. One of the first things I do when learning about a new product is to set up a lab environment that I can use. Having gone through the deployment once by hand, I realized it would be quite painful if I needed to do it again, which I knew I would and I did 🙂 I wanted a similar experience to my vGhetto Automated vSphere Lab deployment script, which sets up the entire vSphere infrastructure along with deploying and configuring NSX-V, and extend it to support NSX-T.

Since my original script leverages PowerCLI to access both the vSphere and NSX APIs, I wanted to do the same with NSX-T. Funny enough, the PowerCLI team had just published an updated release (6.5.3) which added support for NSX-T, and I thought this was perfect timing to try out the NSX-T APIs, which I had never used before.

UPDATE (01/01/2018) - I have verified the script also works with the latest NSX-T 2.1, which was released just before Christmas. The script has also been updated to create a new Edge Uplink Profile along with an Edge Cluster and to automatically associate all Edge VMs with the Edge Cluster.

I have created a new Github repository called vghetto-nsxt-automated-lab-deployment which contains detailed instructions along with the PowerCLI script.

Here is what the script is currently performing:

  1. Deploy and configure a vCenter Server Appliance 6.5u1
  2. Deploy and configure 3 x Nested ESXi 6.5u1 Virtual Appliance VMs and attach them to the vCenter Server
  3. Deploy the NSX-T Manager, 3 x Controllers & 1 x Edge and set up both the Management Plane and Control Cluster
  4. Configure NSX-T: create an IP Pool and Transport Zone, add the vCenter Server as a Compute Manager, create a Logical Switch, prepare the ESXi hosts, create an Uplink Profile and configure the ESXi hosts as Transport Nodes (a quick verification sketch follows this list)
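
As a quick sanity check once the script completes, you can query the NSX-T Manager REST API directly. The sketch below is illustrative rather than part of the script; the Manager address and credentials are placeholders, and it assumes the standard NSX-T /api/v1/transport-zones endpoint:

import requests
import urllib3

# Lab-only: the NSX-T Manager ships with a self-signed certificate
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

NSXT_MGR = "192.168.1.51"      # placeholder NSX-T Manager address
AUTH = ("admin", "VMware1!")   # placeholder credentials

# List the Transport Zones created by the deployment script
resp = requests.get("https://%s/api/v1/transport-zones" % NSXT_MGR,
                    auth=AUTH, verify=False)
resp.raise_for_status()
for tz in resp.json().get("results", []):
    print(tz["display_name"], tz["transport_type"])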

Similar to the vSphere version of this script, all deployed VMs will be placed inside a vCenter vApp construct as shown in the example screenshot below:


Here is an example output of a successful deployment; you go from nothing to a fully functional NSX-T environment in just 50 minutes, which is pretty awesome if you ask me!

[Read more...] about vGhetto Automated NSX-T 2.0 Lab Deployment


Filed Under: Automation, ESXCLI, Home Lab, NSX, PowerCLI, VCSA, vSphere 6.5 Tagged With: esxi 6.5, NSX-T, PowerCLI, VCSA 6.5, vSphere 6.5 Update 1

Introducing Alexa to a few more VMware APIs

06/12/2017 by William Lam 3 Comments

Over the weekend, while taking a break from putting together some furniture during my daughter's nap, I got the chance to explore and create a new Alexa Skill which integrates with a few of VMware's APIs. This has been something I have wanted to try out for some time but have not had any spare time for. I had even purchased an Amazon Echo Dot, but it is currently just being used as a music player for the family. A couple of weeks back, I saw an awesome blog post from Cody De Arkland where he demonstrates how to easily integrate the new vCenter Server 6.5 REST APIs into an Alexa Skill which can then be consumed using an Amazon Echo device.

Cody's write-up was fantastic and I was able to get everything up and running in about 20-25 minutes with a bit of minor trial and error. It was great to see how easy it was for a non-developer like Cody to consume the new vCenter Server REST APIs, which include basic VM management as well as access to the Virtual Appliance Management Interface, or VAMI for short. Given Cody had already done the hard work to create the initial Alexa integration, I figured it might be cool to extend his work and introduce Alexa to a few more of VMware's APIs, including the traditional vSphere API (SOAP) and the new vSAN Management API.

UPDATE (06/15/17) - Just added support for PowerCLI. It was a little tricky, as the Flask app is written in Python, so the poor man's workaround was to call PowerShell/PowerCLI using subprocess.
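
For those curious what that workaround looks like, here is a rough sketch of shelling out to PowerShell/PowerCLI from Python; the cmdlet invocation and VM name are placeholders, not the actual code from the skill:

import subprocess

# Placeholder example: run a PowerCLI one-liner from Python and capture
# its output (the real skill wires this into a Flask route handler)
cmd = ["powershell", "-Command",
       "Get-VM -Name 'DB01' | Select-Object -ExpandProperty PowerState"]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout.strip())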

Since Cody's integration module was written in Python, it was pretty simple to add support for both pyvmomi (the vSphere SDK for Python) and the vSAN Management SDK. To install pyvmomi, you can simply run:

pip3 install pyvmomi

For the vSAN Management SDK, have a look at this blog post here.
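
If you have never used pyvmomi before, a connection to vCenter Server (or an ESXi host) looks roughly like the following; the hostname and credentials are placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect

# Lab-only: skip SSL certificate validation
context = ssl._create_unverified_context()

si = SmartConnect(host="192.168.1.200",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=context)
print("Connected to: %s" % si.content.about.fullName)
Disconnect(si)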

Here is a quick video that I had recorded which demonstrates the use of both the vSphere API and vSAN Management API using my Amazon Echo.

You can find all my changes in this forked repo lamw/alexavsphereskill, and make sure to follow Cody's blog post here for instructions on how to get set up. For those wondering if Cody will be publishing an Alexa Skill for general consumption, I know he is working on some awesome updates to make it even easier to consume. Here is a sneak peek at just some of the recent updates that Cody is working on ...

A little @VMwareClarity UI action going on with the @amazonecho & @VMware skill this weekend in the lab. So easy to work with! @vmwarecode pic.twitter.com/0iXMbU6Acz

— Cody De Arkland (@Codydearkland) June 12, 2017

Stay tuned on this blog and Github repo for future updates!

One thing to note, which I was not aware of until Cody mentioned it, is that once your Alexa Skill is built, you can access it directly from your own personal Amazon Echo without needing to publish it. You activate the Alexa Skill by saying "Alexa, start [APP-NAME]", where APP-NAME is the name used in the "Invocation Name" field, as shown in the screenshot below, when setting up your Alexa Skill. I should also mention that if you decide to change the Alexa Skill name itself, which I had initially done and called it "vGhetto Control", make sure you update the Flask app name in __init__.py to the same name (spaces are converted to underscores) or you will run into issues.
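
To make that naming relationship concrete, here is a minimal Flask-Ask style sketch (not the actual skill code); the intent name and response are made up for illustration, and the point to notice is the Flask app name mirroring the Invocation Name with spaces converted to underscores:

from flask import Flask
from flask_ask import Ask, statement

# The Flask app name must match the skill's "Invocation Name",
# with spaces converted to underscores ("vGhetto Control" -> "vGhetto_Control")
app = Flask("vGhetto_Control")
ask = Ask(app, '/')

@ask.intent('PowerOnVMIntent')  # made-up intent name for illustration
def power_on_vm():
    # A real handler would call into the vSphere API here
    return statement("The virtual machine has been powered on.")

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)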


Filed Under: Automation, VAMI, VCSA, VSAN, vSphere Tagged With: Alexa, Flask, pyVmomi, REST API, vcenter server appliance, VCSA 6.5, VSAN, vSphere API

Automating the new native VCSA bootstrap “Easy Install” in vSAN 6.6

05/16/2017 by William Lam 10 Comments

In case you missed the previous article, have a read here; it goes into greater detail on the new VCSA bootstrap installer (also known as vSAN Easy Install) which is part of the new vSAN 6.6 release. As I hinted at the end of the previous post, customers not only have a simplified way of bootstrapping the VCSA on vSAN from a UI standpoint, but they can also completely automate this by leveraging some of the new vSAN Management 6.6 APIs, which are the same APIs that the UI uses.

A new Managed Object called VsanVcsaDeployerSystem is now available when connecting to either a standalone ESXi host or a vCenter Server. It contains the following three methods:

  • VsanPostConfigForVcsa() - Used to set up the vCenter Server once it is deployed
  • VsanPrepareVsanForVcsa() - Used to bootstrap the vSAN datastore on the ESXi host
  • VsanVcsaGetBootstrapProgress() - Used to retrieve progress from the two methods above

Here is the workflow for automating the VCSA bootstrap installer:

Step 1 - Connect directly to the ESXi host on which you wish to bootstrap vSAN. You will use the VsanPrepareVsanForVcsa() API, which accepts a list of disks for either a Hybrid or an All-Flash vSAN datastore.

Step 2 - Deploy the VCSA as you normally would using the CLI Installer. You will specify the ESXi host that you prepared in Step 1, which includes the vSAN datastore that was set up as part of that process.

Step 3 - Once the VCSA has been successfully deployed, you will connect to the vCenter Server and use the VsanPostConfigForVcsa() API, which will create a vSphere Datacenter and vSphere Cluster, enable vSAN on it (optionally with Dedupe/Compression if you are using an All-Flash setup), and then automatically add the ESXi host that you just bootstrapped. If you have provisioned other ESXi hosts that have not been configured with vSAN, you can also include them in the API request. The really nice thing about this "post" API is that rather than having to call several existing vSphere APIs to set up vCenter Server, you can do all of that with this single API!
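
To give a feel for what this looks like in code, below is a rough pyvmomi sketch of retrieving the managed object on an ESXi host. The vsanapiutils helper call and the 'vsan-vcsa-deployer-system' key reflect my reading of the vSAN Management SDK and should be treated as assumptions; the sample script mentioned next is the working reference.

import ssl
from pyVim.connect import SmartConnect
import vsanapiutils  # ships with the vSAN Management SDK for Python

# Placeholder ESXi host credentials
context = ssl._create_unverified_context()  # lab-only: skip cert validation
si = SmartConnect(host="192.168.1.100", user="root", pwd="VMware1!",
                  sslContext=context)

# Assumption: the ESXi-side vSAN MOs include the VCSA deployer system
esxMos = vsanapiutils.GetVsanEsxMos(si._stub, context=context)
vcsaDeployer = esxMos['vsan-vcsa-deployer-system']

# From here you would build a prepare spec describing the cache/capacity
# disks, call vcsaDeployer.VsanPrepareVsanForVcsa(...), and poll the result
# with vcsaDeployer.VsanVcsaGetBootstrapProgress(...)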

To help demonstrate the use of these new vSAN Management APIs, I have created a simple Python script called vsan-vcsa-deployer-sample.py which exercises them. The script supports three operations: listdisk, prepare and post.

Here is an example of running the listdisk operation which will list all available disks that are currently not in use and can be used by vSAN:

python vsan-vcsa-deployer-sample.py -s 192.168.1.100 -u root -p VMware1! --operation listdisk

Once you have the disk information, you can then use the prepare operation as shown below to bootstrap your ESXi host:

python vsan-vcsa-deployer-sample.py -s 192.168.1.100 -u root -p VMware1! --operation prepare --cache "SAMSUNG MZVPV128" --capacity "Samsung SSD 850"


At this point, you are ready to deploy the VCSA using the CLI Installer. Once that has completed, you can finish the process by using the post operation, providing the required parameters to set up vCenter Server, including the ESXi host that you just bootstrapped so that it can be added to the vCenter Server inventory as shown below:

python vsan-vcsa-deployer-sample.py -s 192.168.1.200 -u '*protected email*' -p VMware1! --operation post --datacenterName "VSAN-Datacenter" --clusterName "VSAN-Cluster" --esxName 192.168.1.100 --esxUsername root --esxPassword VMware1!


Once the post operation has completed, you will have a fully configured vCenter Server, which you can verify by logging into the vSphere Web Client. Pretty slick, if you ask me!


Filed Under: Automation, ESXi, VCSA, VSAN, vSphere 6.5 Tagged With: vcenter server appliance, VCSA 6.5, VSAN 6.6, vSphere 6.5, vSphere API

Project USB to SDDC – Part 3

05/11/2017 by William Lam 30 Comments

OK, the wait is finally over! In this final article, we will walk through the process of getting access to this project as well as how to deploy it in your own environment. For those who just want to see the code, you can find it at the Github project below:

Github Project: https://github.com/lamw/usb-to-sddc

Below are the details outlining the environment and software requirements as well as the instructions to consume this in your own home lab environment. The content below is a subset of what is published on the Github project, but this should get you going. For more details, please refer to the Github project and if you have any issues/questions, feel free to file a Github issue.

Environment Requirements:

  • USB key that is at least 6GB in capacity
  • Access to either a macOS or Linux system, as the script that creates the USB key is only supported on these two platforms
  • No additional USB keys must be plugged into the hardware system other than the primary installer USB key
  • Hardware system must have at least 2 disk drives which can either be 1xHDD and 1xSSD for running Hybrid vSAN OR 2xSSD for running All-Flash vSAN
  • Both the Intel NUC 6th Gen and the Supermicro E200-8D/E300-8D have been tested with this solution. It should work with other hardware systems that meet the minimum requirements, but YMMV

Software Requirements:

  • ESXi 6.5a - VMware-VMvisor-Installer-201701001-4887370.x86_64.iso
  • VCSA 6.5b - VMware-VCSA-all-6.5.0-5178943.iso
  • DeployVM.zip
  • UNetbootin (Required for Mac OS X users)

Note: Other ESXi / VCSA 6.5.x versions can also be substituted, including the latest ESXi 6.5d (vSAN 6.6) release, which I have also verified myself.

UPDATE (04/17/18) - No changes are required to get vSphere 6.7 to work. The only minor thing to be aware of is that the vSphere Web Client customization has changed in 6.7, so you need to set VCSA_WEBCLIENT_THEME_NAME="" (an empty string) or you will find that the UI will not load unless you delete the customization directory in the VCSA that was pulled down automatically.

[Read more...] about Project USB to SDDC – Part 3


Filed Under: Automation, ESXi, Home Lab, VCSA, VSAN, vSphere 6.5 Tagged With: Docker, esxi 6.5, Photon, usb, VCSA 6.5, VSAN, vSphere 6.5

Project USB to SDDC – Part 2

04/13/2017 by William Lam 1 Comment

In the previous article, I provided some background on the origin of the project. In this article, we will now focus on the technical details and how the solution actually works.

Hardware

This solution was originally developed against an Intel NUC, but I designed it to be generic so that it could run on any system meeting the minimum requirement of two disks (HDD & SSD, or two SSDs), which are used to create a vSAN datastore.

Here is the BOM for the Intel NUC that we used:

  • 1 x Intel NUC 6th Gen NUC6i3SYH (supports 2 drives: M.2 & 2.5)
  • 2 x Crucial 16GB DDR4
  • 1 x Samsung SM951 NVMe 128GB M.2 for "Caching" Tier
  • 1 x Samsung 850 EVO 500GB 2.5 SATA3 for “Capacity” Tier

During the Sydney VMUG, we did a live demo using an Intel NUC. Prior to the Melbourne VMUG, fellow VMware colleague Tai Ratcliff reached out and offered to let us borrow his Supermicro kit for the demo, which was great as the hardware was much beefier than the NUC. Thanks Tai!


I had already been hearing great things about the E200-8D platform, but I had not had the opportunity to get my hands on the system to play with. After only spending a little bit of time with the platform while prepping for the VMUG event, I can see why it is a pretty slick system for a vSphere/vSAN-based home lab, especially if you need to go beyond 32GB of memory, which is where the Intel NUCs currently max out.

The other appealing features of this platform are that it comes with 2x10GbE, 2x1GbE and an IPMI interface for remote management, which is a huge benefit as you do not need to connect an external monitor and keyboard. The system is also Xeon-based with 6 cores and can go all the way up to 128GB of memory. Tai also recently published a blog article comparing the Supermicro E200-8D and the Intel NUC, which I think is worth a read if you are deciding between these two platforms.

Note: If you are considering purchasing the Supermicro E200-8D or any other system for that matter, check out this exclusive vGhetto discount here.

[Read more...] about Project USB to SDDC – Part 2


Filed Under: Automation, ESXi, Home Lab, VCSA, VSAN, vSphere 6.5 Tagged With: Docker, Photon, usb, VCSA 6.5, VSAN, vSphere 6.5

