virtuallyGhetto
Workarounds for deploying PhotonOS 2.0 on vSphere, Fusion & Workstation

11/07/2017 by William Lam 2 Comments

PhotonOS 2.0 was just released last week and it includes a number of exciting new enhancements, which you can read more about here. Over the last few days, I noticed quite a few folks having issues deploying the latest PhotonOS OVA, including myself. After reaching out to the PhotonOS team and seeing the number of questions both internally and externally, I figured I would share the current workarounds.

Deploying PhotonOS 2.0 on vSphere

If you are deploying the latest OVA using either the vSphere Web (Flex/H5) Client on vCenter Server or the ESXi Embedded Host Client on ESXi, you will notice that the import fails with the following error message:

The specified object /photon-custom-hw13-2.0-304b817/nvram could not be found.


This is apparently a known bug in the vSphere Web/H5 Client when importing exported vHW13 Virtual Machines. As I understand it, the fix did not make it into the latest vSphere 6.5 Update 1 release, but it should be available in a future update. After I reported the issue, the PhotonOS team quickly re-spun the vHW11 OVA (that image had a different issue of its own), which can now be imported into a vSphere environment using any of the UI-based clients and/or CLIs. For now, the workaround is to download the PhotonOS 2.0 "OVA with virtual hardware v11" if you are using vSphere, or to install PhotonOS from the ISO.

Deploying PhotonOS 2.0 to Fusion/Workstation

UPDATE (11/08/17) - The PhotonOS team just published an additional OVA specifically for Fusion/Workstation which uses the LSI Logic storage adapter, since PVSCSI is not currently supported. You can now import the latest PhotonOS 2.0 without tweaking the OVF as described in the steps below; simply download the "OVA with virtual hardware v11 (Workstation and Fusion)" and import it normally via the UI or CLI.

If you are deploying either the vHW11 or vHW13 OVA to Fusion/Workstation, you will see the following error message:

Invalid target disk adapter type: pvscsi


The reason for this issue is that neither Fusion nor Workstation currently supports the PVSCSI storage adapter type which the latest PhotonOS OVA uses. In the meantime, a workaround is to edit the OVA to use the LSI Logic adapter instead of PVSCSI. Below are the steps to convert the OVA to an OVF and then apply the single-line change.

Step 1 - Use OVFTool (included with both Fusion and Workstation) to convert the OVA to an OVF, which will allow us to edit the file. To do so, run the following command:

ovftool --allowExtraConfig photon-custom-hw13-2.0-304b817.ova photon-custom-hw13-2.0-304b817.ovf

Step 2 - Open the photon-custom-hw13-2.0-304b817.ovf using a text editor like Visual Studio Code or VI and update the following line from:

<rasd:ResourceSubType>VirtualSCSI</rasd:ResourceSubType>

to

<rasd:ResourceSubType>lsilogic</rasd:ResourceSubType>

and save the change.

Step 3 - Delete the OVF manifest file named photon-custom-hw13-2.0-304b817.mf, since its checksums no longer match the edited OVF.

Step 4 - You can now import the modified OVF. If you wish to get back to a single OVA file, re-run the ovftool command from Step 1 with the OVF as the source and an .ova extension on the target.
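If you find yourself repeating Steps 1 through 3, they are easy to script: an OVA is just a tar archive containing the OVF, manifest, and disk files, so the edit can be done without converting back and forth with ovftool. A minimal Python sketch (the function name is my own; the file names follow the example above):

```python
import pathlib
import tarfile

def patch_photon_ovf(ova_path: str, out_dir: str = "photon-ovf") -> pathlib.Path:
    """Unpack a PhotonOS OVA (a plain tar archive), swap the PVSCSI
    controller subtype for LSI Logic, and delete the stale manifest
    whose checksums no longer match the edited OVF."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    with tarfile.open(ova_path) as tar:
        tar.extractall(out)
    ovf = next(out.glob("*.ovf"))  # e.g. photon-custom-hw13-2.0-304b817.ovf
    text = ovf.read_text()
    ovf.write_text(text.replace(
        "<rasd:ResourceSubType>VirtualSCSI</rasd:ResourceSubType>",
        "<rasd:ResourceSubType>lsilogic</rasd:ResourceSubType>"))
    for mf in out.glob("*.mf"):  # Step 3: drop the stale manifest
        mf.unlink()
    return ovf
```

The returned .ovf can then be imported into Fusion/Workstation as in Step 4.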

Upgrading from Photon 1.x to 2.0

I also noticed several folks asking about upgrading from PhotonOS 1.x to 2.0; you can find the instructions below:

Step 1 - You may need to sync your repository metadata first, if you have not done so in a while:

tdnf distro-sync

Step 2 - Install the PhotonOS upgrade package by running the following command:

tdnf install photon-upgrade

Step 3 - Run the PhotonOS upgrade script and answer 'Y' to start the upgrade:

photon-upgrade.sh

Filed Under: ESXi, Fusion, OVFTool, vSphere, vSphere Web Client, Workstation Tagged With: fusion, Photon, vSphere, workstation

Tip from Engineering – Use UEFI firmware for Windows 10 & Server 2016

10/20/2017 by William Lam 13 Comments

Several weeks back I was chatting with a few of our Engineers from the Core Platform Team (vSphere) and they shared an interesting tidbit which I thought was worth mentioning to my readers. When creating a Virtual Machine in either vSphere or Fusion/Workstation, customers have the option to override the default and specify a particular firmware boot option, either BIOS or UEFI.


Like most customers, I do not even bother touching this setting and just assume the system defaults are sufficient. Interestingly, for Microsoft Windows 10 and Windows Server 2016, there are some important implications to be aware of depending on whether BIOS or UEFI is used. This is especially important since the default firmware type in vSphere for these OSes is BIOS.
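For those who prefer editing configuration files, the firmware choice ultimately lands in the VM's VMX file as a single entry. A sketch of the relevant line (based on the standard VMX key; set it before the guest OS is installed, as switching firmware afterwards will typically break booting):

```
firmware = "efi"
```

Leaving the entry out (or setting it to "bios") keeps the legacy BIOS default.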

[Read more...] about Tip from Engineering – Use UEFI firmware for Windows 10 & Server 2016

Filed Under: Fusion, Security, vSphere 6.5, Workstation Tagged With: Credential Guard, Device Guard, fusion, Secure Boot, UEFI, vSphere 6.5, windows 10, windows 2016

Native OVF support for Fusion/Workstation 2017 Tech Preview 

07/18/2017 by William Lam 1 Comment

The VMware Fusion and Workstation team just released their 2017 Tech Preview releases and there are a ton of new and awesome capabilities, which you can read more about here and here. One of the exciting new features, which I was very fortunate to have been involved with, is finally here: native OVF property support! Although customers have had the ability to import OVF/OVAs for some time now, any OVF properties they included would be ignored, and oftentimes this would result in a failed deployment, as those properties are required for the initial setup.

A great example of this is trying to run the vCenter Server Appliance (VCSA) on either Fusion or Workstation. Today, the only workaround is to manually edit the VMX file and supply the correct OVF properties, which I have blogged about here. With the latest TP release of Fusion/Workstation, when you import an OVF/OVA that contains OVF properties, the UI will automatically render the required fields without users needing to manually touch the VMX files.

Here is a screenshot of deploying the latest VCSA 6.5d OVA (jump to bottom for some additional VCSA tidbits when deploying to Fusion/Workstation):

[Read more...] about Native OVF support for Fusion/Workstation 2017 Tech Preview 

Filed Under: Apple, Automation, Fusion, OVFTool, VCSA, Workstation Tagged With: apple, fusion, ovf, ovftool, Tech Preview, vcenter server appliance, vcsa

How to deploy the vCenter Server Appliance (VCSA) 6.5 running on VMware Fusion & Workstation?

10/27/2016 by William Lam 30 Comments

As with any new release of vSphere, it is quite common for customers to deploy the new software in either a vSphere home or test lab to get more familiar with it. Although not everyone has access to a vSphere lab environment, the next best thing is to leverage either VMware Fusion or Workstation. With the upcoming release of vSphere 6.5, this is no different. In fact, during the vSphere Beta program, this was something that was asked about by several customers and something I had helped document as the process has changed from previous releases of the VCSA.

In vSphere 6.5, the VCSA deployment is no longer a single monolithic stage where a user enters all of their information up front and the installer then deploys the VCSA OVA and applies the configurations. In that model, if you had fat-fingered, say, a DNS entry, or wanted to change the IP Address before the application configurations were applied, it was not possible; you had to re-deploy, which was not an ideal user experience.

In vSphere 6.5, the new UI installer will still allow you to perform a single end-to-end deployment, but it is now broken down into two distinct stages, shown below with their respective screenshots:

Stage 1 - Initial OVA deployment which includes basic networking

vcsa-6-5-installer-1
Stage 2 - Applying VCSA specific personality configuration

vcsa-6-5-installer-2
Just like in prior releases of the VCSA, the UI translates the user input into specific OVF properties which are then passed into the VCSA guest for configuration. This means that if you wish to deploy VCSA 6.5 on Fusion or Workstation, you will have two options to select from: either deploy the VCSA and complete both Stage 1 and Stage 2, or complete Stage 1 only. If you select the latter option, to complete the actual deployment you will need to open a web browser to the VAMI UI (https://[VCSA-IP]:5480) and finish configuring the VCSA using the "Setup vCenter Server Appliance" option as shown in the screenshot below.

vcsa-6-5-installer-3
If your goal is to quickly get the VCSA 6.5 up and running, then going with Option 1 (Stage 1 & 2 Config) is the way to go. If your goal is to learn about the new VCSA UI Installer, then you can at least get a taste of that by going with Option 2 (Stage 1 Config) and this way you can step through Stage 2 using the native UI installer.

One last thing I would like to mention is that a number of new services have been added to VCSA 6.5. One example is that vSphere Update Manager (VUM) is now embedded in the VCSA and enabled by default. With these new services, the tiniest deployment size now requires 10GB of memory, whereas before it was 8GB. This is something to be aware of; ensure that you have adequate resources before attempting to deploy the VCSA, or else you may see some unexpected failures while the system is being configured.

Note: If you have access to fast SSDs and would like to overcommit memory in Fusion or Workstation, you might be able to get this to work leveraging some tricks mentioned here. This is not something I have personally tested, so YMMV.

Here are the steps to deploy VCSA 6.5 using either VMware Fusion or Workstation:

Step 0 (Optional) - Familiarize yourself with how VCSA 6.0 was set up on Fusion/Workstation with this blog post, which provides helpful additional context.

Step 1 - Download & extract the VCSA 6.5 ISO

Step 2 - Import the VCSA OVA, which is located at vcsa/VMware-vCenter-Server-Appliance-6.5.0.5100-XXXXXX_OVF10.ova, using either VMware Fusion or Workstation (you can either double-click or go to File->Open), but make sure you do NOT power it on after deployment (this is very important).

Step 4 - Locate the directory to which the VCSA was deployed, open the VMX file, and append one of the following sets of options (make sure to change the IP information and passwords based on your environment):

Option 1 (Stage 1 & 2 Configuration):

guestinfo.cis.deployment.node.type = "embedded"
guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.pnid = "192.168.1.190"
guestinfo.cis.appliance.net.addr = "192.168.1.190"
guestinfo.cis.appliance.net.prefix = "24"
guestinfo.cis.appliance.net.gateway = "192.168.1.1"
guestinfo.cis.appliance.net.dns.servers = "192.168.1.1"
guestinfo.cis.appliance.root.passwd = "VMware1!"
guestinfo.cis.appliance.ssh.enabled = "True"
guestinfo.cis.deployment.autoconfig = "True"
guestinfo.cis.appliance.ntp.servers = "pool.ntp.org"
guestinfo.cis.vmdir.password = "VMware1!"
guestinfo.cis.vmdir.site-name = "virtuallyGhetto"
guestinfo.cis.vmdir.domain-name = "vsphere.local"
guestinfo.cis.ceip_enabled = "False"

Option 2 (Stage 1 Only Configuration):

guestinfo.cis.deployment.node.type = "embedded"
guestinfo.cis.appliance.net.addr.family = "ipv4"
guestinfo.cis.appliance.net.mode = "static"
guestinfo.cis.appliance.net.pnid = "192.168.1.190"
guestinfo.cis.appliance.net.addr = "192.168.1.190"
guestinfo.cis.appliance.net.prefix = "24"
guestinfo.cis.appliance.net.gateway = "192.168.1.1"
guestinfo.cis.appliance.net.dns.servers = "192.168.1.1"
guestinfo.cis.appliance.root.passwd = "VMware1!"
guestinfo.cis.appliance.ssh.enabled = "True"
guestinfo.cis.deployment.autoconfig = "False"
guestinfo.cis.ceip_enabled = "False"

Step 5 - Once you have saved your changes, go ahead and power on the VCSA. At this point, the guestinfo properties you just added will be read in by VMware Tools as the VCSA boots up, and the configuration will begin. Depending on the speed of your hardware, this can take 15 minutes or more, as I have seen myself, so please be patient with the process. If you wish to check the progress of the deployment, you can open a browser to https://[VC-IP]:5480 to see progress, or periodically connect to the Hostname/IP Address; once it is done, you should be taken to the vCenter Server's main landing page.
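If you rebuild your lab often, the VMX edit in Step 4 is easy to script. A minimal Python sketch (append_guestinfo is a hypothetical helper of my own, not part of any VMware tooling; the values are placeholders for your environment):

```python
def append_guestinfo(vmx_path: str, props: dict) -> None:
    """Append guestinfo.* deployment properties to a VCSA VMX file,
    using the same key = "value" format shown above."""
    lines = ['{} = "{}"'.format(k, v) for k, v in props.items()]
    with open(vmx_path, "a") as f:
        f.write("\n" + "\n".join(lines) + "\n")

# Stage 1 only configuration, as in Option 2 above (placeholder values):
stage1 = {
    "guestinfo.cis.deployment.node.type": "embedded",
    "guestinfo.cis.appliance.net.mode": "static",
    "guestinfo.cis.appliance.net.addr": "192.168.1.190",
    "guestinfo.cis.deployment.autoconfig": "False",
}
# append_guestinfo("/path/to/VMware vCenter Server Appliance.vmx", stage1)
```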

Filed Under: Fusion, Home Lab, VCSA, vSphere 6.5, Workstation Tagged With: fusion, vcenter server appliance, vcsa, VCSA 6.5, vcva, vSphere 6.5, workstation

VM serial logging to the rescue for capturing Nested ESXi PSOD

03/21/2016 by William Lam Leave a Comment

I frequently deploy pre-releases of our software to help test and provide early feedback to our Engineering teams. One piece of software that I deploy somewhat frequently is our ESXi Hypervisor, and the best way to deploy it is of course inside of a Virtual Machine, commonly referred to as Nested ESXi.

Most recently, while testing a new ESXi build in my lab (the screenshot below is for demo purposes, not the actual PSOD image), I encountered an ESXi purple screen of death (PSOD) during the bootup of the ESXi Installer itself. Since ESXi had not yet been installed, there was no place for ESXi to actually store the core dumps, which made filing a bug with Engineering challenging, as screenshots may not always contain all the necessary details.

Screen Shot 2016-03-21 at 9.26.08 AM
Luckily, because we are running in a VM, we can use a really neat feature that VMware has supported for quite some time now: configuring a virtual serial port for logging purposes. In fact, one of the neatest features from a troubleshooting standpoint was the introduction of the Virtual Serial Port Concentrator (vSPC) in vSphere 5.0, which allowed a VM to log directly to a serial console server just like you would for physical servers. You of course had a few other options: logging directly to the serial port of the physical ESXi host, a named pipe, or simply a file that lives on a vSphere Datastore.

Given this was a home lab setup, the easiest method was to simply output to a file. To add a virtual serial port, you can use either the vSphere Web/C# Client or the vSphere APIs. Since this is not something I need to do often, I just used the UI. Below is a screenshot using the vSphere Web Client; once you have added the virtual serial port, you need to specify the filename and where to store the output file by clicking on the "Browse" button.

vm-serial-logging
If the GuestOS (which includes ESXi) has been configured to output to a serial port, the next time there is an issue you can easily capture the output to a file instead of relying on just a screenshot. One additional tip: by default, vSphere will prompt whether you want to replace or append to the configured output file. If you wish to always replace, you can add the following VM Advanced Setting and you will not be prompted in the UI.

answer.msg.serial.file.open = "Replace"

Virtual serial ports are supported on both vSphere (vCenter Server + ESXi) as well as our hosted products VMware Fusion and Workstation.
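On Fusion and Workstation, where the vSphere Web Client is not available, the same file-backed serial port can be added directly to the VMX file. A sketch (the log file path is a placeholder; these are the standard VMX serial port keys):

```
serial0.present = "TRUE"
serial0.fileType = "file"
serial0.fileName = "/path/to/nested-esxi-serial.log"
```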

Filed Under: ESXi, Fusion, Nested Virtualization, Workstation Tagged With: esxi, fusion, nested, nested virtualization, psod, serial logging, vSphere, workstation

Deploying Nested ESXi is even easier now with the ESXi Virtual Appliance

12/14/2015 by William Lam 90 Comments

Several months back I built an ESXi Virtual Appliance that allows anyone to quickly stand up a fully functional Nested ESXi VM, including guest customization such as networking, NTP, syslog, passwords, etc. The virtual appliance was initially built for my own personal use, as I found myself constantly rebuilding my lab environment for evaluating and breaking new VMware software. I figured that if it was useful for me, it could probably benefit others at VMware, so I posted the details internally on our Socialcast forum. Since then, I have received numerous stories about how helpful the ESXi Virtual Appliance has been for both our Field and Engineering teams for setting up demos, POCs, evaluations, etc.

Most recently, I was contacted by Massimo Re Ferre' (crazy Mainframe guy ;)) who works over in our Cloud Native Apps team on a pretty cool project, Photon Controller, which was recently open sourced. He was interested in leveraging the ESXi Virtual Appliance along with VMware AppCatalyst to make it super simple for anyone to try out Photon Controller in their own environment. Over the last couple of weeks, I have been working closely with Massimo on incorporating his requirements for the Photon Controller POC back into my ESXi Virtual Appliance. My original goal for the appliance was to keep it generic so that it could cater to multiple use cases, and the Photon Controller POC was just another neat solution that could be built on top of it.

I found out today that the new Photon Controller POC has been released, and you can find more details in the links below:

  • Photon Controller main page
  • Getting started guide for deploying Photon Controller on OS X
  • Getting started guide for deploying Photon Controller on Windows
  • Photon Controller Google Group

As part of the release, the ESXi Virtual Appliance is also made available, which I thought was pretty cool! 😀 I highly recommend you check out the awesome work done by Massimo if you want to play with Photon Controller. This is a really easy way of getting started with Photon Controller and giving it a spin in your own environment.

Since the ESXi Virtual Appliance is now available externally, I wanted to share a few details about it for those who might be interested in checking it out. As I mentioned earlier, the goal of the ESXi Virtual Appliance was to be generic, a building block that could enable different use cases, such as spinning up a quick vSphere lab with the VCSA or putting together a fully functional VSAN lab in literally a couple of minutes (at the very bottom, I have a couple of PowerCLI scripts to demonstrate this). You could deploy 3 instances of the appliance to get a basic 3-node VSAN Cluster, or scale up to a 64-node VSAN Cluster, all within just minutes. The limit is truly your imagination.

The appliance contains a pre-installed GA release of ESXi 6.0 Update 1. The OVF properties available for customizing the Nested ESXi VM are shown in the table below. Once powered on, the default 60-day evaluation will start counting down, just as if you had manually installed ESXi yourself. In addition, the OVA contains several optimizations for running Nested ESXi, including the Mac Learn dvFilter params as well as other configurations for quickly setting up a VSAN environment, which are also described below. I have also built the appliance to be easily consumed in all VMware based environments, including vSphere, vCloud Air, Fusion, Workstation, Player & AppCatalyst.

UPDATE (05/10/17) - Updated VA to latest ESXi 6.0u3 & 6.5d (vSAN 6.6)

  • ESXi 6.0 Update 3 Virtual Appliance download link
  • ESXi 6.5d Virtual Appliance download link

UPDATE (05/09/17) - The ESXi 5.5u3 VA has been decommissioned due to its limited use.

  • ESXi 5.5 Virtual Appliance download link (Decommissioned)

UPDATE (11/18/16) - ESXi 6.5 Virtual Appliance has been released and you can find the details here.

UPDATE (04/07/16) - Minor bug fix with refreshing auto-generated SSL Certificates, latest version is v5 for 6.0u2 VA and v2 for 5.5u3 VA

UPDATE (03/18/16) - I updated the ESXi 6.0 VA using vSphere 6.0 Update 2. It is now back to vHW10 for backwards compat and includes an All-Flash configuration + 2 VMXNET3 adapters and ready for VSAN 6.2 😀

UPDATE (03/01/16) - I have also created a new ESXi 5.5 VA using vSphere 5.5 Update 3b

OVF Property          Description                                                                         Type
guestinfo.hostname    FQDN of the ESXi host                                                               string
guestinfo.ipaddress   IP Address                                                                          string
guestinfo.vlan        VLAN ID                                                                             string
guestinfo.netmask     Netmask                                                                             string
guestinfo.gateway     Gateway                                                                             string
guestinfo.dns         DNS Server                                                                          string
guestinfo.domain      DNS Domain                                                                          string
guestinfo.ntp         NTP Server                                                                          string
guestinfo.ssh         Whether or not SSH is enabled                                                       boolean
guestinfo.syslog      Syslog Server                                                                       string
guestinfo.password    Root password for ESXi host                                                         string
guestinfo.createvmfs  Whether to automatically create a VMFS datastore (datastore1) on the largest VMDK   boolean

The ESXi 6.x Virtual Appliance includes the following configuration:

  • ESXi 6.0 Update 2
  • GuestType: ESXi 5.x (backwards compat)
  • vHW 10
  • 2 vCPU
  • 6GB vMEM
  • 2 x vmxnet3 vNIC
  • 1 x 2GB HDD (ESXi Installation)
  • 1 x 4GB SSD (for use w/VSAN, empty by default)
  • 1 x 8GB SSD (for use w/VSAN, empty by default)
  • VHV added (more info here)
  • dvFilter Mac Learn VMX params added (more info here)
  • disk.enableUUID VMX param added
  • VSAN traffic tagged on vmk0
  • Disabled VSAN device monitoring for home labs (more info here)

The ESXi 5.x Virtual Appliance includes the following configuration:

  • ESXi 5.5 Update 3b
  • GuestType: ESXi 5.x
  • vHW 10
  • 2 vCPU
  • 6GB vMEM
  • 2 x vmxnet3 vNIC
  • 1 x 2GB HDD (ESXi Installation)
  • 1 x 4GB SSD (for use w/VSAN, empty by default)
  • 1 x 8GB HDD (for use w/VSAN, empty by default)
  • VHV added (more info here)
  • dvFilter Mac Learn VMX params added (more info here)
  • disk.enableUUID VMX param added
  • VSAN traffic tagged on vmk0
  • Disabled VSAN device monitoring for home labs (more info here)
  • ESXi Embedded Host Client (more info here)

If you do not wish to use VSAN, there is an OVF property that lets you specify whether or not a default VMFS datastore is created. You can increase the capacity of any of the disks after deployment (if you want the automatic VMFS creation, you will need to expand the disk prior to powering on the VM). Below are the different methods by which you can deploy the ESXi Virtual Appliance, covering vSphere, vCloud Air, Fusion, Workstation, Player & AppCatalyst. The idea is that you can easily set up Nested ESXi on any VMware based hypervisor and be up and running in just minutes!

Option 1 - Deploy to vSphere environment w/vCenter Server or vCloud Air

Download the OVA (or you can just paste the URL link) and import it into your vCenter Server using the vSphere Web Client. Make sure to accept the "Extra Configuration" when prompted, and then fill out the OVF properties to customize the Nested ESXi VM for your environment. If you wish to increase the VMDK capacity and have a VMFS datastore created for you automatically, be sure to expand the VMDK prior to powering on the VM. Once powered on, the customization process will start, and in a few minutes you should have a fully functional Nested ESXi VM. Below are a couple of screenshots.

Screen Shot 2015-12-10 at 1.35.02 PM
Screen Shot 2015-12-10 at 1.36.04 PM
Screen Shot 2015-12-10 at 1.38.51 PM

Option 2 - Deploy to ESXi

Download the OVA and import it into ESXi using the vSphere C# Client, but do not power it on. Since ESXi does not support OVF properties, you will need to add the guestinfo.* properties shown in "Option 3" below by using the VM Advanced Settings UI.

Screen Shot 2015-12-11 at 6.48.51 AM
If you prefer NOT to mess around with manually adding these VM Advanced Settings (which can also be automated using the vSphere API or PowerCLI), one additional method which CAN make use of the OVF properties is ovftool, a CLI that can import the OVA using the --injectOvfEnv option added in ovftool version 4.x. You can find more details in this blog post here.
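For illustration, such an ovftool-based import might look like the following sketch (the host name, credentials, and property values are placeholders; in ovftool 4.x the option is passed in its extra-option spelling, --X:injectOvfEnv):

```
ovftool --X:injectOvfEnv --powerOn \
  --allowExtraConfig \
  --prop:guestinfo.hostname=vesxi-1.primp-industries.com \
  --prop:guestinfo.ipaddress=172.16.78.90 \
  --prop:guestinfo.netmask=255.255.255.0 \
  --prop:guestinfo.gateway=172.16.78.1 \
  --prop:guestinfo.password=VMware1! \
  Nested_ESXi_Appliance.ova \
  "vi://root@esxi-host/"
```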

Option 3 - Deploy to VMware Workstation, Fusion or Player

Download the OVA and import it into Fusion/Workstation, but do not power it on. You will then need to edit the VMX file and add the guestinfo.* properties shown below, since Fusion/Workstation do not support OVF properties. If you wish to increase the VMDK capacity and have a VMFS datastore created for you automatically, be sure to expand the VMDK prior to powering on the VM. Once you have saved the changes, power on the Nested ESXi VM; the customization process will start, and in a few minutes you should have a fully functional Nested ESXi VM.

guestinfo.hostname = "vsan-1.primp-industries.com"
guestinfo.ipaddress = "172.16.78.90"
guestinfo.netmask = "255.255.255.0"
guestinfo.gateway = "172.16.78.1"
guestinfo.dns = "172.16.78.1"
guestinfo.domain = "primp-industries.com"
guestinfo.ntp = "172.16.78.1"
guestinfo.ssh = "True"
guestinfo.syslog = "192.168.1.100"
guestinfo.password = "VMware1!"
guestinfo.createvmfs = "False"

Option 4 - Deploy to VMware AppCatalyst

Download the OVA and import it into AppCatalyst using ovftool, but do not power it on. You will then need to edit the VMX file and add the guestinfo.* properties as in the Workstation/Fusion example above, in addition to the following params, which are required for Nested ESXi to run in AppCatalyst. If you wish to increase the VMDK capacity and have a VMFS datastore created for you automatically, be sure to expand the VMDK prior to powering on the VM.

guestos = "vmkernel6"
virtualhw.version = "11"
svga.vgaOnly = "true"

You will also need to run the following command to allow promiscuous mode in AppCatalyst, since there is no UI to prompt you (this is only required once):

touch "/Library/Preferences/VMware AppCatalyst/promiscAuthorized"

Once you have saved the changes, you can then power on the Nested ESXi VM and the customization process will start and in a few minutes you should have a fully functional Nested ESXi VM.

Option 5 - Deploy to vSphere using PowerCLI

I have also created two very simple PowerCLI scripts which demonstrate how you can easily deploy N-number of these Nested ESXi VMs; in fact, you can set up a fully functional VSAN Cluster in just under 5 minutes! You can find the two scripts below:

  • deploy_esxi_vsan_appliance.ps1
  • add_esxi_vsan_appliance_to_cluster.ps1

The first script performs the deployment of the Nested ESXi VM, but you will first need to convert the OVA to an OVF because the Get-OvfConfiguration cmdlet does not support OVAs. To convert the OVA, you will need ovftool; run the following command:

ovftool.exe --allowAllExtraConfig --skipManifestCheck Nested_ESXi_Appliance.ova Nested_ESXi_Appliance.ovf

The second script will go ahead and add the deployed Nested ESXi VMs to a vSphere Cluster and then enable VSAN on the vSphere Cluster.

Filed Under: ESXi, Fusion, Home Lab, Nested Virtualization, Not Supported, vSphere, vSphere 6.0, vSphere 6.5, Workstation Tagged With: esxi, nested, nested virtualization, ova, vSphere 6.0 Update 1, vSphere 6.5

Author

William Lam is a Staff Solution Architect working in the VMware Cloud on AWS team within the Cloud Platform Business Unit (CPBU) at VMware. He focuses on Automation, Integration and Operation of the VMware Software Defined Datacenter (SDDC).