5 ways to run a PowerCLI script using the PowerCLI Core Docker Container

In case you missed the exciting update last week, the PowerCLI Core Docker Container is now hosted on Docker Hub. With just two simple commands you can now quickly spin up a PowerCLI environment in under a minute! This is really useful if you need to perform a couple of operations using the cmdlets interactively and then discard the environment once you are done. If you want to do something more advanced, like run an existing PowerCLI script and potentially persist its output (Docker Containers are stateless by default), then there are a few options to consider.

To better describe the options, let's use the following scenario. Say you have a Docker Host; this can be VMware's Photon OS or a Microsoft Windows, Linux or Mac OS X system which has the Docker Client running. The Docker Host is where you will run the PowerCLI Core Docker Container, and it also has access to a collection of PowerCLI scripts that you have created or downloaded elsewhere. Let's say these PowerCLI scripts are located in /Users/lamw/scripts and you would like them to be available within the PowerCLI Core Docker Container when it is launched, say under /tmp/scripts.

Here is a quick diagram illustrating the scenario we just discussed.

Here are 5 different ways in which you can run your PowerCLI scripts within the Docker Container. Each has its pros and cons, and I will be using real sample scripts to exercise each of the options. You can download all the sample scripts from my GitHub repository: powerclicore-docker-container-samples

Note: Before getting started, please familiarize yourself with launching the PowerCLI Core Docker Container, which you can read more about here. In addition, you will need access to either a vCenter Server or ESXi host environment, and please create a tiny "dummy" VM called DummyVM, whose Notes field we will be updating with the current time.

Option 1:

This is the most basic and easiest method: you run a PowerCLI script that has all of the necessary information hardcoded within the script itself. This means things like credentials, as well as any required user input, can be found within the script. This is obviously simple, but it is very inflexible as you would need to edit the script before launching the container. Another downside is that your vSphere credentials are now hardcoded inside of the script, which is not ideal from a security standpoint.
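For illustration, here is a minimal sketch of what such a fully hardcoded script might look like (the server address, credentials and VM name below are placeholders, not the actual contents of the sample script):

# Hardcoded endpoint and credentials (placeholders)
Connect-VIServer -Server 192.168.1.100 -User 'administrator@vghetto.local' -Password 'VMware1!' | Out-Null

# Update the Notes field of DummyVM with the current time
Get-VM -Name 'DummyVM' | Set-VM -Notes (Get-Date).ToString() -Confirm:$false

Disconnect-VIServer -Server * -Confirm:$false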

To exercise example 1, please edit the pcli_core_docker_sample1.ps1 script, update it with your environment's credentials and then run the following command:

docker run --rm -it \
-v /Users/lamw/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/pcli_core_docker_sample1.ps1

If executed correctly, the Docker Container should launch, connect to your vSphere environment, update the Notes field of DummyVM with the current time and then exit. Pretty straightforward, and below is a screenshot of this example.


Option 2:

Nobody likes hardcoding values, especially when it comes to endpoints and credentials. This next method allows us to pass in variables from the Docker command-line and make them available to the PowerCLI scripts inside of the container as OS environment variables. This allows for greater flexibility than the previous option, but the downside is that you may potentially be exposing credentials in plaintext, which can be inspected by others who can run docker run/inspect commands. You also need to update your existing PowerCLI scripts to handle the environment variable translation, which may not be ideal if you have a lot of pre-existing scripts.
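Here is a rough sketch of what the environment variable translation inside such a script might look like (assuming the endpoint is also passed in as VI_SERVER; the variable names are illustrative):

# Translate the variables passed in via "docker run -e ..." into script variables
$viServer   = $env:VI_SERVER
$viUsername = $env:VI_USERNAME
$viPassword = $env:VI_PASSWORD
$viVM       = $env:VI_VM

Connect-VIServer -Server $viServer -User $viUsername -Password $viPassword | Out-Null
Get-VM -Name $viVM | Set-VM -Notes (Get-Date).ToString() -Confirm:$false
Disconnect-VIServer -Server * -Confirm:$false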

To exercise example 2, run the following command, specifying your credentials on the command-line instead:

docker run --rm -it \
-e VI_USERNAME=administrator@vghetto.local \
-e VI_PASSWORD=VMware1! \
-e VI_VM=DummyVM \
-v /Users/lamw/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/pcli_core_docker_sample2.ps1

If executed correctly, you will see that the variables we defined are passed into the container and we are able to make use of them within the PowerCLI script by simply accessing the respective environment variable names, as shown in the screenshot below.


Option 3:

If you have created PowerCLI scripts that already prompt for user input, which can also include credentials, then another way to run those scripts is to do so interactively. If a script's parameters are required, then it will prompt for input. The benefit here is that you can reuse your existing PowerCLI scripts without needing to make any modifications, even when executing them within a Docker Container. You are also not exposing any credentials in plaintext. To take this a step further, you could also use PowerShell's SecureString feature, but that would still require you to include a small snippet in your PowerCLI script to do the appropriate decoding when connecting.
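One way to get this prompting behavior for free is to declare the script's parameters as mandatory; PowerShell will then prompt for any value that was not supplied. A minimal sketch (parameter names are illustrative):

param(
    [Parameter(Mandatory=$true)][string]$VI_SERVER,
    [Parameter(Mandatory=$true)][string]$VI_USERNAME,
    [Parameter(Mandatory=$true)][string]$VI_PASSWORD,
    [Parameter(Mandatory=$true)][string]$VI_VM
)

Connect-VIServer -Server $VI_SERVER -User $VI_USERNAME -Password $VI_PASSWORD | Out-Null
Get-VM -Name $VI_VM | Set-VM -Notes (Get-Date).ToString() -Confirm:$false
Disconnect-VIServer -Server * -Confirm:$false

A nice side effect of this style is that the very same script also works for Option 4 below, since the parameters can just as easily be supplied on the command-line.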

To exercise example 3, run the following command and enter your credentials when prompted:

docker run --rm -it \
-v /Users/lamw/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/pcli_core_docker_sample3.ps1

If executed correctly, you will be prompted for the expected user inputs to the script, and it will then perform the operation as shown in the screenshot below.


Option 4:

Similar to Option 3, if you have defined parameters for your PowerCLI script, you can also specify them directly on the Docker command-line, just as you would if you were manually running the PowerCLI script in a Windows environment. Again, the benefit here is that you can reuse your existing PowerCLI scripts without any modifications. You do risk exposing credentials by passing them on the command-line, but that risk is a known one, as you are already doing this with your existing scripts. A downside to this option is that if your PowerCLI script accepts quite a few parameters, your docker run command can get quite long. If you go with this option, you might consider prompting for the endpoint/credentials and passing in the rest of the user input dynamically.

To exercise example 4, run the following command, passing your credentials as script parameters on the command-line:

docker run --rm -it \
-v /Users/lamw/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/pcli_core_docker_sample3.ps1 -VI_SERVER <vcenter-or-esxi-address> -VI_USERNAME administrator@vghetto.local -VI_PASSWORD VMware1! -VI_VM DummyVM


Option 5:

The last option is a nice compromise among the above: you can continue leveraging your existing scripts while providing a better way of supplying things like credentials. As I mentioned before, Docker Volumes allow us to make directories and files on the Docker Host available to the Docker Container. This not only allows us to make our PowerCLI scripts available from within the container, it can also be used to provide access to other things, such as a credentials file that is simply sourced. This method requires no major modification to your existing scripts; you just need to source the credential file at the top of each script. Best of all, you are not exposing any sensitive information on the Docker command-line.
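As a hypothetical sketch, the credential file and the snippet at the top of each script might look something like this (variable names and values are placeholders):

# credential.ps1 - lives on the Docker Host and is exposed via the volume mount
$VI_SERVER   = '192.168.1.100'
$VI_USERNAME = 'administrator@vghetto.local'
$VI_PASSWORD = 'VMware1!'

# At the top of each PowerCLI script, dot-source the credential file
. /tmp/scripts/credential.ps1
Connect-VIServer -Server $VI_SERVER -User $VI_USERNAME -Password $VI_PASSWORD | Out-Null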

Note: Some of you might be thinking that PowerCLI's credential store would be a better solution, but that has not yet been implemented in PowerCLI Core, as it relies on Microsoft's credential store feature. Once that has been implemented in .NET Core, I am sure the PowerCLI team can add that capability, which is probably the recommended and preferred option from both a security and an Automation standpoint.

To exercise example 5, edit the credential.ps1 file, update it with your environment's credentials and run the following command:

docker run --rm -it \
-v /Users/lamw/scripts:/tmp/scripts vmware/powerclicore /tmp/scripts/pcli_core_docker_sample4.ps1 -VI_VM DummyVM

If executed correctly, the variables defined in the credentials file will be loaded into the PowerCLI script's context, and the script will run the associated operations and then exit.

As you can see, there are many different ways in which you can run your existing PowerCLI scripts using the new PowerCLI Core Docker Container. Hopefully this article gives you a good summary along with some real world examples to consider. Given this is still an active area of development by the PowerCLI team, if you have any feedback or suggestions, please do leave a comment. I know Alan (the PM) as well as the engineers are very interested in hearing your feedback and seeing how else we could improve the user experience of both PowerCLI Core as well as consuming PowerCLI Core through these various interfaces.

UPDATE (10/25/16) - It looks like PowerCLI Core Docker Container has been updated with my suggestion below, so you no longer need to specify the --entrypoint parameter 🙂

One final note: right now the PowerCLI Core Docker Container does not automatically start the PowerShell process when it is launched. This is why we have the --entrypoint='/usr/bin/powershell' parameter appended to the Docker command-line. If you prefer to have PowerShell start up, which will automatically load the PowerCLI module, you can check out my updated PowerCLI Core Docker Container: lamw/powerclicore, which uses the original as a base with one tiny modification. Perhaps this is something Alan and the team would consider making the default in the future? 🙂

How to run a Docker Container on the vCenter Server Appliance (VCSA) 6.5?

One of the most notable changes in the vCenter Server Appliance (VCSA) in vSphere 6.5 is the switch of the underlying OS from SLES to VMware's very own Photon OS. With this change, VMware now owns the entire software stack within the VCSA (OS + application). This will allow VMware to respond and deliver OS and security updates to customers at a much quicker rate than was possible before.

During my testing of the VCSA, I had a need to spin up a Docker Container. Given that the VCSA is now Photon OS based, this should be a pretty trivial thing to enable, as it is with a standalone installation of Photon OS. After a bit of trial and error, I found what was needed to get this working on the VCSA. Before jumping into the solution, I should say that this is really for lab and educational purposes. In general, I would NOT recommend installing additional software on the VCSA; not only is this NOT supported by VMware, but you may also potentially impact your vCenter Server by taking resources away from the main application. It is possible to constrain the amount of resources (CPU/Memory) allocated to a Docker Container; please refer to this resource for more information.
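For example, something along these lines would cap the container's memory and reduce its CPU scheduling weight (the values are just examples):

# Limit the container to 512 MB of RAM and half of the default CPU share weight
docker run --rm -it --memory=512m --cpu-shares=512 vmware/powerclicore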

For smaller customers, the argument is that everything can just run on a single system, but in reality there are many benefits to having a separate management VM, which can be Photon OS or any other OS that your organization supports. You can install additional management tools/scripts, and you would not be artificially limited by the VCSA's environment, which is locked down to what is absolutely needed to run the vCenter Server application and its services.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Given that PowerCLI Core (Linux and Mac OS X) was just recently released, which also includes a Docker Container, I figured this would be a nice example to start with, as I know a few of you have asked about this possibility 🙂

Step 1 - Install Docker by running the following command (you will need internet access from the VCSA, either direct or via a proxy):

tdnf -y install docker

Step 2 - Load the following kernel module, which is required by the Docker daemon, by running the following command:

insmod /usr/lib/modules/$(uname -r)/kernel/net/bridge/bridge.ko

Note: The above command does not persist across reboots. If you would like to persist this configuration, please refer to the instructions at the very bottom.

Step 3 - Enable and start the Docker service by running the following commands:

systemctl enable docker
systemctl start docker

Step 4 - Pull down the PowerCLI Core Docker Image from Docker Hub by running the following command:

docker pull vmware/powerclicore

Step 5 - Start the PowerCLI Core Docker Container by running the following command:

docker run --rm -it --entrypoint='/usr/bin/powershell' vmware/powerclicore

As you can see from the screenshot above, you now have PowerShell and the PowerCLI module loaded, running as a Docker Container on the VCSA 🙂 You can apply this to any Docker Container that you have created or pulled directly from Docker Hub. If you prefer to build the PowerCLI Core Docker Container from the Dockerfile, you simply need to download and extract the PowerCLI Core zip file onto the VCSA and then run the following command:

docker build -t vmware/powercli .


How to persist bridge module load across reboots:

Step 1 - Edit /etc/modprobe.d/modprobe.conf and remove the "install bridge /bin/false" entry.

Step 2 - Create a new file called /etc/modules-load.d/bridge.conf which contains the word "bridge" (no quotes). When the system boots up, it will iterate through all the module configuration files and load the respective modules. The bridge module is what is needed to start the Docker daemon.
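If you prefer to script these two steps, something like the following should do the trick (a sketch; double-check the exact modprobe.conf entry on your build first):

# Remove the entry that prevents the bridge module from loading
sed -i '/install bridge \/bin\/false/d' /etc/modprobe.d/modprobe.conf

# Have systemd load the bridge module on every boot
echo bridge > /etc/modules-load.d/bridge.conf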

How to check the size of your Config & SEAT data in the VCDB in vPostgres?

After publishing my article on how to check the size of your vCenter Server's Configuration and Stats, Events, Alarms & Tasks (SEAT) data for both a Microsoft SQL Server and an Oracle based database, I received a few requests to do the same for the vPostgres database, which the vCenter Server Appliance (VCSA) uses exclusively. Thanks to one of our engineers who works on the VCDB, I was able to quickly get the relevant SQL query to perform the exact same lookup as for the other two databases.

Since the VCSA is hardened and locked down by default, being able to remotely retrieve this information will actually require some additional configuration changes to your VCSA, which may or may not be acceptable. Because of this constraint, I will provide two options for performing this SQL query.

The first option (easy) is to run the SQL query directly from within the VCSA. You just need SSH access; no other information or credentials are required. The second option (complex) is to remotely connect to the vPostgres database (generally not recommended), which will require the VCDB's credentials, which I will show you how to retrieve. Lastly, I want to quickly mention that in the upcoming vSphere 6.5 release, this information will be super easy to view, not only from a UI but also via an API, as shown in the tweet below.


Option 1:

Step 1 - Download the following shell script called queryVCDBvPostgres.sh which contains the respective VCDB SQL query.
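To give you an idea of what the script does, here is a rough sketch of the kind of query it runs (this is an illustration, not the exact query from the script; it assumes the embedded psql binary and the vc schema used by the VCDB):

# List the largest tables in the VCDB, which is where the Config and SEAT data lives
/opt/vmware/vpostgres/current/bin/psql -d VCDB -U postgres -c \
  "SELECT tablename, pg_size_pretty(pg_total_relation_size('vc.' || tablename)) AS size
   FROM pg_tables WHERE schemaname = 'vc'
   ORDER BY pg_total_relation_size('vc.' || tablename) DESC LIMIT 20;"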

Step 2 - SCP the shell script to your VCSA and then login via SSH.

Step 3 - Run the following command to make the script executable:

chmod +x queryVCDBvPostgres.sh

Step 4 - Run the script by issuing the following command:

./queryVCDBvPostgres.sh

Here is a screenshot of what you should see, which is a breakdown of your Config + SEAT data:


Option 2:

Step 1 - Login to the VCSA using SSH.

Step 2 - Edit /storage/db/vpostgres/postgresql.conf and add the following entry:

listen_addresses = '*'

This will allow vPostgres to accept connections from any address; if you want to restrict it to a specific IP, you can specify that instead.

Step 3 - Edit /storage/db/vpostgres/pg_hba.conf and add the following entry:

host    all             all             <IP-or-CIDR>          md5

Similar to the previous configuration, you can either specify a network range using CIDR notation or a specific IP Address in the address column (shown as <IP-or-CIDR> above).

Step 4 - Edit /etc/vmware/appliance/firewall/vmware-vpostgres and replace it with the following entry:

This will open up the VCSA's firewall to allow remote connections to the vPostgres port, which defaults to 5432.

Step 5 - Next, we need to reload the firewall configuration by running the following command:


Step 6 - We can verify by running the following command:

iptables -L | grep postgres

Here is a screenshot of what you should see as the output:

Step 7 - Lastly, we need to restart the vPostgres service by running the following command:

service vmware-vpostgres restart

Step 8 - To verify that you can now remotely connect to the vPostgres DB, run the following command:

netstat -anp | grep LISTEN | grep tcp | grep 5432

Here is a screenshot of what you should see as the output:

At this point, you have enabled remote connections to the VCSA's vPostgres DB. The next step is to retrieve the VCDB credentials, which you will need for the PowerShell script I have written to perform the remote SQL query. This also requires that you set up an ODBC connection on your client system to communicate with the vPostgres DB. Please have a look here for more information on how to set up the ODBC connection.

Step 9 - Login to the VCSA via SSH and then look at /etc/vmware-vpx/vcdb.properties, where you should see the password for your VCDB. Go ahead and record this somewhere, as you will need it in the next step. The username for the DB will be vc, which you can also make a note of.

Step 10 - Download the following PowerShell script called Get-VCDBUsagevPostgres.ps1 and provide the connection details that you retrieved in Step 9. If everything was properly configured, you can run the PowerShell script and it should produce a similar output to the screenshot below.
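For reference, the core of such a remote query boils down to a standard ODBC connection from PowerShell. A hypothetical sketch (the driver name, IP address and password below are placeholders; the actual script handles this for you):

# Connect to the VCSA's vPostgres DB over ODBC using the "vc" user
$connString = "Driver={PostgreSQL Unicode};Server=192.168.1.200;Port=5432;Database=VCDB;Uid=vc;Pwd=<password-from-vcdb.properties>;"
$conn = New-Object System.Data.Odbc.OdbcConnection
$conn.ConnectionString = $connString
$conn.Open()

# Issue a simple query to verify connectivity
$cmd = New-Object System.Data.Odbc.OdbcCommand("SELECT version();", $conn)
$cmd.ExecuteScalar()

$conn.Close()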


PowerCLI Core is now available on Docker Hub!

The much anticipated PowerCLI Core was just released this week as a VMware Fling, which allows you to run PowerCLI on Linux, Mac OS X or even as a Docker Container. This is HUGE if you ask me, especially for customers who would like the benefits of PowerCLI without being forced to use a Windows system, which it has traditionally required.

I personally have been using PowerCLI Core for quite some time now on my Mac OS X system, and the experience is exactly the same as you would find on its Windows counterpart. The Docker Container is another great way to consume PowerCLI Core, and I use it quite frequently as well. One thing I felt would make the Docker Container even easier to consume for those looking to do something really quick in PowerCLI, or what I call "Just-in-Time PowerCLI access", is the ability to quickly pull it down from Docker Hub rather than having to download a bunch of files and then manually build it yourself (not that it is complicated, but sometimes speed is the game).

I posted a tweet earlier this morning and literally a few hours later, my good friend Alan Renouf delivered the goods! In addition, you will also find that the new version of the PowerCLI Core Docker Container now uses a Photon OS image rather than Ubuntu as it did previously.

In addition to the three existing methods of consuming PowerCLI Core, you will now also find it hosted on Docker Hub: https://hub.docker.com/r/vmware/powerclicore/

To access PowerCLI Core from Docker Hub, you simply need a system (Windows, Linux or Mac OS X) with the Docker Client installed and running, or you can even use VMware's Photon OS, which comes with Docker by default. Then follow the instructions below:

Step 1 - Pull the PowerCLI Core image from Docker Hub by running the following command:

docker pull vmware/powerclicore

Step 2 - Run the PowerCLI Core Docker Container by running the following command:

docker run --rm -it --entrypoint='/usr/bin/powershell' vmware/powerclicore

It is literally that easy to access PowerCLI from ANY platform at ANY time! 😀

Nested ESXi Enhancements in vSphere 6.5

As many of you have probably heard by now, vSphere 6.5 was just announced at VMworld Barcelona this week and it is packed with a ton of new features and capabilities, which you can read more about here. One area that is near and dear to me and has not been covered yet is the set of Nested ESXi enhancements that have been made in vSphere 6.5.

To be clear, VMware has NOT changed its support stance in vSphere 6.5: both Nested ESXi and general Nested Virtualization are still NOT officially supported. Okay, now that that's out of the way, let's see what is new:

  • Paravirtual SCSI (PVSCSI) support
  • GuestOS Customization
  • Pre-vSphere 6.5 enablement on vSphere 6.0 Update 2

Let's take a closer look at each of these enhancements.

Paravirtual SCSI (PVSCSI) support

When vSphere 6.0 Update 2 was released, I hinted that PVSCSI support might be a possibility in the near future. I am happy to announce that this is now possible when running Nested ESXi with ESXi 6.5 as the GuestOS. In vSphere 6.5, a new GuestOS type has been introduced called vmkernel65, which is optimized for running ESXi 6.5 in a VM, as shown in the screenshot below.

As you can see from the VM Virtual Hardware configuration screen below, both the PVSCSI and VMXNET3 adapters are now the recommended defaults when creating a Nested ESXi VM for running ESXi 6.5.

Similar to the VMXNET3 driver, the PVSCSI driver is automatically bundled within the version of VMware Tools for Nested ESXi. That is to say, the drivers are included in the default ESXi image itself and are ONLY activated when ESXi detects that it is running inside of a VM. From a user standpoint, this means no additional configuration or installation is required. You simply select ESXi 6.5 as the GuestOS and install ESXi as you normally would, and this will automatically be enabled for you.

The only requirement to leverage this new capability is that BOTH the GuestOS type is ESXi 6.5 (vmkernel65) and the actual OS running is ESXi 6.5. The underlying physical ESXi host can be either ESXi 6.0 or ESXi 6.5. In addition to the new virtual hardware defaults, I have also found that the new ESXi 6.5 GuestOS type uses EFI firmware rather than the legacy BIOS used by the previous ESXi 6.x/5.x/4.x GuestOS types.
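If you are creating the Nested ESXi VM with PowerCLI rather than the vSphere Web Client, a hypothetical example might look like this (the names and sizes are placeholders; the GuestId shown corresponds to the vmkernel65 type in the vSphere API):

# Create a VM with the new ESXi 6.5 GuestOS type (vmkernel65),
# which defaults to the PVSCSI and VMXNET3 adapters
New-VM -Name 'NestedESXi65' -VMHost 'esxi-01.vghetto.local' `
    -NumCpu 2 -MemoryGB 6 -DiskGB 8 -GuestId vmkernel65Guest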

For customers who wish to push their storage IO a bit harder in Nested ESXi guests, this is a great addition, especially given the lower overhead of the PVSCSI adapter.

GuestOS Customization

One of the very last capabilities that has been missing from Nested ESXi is the ability to perform a simple GuestOS customization when cloning or deploying a VM from a template running Nested ESXi. Today, you can deploy my Nested ESXi Virtual Appliance, which basically provides you with the ability to customize your deployment, but would it not be great if this were native to the platform? Well, I am pleased to say this is now possible!

When you clone a Nested ESXi VM or deploy one from a template, you will now have the option to select the Customize guest OS option. As you can see from the screenshot below, you can now create a new Customization Spec, which is based on the Linux customization spec. The customization only covers networking configuration (IP Address, Netmask, Gateway and Hostname) and only applies it to the first VMkernel interface; all others will be ignored. The thinking here is that once you have your Nested ESXi VM on the network, you can then fully manage and configure it using the vSphere API rather than re-creating the same functionality just for cloning.

To use this new Nested ESXi GuestOS customization, there are two things you will need to do (see the example following this list):

  • Perform two configuration changes within the Nested ESXi VM which will prepare them for cloning. You can find the configuration changes described in my blog post here
  • Ensure BOTH the GuestOS type is ESXi 6.5 (vmkernel65) and the actual OS running is ESXi 6.5. This means your underlying physical vSphere infrastructure can be running either vSphere 6.0 Update 2 or vSphere 6.5
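Here is a hypothetical PowerCLI sketch of the workflow, from creating the Linux-based customization spec to cloning the VM with it (all names, IPs and networks are placeholders):

# Create a Linux-based customization spec for the Nested ESXi VM
$spec = New-OSCustomizationSpec -Name 'NestedESXiSpec' -OSType Linux `
    -Domain 'vghetto.local' -DnsServer '192.168.1.1' `
    -NamingScheme Fixed -NamingPrefix 'nested-esxi-2'

# Assign a static IP to the first (and only customized) VMkernel interface
Get-OSCustomizationNicMapping -OSCustomizationSpec $spec | Set-OSCustomizationNicMapping `
    -IpMode UseStaticIP -IpAddress '192.168.1.50' -SubnetMask '255.255.255.0' -DefaultGateway '192.168.1.1'

# Clone the prepared Nested ESXi VM and apply the spec
New-VM -Name 'NestedESXi65-Clone' -VM 'NestedESXi65' `
    -VMHost 'esxi-01.vghetto.local' -OSCustomizationSpec $spec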

You can monitor the progress of the guest customization by going to the VM's Monitor->Tasks & Events tab in the vSphere Web Client, or via the vSphere API if you are doing this programmatically. Below is a screenshot of a successful Nested ESXi guest customization. If there are any errors, you can take a look at /var/log/vmware-imc/toolsDeployPkg.log within the cloned Nested ESXi VM to determine what went wrong.

I know this will be a very welcome capability for customers who extensively use the guest customization feature, or for those who just want to quickly clone an existing Nested ESXi VM that they have already configured.

Pre-vSphere 6.5 enablement in vSphere 6.0 Update 2

By now, you have probably figured out what this last enhancement is all about 🙂 It is exactly as it sounds: we are enabling customers to try out ESXi 6.5 by running it as a Nested ESXi VM on their existing vSphere 6.0 environment, specifically the Update 2 release (this includes both vCenter Server and ESXi). Although running newer versions nested has always been possible with past releases of vSphere, we are now pre-enabling ESXi 6.5 specific Nested ESXi capabilities in the latest release of vSphere 6.0 Update 2. This means that when vSphere 6.5 becomes generally available, you will be able to test drive some of the new Nested ESXi 6.5 capabilities that I mentioned on your existing vSphere infrastructure. This is pretty darn cool if you ask me!

For those of you who use Nested ESXi, hopefully you will enjoy these new updates! As always, if you have any feedback or requests, feel free to leave them on my blog and I will be sure to share them with the Engineering team. Who knows, they might show up in the future, just like some of these updates which were requested by customers in the past 😀