Minimum permissions to view VM Storage Policies

I saw this question from Aaron yesterday while scrolling through my Twitter timeline and, after answering it, I figured I would write a quick blog post about it in case it comes up again in the future.

There are two specific privileges around managing VM Storage Policies: Update and View, as shown in the screenshot below. If you only want to allow users to see all of the VM Storage Policies that have been defined, then you just need to create a new Role containing only the "View" privilege.

It is also important to note that VM Storage Policies are defined and managed at the vCenter Server level. This means that when you assign the permission, it needs to be applied at the root vCenter Server object (you do not have to propagate it downwards if you do not wish to expose the rest of the vSphere Inventory). Global Permissions are not required, but if you have multiple vCenter Servers that are all part of the same SSO Domain, you may want to consider them if users are allowed to log in to any one of the vCenter Servers.

Once you have assigned the permission to the user or group, they can log in using either the vSphere Web Client or the SPBM APIs and will be able to view all defined VM Storage Policies.
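For those who prefer to automate this, below is a minimal pyvmomi sketch of creating such a Role and assigning it at the root vCenter Server level without propagation. It assumes the privilege ID for the "View" privilege is StorageProfile.View; the hostname, credentials and principal are placeholders for your own environment:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - replace with your own environment
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
auth_mgr = content.authorizationManager

# Create a new Role containing only the VM Storage Policy "View" privilege
# (assumed privilege ID: StorageProfile.View)
role_id = auth_mgr.AddAuthorizationRole(name="SPBM-View-Only",
                                        privIds=["StorageProfile.View"])

# Assign the Role at the root vCenter Server object without propagating
# it down the rest of the vSphere Inventory
perm = vim.AuthorizationManager.Permission(principal="VSPHERE.LOCAL\\spbm-user",
                                           group=False,
                                           roleId=role_id,
                                           propagate=False)
auth_mgr.SetEntityPermissions(entity=content.rootFolder, permission=[perm])

Disconnect(si)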

SPBM APIs are now included in pyvmomi (vSphere SDK for Python)

I have been spending quite a bit of time lately with PowerCLI Core, especially with one of my pet projects. One of the limitations of PowerCLI Core today is that the Storage cmdlets, which include the vSAN and VVol functionality, have not been ported over yet. This means that if you need to do something with VM Storage Policies, for example, it would not be possible with PowerCLI Core and you would have to use the Windows PowerCLI version instead.

While investigating an alternative to PowerCLI Core for getting access to the Storage Policy Based Management (SPBM) APIs, I was pleasantly surprised to learn that pyvmomi (vSphere SDK for Python) had added support for the SPBM APIs in its 6.0.0.2016.4 release last year. I accidentally stumbled onto this news while looking through the pyvmomi Github issues, specifically this one here. I was surprised to see there was no mention of this enhancement in the pyvmomi release notes.

This is great news for pyvmomi consumers, and given this was news to me, I am guessing it might be news for others, so I figured I would share the info. While looking into using the SPBM APIs from pyvmomi, I did not see any sample scripts showing how to use them. Given that I needed to write a script for my project anyway, I figured I would also create a couple of examples to help others get started.
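To give you an idea of what this looks like, here is a minimal sketch that connects to the SPBM endpoint (re-using the authenticated vCenter session) and prints out all defined VM Storage Policies; the hostname and credentials are placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import pbm, VmomiSupport, SoapStubAdapter

def get_pbm_content(vpxd_stub):
    """Build an SPBM stub that re-uses the authenticated vCenter session."""
    session_cookie = vpxd_stub.cookie.split('"')[1]
    VmomiSupport.GetRequestContext()["vcSessionCookie"] = session_cookie
    hostname = vpxd_stub.host.split(":")[0]
    pbm_stub = SoapStubAdapter(host=hostname,
                               version="pbm.version.version1",
                               path="/pbm/sdk",
                               poolSize=0,
                               sslContext=ssl._create_unverified_context())
    pbm_si = pbm.ServiceInstance("ServiceInstance", pbm_stub)
    return pbm_si.RetrieveContent()

# Placeholder connection details - replace with your own environment
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())
pbm_content = get_pbm_content(si._stub)

# Query all requirement-based (VM Storage Policy) profiles and print their names
pm = pbm_content.profileManager
profile_ids = pm.PbmQueryProfile(
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    profileCategory="REQUIREMENT")
if profile_ids:
    for profile in pm.PbmRetrieveContent(profileIds=profile_ids):
        print(profile.name)

Disconnect(si)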

Continue reading

Using vSphere Auto Deploy to Netboot ESXi onto Apple Mac Hardware

Last week I published an article that demonstrated, for the first time, how to Netboot an ESXi installation onto Apple Mac Hardware. As you can imagine, this was very exciting news for our VMware/Apple customers, who historically have not had this capability. Customers can now automate the installation of ESXi over the network onto their Apple Mac Hardware, just as they would for non-Apple hardware.

With the ability to boot ESXi over the network on Apple Mac Hardware, it is now also possible for customers to take advantage of the vSphere Auto Deploy feature. Auto Deploy allows customers to easily and quickly provision ESXi hosts at scale, and it integrates directly with vCenter Server to automatically join hosts to the inventory and apply specific, pre-defined host configuration policies. This is a great time to check out Auto Deploy, especially with all the new enhancements introduced in vSphere 6.5, such as custom script bundles.

Below are the instructions on how to set up Auto Deploy to work with Apple Mac Hardware.

Continue reading

How to Netboot install ESXi onto Apple Mac Hardware?

The ability to perform an ESXi Scripted Installation over the network has been a basic capability for non-Apple hardware customers since the initial release of classic ESX. However, for customers who run ESXi on Apple Mac Hardware (first introduced in vSphere 5.0), remotely booting and installing ESXi over the network has not been possible; customers could only dream of a capability that many of us have probably taken for granted.

Unlike traditional scripted network installations, which commonly use the Preboot eXecution Environment (PXE), Apple Mac Hardware uses Apple's own Boot Service Discovery Protocol (BSDP), which ESXi and other OSes do not support. In addition, very few DHCP servers even support BSDP (at least this was true 4 years ago when I initially inquired about this topic). It was expected that if you were going to Netboot (the equivalent of PXE/Kickstart in the Apple world) a server, you would be booting a Mac OS X system. Even if you had set this up, a Netboot installation is wildly different from a traditional PXE installation, and it would be difficult to near impossible to get it working with an ESXi image. With no real viable solution over the years, it was believed that a Netboot installation of ESXi onto Mac Hardware just might not be possible.

tl;dr - If you are interested in the background to the eventual solution, continue reading. If not and you just want the goods, jump down a bit further. Though, I do think it is pretty interesting and worth getting the full context 🙂

Continue reading

New "raw" VM Storage Policy support in OVFTool 4.2 simplifies bootstrapping VMs onto vSAN

Several weeks back, I came across a handy little tidbit about an enhancement in the latest version of OVFTool (4.2) that greatly simplifies bootstrapping a vCenter Server, or any other VM for that matter, onto a vSAN Datastore. This is generally needed when setting up a pure greenfield environment where your vCenter Server may not be set up yet and you want to run it on top of vSAN, which is a fully supported configuration. The process of "bootstrapping" a VCSA onto a single-node vSAN datastore is documented on my blog here. The high-level steps are as follows:

  1. Temporarily change the default ESXi VM Storage Policy to allow forced provisioning and FTT=0 (i.e. no protection); see the sketch after this list
  2. Claim disks for creating vSAN Diskgroup(s)
  3. Create vSAN Cluster
  4. Deploy VCSA
  5. Apply the default vSAN VM Storage Policy to VCSA VM
  6. Revert the temporary change to the default ESXi VM Storage Policy
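For reference, steps 1 and 6 have historically been performed from the ESXi Shell with something like the following (a sketch; the exact policy classes and values depend on your environment):

# Step 1: temporarily force provisioning for the default policy
# (repeat for the other object classes such as vmnamespace, vmswap, etc.)
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

# Step 6: revert to the out-of-the-box default policy once the VCSA is deployed
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1))"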

With this new OVFTool enhancement, steps 1 and 6 are no longer needed, at least from the standpoint of having to change the default VM Storage Policy on the ESXi host using the ESXi Shell or the vSphere API. Instead, you can now pass in a "raw" VM Storage Policy (SPBM) to apply to the specific OVF/OVA being deployed, rather than having to make a global change on the ESXi host. This also helps reduce the post-deployment steps: you only need to re-apply the default vSAN VM Storage Policy to the vCenter Server VM and never have to touch the ESXi host settings once the vCenter Server is up and running.

To use this new raw VM Storage Policy feature in OVFTool, there is a new command-line option called --defaultStorageRawProfile, which accepts a "raw" VM Storage Policy string in the same format you would normally provide to the SPBM APIs, such as "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))" for example.
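Here is a sketch of what a direct OVFTool invocation might look like; the OVA, credentials, network and datastore names are placeholders for your own environment:

ovftool --acceptAllEulas --allowExtraConfig \
  --datastore=vsanDatastore \
  --net:"VM Network"="VM Network" \
  --defaultStorageRawProfile='(("hostFailuresToTolerate" i1) ("forceProvisioning" i1))' \
  photon.ova 'vi://root@192.168.1.100/'

Note the single quotes around the raw profile string, which keep the embedded double quotes intact when running from a shell.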

The really cool thing about this feature is that you can take advantage of it directly with the new OVFTool argument passthrough feature that was introduced in the VCSA 6.5 CLI deployment utility. Combining the two, you can easily simplify the "bootstrapping" of a VCSA onto a vSAN Datastore. Below is the snippet you would include in the VCSA JSON configuration file used for deployment.

"ovftool.arguments" : {
"defaultStorageRawProfile" : "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
}
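From there, the deployment itself is driven by the VCSA CLI installer as usual; something along these lines, where the JSON filename is a placeholder based on the sample templates shipped on the VCSA 6.5 ISO:

# Run from the vcsa-cli-installer directory of the VCSA 6.5 ISO
vcsa-deploy install --accept-eula --no-esx-ssl-verify embedded_vCSA_on_ESXi.json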

Although it is a tiny enhancement, I think this is a pretty neat capability, especially being able to make use of it natively within the VCSA configuration file. It is definitely great to see us continue to simplify how VMware management infrastructure is deployed, and stay tuned for what else we have cooking in this particular area 🙂