Now that we have a functional PKS deployment, an optional but very useful add-on to deploy and integrate with PKS is the VMware Harbor solution. Harbor is an enterprise-class container registry that customers can run within their own datacenter to securely store and provide access to the container images used by their development teams. The process of deploying Harbor is similar to PKS: you download the Harbor Tile from Pivotal Network, import it into Ops Manager and then configure and deploy it using the same interface.

If you missed any of the previous articles, you can find the complete list here:

Step 1 - Download the Harbor Tile from here and then import the file (harbor-container-registry*.pivotal) using Ops Manager. Once it has been imported, add the Tile to the Installation Dashboard and click on the Harbor Tile to begin the configuration.


Step 2 - In this first section, we will define the AZ and network used to deploy the Harbor VM. This will be AZ-Management and pks-mgmt-network as outlined in the screenshot below.


Step 3 - In this next section, we must define the DNS hostname that we want to use for our Harbor VM. This DNS entry must be resolvable by your PKS management infrastructure. You can create the DNS entry after BOSH has deployed Harbor, once you know which IP Address was selected.


Step 4 - In this section, we need to create a certificate for Harbor. Go ahead and specify *.[DNS-DOMAIN-NAME] and click on the generate button.


Step 5 - In this section, we specify the admin password for Harbor.


Step 6 - In this section, we define which authentication source we would like to use to allow users to connect to Harbor from their Docker clients. Since we had already configured UAA when we set up the PKS Control Plane VM, we will go ahead and use that, but you have a few other options to select from.


Step 7 - In this section, we specify the location to store our container images, which can either be the local file system of the Harbor VM or Amazon S3.


Step 8 - The remaining settings: Clair, Notary, Resource Config and Stemcell can all be left at their defaults.

Step 9 - At this point we have completed the Harbor configurations. It is now time to apply the changes and begin the Harbor VM deployment. Return to the Ops Manager home page and then click on the "Apply Changes" button to start.


The deployment will take some time and you can expand the verbose output to get more details. In my environment it took ~25 minutes, so this is a good time to take a coffee, tea or beer break.


Step 10 - Once the deployment has completed successfully, you can click on the Status tab to locate the IP Address that was selected for the Harbor VM and update your DNS record so that the hostname you specified earlier points to that address.
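Once the DNS record is in place, you can quickly verify resolution from the PKS Client VM before proceeding (the hostname below is from my environment; substitute your own Harbor FQDN):

```shell
# Hostname from my environment; substitute your own Harbor FQDN
HARBOR_FQDN=pks-harbor.primp-industries.com

# Confirm the name now resolves to the IP shown in the Status tab
nslookup "$HARBOR_FQDN" || echo "DNS record for $HARBOR_FQDN is not resolvable yet"
```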


Step 11 - Before we can integrate Harbor with PKS, we need to create a trust between the two systems. Navigate back to the Ops Manager home page, click on your user name in the upper right-hand corner and then select Settings.


Step 12 - Select the Advanced tab and then click on "Download Root CA Cert" to download the certificate to your desktop.


Step 13 - Navigate back to the Ops Manager home page and click on the BOSH Tile. Go to the "Security" section and now copy and paste the contents of the certificate you had downloaded in the previous step into the "Trusted Certificates" box and then click Save.


Since we made a change to the BOSH Tile, we need to click on the "Apply Changes" button to update our configuration. This process took ~20 minutes to complete, so it is time for another break 🙂


Step 14 - At this point, we are now ready to start using Harbor! Open up a web browser and specify the DNS hostname of your Harbor instance which should take you to the login page. Login using the "admin" username and the password that you had configured earlier.


Step 15 - Harbor comes with a default project called "library" which you can start using to store your container images, or you can create a new project. In my example, I will use the default. Next, we need to authorize users to publish and consume images stored in Harbor. In the Project configuration, go to the "Members" tab and enter a user from the authentication source you set up in Step 6. In my case, this is the user I had created in PKS's UAA system, lamw, and Harbor should automatically pick up the user as you type. You then select a role: Project Admin, Developer or Guest. Since we want to be able to push and pull content, I will specify the Developer role.


Step 16 - For Docker clients to be able to push content into Harbor, they also need to trust the same certificate that was downloaded earlier in Step 12. Copy the certificate, which we can simply name ca.crt, to the PKS Client VM. Next, run the following commands to install the Docker client (if you do not already have it installed) and create a directory whose name maps to the DNS hostname of your Harbor instance:

apt install -y docker.io
mkdir -p /etc/docker/certs.d/pks-harbor.primp-industries.com

Next, move the ca.crt file into /etc/docker/certs.d/pks-harbor.primp-industries.com by running the following command:

mv ca.crt /etc/docker/certs.d/pks-harbor.primp-industries.com

For the Docker client to pick up the certificate, go ahead and restart the service by running the following commands:

systemctl stop docker
systemctl start docker
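Before attempting a login, you can sanity-check that the certificate landed in the expected location and is still valid (the path assumes the Harbor hostname used in my environment):

```shell
# Hostname from my environment; substitute your own Harbor FQDN
HARBOR_FQDN=pks-harbor.primp-industries.com
CERT=/etc/docker/certs.d/$HARBOR_FQDN/ca.crt

if [ -f "$CERT" ]; then
    # Print the subject and expiration date of the installed CA certificate
    openssl x509 -in "$CERT" -noout -subject -enddate
else
    echo "Missing $CERT, re-check the directory and mv commands above"
fi
```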

Step 17 - If everything was successful, you should be able to log in to Harbor by running the following command and specifying the username/password that you configured:

docker login pks-harbor.primp-industries.com


Step 18 - At this point, we are now ready to push images into our private container registry. Below are four container images which I have already pulled down and which we want to make available directly within Harbor. If you have not pulled them yet, you can do so by running the following commands:

docker pull mreferre/yelb-ui:0.3
docker pull mreferre/yelb-appserver:0.3
docker pull mreferre/yelb-db:0.3
docker pull redis:4.0.2


Step 19 - We now need to tag the images so that each tag contains the destination Harbor instance, the project name and finally the name of the image. As you can see from the commands below, we specify the DNS hostname of our Harbor instance and, because we are using the default project called "library", we append that along with the image name and version.

docker tag mreferre/yelb-ui:0.3 pks-harbor.primp-industries.com/library/yelb-ui:0.3
docker tag mreferre/yelb-appserver:0.3 pks-harbor.primp-industries.com/library/yelb-appserver:0.3
docker tag mreferre/yelb-db:0.3 pks-harbor.primp-industries.com/library/yelb-db:0.3
docker tag redis:4.0.2 pks-harbor.primp-industries.com/library/redis:4.0.2
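Since the four tag commands above all follow the same pattern, they can also be expressed as a short loop. The sketch below prints the commands as a dry run (the Harbor hostname is from my environment); remove the echo to actually execute them:

```shell
# Harbor hostname and project from my environment; substitute your own
HARBOR=pks-harbor.primp-industries.com
PROJECT=library

for img in mreferre/yelb-ui:0.3 mreferre/yelb-appserver:0.3 mreferre/yelb-db:0.3 redis:4.0.2; do
    # Strip the source repo prefix, keeping name:tag (mreferre/yelb-ui:0.3 -> yelb-ui:0.3)
    target="$HARBOR/$PROJECT/${img##*/}"
    # Printed as a dry run; remove the echo to run the tag commands for real
    echo docker tag "$img" "$target"
done
```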


Step 20 - Now we are ready to push these images into our private container registry by running the following commands, which simply reference the tag names we used in the previous step:

docker push pks-harbor.primp-industries.com/library/yelb-ui:0.3
docker push pks-harbor.primp-industries.com/library/yelb-appserver:0.3
docker push pks-harbor.primp-industries.com/library/yelb-db:0.3
docker push pks-harbor.primp-industries.com/library/redis:4.0.2
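You can also confirm the pushes from the command line by querying Harbor's REST API. This sketch assumes Harbor's v1 API and that "library" is project ID 1, which is the case on a fresh install; substitute your own hostname and admin password:

```shell
# Hostname from my environment; substitute your own, plus your admin password
HARBOR=pks-harbor.primp-industries.com

# List repositories in the default "library" project (project ID 1 on a fresh
# install); --cacert reuses the same Ops Manager root CA downloaded earlier
curl --cacert ca.crt -u admin:YOUR-ADMIN-PASSWORD \
    "https://$HARBOR/api/repositories?project_id=1" || echo "Could not reach the Harbor API"
```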


Once the upload has finished, we can navigate back to the Harbor UI, where we should see our four container images residing in Harbor.


Step 21 - We can now re-deploy our application, but rather than using images from Docker's public registry, which requires internet access, we can refer to our Harbor instance. I have created an updated YAML called yelb-lb-harbor.yaml, which you can download here, that refers to our Harbor instance for the container images. Below is a quick diff between yelb-lb.yaml and yelb-lb-harbor.yaml showing the difference in the location of the container images. You will obviously want to update the names based on what you have deployed in your own environment.
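The change between the two YAML files is mechanical: each image reference gains the Harbor hostname and project prefix. Under that assumption, you could even generate the new file with a one-liner; the heredoc below only simulates an excerpt of yelb-lb.yaml for illustration, and the hostname is from my environment:

```shell
# Excerpt standing in for yelb-lb.yaml (assumed structure, for illustration only)
cat > yelb-lb.yaml <<'EOF'
        image: mreferre/yelb-ui:0.3
        image: redis:4.0.2
EOF

# Prefix every image reference with the Harbor hostname and "library" project,
# then drop the now-redundant mreferre/ source repo prefix
sed -e 's|image: |image: pks-harbor.primp-industries.com/library/|' \
    -e 's|library/mreferre/|library/|' yelb-lb.yaml > yelb-lb-harbor.yaml

cat yelb-lb-harbor.yaml
```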


This concludes our PKS getting started series; I hope you have enjoyed it as much as I have. I know the PKS team is hard at work on adding even more functionality and enhancements, so definitely keep an eye out for future updates from both Pivotal and VMware. I may follow up with one additional blog post (based on free time) around automating the deployment of a basic vSphere and NSX-T infrastructure (leveraging Nested ESXi) to help folks with PKS evaluation and learning.
