Deploying Kubernetes with vRealize Automation (reloaded)
As Kubernetes is taking over the container world and establishing itself as the de-facto standard, we face a high demand from our enterprise customers to consume self-service clusters. Offering a Kubernetes service at scale requires a strongly standardized and automated setup. Can vRealize Automation meet these requirements?
One Ring to Free Them All
I’m currently involved in evaluating Kubernetes frameworks (CFCR, PKS, OpenShift) and found on Twitter this excellent blog post by Mark Brookfield (aka @virtualhobbit).
This was the starting point for me to deep-dive into the blueprint for vRA. It worked out of the box, but it was not customized to use the vSphere Cloud Provider (VCP), a disadvantage if you want to introduce persistent volumes.
This blog post adds the necessary steps to enable vSphere storage classes and introduces a sample application (WordPress) so you can try it out directly. I wrote it to give some of my experience with the vSphere Cloud Provider back to the VMware community, which selected me as a #vExpert for the second year in a row. Thanks a lot, guys!

Gathering of Virtual Hobbit and Virtual Gandalf
My first blueprint looks exactly like Mark’s (no change):

The only change needed is in the “configure” step of the “Kubernetes” software component, which Mark defined as follows:
#!/bin/bash

# Set a proper PATH, 'cos vRA um, doesn't...
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Checks if this host is the Master
if [ $role == "Master" ]; then
    echo "This host is the master, running kubeadm init"

    # Enable and start the services
    /usr/bin/systemctl enable kubelet
    /usr/bin/systemctl start kubelet

    # Disable system swap
    /usr/sbin/swapoff -a

    # Initialize cluster
    /usr/bin/kubeadm init

    # Get Kubernetes version
    export KUBECONFIG=/etc/kubernetes/admin.conf
    export kubever=$(kubectl version | base64 | tr -d '\n')

    # Install pod network
    /usr/bin/kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

    # Export token for joining the cluster
    nToken=$(kubeadm token create --print-join-command)
    export nToken
else
    echo "This host is a node, joining Kubernetes cluster"

    # Disable system swap
    /usr/sbin/swapoff -a

    # Joining node to cluster
    $nTokenN
fi
My adapted version looks as follows. Due to the limitations of this WordPress installation I will comment on the changes separately at the end:
#!/bin/bash

# Set a proper PATH, 'cos vRA um, doesn't...
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Checks if this host is the Master
if [ $role == "Master" ]; then
    echo "This host is the master, running kubeadm init"

    # Enable and start the services
    /usr/bin/systemctl enable kubelet
    /usr/bin/systemctl start kubelet

    # Disable system swap
    /usr/sbin/swapoff -a

    # enable vsphere provider
    /usr/bin/kubeadm init #--config /etc/kubernetes/manifests/kubeadm.conf

    # Get Kubernetes version
    export KUBECONFIG=/etc/kubernetes/admin.conf
    export kubever=$(kubectl version | base64 | tr -d '\n')

    # Install pod network
    /usr/bin/kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

    # Export token for joining the cluster
    nToken="$(kubeadm token create --print-join-command)"
    export nToken
    echo "nToken: $nToken"

    # enable vsphere provider
    cat <<EOF > /etc/kubernetes/manifests/vcp_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-cloud-provider-secret
  namespace: vmware
type: Opaque
data:
  # base64 encode username and password
  # vc_admin_username ==> echo -n 'Administrator@vsphere.local' | base64
  # vc_admin_password ==> echo -n 'Admin!23' | base64
  # vcp_username      ==> echo -n 'vcpuser@vsphere.local' | base64
  # vcp_password      ==> echo -n 'Admin!23' | base64
  vc_admin_username:
  vc_admin_password:
  vcp_username:
  vcp_password:
stringData:
  vc_ip: ""
  vc_port: "443"
  # datacenter is the datacenter name in which Node VMs are located.
  datacenter: ""
  # default_datastore is the default datastore VCP will use for provisioning volumes using storage classes/dynamic provisioning
  default_datastore: ""
  # node_vms_folder is the name of the VM folder where all node VMs are located or to be placed under vcp_datacenter. This folder will be created if not present.
  node_vms_folder: ""
  # node_vms_cluster_or_host is the name of the host or cluster on which node VMs are located.
  node_vms_cluster_or_host: ""
  # vcp_configuration_file_location is the location where the VCP configuration file will be created.
  # This location should be mounted and accessible to the controller pod, api server pod and kubelet pod.
  vcp_configuration_file_location: "/etc/kubernetes/cloud-config.yaml"
  # kubernetes_api_server_manifest is the file from which the api server pod takes parameters
  kubernetes_api_server_manifest: "/etc/kubernetes/manifests/kube-apiserver.yaml"
  # kubernetes_controller_manager_manifest is the file from which the controller manager pod takes parameters
  kubernetes_controller_manager_manifest: "/etc/kubernetes/manifests/kube-controller-manager.yaml"
  # kubernetes_kubelet_service_name is the name of the kubelet service
  kubernetes_kubelet_service_name: kubelet.service
  # kubernetes_kubelet_service_configuration_file is the file from which the kubelet reads parameters
  kubernetes_kubelet_service_configuration_file: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
  # configuration backup directory
  configuration_backup_directory: "/configurationbackup"
  # rollback value: off or on
  enable_roll_back_switch: "off"
EOF

    # deploy vSphere pod and daemon sets
    curl https://raw.githubusercontent.com/vmware/kubernetes/enable-vcp-uxi/vcp_namespace_account_and_roles.yaml \
        | kubectl apply -f -
    kubectl apply -f /etc/kubernetes/manifests/vcp_secret.yaml
    curl https://raw.githubusercontent.com/vmware/kubernetes/enable-vcp-uxi/enable-vsphere-cloud-provider.yaml \
        | kubectl apply -f -
    # enable vsphere provider
else
    echo "This host is a node, joining Kubernetes cluster"

    # Disable system swap
    /usr/sbin/swapoff -a

    # enable vsphere provider
    /bin/sed -i "s/ --kubeconfig=\/etc\/kubernetes\/kubelet.conf/ --kubeconfig=\/etc\/kubernetes\/kubelet.conf --cloud-provider=vsphere/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # enable vsphere provider

    # Joining node to cluster
    echo "executing nTokenN $nTokenN"
    $nTokenN
fi
As it is written…
The additions were made according to the following manuals:
https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/existing.html
https://github.com/vmware/kubernetes/blob/enable-vcp-uxi/README.md
The lines between the “# enable vsphere provider” comments were added; they enable the vSphere Cloud Provider (VCP). Once this is successfully applied, you will be able to create persistent volumes, either manually or via dynamic provisioning. Let me walk you through a sample application to test the functionality.
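For completeness, this is roughly what the manual path looks like: a statically provisioned volume pointing at an existing VMDK. This is only a sketch, assuming the VMDK (here kubevols/my-disk.vmdk on the [NFS-2] datastore) has already been created; name, size and path are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-vsphere-pv                         # illustrative name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[NFS-2] kubevols/my-disk.vmdk"   # pre-created VMDK, reachable from all nodes
    fsType: ext4

In the rest of this post I will stick to the dynamic path via storage classes, which is far more convenient.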
The Fellowship of the Ring
The first check is the one Mark already proposed in his blog post:
SSH as root to the master and perform:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
root000341   Ready               10m       v1.9.4
root000342   Ready               10m       v1.9.4
root000343   Ready               10m       v1.9.4
root000344   Ready               10m       v1.9.4
Now let’s check whether the vSphere provider was successfully deployed, as stated in the docs:
kubectl get pods --namespace=vmware
NAME                   READY     STATUS    RESTARTS   AGE
vcp-daementset-3cgss   1/1       Running   0          6m
vcp-daementset-b0sn2   1/1       Running   0          6m
vcp-daementset-dc109   1/1       Running   0          6m
vcp-daementset-nzsvb   1/1       Running   0          6m
vcp-daementset-q356x   1/1       Running   0          6m
vcp-manager            1/1       Running   0          7m
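If you want an additional sanity check that the configurator really wired in the cloud provider, you can grep for the flag in the files referenced by the secret above (on the master) and in the kubelet drop-in the blueprint patches (on the nodes). This is just my own quick check, not an official procedure:

# on the master: api server and controller manager manifests should reference vsphere
grep -i "cloud-provider" /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml

# on each node: the kubelet drop-in patched by the blueprint should carry the same flag
grep -i "cloud-provider" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf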
If these tests have passed, you can use my templates from GitHub. First take https://raw.githubusercontent.com/swisscom/ac-caas-wordpress-sample/master/wp-internet-registries/vsphere-storage-class.yaml and adapt “datastore:” to a valid vSphere datastore name that is accessible from all nodes. Create the storage class by issuing:
kubectl apply -f vsphere-storage-class.yaml
Now you have defined a storage class with the name “ds2000”. Whenever you create a persistent volume claim referencing that name, the given datastore will be used to dynamically create a VMDK file holding your persistent storage; a minimal claim is sketched below. Let’s use this capability by deploying a WordPress installation with a MySQL database, both using persistent volumes to store data and images/plugins.
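Such a claim could look roughly like the following sketch (the claim name and size are illustrative; the WordPress and MySQL manifests used below already bring their own claims, so you do not need to apply this one):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim              # illustrative name
spec:
  storageClassName: ds2000      # the storage class created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi              # illustrative size

First you have to create a secret that holds the password of the MySQL database: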
kubectl create secret generic mysql-pass --from-literal=password=YOUR_PASSWORD
Afterwards you can deploy MySQL and WordPress:
curl https://raw.githubusercontent.com/swisscom/ac-caas-wordpress-sample/master/wp-internet-registries/mysql-wordpress.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/swisscom/ac-caas-wordpress-sample/master/wp-internet-registries/wordpress-web.yaml | kubectl apply -f -
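While the pods are starting you can watch the claims bind and the volumes being attached; nothing special here, just plain kubectl:

# the claims should reach STATUS "Bound" once the VMDKs are provisioned
kubectl get pvc

# watch the pods come up (Ctrl+C to stop watching)
kubectl get pods -w

# if a pod hangs in ContainerCreating, the events usually tell you why
kubectl get events --sort-by=.metadata.creationTimestamp | tail -n 20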
It will take some time (I usually get a timeout mounting the volume at first), but in the end it should look like this:
kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
wordpress-c967fb584-fg4rj          1/1       Running   0          3m
wordpress-mysql-7b8cf75944-jkrrb   1/1       Running   0          8m
You can check the deployment of a pod like this:
kubectl describe po wordpress-mysql-7b8cf75944-jkrrb
or get the logs with
kubectl logs wordpress-mysql-7b8cf75944-jkrrb
And what about this:
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                 STORAGECLASS   REASON    AGE
pvc-017925a6-290a-11e8-9334-005056964cc4   20Gi       RWO            Delete           Bound     default/wp-wp-pv      ds2000                   1h
pvc-5dda3d5e-2909-11e8-9334-005056964cc4   20Gi       RWO            Delete           Bound     default/wp-mysql-pv   ds2000                   1h
This shows the persistent volumes, and you can even zoom in with this command:
kubectl describe pv pvc-017925a6-290a-11e8-9334-005056964cc4
Name:            pvc-017925a6-290a-11e8-9334-005056964cc4
Labels:          <none>
Annotations:     kubernetes.io/createdby=vsphere-volume-dynamic-provisioner
                 pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/vsphere-volume
StorageClass:    ds2000
Status:          Bound
Claim:           default/wp-wp-pv
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        20Gi
Message:
Source:
    Type:               vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:         [NFS-2] kubevols/kubernetes-dynamic-pvc-017925a6-290a-11e8-9334-005056964cc4.vmdk
    FSType:             ext3
    StoragePolicyName:
Events:
Here you can even see the physical path on your vSphere datastore; go and check whether you can find the files. And keep an eye on them to see whether they get properly deleted after you tear the application down again!
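One detail from my tests worth spelling out: the VMDKs are only reclaimed when the claims are deleted, because the dynamically provisioned volumes carry the reclaim policy Delete; removing the pods alone leaves the disks in place. Later, when you are finished with the whole walkthrough (we still need the WordPress deployment for the ingress part below), a full teardown of the sample roughly looks like this (a sketch using the claim names shown above):

# remove the sample application again
curl https://raw.githubusercontent.com/swisscom/ac-caas-wordpress-sample/master/wp-internet-registries/wordpress-web.yaml | kubectl delete -f -
curl https://raw.githubusercontent.com/swisscom/ac-caas-wordpress-sample/master/wp-internet-registries/mysql-wordpress.yaml | kubectl delete -f -

# if any claims are left over, deleting them triggers the Delete reclaim policy
kubectl get pvc
kubectl delete pvc wp-wp-pv wp-mysql-pv

# the persistent volumes (and the files in kubevols/) should disappear shortly after
kubectl get pv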
OK, this post is getting lengthier all the time. I think for K8s experts we’re done at this point, but in the next blog post I will come back with a very interesting addition for you: automated micro-segmentation and creation of ingress services based on NSX-V. Don’t miss it!
The Two Towers
If you have reached this line, you want to see something of your very first WordPress installation on K8s!
Ok, if you check
kubectl get services
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes        ClusterIP   10.96.0.1                     443/TCP    2h
wordpress         ClusterIP   10.110.20.144                 80/TCP     1h
wordpress-mysql   ClusterIP   None                          3306/TCP   1h
you’ll see that the service is deployed, but only with an internal cluster IP. How can we make it accessible? I cannot explain the whole theory here, but we have to create an ingress, and for vSphere installations the recommended way is to use nginx-ingress together with an external load balancer. If you’re using NSX-V together with vRA, I would propose going down this path.
Make a copy of the original blueprint, call it Kubernetes II, and just add the load balancer to the picture.

Choose the minions as workers, use the same network for both arms of the load balancer, and create a service HTTP 80 delegating to HTTP 30080 on the nodes. You do not have to set very specific settings; this is only a predefined test and really not production-ready. Once this blueprint is ready, just create a request with it and let it run. Meanwhile you can try out the next steps on the original cluster: yes, from now on you already have two completely independent K8s clusters.
If you like reading manuals, the nginx-ingress deployment guide should not be ignored:
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md
We’ll deploy the variant with RBAC, which should be easily done by performing:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml \
    | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml \
    | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \
    | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml \
    | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml \
    | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml \
    | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml \
    | kubectl apply -f -
This is the basic runtime system for ingress through the nginx engine.
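Before wiring up our service it is worth checking that the controller and the default backend actually came up; these manifests put everything into the ingress-nginx namespace:

kubectl get pods --namespace=ingress-nginx
kubectl get services --namespace=ingress-nginx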
Now we have to expose our WordPress service by specifying a service with a NodePort:
curl https://raw.githubusercontent.com/swisscom/ac-caas-wordpress-sample/master/wp-internet-registries/ingress-service.yaml | kubectl apply -f -
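For orientation, the service defined in that file follows the usual NodePort pattern. A sketch of the idea (the name and labels in my repository may differ, but the fixed nodePort 30080 is what the load balancer arm configured above relies on):

apiVersion: v1
kind: Service
metadata:
  name: wordpress-nodeport        # illustrative name
spec:
  type: NodePort
  selector:
    app: wordpress                # must match the labels of the WordPress pods
  ports:
    - port: 80                    # cluster-internal port
      targetPort: 80              # container port of the WordPress pod
      nodePort: 30080             # fixed port exposed on every node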
In the end you can expose these NodePorts with an ingress; download the file and adapt the hostname!
curl -O https://raw.githubusercontent.com/swisscom/ac-caas-wordpress-sample/master/wp-internet-registries/http-svc-ingress.yaml
Please adapt the FQDN to your choice!
kubectl apply -f http-svc-ingress.yaml
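If you are curious what you just applied: the ingress essentially maps your hostname to the exposed service, roughly along these lines (a sketch in the extensions/v1beta1 API of this Kubernetes generation; the exact backend service name in my file may differ):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test                      # the name you will see in "kubectl get ingress" below
spec:
  rules:
    - host: wordpress.cloudlab.local    # replace with your own FQDN
      http:
        paths:
          - path: /
            backend:
              serviceName: wordpress    # the service exposing WordPress
              servicePort: 80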
(Sorry, you may have some issues with some of the files; in that case please download them and apply the local file instead…)
So, at this point you should have an accessible installation even without a load balancer. But how do you access it?
kubectl get ingress --all-namespaces
NAMESPACE   NAME         HOSTS                      ADDRESS           PORTS     AGE
default     nginx-test   wordpress.cloudlab.local   192.168.102.155   80        9s
Now you have to access this node with the given FQDN; if you do not have DNS in place, you can enter it in your local hosts file:
192.168.102.155 wordpress.cloudlab.local
If I enter this well-prepared URL
http://wordpress.cloudlab.local:30080/wp-admin/install.php
in my browser, I get this nice picture:

The Return of the King
And, of course, now you’re only seconds away from the next most useful WordPress blog site! But before you fall asleep, let’s do the real thing:
Repeat all the steps on the second cluster with the already deployed load balancer. Adapt your FQDN or hosts file to the IP of the load balancer. Now it should work if you query the site on the default HTTP port 80.
And now you can reuse the blueprint to create as many clusters as you want. By making the network a variable, you can even switch networks.
I hope I find the time to demonstrate how you can leverage the capabilities of vRA together with NSX to create rock-solid micro-segmentation, and the day-2 actions for scale-in/scale-out to add or remove nodes. (Maybe Mark is keen on doing some more valuable work?)

As Mark already stated: happy clustering.
Don’t miss the next blog post in which I will share my secrets on how to automate all that stuff!
Hail to the Fellowship
And again: thank you @ericnipro @vCommunityGuy @smitmartijn and all of your teams for your work and what you do for us! Many thanks for letting me be part of the @vExpert class of 2018.
Special thanks to Mark for his excellent blog post that was my inspiration.