Google Cloud Platform Plug-in for VMware vRealize Orchestrator

I am evaluating the features vRealize Automation offers out of the box for hybrid cloud use cases. In this blog post I cover the Google Cloud Platform Plug-in for VMware vRealize Orchestrator, which I recently had the chance to try out. It was developed in a collaboration between VMware and Google to ease workload provisioning to GCP.

During the innovation and planning sprint I took the time to get a first glimpse at the early access announced recently in this Google blog. As I am very interested in hybrid cloud use cases, I had previously tested the vRealize Automation integrations for Azure and AWS and was keen to see how the cooperation with Google would pay off. In this blog you’ll find my takeaways, and you can download ready-to-use blueprints that complement an already excellent user guide. You’ll find the link to register for early access in Shan’s blog, so don’t hesitate and try it out – you only need about 30 minutes to set it up.

Setup Process

The installation is straightforward, as with any vRO plugin. You can follow the user guide to create a project on GCP, but don’t forget that Google keeps your data safe:

Make sure that the highlighted APIs are enabled (the Kubernetes Engine API is not used by the plugin currently, but I enabled it for my extension plans). After you have created the service account as documented in the user guide, check the permissions for this account:

I needed all these permissions (Kubernetes is – again – optional).
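
If you prefer the command line over the Cloud Console, the same preparation can be scripted with gcloud. This is only a rough sketch under my assumptions – the service account name is made up and the role is just an example, so grant the permissions shown above instead:

# Enable the APIs used by the plugin (Kubernetes Engine only for my extension plans)
gcloud services enable compute.googleapis.com container.googleapis.com --project vro-gcp-evaluation

# Create the service account for the vRO connection (the name is an example)
gcloud iam service-accounts create vro-plugin --display-name "vRO GCP plug-in" --project vro-gcp-evaluation

# Grant a role (illustrative – use the permissions from the screenshot above)
gcloud projects add-iam-policy-binding vro-gcp-evaluation \
  --member "serviceAccount:vro-plugin@vro-gcp-evaluation.iam.gserviceaccount.com" \
  --role "roles/compute.admin"

# Download a JSON key for the service account (assuming a key-file based connection setup)
gcloud iam service-accounts keys create vro-gcp-evaluation.json \
  --iam-account vro-plugin@vro-gcp-evaluation.iam.gserviceaccount.com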

After you have successfully established a connection to GCP with the Create Connection workflow, you can install my package with blueprints, resource mappings and resource actions. For your convenience, I have added an import script for the CloudClient.

vRealize Orchestrator Integration

The plugin comes with a set of ready-to-use workflows and a set of scripting classes that can be used in custom workflows. The workflows and actions are implemented very cleanly (everything is delegated under the hood), which is an advantage if you just want to use what is offered:

The provisioned assets are organized hierarchically and kept in the plugin’s inventory under the project connection as the root:

As usual, you can trigger specialized workflows by right-clicking the inventory entities.

vRealize Automation Integration

If you install my package and entitle all the blueprints and actions for your user, you should see a screen full of services that you can use out of the box. Let’s have a look at the instance provisioning:

If you have successfully provisioned some assets, you will find them in your items under the “Google Cloud Platform” tab:

For the GCP:Instance I have added all available resource actions.

Hint: the creation of Resource Mappings is not documented in the user guide, but I think this is a well-known requirement for this type of vRO workflow integration in vRA. My package already contains the resource mappings.

Most of the entities worked without any intervention on my side; for the instance creation I had to add some values to the drop-down box of the script type. Some actions are still missing – for example one for looking up valid script types – but the plugin works quite reliably for an early access release. Good job!

It is possible to create composite blueprints with the GCP assets, but I have not followed that path yet. As a teaser and illustration, here is an untested blueprint:

After successful provisioning you can also see your assets in the Google Cloud Console:

GCP API access

As I wanted to quench my thirst for GKE, I checked out the Kubernetes Engine management API and investigated what is needed to add the missing functionality. I failed to reuse the GCP:Connection scripting object to call a generic REST client. I dived into the code (scripting classes and plugin core) and found out that a very generic approach for REST calls is used which is not accessible without changing the plugin code. I then checked out the documentation on OAuth 2.0 authentication and had no problems getting an access token for the web server scenario (the user gives consent on the IDP). Postman has some nice features that facilitate this task:

Choose OAuth 2.0 as the authorization type and hit “Get New Access Token”.

Enter the needed parameters for getting the access and refresh tokens.

Postman displays the user consent dialog of the IDP (here Google).

And finally, Postman stores the refresh and access tokens, ready to use.
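
Under the hood, Postman just performs the standard authorization code exchange against Google’s token endpoint. For illustration, the equivalent raw call looks roughly like this (all values are placeholders for your own OAuth client):

curl -X POST \
  https://www.googleapis.com/oauth2/v4/token \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'code=<authorization code from the consent redirect>' \
  -d 'client_id=<your OAuth client id>' \
  -d 'client_secret=<your OAuth client secret>' \
  -d 'redirect_uri=<redirect URI registered for the client>' \
  -d 'grant_type=authorization_code'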

This is not a time-consuming challenge at all. But how do you handle OAuth with a service account? Following the documentation you will find out that this is pretty easy, provided you have access to cryptographic libraries for loading the key material and signing a JWT claim properly. Neither requirement is met on vRO – at least to the best of my knowledge.

I added the following Maven dependency to my project:

<dependency>
  <groupId>com.google.auth</groupId>
  <artifactId>google-auth-library-oauth2-http</artifactId>
  <version>0.10.0</version>
</dependency>

and created the following snippet (the JSON and JWT helper classes come from the google-http-client libraries):

import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.client.json.webtoken.JsonWebSignature;
import com.google.api.client.json.webtoken.JsonWebToken;
import com.google.api.client.util.Clock;
import com.google.auth.oauth2.ServiceAccountJwtAccessCredentials;

// Load the service account key file that was downloaded from GCP
ServiceAccountJwtAccessCredentials creds = ServiceAccountJwtAccessCredentials.fromStream(
    getClass().getClassLoader().getResourceAsStream("vro-gcp-evaluation.json"));

// JWT header: RS256 signature, key id taken from the service account key
JsonWebSignature.Header header = new JsonWebSignature.Header();
header.setAlgorithm("RS256");
header.setType("JWT");
header.setKeyId(creds.getPrivateKeyId());

// JWT payload: issuer/subject, requested scope, token endpoint as audience, validity
JsonWebToken.Payload payload = new JsonWebToken.Payload();
long currentTime = Clock.SYSTEM.currentTimeMillis();
// Both copies of the email are required
payload.setIssuer(creds.getClientEmail());
payload.setSubject(creds.getClientEmail());
payload.put("scope", "https://www.googleapis.com/auth/cloud-platform");
payload.setAudience("https://www.googleapis.com/oauth2/v4/token");
payload.setIssuedAtTimeSeconds(currentTime / 1000);
payload.setExpirationTimeSeconds(currentTime / 1000 + 600);

// Sign the claim with the service account's private key to get the assertion
JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();
String assertion = JsonWebSignature.signUsingRsaSha256(
    creds.getPrivateKey(), jsonFactory, header, payload);

This code snippet constructs a valid JWT claim and signs it into a valid assertion. With this string you can authenticate yourself by calling:

curl -X POST \
  https://www.googleapis.com/oauth2/v4/token \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer&assertion=<your assertion string>'

The result looks like this:

{
    "access_token": "<access token for subsequent calls>",
    "token_type": "Bearer",
    "expires_in": 3600
}

With this token I could successfully create and query my GKE clusters:

curl -X GET \
  https://container.googleapis.com/v1beta1/projects/vro-gcp-evaluation/locations/-/clusters \
  -H 'Authorization: Bearer <token>' \
  -H 'Cache-Control: no-cache'
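
The corresponding create call goes against the same endpoint, just with a concrete location and a cluster definition in the request body. A minimal sketch – location, cluster name and node count are only example values; see the GKE REST reference for the full Cluster resource:

curl -X POST \
  https://container.googleapis.com/v1beta1/projects/vro-gcp-evaluation/locations/europe-west1-b/clusters \
  -H 'Authorization: Bearer <token>' \
  -H 'Content-Type: application/json' \
  -d '{ "cluster": { "name": "vro-evaluation-cluster", "initialNodeCount": 1 } }'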

Let’s check the Google Cloud Console again for the Kubernetes Engine:

And – surprise! – you will also find it in the plugin’s inventory, even though it was not provisioned by the plugin itself:

Key findings

The installation was quite easy and the plugin – provided you set up all permissions correctly – ran very stably. The preview release of the plug-in supports the following GCP resources:

  • Compute Engine VM Instances
  • Compute Engine Instance Templates
  • Compute Engine Instance Groups
  • Compute Engine Images
  • Compute Engine Snapshots
  • Compute Engine Disks
  • VPC Networks
  • VPC Network Firewall Rules
  • Cloud Storage Buckets and Storage Objects
  • Some additional deployments such as Spanner, Cloud SQL and SQL Server Enterprise

vRealize Automation integration content is not delivered out of the box, but this is quite a common approach for plugin providers.

I personally find it suboptimal to hide all complexity from the programmer if the code is not fully open source. My current focus is on Kubernetes management, and unfortunately this API is not yet part of the plugin. I looked for a way to extend the workflows, but I did not find a way to reuse the GCP connection and to create additional API calls right away. I am in contact with Shan to learn more about a possible roadmap to extend the functionality towards the other great APIs GCP provides. I’m sure he will keep you posted on the progress.

Open Issues/Tasks

  • I’d like to see support for the most important APIs, particularly the Google Kubernetes Engine.
  • One of the most important drawbacks of Dynamic Types for vRealize Automation is the lack of hierarchical objects, i.e. a GCP storage object should be a child of a GCP bucket. The current implementation of the plugin does not support such hierarchies, but this is more an issue of the vRA Dynamic Types framework that we have encountered in our own implementations as well. We have a pragmatic proposal for how this requirement could be met and are collaborating closely with VMware to find a definitive solution.
    If you have the same requirements, please get in contact with me – I need some more VMware customers voting for this improvement.
  • I will implement an NSX Edge provisioning as soon as possible that allows a VPN connection to GCP. We need to integrate the assets on our VMware stack and our Swisscom services with the services provided by GCP.
  • We will do a deep dive on the synchronization of vRA with the GCP plugin if we want to allow the use of the Google Cloud Console with the same account that is used for the GCP plugin. The plugin inventory gets updated, but the Dynamic Types do not.
  • Multi-tenancy for accounts/billing and the plugin inventory will be tested; this is a crucial point for us as a service provider. I assume this will not be easy to solve and could only be mitigated by introducing a vRO instance per customer or by additional filtering on top of the vRO plugin.