GCP Plugin for vRO: Latest Version and Hierarchical Dynamic Types

Last year, I did a first evaluation of the GCP Plugin and provided additional feedback directly to the plugin developers. This blog shows the great progress that was made with the latest release and provides some contributions from my side to enhance the delivered functionality with hierarchical dynamic types.

Check my previous post for an introduction to the GCP plugin and the release notes for an overview of all additions and improvements.

Compare Versions

By Inventory View

The right-hand side depicts the new entities and services highlighted in yellow. The GCP plugin development team has achieved quite a lot in the last six months. All these inventory objects come together with specialized and attached workflows, as well as workflows to generate the necessary vRA deployment artefacts. We were forced to use a proxy in our environment, filed the requirement, and only a few days later we got an update with a full-fledged proxy configuration – just in time for an important demo.

Kubernetes (GKE Clusters)

As you can see from my first inventory view, I used GKE Clusters right from the start, but I had to deploy them through the GCP web console. They were reflected in the plugin, however, as instance groups and instances. This feature was added very quickly after I asked for it and is quite handy to use in vRA.

Generic REST Call

As a fallback for missing functionality, I requested a generic REST call – leveraging the existing connection (credentials/OAuth token). This feature allows vRO developers to use the full Google API directly to enrich or implement features in vRO workflows. Let me provide a sample workflow to show the capabilities for vRA XaaS resource action developers. Here is some quick-and-dirty output:

These calls could be used to attach the worker nodes as child elements to each cluster, or to download the kube config for the cluster.
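To illustrate the idea, a sketch of how such a call could target the Kubernetes Engine API is shown below. The URL builder is my own helper, and the project, zone and cluster names are placeholders; the plugin method that actually executes the request against the existing connection is not shown, since its exact signature depends on the plugin version.

```javascript
// Build the Kubernetes Engine API URL for a given cluster; the generic
// REST call of the plugin could then fetch this resource using the
// connection's OAuth token. All names below are illustrative placeholders.
function buildClusterUrl(projectId, zone, clusterId) {
    return "https://container.googleapis.com/v1/projects/" + projectId +
           "/zones/" + zone + "/clusters/" + clusterId;
}

var url = buildClusterUrl("my-project", "europe-west1-b", "demo-cluster");
// Pass "url" to the plugin's generic REST call; the response JSON
// contains, among other things, the cluster endpoint and node pools.
```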

Deployment of vRA Integration Artefacts

Integration of a vRO plugin into vRA doesn’t come for free. You have to generate a lot of XaaS custom resources, blueprints and actions to make it usable. The GCP plugin team has put some effort into easing these deployment needs by generating the necessary types. I think there is still some work left to be done in this regard, especially when it comes to the integration within the new vRA 7.4/7.5 layout.

Hierarchical Dynamic Types

One of the drawbacks of the vRO plugin integration into vRA is the lack of hierarchical dynamic types that would allow building the same tree structures that the plugin natively has in vRO. I already noted this in my previous post. There you can find the “Google Cloud Platform” tab as it looks in vRA < 7.4.

As we had this issue in our own projects as well, I looked for a solution and found a quite interesting approach to meet this requirement (here vRA < 7.4):

vRA supports hierarchies in the dynamic type tabs out of the box if the parent catalog resource id is correctly set. From vRA 7.4 onwards the dynamic type tabs were dropped completely, and any provisioned dynamic type appears like a normal blueprint deployment entry. Thus, the need for hierarchies will grow even more, because you are losing all the relations between the single assets.

Enabling Dynamic Types Hierarchies

Please install this package in vRO and start the workflow “InstallLibrary”. It works only if your vRO has SSH access to the vRA instance. It will grab the solution user OAuth client and its password and put them into a configuration element in vRO. In a second step, a new RESTHost will be created, which is used for working with provider resources.

vRA has a nice microservice architecture, and the core is extended by provider services. Each plugin is such a provider and will create/delete provider resource entries in the database on provisioning/removal. These provider resources are the input to create the vRA catalog resources that are visible in vRA. The solution user has the right to create or update catalog resources from provider resources – that’s where my solution approach started. This way it is possible to set the parent resource – or manipulate other attributes like the deployment name – without messing directly with the database.
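The update itself boils down to a small JSON body sent against the catalog resource. A minimal sketch of how such a payload could be assembled is shown below; the field name parentResourceRef follows the vRA 7.x catalog-service resource model, but both the helper and the exact schema are assumptions to verify against your vRA version. The actual request would go through the solution-user RESTHost created by “InstallLibrary”.

```javascript
// Sketch: build the body fragment for updating a catalog resource so
// that it points to its parent. "parentResourceRef" follows the vRA 7.x
// catalog-service model; verify the schema against your vRA version.
function buildParentUpdate(resourceId, parentId, parentLabel) {
    return {
        id: resourceId,
        parentResourceRef: {
            id: parentId,
            label: parentLabel
        }
    };
}

var body = buildParentUpdate("child-resource-id", "gcp-connection-id", "GCP Connection");
// JSON.stringify(body) would then be sent via the solution-user RESTHost
// to the catalog-service resource endpoint.
```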

To test my solution with the GCP plugin, I picked the following use cases:

GCP Connection is the top-level object (as in the plugin inventory) and is provisionable through an XaaS blueprint entitled in the catalog.

All subsequent provisioning of entities on the connection is performed as child elements of this connection. Thus, the creation of other entities is always a resource action of the connection or even below.

First use case: create a GKE cluster as child of the connection.

Second use case: create a bucket below the connection and a storage object below the bucket.

Changes needed for second-level objects (directly below the connection):

Add an async workflow element with my workflow “EstablishParentChildRelation”.
If you want to go a level deeper, you need to know that vRA actions pass only one parent relation to the workflows. The storage object workflow needs the connection and the bucket as input, but only the bucket is provided. Change the workflow and make the connection an optional field (workflow designer, presentation – select connection, set mandatory input to false).

Now you have to instantiate the connection if it is not given as input. Luckily, the GCP plugin uses a nice naming convention within the field “dunesId”: you’ll find the hierarchy nicely put, comma-separated.

if (!connection) {
    // Look up the bucket's provider resource attribute "dunesId"; it holds
    // the object hierarchy comma-separated, connection id first.
    var bucketDunesId = System.getModule("com.swisscom.gcp.addons").getProviderResource(
            bucket, "dunesId");
    var parentUri = "dunes://service.dunes.ch/CustomSDKObject?id='" + bucketDunesId.split(",")[0] + "'&dunesName='GCP:Connection'";
    connection = Server.fromUri(parentUri);
}

As you can see, I make use of the magical Server.fromUri call with the proprietary dunes service URIs.
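The URI construction generalizes nicely into a small helper. This wrapper is my own addition, not part of the plugin: it simply picks the first element of the comma-separated dunesId hierarchy and wraps it in the dunes URI scheme used above.

```javascript
// Build a dunes URI for the top-level ancestor encoded in a dunesId.
// The dunesId holds the object hierarchy comma-separated, with the
// connection id as its first element.
function parentDunesUri(dunesId, dunesName) {
    var topLevelId = dunesId.split(",")[0];
    return "dunes://service.dunes.ch/CustomSDKObject?id='" + topLevelId +
           "'&dunesName='" + dunesName + "'";
}

// In a workflow (vRO-specific, not runnable outside vRO):
// connection = Server.fromUri(parentDunesUri(bucketDunesId, "GCP:Connection"));
```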

Now we need some changes in the vRA XaaS definitions. First, the GCP Connection blueprint that is created by the Google plugin can be used without changes.

For the resource actions, I had to adapt every tested action, because the check box “Provisioning” was not set correctly:

If this box is not set, the workflow will be triggered successfully, but no representation in vRA will be created at all. The same is true for the “Disposal” check box if you want to implement a delete action: in this case the resource would be deleted on GCP, but the representation in vRA would be left intact. In addition, a “Disposal” is applied to the element it is attached to – if you attach an action “Delete Storage Object” to the Connection, the workflow will delete the storage object on GCP and the vRA representation of the GCP Connection – and that is not really intended.

After having made these adaptations for GKE Cluster, Bucket and Storage Object, I could finally achieve this view in the new vRA layout: everything is grouped below the GCP Connection:

And in the deployment detail you will find the full functional tree with enabled resource actions for each asset:


If you need to enforce strong multi-tenancy, you cannot grant XaaS architect roles to normal users. Creating new XaaS blueprints other than the GCP Connection will create multi-tenancy breaches: the list boxes for GCP plugin objects are not multi-tenanted. This could only be improved by filtering the content of these controls by the execution context (tenant, business group). I think this is quite hard to achieve, if possible at all.

Wrap up

The GCP plugin for vRO development team has shown very good velocity and has been very supportive of active users. I think the plugin is a very good choice for integrating GCP within an enterprise. For a service provider, the vRO plugin limitations are quite hard to overcome, but it is worth checking out depending on the roles that are given to customers.

Also important to mention are the capabilities of the new Cloud Assembly service from VMware – the successor of vRA 7.x – which should be available as an on-site offering as well. I assume that GCP will be fully integrated as a first-class citizen, like AWS and Azure.