vRealize Automation Cloud – Code Stream

This blog post in my series on vRealize Automation Cloud covers Code Stream. I will walk you through the pipelines we are using in our breakout session HBO3559BE (US: HBO3559BU) at VMworld Barcelona, putting together the pieces I have laid out in my previous posts.

Code Stream

For an introduction to Code Stream I recommend the official documentation. Here I want to focus on the build pipeline we are using for our demo at VMworld:

 

 

Demo Pipeline with 2 Stages

The DEV stage triggers a Jenkins build that publishes a Docker image and deploys it on a PKS cluster. At the end, the stage waits until the newly deployed application is up and running.

The INT stage creates a project and subscriptions in another vRAC organization (set up) and triggers integration tests within this project (test). After successful completion, a mail is sent out to allow manual testing in the created environment. When the user approves the "User Task", the project gets deleted (tear down).

These tasks are built on the supported integrations (i.e. the task types) and have clearly defined input and output data structures. You can define conditions for starting a task and for determining its success. Furthermore, you can define another pipeline to perform rollbacks if needed, as well as notifications for different outcomes. The integrations rely on endpoints configured with URLs and credentials; you simply reference an endpoint in the build task.

 

 

Endpoints used in the sample pipeline

Build tasks always come with the need to store variables. You can manage them on a global level (within the vRAC organization):

 

 

Global variables

Or you can hold them per pipeline:

 

 

Pipeline variables, here injected via GIT webhook

I will expand on the supported integrations in the following paragraphs, explaining both the flow of this sample pipeline and the integration being used in each step.

Jenkins Build

 

 

Jenkins Task Type

The first step runs on our Jenkins server, triggering the job "cas-event-subscriber". This is a Maven project that checks out the sources from GIT, does a full Maven build with tests, and finishes with a Docker image build step that builds and pushes a new image version to the Docker registry.
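In shell terms, the Jenkins job boils down to roughly the following steps (a minimal sketch; the registry, image name, and version variable are placeholders, not the actual job configuration):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical values for illustration only
REGISTRY="registry.example.com"
IMAGE="cas-event-subscriber"
VERSION="${BUILD_NUMBER:-dev}"   # Jenkins injects BUILD_NUMBER for each run

# Full Maven build including the tests (sources were checked out from GIT before)
mvn clean verify

# Build the new image version and push it to the Docker registry
docker build -t "${REGISTRY}/${IMAGE}:${VERSION}" .
docker push "${REGISTRY}/${IMAGE}:${VERSION}"
```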

PKS Deployment

 

 

Kubernetes Task with local yaml definition

The next steps delete the previous deployment and recreate it with the new Docker image*. The Kubernetes cluster references one of the previously defined endpoints and hence doesn’t need any additional configuration. It is also possible to use a K8s YAML file directly from GIT.

*) I tried to do it in one step, but kubectl apply fails if the deployment already exists.
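Expressed as kubectl commands, the two Kubernetes tasks do roughly the following (a sketch; the deployment and file names are hypothetical):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Remove the previous deployment if one exists (hypothetical name)
kubectl delete deployment cas-event-subscriber --ignore-not-found

# Recreate the deployment from the YAML definition that references the new image tag
kubectl apply -f cas-event-subscriber-deployment.yaml
```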

Wait Step

 

 

Simple Script waiting for a response from the application

For all the scripting tasks I used a CI step. For this kind of task, you can define a Docker host and a CI builder image per pipeline. My CI builder image supports bash, so here we go: in this step I implemented a curl call against the web application in a loop that ends when the application responds with an HTTP 200 code.
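The wait script looks roughly like this (a sketch; the health URL and the retry interval are placeholders):

```bash
#!/usr/bin/env bash

# Hypothetical URL of the freshly deployed application
APP_URL="http://cas-event-subscriber.example.com/health"

# Poll until the application answers with HTTP 200
until [ "$(curl -s -o /dev/null -w '%{http_code}' "${APP_URL}")" = "200" ]; do
  echo "Application not ready yet, retrying in 10 seconds..."
  sleep 10
done
echo "Application is up and running."
```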

Set up

 

 

Simple Script with a REST call

"Create Project" and "Create Subscriptions" are tasks that simply call the REST services I explained in detail in my last post. They set up a new project which will be the target for the following task: a Jenkins task executing another Maven project containing the integration tests. The integration testing consists of deploying a released blueprint (Open-Service-Broker-CAS) for each cloud zone that is configured for the test project. The provisioning is triggered via the Open-Service-Broker-CAS instance running on PKS itself. This way we test the proper setup of the created project, the cloud-agnostic blueprint on different public and private clouds, and the functionality of the Open-Service-Broker-CAS at the same time. In addition, the ESC-Event-Subscriber is tested during the provisioning and disposal tests as well: we check whether all expected events are received and whether the CMDB is correctly updated.
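To give an idea of what the "Create Project" REST task sends, here is a sketch based on the vRA Cloud IaaS API I described in my last post; URL, token, and payload values are placeholders, not the real configuration:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholder values for illustration only
VRAC_URL="https://api.mgmt.cloud.vmware.com"
ACCESS_TOKEN="<bearer token obtained from the CSP refresh token>"

# Create the temporary project that will receive the integration test deployments
curl -s -X POST "${VRAC_URL}/iaas/api/projects" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"name": "int-test-project", "description": "Temporary project for integration tests"}'
```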

The ESC Event Subscriber is already described in a previous post; the Open Service Broker CAS will come in the next one. Long story short: the Event Subscriber receives all lifecycle events during provisioning and disposal, while the Open Service Broker CAS is an adapter that implements the OSB specification and translates the requests into vRAC API calls.

User Tasks

 

 

User Operation Task-Type

The User Operation task allows you to interrupt a pipeline for manual tasks. In this sample pipeline we used it to enable a manual testing phase after the automated integration tests have completed successfully. At this point you can use the newly set up project for exploratory tests or – in my case – for a demo of the Open Service Broker CAS implementation.

After you are done with the manual task, you can approve it in Code Stream. Once approved, the next step in the pipeline gets executed. In this case it is another REST call, "delete project", which tears down the whole test setup to avoid any leftovers.
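The tear-down boils down to a single REST call along these lines (again a sketch; URL and token are placeholders, and the project ID would come from the output of the "Create Project" task):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Placeholder values for illustration only
VRAC_URL="https://api.mgmt.cloud.vmware.com"
ACCESS_TOKEN="<bearer token>"
PROJECT_ID="<id returned by the Create Project task>"

# Delete the temporary test project so no leftovers remain
curl -s -X DELETE "${VRAC_URL}/iaas/api/projects/${PROJECT_ID}" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}"
```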

Integrations

 

 

GIT Webhook

Pipelines can be triggered in multiple ways. In this case, we configured a GIT webhook that starts a build for each commit in the attached GIT repository and branch.

For the other integrations I will just give the list of task types you can already use in the latest version of vRAC:

  • Artifactory
  • Bamboo
  • Blueprint
  • CI
  • Condition
  • Custom
  • Jenkins
  • Kubernetes
  • PCF
  • Pipeline
  • POLL
  • PowerShell
  • REST
  • SSH
  • TFS
  • UserOperation
  • vRO

Please check out the official documentation in case you want to read more.

Overall Impression

My first contact with Code Stream was with the first versions of Houdini, an add-on that allowed managing the vRA and vRO 7.x artefacts. This aspect of internal asset management is completely missing in the latest version. You can deploy blueprints, yes, but only in the same project that holds the Code Stream pipeline. This did not meet the requirements outlined in this blog post.

On the other hand, Code Stream is very open and comes with a lot of adapters for commonly used build tools. In my experience, developers tend to use their preferred tool chain, and it can be very hard to enforce standardization in a large enterprise. Maybe it is better to let developers decide and change as freely as possible. In the end you will find yourself in the situation where you need to integrate such heterogeneous build systems into an overarching build chain to put all the pieces together. That’s where I see Code Stream making a very convincing offer – especially with the on-prem version: if you have dependencies on in-house systems and do not want to risk exposing credentials in a public cloud, the on-prem vRealize Automation Suite 2019 could be your choice. This is the same decision you have to make with all SaaS build systems anyway.

VMworld Barcelona

If you’re attending, don’t forget to register for the breakout session HBO3559BE. We will demo this functionality live on stage. And stay tuned: the next blog post will cover the Open Service Broker for vRealize Automation Cloud.