Tales from Continuous Integration and Continuous Delivery for VMware vRealize Suite

We have accepted the challenge of building a multi-enterprise, multi-tenant cloud infrastructure. We will be working in large distributed teams in an agile way and establishing a DevOps culture right from the start. Having chosen VMware’s vRealize Suite, we checked out the whole toolbox, including Code Stream and the Management Pack for IT DevOps (a.k.a. Houdini), and found a lot of promising approaches.

Inspiration@VMworld

This session from VMworld Las Vegas shows how converged blueprints can be treated as pure infrastructure as code and continuously deployed to any target system:

DEVOP7674 – vRA, API, CI, Oh My!!!
(Please check the session recordings from VMworld)

The demos look quite good and contain some interesting propositions, but they do not explicitly cover an integrated solution for version control and make no proposition for handling vRealize Orchestrator code. You can hear Ned, our head of technical orchestration, asking the first questions at the end of the session, addressing the most obvious pain points of the vRealize toolset: a lightweight development environment and unit tests on the developer’s laptop. Ned left VMworld Las Vegas with a list of unmet requirements and started a project to enable seamless integration into Eclipse with local debugging capabilities built on top of the VCO-CLI – he will be invited to present his work on this blog.

Development using the out-of-the-box vRealize Orchestrator client is at least a start, but with the complete lack of usable integration with a version control system it doesn’t really merit the "I" in IDE. The JavaScript workflow and action code cannot be properly debugged (only between tasks, not within a workflow step), and it is hardly possible to mock dependencies on external systems, so testing always ends up as full integration testing instead of proper unit testing.
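
Until that gap is closed, the nearest workaround we see is to funnel every call to an external system through one thin action and keep a stub variant of that action module for test runs. A minimal sketch with hypothetical names (the module com.example.infra and the action findVm are made up; real vRO actions are stored as bare function bodies and are written as named functions here only for readability):

```javascript
// Production variant of a hypothetical action com.example.infra/findVm.
// It is the only place that talks to vCenter, so a test build of the
// module can swap it out without touching the calling workflows.
function findVm(vcConnection, name) {
    // Illustrative vCenter plug-in call; check the API explorer for the
    // exact signature in your vRO version.
    var vms = vcConnection.getAllVirtualMachines(null, "xpath:name='" + name + "'");
    return (vms && vms.length > 0) ? vms[0] : null;
}

// Stub variant with the same signature for unit-test runs: it returns a
// canned object and never leaves the vRO sandbox.
function findVmStub(vcConnection, name) {
    return { name: name, powerState: "poweredOn" };
}
```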

In another session we learned how Fannie Mae handles vRA and vRO artefacts in a multi-tenant environment.

MGT8499 – Moving to Infrastructure as Code: How Fannie Mae Is Managing vRealize Suite Artifacts with Code Stream
(Please check the session recordings from VMworld)

Trent TeSelle (@tteselle) did a very good job at Fannie Mae, and his talk at VMworld was very inspiring. I met him at VMworld in Barcelona and was able to ask him many additional questions about his experiences with a real-world application at large scale. However, they handle only the vRA and vRO artefacts with this tool chain. As we are planning additional cloud-native services that do not run directly on vRA or vRO, we will need a CI/CD pipeline around Code Stream and Houdini.

Working with partners

Our development partners showed us how they are usually working on vRO workflows:

[Figure: expand-to-folder]

By expanding a vRO package into a folder on the local file system, a Maven project structure is created. These Maven projects can be checked into Git and combined with the usual dependency mechanisms that Maven offers (groupId, artifactId, version). Our partners selectively checked some files (usually the ones containing the JavaScript code) into Git on the command line and could do file compares on the Git web frontend (pull requests). vRO stores these artefacts in XML format, so the JavaScript ends up inside a CDATA section, which can be disturbing if you expect to see clean code snippets. vRO has an internal versioning system, but when using the Maven project structures you are responsible for adjusting the version numbers manually (maybe there is a plugin that takes care of this?).
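
Because the JavaScript lives inside those CDATA sections, diffs and pull-request views show XML noise around the code. A throwaway Node.js helper can pull the script bodies out for review; this is a sketch that assumes the scripts are wrapped as <script><![CDATA[…]]></script> elements, which you should verify against your own exported files:

```javascript
// extract-scripts.js – toy Node.js helper: prints the CDATA script bodies of
// an expanded vRO artefact so reviewers can diff plain JavaScript. It assumes
// scripts are wrapped as <script ...><![CDATA[ ... ]]></script>, which may
// differ between vRO versions – check your exported files first.
const fs = require('fs');

const xml = fs.readFileSync(process.argv[2], 'utf8');
const cdata = /<script[^>]*><!\[CDATA\[([\s\S]*?)\]\]><\/script>/g;

let match;
while ((match = cdata.exec(xml)) !== null) {
    console.log(match[1].trim());                  // the clean JavaScript body
    console.log('\n// ---- next script block ----\n');
}
```

Called as node extract-scripts.js path/to/SomeAction.xml it prints only the JavaScript, which is much friendlier to review than the raw XML.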

Our development partners test the workflows manually because it is not possible to mock dependencies in the workflows; testing boils down to full integration tests, with the need to set up a test baseline and tear down all constructed objects (VMs, NSX objects) to free the resources. In addition, such integration tests should run in fairly well isolated environments so that they do not interfere with other workloads (parallel tests or even production workloads).
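
To keep such integration test runs repeatable, the tear-down has to run even when a test step fails halfway through. A minimal sketch of the bookkeeping idea in vRO JavaScript, assuming a per-object destroy() helper (real cleanup differs per object type):

```javascript
// Sketch of the clean-up bookkeeping for integration tests. Every object a
// test creates is recorded, and tearDown() is wired to the error path of the
// workflow as well, so VMs and NSX objects are freed even after a failure.
var createdObjects = [];            // e.g. kept in a workflow attribute

function track(obj) {               // call right after each successful creation
    createdObjects.push(obj);
    return obj;
}

function tearDown() {               // destroy in reverse creation order
    for (var i = createdObjects.length - 1; i >= 0; i--) {
        try {
            createdObjects[i].destroy();   // assumed helper, differs per type
        } catch (e) {
            System.warn("Cleanup failed for " + createdObjects[i] + ": " + e);
        }
    }
}
```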

CI/CD concept & requirements

To summarize our requirements:

[Figure: CI/CD concept]

  • We want to have everything software defined -> infrastructure as code
  • We will have our code and build pipelines under version control
  • We will have build, packaging, testing and deployment as automated as possible
  • We will continuously integrate, test, deliver and deploy in any target environment

We have chosen concourse.ci as the overall build orchestration and pipelining platform and are still evaluating other tools for the CI/CD chain.

CI/CD concept: Concourse.ci triggering Houdini

[Figure: Concourse.ci triggering Houdini]

In the first stage (integration) we trigger Houdini via the vRA REST API only to fetch and publish content, because of the lack of VCS integration. We are planning to build and package the vRA and vRO artefacts with the Maven build tools. Once successfully integrated, the binary artefacts can be deployed to the target systems with Houdini – the customization to the target environment (environment-specific properties) has to be developed within concourse.ci.
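
The trigger itself is just a REST call issued from a Concourse task. A minimal Node.js sketch of the idea – the endpoint path and payload below are placeholders, not the documented vRA content management API, so look the real calls up for your vRA version:

```javascript
// trigger-capture.js – toy Concourse task step asking vRA/Houdini to fetch
// content. The path and payload are placeholders, NOT the documented API;
// substitute the real content management calls of your vRA version.
const https = require('https');

const payload = JSON.stringify({ contentTypes: ['blueprint', 'workflow'] });
const req = https.request({
    host: process.env.VRA_HOST,
    path: '/content-management-service/api/<see-docs>',   // placeholder path
    method: 'POST',
    headers: {
        'Authorization': 'Bearer ' + process.env.VRA_TOKEN,
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(payload)
    }
}, (res) => {
    console.log('vRA answered with HTTP ' + res.statusCode);
    process.exit(res.statusCode < 300 ? 0 : 1);
});
req.on('error', (err) => { console.error(err); process.exit(1); });
req.end(payload);
```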

We are considering handling the following content types within our CI/CD pipeline.

vRA (source: the Fannie Mae talk):

  • Blueprint
  • Software
  • Build Profiles
  • Property Definitions
  • Resource Actions
  • Dynamic Types
  • Icons
  • Event Broker Subscriptions

vRO:

  • Workflows
  • Actions
  • Configuration elements
  • Packages

In addition, we want the code for microservices (custom extensions on top of vRA), together with the build and deploy scripts, fully integrated in an overall build pipeline.

Sprint Goals

In this sprint we have set up the tool chain on a lab stack in order to be prepared to start the first development sprints.

[Figure: CI/CD process]

Flow for vRO artefacts:

  1. Expand packages to a directory and import into a topic-specific GitHub branch
  2. Check out from Git into the Concourse.ci build pipelines
  3. Build vRO packages and check for dependencies
  4. Deploy to the test environment (for the master branch: to the CICD master environment)
  5. Perform integration test workflows
  6. If all tests are successful, deploy to the integration environment
  7. Capture from the integration environment (Houdini)
  8. Deploy to all target environments as needed

Steps 1 to 5 can be done on a specific topic branch against the dedicated topic environment.

Flow for vRA artefacts:

  1. Export blueprints (and other content) via REST to disk and import into a topic-specific GitHub branch
  2. Check out from Git into the Concourse.ci build pipelines
  3. Build vRA packages and check for dependencies
  4. Deploy to the test environment (for the master branch: to the CICD master environment)
  5. Perform integration test workflows
  6. If all tests are successful, deploy to the integration environment
  7. Capture from the integration environment (Houdini)
  8. Deploy to all target environments as needed

Steps 1 to 5 can be done on a specific topic branch against the dedicated topic environment.

Flow for cloud native artefacts:

No change to the existing build procedures: Git workflow with push2cloud for deployment.

Goals achieved

    • Export code from vRO as a Maven project and commit to Git
    • CI round-trip on Concourse.ci
      • Build code with Maven
      • Package code as *.package
      • Deploy *.package to the central repository (Maven Nexus)
      • Deploy *.package to CI-vRO
      • Trigger integration tests on CI-vRO
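
The last step of that round-trip, triggering the tests, is a single POST against the vRO REST API, which starts a workflow execution. A minimal Node.js sketch, under the assumption that basic auth is acceptable on the CI stack; the workflow ID and credentials come from the environment:

```javascript
// run-tests.js – start the integration-test workflow on CI-vRO through the
// vRO REST API (POST /vco/api/workflows/{id}/executions). A 202 response
// means the run was accepted; its state can be polled via the Location header.
const https = require('https');

const body = JSON.stringify({ parameters: [] });   // our test workflow takes no inputs
const req = https.request({
    host: process.env.VRO_HOST,
    path: '/vco/api/workflows/' + process.env.WORKFLOW_ID + '/executions',
    method: 'POST',
    auth: process.env.VRO_USER + ':' + process.env.VRO_PASSWORD,
    headers: { 'Content-Type': 'application/json' }
}, (res) => {
    console.log('HTTP ' + res.statusCode + ', execution at ' + res.headers.location);
    process.exit(res.statusCode === 202 ? 0 : 1);
});
req.on('error', (err) => { console.error(err); process.exit(1); });
req.end(body);
```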

Next steps

Next we want to check out the new Houdini release (Management Pack for IT DevOps 2.2) that we received recently. We have already learned that Artifactory is removed with 2.2 and that the vRA and vRO artefacts are now stored as flat files instead of the compressed packages used before – but still without native support for Git version control.

So, let’s stay foolish, hungry and prepared for change. And, yes, stay tuned if you’re interested in further tales of the continuous struggle for the brave new world of DevOps with VMware vRealize Suite. We will keep you updated on our experiences on this blog and share how we are developing in large distributed teams with a full-fledged CI/CD pipeline.

Credits to Christoph, Manuel, Philipp, Johannes and Marco for having done a good job!