Container as a Service (CaaS) – Part II

In the second part of my CaaS series, I want to share our thoughts on Red Hat’s OpenShift Container Platform, which we are considering as a candidate for Container as a Service on the Enterprise Service Cloud. This is a sneak preview of our evaluation work and does not constitute any commitment to an offered service.

Providing OpenShift Container Platform at Scale

Preconditions and Requirements

Please read more on the Swisscom context in the first part of my blog series.

OpenShift Container Platform

 


OpenShift is the container platform created by Red Hat. You can consume it as a SaaS offering from Red Hat (OpenShift Dedicated), and there is an open-source community edition as well as a supported on-premises version, OpenShift Enterprise. We evaluated and implemented the current release, OpenShift Container Platform 3.9. We found a mature, reliable and easily installable product that already has a considerable market share and very loyal fans. Red Hat is a long-time partner of Swisscom, a very important committer in the Kubernetes community, and has worked hard and very successfully on extending the features of k8s for its own product. These extensions seem to be very valuable for their users but make it hard to switch between standard k8s and OpenShift.

Furthermore, OpenShift has introduced its own CLI tool: instead of kubectl, all Kubernetes commands are implemented, together with proprietary extensions such as project management, in the “oc” utility. It is still possible to use the standard “kubectl” utility, albeit without support for extensions like “ImageStreams”, so wrapper tools that rely on kubectl should work (see the short CLI sketch after the list below). This lock-in and deviation from the standard is widely tolerated because of the following advantages (according to customer feedback):

  • Supports CaaS and PaaS at the same time
  • Deployment Lifecycle-Hooks (pre, mid, post)
  • Visualization (Web-Console)
  • Bundled and licensed Docker images (like JBoss EAP)
  • Properly integrated internal docker registry (access control)
  • Projects on top of dedicated Kubernetes namespaces (access control)
  • SDN capabilities
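
To make the CLI point more concrete, here is a minimal sketch comparing the two utilities; the API endpoint and project name are placeholders, and the sample application is the well-known example repository from the OpenShift getting-started documentation.

    # Log in and create an OpenShift "project" (a Kubernetes namespace plus access-control metadata)
    oc login https://openshift.example.com:8443    # placeholder API endpoint
    oc new-project demo-project

    # Standard Kubernetes verbs behave the same in both CLIs ...
    oc get pods -n demo-project
    kubectl get pods -n demo-project

    # ... but OpenShift-specific features are only exposed through "oc",
    # e.g. building and deploying an application straight from a Git repository (Source-to-Image)
    oc new-app https://github.com/openshift/nodejs-ex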

Key findings

OpenShift can easily be installed within 30 minutes by running an Ansible playbook, provided that all VMs have previously been set up according to the host preparation documentation. Unlike BOSH, OpenShift does not provide an automated setup of the IaaS components from scratch.

OpenShift offers sophisticated multi-tenancy based on k8s namespaces that will meet all requirements within a single enterprise. For service-provider-grade multi-tenancy, we recommend a multi-instance approach (one complete OpenShift cluster per customer), particularly if your customers have requirements like full network isolation.
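
To illustrate what this namespace-based tenancy looks like from an operator’s point of view, here is a minimal sketch of managing project network isolation with the CLI, assuming the ovs-multitenant SDN plugin is in use (project names are placeholders):

    # With ovs-multitenant, every project gets its own virtual network ID and is isolated by default.
    # Projects belonging to the same customer can be joined so that their pods may communicate:
    oc adm pod-network join-projects --to=customer-a customer-a-staging

    # A previously joined project can be isolated again:
    oc adm pod-network isolate-projects customer-a-staging

    # Shared infrastructure projects can be made reachable from every project:
    oc adm pod-network make-projects-global shared-services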

The product’s strong advantages are a very consistent implementation on top of Kubernetes without giving up the PaaS capabilities, a very mature high-availability setup with multi-master support, and an attractive web UI for developers who prefer not to use the CLI. On the other hand, it does not support a fully automated installation of clusters including the needed IaaS components. As a service provider, we have to deliver this capability ourselves, particularly because we need to provide a dedicated cluster per customer.

OCP@Swisscom

Based on our findings, we decided to build the IaaS automation in our Swisscom Enterprise Service Cloud, where we leverage vRealize Automation blueprints to provision the OpenShift cluster machines and use vRealize Orchestrator workflows to collect data and kick off the initial installation of the OpenShift cluster. The installation workflow uses the official Ansible playbooks provided by Red Hat to dynamically configure the cluster during the installation. By using the official playbooks, we ensure that OpenShift is configured according to Red Hat best practices.

To provide persistent storage, all worker nodes of the cluster are also configured to provide distributed storage via GlusterFS. This distributed storage can then be used within OpenShift through Container-Native Storage (CNS). We faced the same security concerns as with any other k8s implementation when we wanted to use the vSphere CPI directly to provide block storage: vCenter credentials have to be stored in configuration files to grant access to vSphere datastores. This is not acceptable for unmanaged clusters (the root password for the VMs is handed out to the customer) and will have to be tested in depth for managed clusters.
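
From a developer’s perspective, this storage is then consumed through a regular PersistentVolumeClaim. The following is a minimal sketch, assuming the StorageClass created by the CNS playbooks is named "glusterfs-storage" (check with "oc get storageclass" in your cluster):

    # Request a 5 GiB volume from the CNS-provisioned GlusterFS storage class
    cat <<'EOF' | oc create -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: glusterfs-storage
    EOF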

 


Our Enterprise Service Cloud allows us to use dedicated uplink topologies per customer to meet the requirement of completely isolated customer networks in a virtual private cloud environment. Our NSX proxy implementation offers a tenanted NSX environment in which our customers can use distributed firewalls to enable micro-segmentation at VM level. Together with the fully automated provisioning of OCP clusters, which can be triggered in our portal, via the vRA API (or, in the future, perhaps via our OSB API), the customer is free to set up and tear down clusters on demand and at scale. The product will define T-shirt sizes (multi-master, number of nodes, VM specifications, storage size) and we will offer day 2 actions to scale in and scale out. In the managed version we will take care of all maintenance and upgrade tasks as well.
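
To give a rough idea of what “triggering over the vRA API” means, the following sketch uses the vRealize Automation 7 catalog service; the hostname, tenant, credentials and catalog item ID are placeholders, and the exact request payload depends on the blueprint.

    # Obtain a bearer token from the vRA identity service
    TOKEN=$(curl -sk -X POST "https://vra.example.com/identity/api/tokens" \
        -H "Content-Type: application/json" \
        -d '{"username":"user@corp.local","password":"<secret>","tenant":"customer-tenant"}' \
        | python -c 'import sys, json; print(json.load(sys.stdin)["id"])')

    # Fetch the request template for the OpenShift catalog item ...
    ITEM_ID="<openshift-cluster-catalog-item-id>"
    curl -sk -H "Authorization: Bearer $TOKEN" \
        "https://vra.example.com/catalog-service/api/consumer/entitledCatalogItems/$ITEM_ID/requests/template" \
        > request.json

    # ... adjust node counts, VM sizes, NSX security tags etc. in request.json, then submit it
    curl -sk -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
        -d @request.json \
        "https://vra.example.com/catalog-service/api/consumer/entitledCatalogItems/$ITEM_ID/requests"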

vRA Blueprint

We are using a multi-machine blueprint to deploy the different types of VMs with name generators per node type. We tried to work with vRealize Software Components but did not find a way to perform the task that is a prerequisite for the Ansible playbook: generating the inventory file with all VMs, their node types, FQDNs and IPs. That is why this blueprint contains only the VM specs. We had to fall back to the vRealize Automation Event Broker, defining subscriptions and attaching workflows to some key events to collect the necessary data and trigger the Ansible playbook at the end of the provisioning process. In order to keep state between the events, we used the o11n-plugin-cache, a distributed cache implementation based on Hazelcast, which enables us to save intermediate results during the provisioning process of a multi-machine blueprint. This breaks the paradigm of event-based programming, but we did not find another way to implement this, even with some commercial Ansible plugins for vRA.

It was very hard to convince hardcore CLI developers, who are used to flat files and good, direct Git support, to work with the vRA designer and the vRO client of the vRealize Automation Suite. We are collaborating closely with VMware to improve this massively and to add the features necessary for complete, versioned configuration management. Please check out my blog post “A Wish List for Configuration Management with the vRealize Suite”, in which I define the requirements for truly developer-friendly tooling to handle complex projects.

vRO Workflow

These are the workflows we are using in the first version:

  • Event subscription to collect the VM information (IP, name)
  • Setup environment and start playbook
  • Security tags handler

We will continue to look for different approaches to simplify this further.

I have to repeat my criticism of the vRO development tools: it is very cumbersome to manually import and export the workflows in a completely unreadable XML format. We would need a text-file-based approach that is controlled entirely from Git, so that developers can use their favorite editor and Git commits trigger the updates, as in other best-of-breed development tool chains. I have, however, been working in the design partnership program with VMware and am aware of the improvements that vRA 8 will bring soon. For big DevOps teams this inversion of control (Git commits triggering builds and deployments) is crucial, instead of the vRA Suite being in control of everything. vRA has to be freely combinable with existing CI/CD build chains and should not force any particular approach on the development teams. Developers have their preferred tools and don’t like to constantly work around imposed tooling; I’m a developer as well and can easily understand these feelings.

Red Hat Ansible playbooks

 


Using the input variables the customer defined, we dynamically create an Ansible inventory file that is used by all the following playbooks.
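
A heavily simplified sketch of such a generated inventory is shown below; host names, group layout and variable values are illustrative only and do not reflect our exact production configuration.

    # Generate a simplified inventory for an OCP 3.9 cluster (illustrative values)
    cat > /etc/ansible/hosts <<'EOF'
    [OSEv3:children]
    masters
    etcd
    nodes
    glusterfs

    [OSEv3:vars]
    ansible_user=root
    openshift_deployment_type=openshift-enterprise
    openshift_release=v3.9
    openshift_master_cluster_method=native
    openshift_master_cluster_hostname=console.customer.example.com
    openshift_master_cluster_public_hostname=console.customer.example.com

    [masters]
    master[1:3].customer.example.com

    [etcd]
    master[1:3].customer.example.com

    [nodes]
    master[1:3].customer.example.com
    infra[1:2].customer.example.com openshift_node_labels="{'region': 'infra'}"
    node[1:4].customer.example.com openshift_node_labels="{'region': 'primary'}"

    [glusterfs]
    node[1:4].customer.example.com glusterfs_devices='[ "/dev/sdd" ]'
    EOF

There are multiple playbooks involved when creating a new OpenShift cluster: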

Before we get into the Red Hat playbooks, we prepare the underlying Red Hat Enterprise Linux machines using our own preparation playbook. A lot already happens in this playbook: for example, we attach the correct RPM repositories and add the necessary certificates for the environment. Additionally, we make the installation more robust by providing sane defaults for some operating system settings.
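
In plain shell terms, the repository part of this preparation corresponds roughly to the following (repository IDs as listed in the OCP 3.9 host preparation documentation; the exact Ansible channel version may differ in your environment):

    # Enable the RPM repositories required for OpenShift Container Platform 3.9
    subscription-manager repos \
        --enable=rhel-7-server-rpms \
        --enable=rhel-7-server-extras-rpms \
        --enable=rhel-7-server-ose-3.9-rpms \
        --enable=rhel-7-fast-datapath-rpms \
        --enable=rhel-7-server-ansible-2.4-rpms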

After the host preparation playbook, the playbooks provided by Red Hat are executed.

First, using the “prerequisites.yml” playbook, we check that the hosts have been successfully prepared and that all prerequisites for OpenShift are met.

Then the actual installation is triggered using the “deploy_cluster.yml” playbook. This may take some time, as a large number of tasks is executed.
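
Condensed to the essentials, the two Red Hat playbook runs look like this (paths as shipped with the openshift-ansible package for 3.9; the inventory location is an assumption):

    # Verify that all hosts have been prepared correctly
    ansible-playbook -i /etc/ansible/hosts \
        /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

    # Run the actual cluster installation (this is the long-running part)
    ansible-playbook -i /etc/ansible/hosts \
        /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml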

After the installation, we perform some finishing tasks to make the new cluster even more robust.

This includes adding “OPTIONS” to our Docker configuration, enabling “localQuota” on the nodes and setting up the OpenShift pruning mechanisms.
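
The pruning part, for example, boils down to commands like the following, which we schedule as recurring jobs; the retention values shown here are illustrative, not our production settings.

    # Remove superseded image revisions from the internal registry
    oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm

    # Clean up old builds and deployment replication controllers
    oc adm prune builds --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm
    oc adm prune deployments --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm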

When the last playbook finishes, OpenShift is up and running. Now it waits for the customer to deploy some applications!

Portal View

This is the catalog view in the Swisscom Portal where our customers can pick the OpenShift catalog item to start the provisioning process.

On the first form of the catalog item, users can define the most important parameters and customize the cluster according to their needs. They can select from the networks available in their security context (tenant/business group) and set NSX security tags to define their own micro-segmentation.

A customer can change the VM specs for each node type within previously defined boundaries (component profiles).

This is the specification screen for a worker node with GlusterFS already predefined.

After the provisioning request has been submitted, the user can follow the provisioning progress in the requests tab.

This is how a provisioned cluster looks in the portal. Here all day 2 actions are offered according to the chosen product variant (managed or unmanaged).

These blueprints will also be consumed outside of the hosting vRA, in our other cloud stacks, in line with our “One Cloud” vision: to build a homogeneous offering across all stacks, following the “produce once, consume anywhere” paradigm.

This is the view of the firewall rules in the portal. Enterprise Service Cloud provides fully tenanted NSX self-service access powered by our custom NSX proxy. Our customers can define strong micro-segmentation according to their security requirements. The screenshot shows both at the same time: a first rule that overrides any micro-segmentation, together with a fine-grained micro-segmentation approach for our OpenShift cluster. In production you would opt for the first rule if you do not want micro-segmentation, or for the other rules if you do.

In our OpenShift blueprint we are working with NSX security tags that can be freely assigned per node type. These NSX security tags are then referenced as members of security groups that are used to define the firewall rules as depicted above.

NSX-V handles firewall security only at the VM level. For network security within the container networks we rely on the capabilities of the OpenShift SDN, and we are currently preparing an NSX-T offering.

Next Steps

Currently we are hardening the clusters to make them ready to meet the requirements of our most demanding customers in the banking sector. We need to perform very intensive testing in terms of performance, scalability, storage features, high availability and security.

Credits

Sascha Spreitzer, Red Hat, Infrastructure & Cloud Architect, https://spreitzer.ch/ for persistent and very valuable support in our project. Thanks a bunch, and let’s continue exactly like that!

Daniele Vennemann, Red Hat, Key Account Manager, for your enthusiasm about the Red Hat products. I’m convinced that we will keep on offering best-of-breed services to our customers.

Simon Krenger, Swisscom, DevOps Engineer, for your superb start in the CaaS team and the professional coolness you have brought into the team. Thanks a lot for the material you gave me for putting together this blog post.

Mateusz Kowalski, Swisscom, DevOps Engineer, for fighting and overcoming all obstacles in a big and mostly overloaded team. I’m quite sure you’re not the one waiting too long to get things done. Thanks for your amazing screenshots and vRO scripts.

And last but not least, Kristjan Perlaska, Swisscom, DevOps Engineer, for your freewheeling approach to completely new tooling while being left alone with nothing but friendly hints to the RTFM method.