Camunda Platform 8 on Tanzu (TKGI)


This article introduces Camunda for Kubernetes with Istio configuration, a PoC contributed to the Camunda Community Hub. I have already shared a preview of its most important piece: securing the process engine. Below I explain the motivation, functionality, and architecture, and list some learnings.

Why Camunda?

We see growing interest in Camunda for process automation; use cases like the integration with UiPath (robotic process automation) have gained traction. Camunda is already available as SaaS, but many of our customers are bound by regulations to keep their data in Switzerland. This is the main criterion for running the service on premises or with a national cloud provider. The migration of large-scale projects from Camunda 7 to Camunda 8 is another important topic, but it was not in scope for our proof of concept.

Camunda’s Hyperautomation Tech Stack

Target Stack

We operate Tanzu Kubernetes Grid Integrated to produce managed Kubernetes clusters for our customers. As we cannot rely on SaaS like Tanzu Mission Control, we had to build a similar collection of open-source tools to provide the shared services you will find in any Kubernetes cluster anyway; we called it “Helix”. Helix defines Istio deployment paradigms like dedicated gateways and micro-segmentation, and packs the Istio tooling (Kiali, Jaeger, Zipkin, …) together with OPA Gatekeeper, Prometheus, Grafana, Velero, and several other tools into a collection of shared services.

Deliveries from Camunda

This kustomize/kpt project is based on the official Helm charts, as described in the docs, plus enterprise access to the webmodeler (username and password required).

I have found that the docker-compose repo seems to be the most up-to-date source of truth. The manifests for the webmodeler parts could be converted with kompose.

The quality of the Helm chart was pretty good, but it was, as usual, only prepared for standard ingress controllers. I had to add service accounts to the deployments and statefulsets, plus some RBAC/PSP resources, to comply with our PSP policies. I started with this setup and swiftly found myself in a mess of TLS definitions for self-signed CAs. When I finally discovered that the delivered workflow engine had no access protection whatsoever, I decided to drop the ingress controller support in favor of Istio, which allows you to define JWT rules declaratively.
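As a sketch of what such a declarative JWT rule looks like (namespace, workload labels, and the Keycloak realm URL are placeholders, not the actual PoC values), Istio pairs a RequestAuthentication, which validates tokens, with an AuthorizationPolicy, which makes a valid token mandatory:

```yaml
# Validate JWTs issued by Keycloak for the zeebe gateway workload.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: zeebe-jwt
  namespace: camunda
spec:
  selector:
    matchLabels:
      app: zeebe-gateway          # placeholder workload label
  jwtRules:
    - issuer: "https://keycloak.example.com/auth/realms/camunda-platform"
      jwksUri: "https://keycloak.example.com/auth/realms/camunda-platform/protocol/openid-connect/certs"
---
# Reject any request that carries no token at all.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: zeebe-require-jwt
  namespace: camunda
spec:
  selector:
    matchLabels:
      app: zeebe-gateway
  action: ALLOW
  rules:
    - from:
        - source:
            requestPrincipals: ["*"]   # any authenticated principal
```

Note that RequestAuthentication alone only rejects *invalid* tokens; requests without a token still pass, which is why the AuthorizationPolicy on `requestPrincipals` is needed.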


The main difference from the “combined ingress” approach proposed by Camunda is that our Helix cluster configuration favors Istio. I replaced the ingress definitions from the Helm chart with Istio gateways and virtual services: a dedicated Istio ingress gateway listens on ports 80 and 443 for the wildcard domain, and virtual services map hostnames and ports or protocols to the Kubernetes backend services.
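A minimal sketch of this mapping, with the domain, TLS secret, and service names as illustrative placeholders rather than the project's actual values:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: camunda-gateway
  namespace: camunda
spec:
  selector:
    istio: ingressgateway            # the dedicated ingress gateway deployment
  servers:
    - port: { number: 80, name: http, protocol: HTTP }
      hosts: ["*.camunda.example.com"]
      tls:
        httpsRedirect: true          # force TLS
    - port: { number: 443, name: https, protocol: HTTPS }
      hosts: ["*.camunda.example.com"]
      tls:
        mode: SIMPLE
        credentialName: wildcard-cert  # TLS secret for the wildcard domain
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: zeebe
  namespace: camunda
spec:
  hosts: ["zeebe.camunda.example.com"]
  gateways: ["camunda-gateway"]
  http:
    - route:
        - destination:
            host: camunda-zeebe-gateway  # backend Kubernetes service
            port:
              number: 26500              # zeebe gRPC port
```

One VirtualService per hostname keeps the per-component routing (operate, tasklist, zeebe, …) independent of the shared gateway.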

Big picture – green: shared services

Additional shared services are needed to implement the full PoC functionality:

Reasonable Deployment Modules for Feature Toggling

One of my main goals was to extend the Helm chart from Camunda with everything needed for an in-depth PoC. As I authored the project for our Helix environment, I followed the approach proposed by its maintainers: kustomize and kpt. In this project, kpt edits the kustomize project to set environment-specific variables or toggle features.
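As an illustration of the mechanism (the function version and setter names are examples, not the project's actual ones), kpt's `apply-setters` function can rewrite values inside the kustomize manifests from the `Kptfile` pipeline:

```yaml
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: camunda-platform
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/apply-setters:v0.2.0
      configMap:
        domain: camunda.example.com   # environment-specific value
        webmodeler-enabled: "true"    # feature toggle
```

Manifest fields annotated with markers like `# kpt-set: ${domain}` pick these values up when `kpt fn render` runs, so one kustomize tree serves multiple environments.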

Camunda Connectors

As Camunda delegates most of the work in tasks to “Connectors” running remotely (from the perspective of the Kubernetes cluster), the possibilities are virtually endless. Camunda offers out-of-the-box connectors to the hyperscalers and an SDK to build your own. I narrowed the scope down to the handling of secrets; to verify this, I implemented the Camunda SecretHandler as an adapter to HashiCorp Vault. In addition, I implemented a Kafka inbound connector to demonstrate how easy it is to start a process via the libraries, and a Kafka outbound connector for sending or waiting on messages. And, last but not least, the main piece for our PoC: a custom connector for UiPath. We have noticed that a community project has already been started, and we are keen to contribute.

Camunda Connectors can be placed wherever they are needed

Camunda Connectors are the extension points and a valuable concept for delegating or controlling work from or to highly restricted zones. They need access to the zeebe cluster the workflow is running on, either by polling or by long-running GETs to minimize latency. Via the zeebe API, process definitions and instances can be fully controlled. Historized data, however, needs to be fetched from Elasticsearch; a community project for a Kafka exporter looks promising.

Connectors mostly consume local services.

As connectors can also be distributed outside the Kubernetes cluster, their runtime environment defines which services are accessible and efficient to reach. In both PoC use cases (Vault and Kafka), it seemed obvious that the connectors should consume only local services.

Lessons learned

First of all: Camunda Platform is great software with convincing concepts implemented in a modern way. The team has done an awesome job driving an active community, I got very responsive and reliable support, and I am convinced that the revamped architecture (zeebe) will gain a lot of traction in the industry. The notion of an “immutable event stream” is already a requirement from an audit perspective, and it is now consistently separated, with Elasticsearch as the datastore. This is a breaking change for Camunda 7 installations; if you used to mess around directly with the DB, that option is gone now 😉. If you rely on “external workers” anyway, you should have no problems upgrading to Camunda 8.

I love the Spring Boot architecture of the surrounding microservices, and the UI looks good: “operate” supports you with detailed information on processes, “optimize” helps to detect issues, “tasklist” is the entry point for the logged-in user’s human tasks, and “identity” is basically a wrapper around Keycloak (which makes it a hard dependency).

Camunda connectors is again an elegant Spring Boot solution: a web server serves as the runtime environment for plug-in connectors. A connector itself is a Java JAR that can simply be put on the classpath to get picked up. Webhooks are already implemented, and the zeebe library makes it intuitive to start and control processes. Most tasks should be doable with a two-liner. Check out my testing of the basic principles.

The webmodeler is available for licensed users only; I assume that is the reason it was not contained in the official Helm chart, at least at the time I used it for the initial configuration. Again: I love this implementation, a cool and modern frontend fit for the job, built as a Node.js client-side app. I was a little irritated by the choice of the websockets implementation: a PHP backend is used. Besides introducing yet another technology, it was cumbersome because it needed additional configuration in Istio to fully enable the websockets protocol.
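For illustration, the extra Istio configuration can amount to a dedicated route for the websocket endpoint; recent Istio versions pass websocket upgrades through by default, but long-lived connections typically need the route timeout disabled (hostname, path, and service name below are placeholders):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: webmodeler-websocket
  namespace: camunda
spec:
  hosts: ["modeler.camunda.example.com"]
  gateways: ["camunda-gateway"]
  http:
    - match:
        - uri:
            prefix: /ws            # websocket endpoint of the PHP backend
      timeout: 0s                  # 0s disables the timeout for long-lived connections
      route:
        - destination:
            host: webmodeler-websockets
            port:
              number: 8080
```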

The main challenge was that zeebe has no protection out-of-the-box in the “self-managed”* version. I invested quite a lot of time configuring standard ingress controllers like Contour or NGINX according to the Helm chart. After I managed to tame the beasts that were the self-signed CAs, I still did not find a way to reliably configure OAuth for these ingress controllers. So I moved on to the target model with Istio. After an initial struggle getting gRPC treated correctly, Istio worked like a charm and solved the most important problem: you cannot publish a completely unprotected process engine. And you get rid of the TLS requirements for self-signed CAs entirely: under Istio you can let your services run unencrypted and let Istio handle the rest (mTLS).
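That last point can be expressed in a single resource; this sketch (namespace name is a placeholder) enforces strict mTLS for a whole namespace, so the workloads themselves can keep serving plain HTTP/gRPC while the sidecars encrypt all pod-to-pod traffic:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: camunda
spec:
  mtls:
    mode: STRICT   # sidecars require mTLS; pods stay plain-text internally
```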

Camunda Cloud offers self-service registration and clusters on demand. This makes sense, because you cannot protect entities in zeebe with RBAC: the cluster is the only security boundary, and you will need an instance per security context, even on premises. You had better be prepared to deliver instances on demand; my project should enable this soon. Camunda Identity has a setup configuration to create a realm in Keycloak, i.e. a user domain in the identity and access manager. I plan to externalize this config to allow multiple installations on the same cluster very easily.

*) A self-hosted Camunda Platform 8 alternative, offering everything you need to download, configure, and work with each component.

Next Steps

I will continue to contribute to the Camunda community hub and keep you updated.

Maybe we’ll meet at the Camunda Days – I’ll be in Zürich on March 30.


Many thanks to Jan Höppner and Thomas Gütt from Camunda for their persistent support.

And thanks to the VMware vExpert team for awarding me again!
