We’re proud to announce that our Twistlock Operator has been certified in the Red Hat Operator Framework Early Access Program and listed on operatorhub.io, in the Red Hat Container Catalog, and in the OpenShift Marketplace. We’re excited about the innovation coming out of the OpenShift, Operators, and broader Red Hat Kubernetes community. In this blog I’ll give a brief overview of how the Operator Framework and Operator SDK made it easy to build an Operator to manage the Twistlock Console.
Operators are not brand new, but they’ve been getting a lot of buzz recently, especially since Red Hat’s acquisition of CoreOS and the launch of the Operator Framework, which Red Hat and many others have heartily embraced as a way to take what was essentially a software pattern and make it concrete through standards, tools, and a community. What’s an Operator, anyway? I’ll take a whimsical stab (one that seems so obvious, and yet imprecise, that someone else must have said it already):
An Operator is cruise control for a Kubernetes application
Twistlock Operator on operatorhub.io
Operators allow you to turn operational knowledge about how to manage the tricky business of stateful applications (since they’re the hard ones) into software, declare how things should be (running on v2 instead of v1), and have the Operator make it all happen automatically. Think install/upgrade, backup/restore, scale up/down.
Getting started with Operators
If you’re just getting started with Operators, you may already be familiar with native Kubernetes resources and declarative YAML, or even with Helm charts as a Kubernetes “package format”.
Twistlock supports all of these technologies, and in this blog, I’d like to walk you through how we built our Twistlock Console Operator. I’ll treat it like a journey (road trip?) from basic native Kubernetes resources, to Helm charts, to an Operator that installs Twistlock Console, because that’s just how it happened.
From basic resources
Kubernetes provides the native resources we’ve come to know and love: replication controllers, services, DaemonSets, and so on. If we declare that one or more resources should be present, Kubernetes will do its best to ensure that the cluster looks like what we’ve declared.
$ kubectl apply -f twistlock_console.yaml
If everything is already the way you want it, Kubernetes simply checks, confirms, and changes nothing. If your desired state differs from the current state, Kubernetes will do its best to change or create resources to get you where you want to be. This process is sometimes referred to as a reconciliation loop.
With Twistlock’s twistcli binary, a single command plus your license token creates one YAML file (as used above) containing all the resources for the Console, or one for a Defender DaemonSet.
$ twistcli console export kubernetes --service-type LoadBalancer
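To give a feel for what such a file bundles together, here is a heavily simplified sketch (the names, image tag, and port below are illustrative placeholders, not the actual twistcli output):

```yaml
# Illustrative sketch only -- the real file generated by twistcli
# carries Twistlock-specific images, ports, volumes, and more resources.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: twistlock-console
  namespace: twistlock
spec:
  replicas: 1
  selector:
    matchLabels:
      name: twistlock-console
  template:
    metadata:
      labels:
        name: twistlock-console
    spec:
      containers:
      - name: twistlock-console
        image: registry.twistlock.com/twistlock/console:latest  # placeholder tag
        ports:
        - containerPort: 8083  # assumed management port
---
apiVersion: v1
kind: Service
metadata:
  name: twistlock-console
  namespace: twistlock
spec:
  type: LoadBalancer  # matches the --service-type flag above
  selector:
    name: twistlock-console
  ports:
  - port: 8083
    targetPort: 8083
```

Everything is concrete and hard-coded, which is exactly what the next step improves on.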
But some folks wanted to standardize on Helm charts to package up their Kubernetes apps and make them easier to customize and upgrade, so let’s move down that path.
To Helm charts
A typical Kubernetes app, like Twistlock Console, has a dozen or more different resources that have to be in the correct state for things to function. Usually those resources carry very specific data values that must be set for things to work correctly: URLs, paths, ports, replica counts, and so on. A lot of folks have found Helm charts a valuable way to package up a bunch of Kubernetes resources along with the values they need filled in or overridden at deployment or upgrade time. The dozen very specific YAML files become a single Helm chart with a dozen templated YAML files and one values file to fill in the blanks: abstracting from a lot of specific stuff to one generalized thing.
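To make the templating idea concrete, here is a minimal sketch (the field names and values are hypothetical, not taken from the actual Twistlock chart) of a templated Service alongside the values file that fills in its blanks:

```yaml
# templates/service.yaml -- placeholders instead of hard-coded values
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-console
spec:
  type: {{ .Values.serviceType }}
  ports:
  - port: {{ .Values.consolePort }}

# values.yaml -- one small file supplies (or overrides) the blanks
# serviceType: LoadBalancer
# consolePort: 8083
```

At deploy time, Helm renders the template with the values and applies the result, so customizing a deployment means editing one small values file rather than a dozen manifests.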
To use Helm, our workflow deviates a little from the standard Kubernetes path: we need to install a Helm client locally and Tiller, the Helm server, on our cluster. Then we can run Helm commands to combine our data values with the templates and apply the result to the cluster.
$ helm install -f values.yaml ./twistlock-console
With the latest Twistlock release, we provided support for Helm charts to deploy both Console and Defenders.
To custom resources and Operators
Our journey will end today back with kubectl, but with a twist. Operators combine the ideas of automation, abstraction, and the native Kubernetes reconciliation loop. You take operational know-how (install/upgrade, backup/restore, scale up/down) and package it as software that runs automatically at the right time (like cruise control, remember?). Since our current goal is to install the Twistlock Console on Kubernetes on demand, we just want to declare a TwistlockConsole custom resource that acts like a native Kubernetes resource.
$ kubectl apply -f twistlock_console_custom_resource.yaml
Luckily, you can define any custom resource you want in Kubernetes: a Custom Resource Definition (CRD) is just what we need, and the Operator SDK templates all of it for us.
The CRD for the new TwistlockConsole resource will define the what, but we still need an implementation (the how), and an actor to make it all happen at the right time. The Operator is that actor that contains the implementation of how to stand up the Twistlock Console and watches for when a TwistlockConsole custom resource is declared.
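Put together, a minimal sketch might look like this (the API group, version, and spec fields here are illustrative assumptions, not the actual definitions from our repo): first the CRD that teaches Kubernetes about the new kind, then the custom resource a user declares:

```yaml
# CRD: defines the *what* -- a new TwistlockConsole kind
# (group/version and fields are illustrative assumptions)
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: twistlockconsoles.charts.twistlock.com
spec:
  group: charts.twistlock.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: TwistlockConsole
    listKind: TwistlockConsoleList
    plural: twistlockconsoles
    singular: twistlockconsole
---
# Custom resource: what a user actually applies; the Operator
# watches for these and acts on their spec
apiVersion: charts.twistlock.com/v1alpha1
kind: TwistlockConsole
metadata:
  name: example-console
spec:
  serviceType: LoadBalancer  # hypothetical field
```

Applying the second document is the `kubectl apply` shown above: declare the resource, and the Operator does the rest.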
If you look at the structure of the Twistlock Console Operator repo, you’ll notice that it centers around the Console Helm chart. If you’re wondering why Helm is still around after we moved down the path to Operators, that’s no mistake. Even though many Operators thus far have been implemented in Golang, the Operator Framework folks wanted options that would bring more users into the community.
Options for Operators with the Operator SDK
Since “not everyone is a Golang developer” — and not everyone is an Ansible developer either — the Operator SDK offers three options for creating a new Operator: Golang, Ansible, and Helm.
Since we already had a Helm chart to install the Twistlock Console (built on our know-how of the required Kubernetes resources from our first YAML file), we were able to leverage that existing chart with the Operator SDK to quickly build our Twistlock Console Operator. There’s no need to install Helm or Tiller either, as the Operator takes care of it all. If we expand the Operator’s functionality in the future, we may need to reimplement it in Ansible or Golang.
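For the curious, scaffolding a Helm-based Operator from an existing chart looked roughly like this with the pre-1.0 Operator SDK releases of that era (the project name, kind, and API group below are illustrative; check your SDK version’s docs for the current syntax):

```shell
# Generate a Helm-type Operator project from an existing chart
# (pre-1.0 operator-sdk flags; names are illustrative)
operator-sdk new twistlock-console-operator \
  --type=helm \
  --kind=TwistlockConsole \
  --api-version=charts.twistlock.com/v1alpha1 \
  --helm-chart=./twistlock-console
```

The SDK generates the CRD, RBAC manifests, and an Operator image that watches for the new kind and reconciles it by rendering the chart — no hand-written Go required.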
Running the Operator and getting a Console
If you want to deploy our Operator on your Kubernetes cluster and ask it for a Twistlock Console, you’ll need a Twistlock license token; then just follow the instructions in the README of our repo. In some cases you may need to elevate to cluster-admin to complete the setup. A video walkthrough of installing and using the Operator will also be available on the Cloud Native Security Podcast channel.
Hopefully you’ve got a little more insight into Operators, Helm, native Kubernetes resources, and how they can fit together. If you’re trying to decide which path is right for you, as always, your Twistlock technical team will be happy to help. We’ve been down some of these roads before.
- Twistlock Platform