This technical deep dive highlights key capabilities released as part of Twistlock 19.03. To learn more about what’s included with Twistlock 19.03, check out our full release blog post.

Securing your environment does not stop at the container or container runtime level. Rather, properly securing your resources requires a more holistic approach. One aspect of this is ensuring visibility into activity across your environment — from your VMs, to your containers, your container runtime, and even your orchestrator. In Twistlock 19.03, we extend this visibility to the Kubernetes engine itself by ingesting logs from the kube-apiserver into Twistlock, giving you insight at the orchestrator level.

Visibility into the actions in your environments

Twistlock has always given users visibility into the actions occurring in their environment. With the ability to consume Docker logs, host logs, and other information about your cloud resources, Twistlock makes it simple to ingest this information into Console and ensure it gets routed to the right people through integrations with Slack, Jira, email, and your SIEM tools. Twistlock 19.03 expands these capabilities with the powerful ability to consume Kubernetes audit events and filter them using our custom rules mechanism.

K8s audit events deep dive

An improved implementation of k8s audit events was introduced in k8s v1.11 and provides a log of requests to and responses from the kube-apiserver. Since almost all cluster management tasks go through the API server, the audit log is a way to track the changes made to your cluster.
Some examples of this logging include:

  • Creating/destroying pods, services, deployments, daemonsets, etc.
  • Creating/updating/removing config maps or secrets
  • Attempts to subscribe to changes to any endpoint
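For reference, each of these actions produces a structured audit Event. The abridged example below is purely illustrative (the values are placeholders, not output from a real cluster), but it follows the audit.k8s.io/v1 Event schema (level, stage, verb, user, objectRef, and so on) and shows the fields we will filter on later:

{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "stage": "ResponseComplete",
  "verb": "create",
  "requestURI": "/api/v1/namespaces/default/pods",
  "user": {
    "username": "kubernetes-admin",
    "groups": ["system:masters", "system:authenticated"]
  },
  "objectRef": {
    "resource": "pods",
    "namespace": "default",
    "name": "nginx",
    "apiVersion": "v1"
  },
  "responseStatus": {
    "code": 201
  }
}

At the RequestResponse audit level, events also include a requestObject field carrying the full body of the submitted object, which is what gives custom rules something meaningful to inspect.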

Starting in Kubernetes version 1.13, you can configure dynamic audit webhook backends. This allows you to forward these logs to Twistlock Console.

Configuring Kubernetes API server

To configure this, start by editing the configuration of your Kubernetes API server, found at /etc/kubernetes/manifests/kube-apiserver.yaml. You must add three new flags to the end of the command list.

/etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    {...}
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --audit-dynamic-configuration
    - --feature-gates=DynamicAuditing=true
    - --runtime-config=auditregistration.k8s.io/v1alpha1=true
...

Changes to the kube-apiserver manifest cause the API server to restart. Once the service has restarted, check to make sure that your flags have been set. To do this, run ps -ef | grep kube-apiserver and verify that your flags appear in the kube-apiserver process. If you have issues, you can review the kube-apiserver logs in /var/log/containers and look for files beginning with “kube-apiserver”.
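As a concrete example, the checks might look like the following (a minimal sketch; the grep patterns simply match the three flags added above, and the log path assumes the kubeadm-style layout used throughout this post):

# Confirm the three new flags appear on the running kube-apiserver process
ps -ef | grep '[k]ube-apiserver' | tr ' ' '\n' | grep -E 'audit-dynamic-configuration|DynamicAuditing|auditregistration'

# If the flags are missing or the API server fails to come back, locate its container logs
ls /var/log/containers/ | grep kube-apiserver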

Gathering prerequisites for the connection

Next, connect to Twistlock Console and navigate to Defend > Access > Kubernetes. There, set “Kubernetes auditing” to Enabled, then click on “Go To Settings” as shown in the screenshots below:

This may be a bit of a spoiler for one of our next steps, but copy the URL from the field labeled “Add the following URL to your audit sync configuration”.

Next, find the TLS Certificate that you’re using for access to your console. We need to convert this certificate to base64.

openssl base64 -in server-cert.pem -out base64-output -A

Copy the contents of the base64-output file as we will need this in our next step as well.

Creating the AuditSink connection

We need to create the AuditSink yaml description and apply that to our Kubernetes cluster. Start from the auditsink.yaml file below, and make a few replacements.

  • Replace “WEBHOOK-ADDRESS” with the address you copied from your Twistlock console
  • Replace CA-BUNDLE with your base64-output of your certificate

auditsink.yaml
apiVersion: auditregistration.k8s.io/v1alpha1
kind: AuditSink
metadata:
  name: twistlock-sink
spec:
  policy:
    level: RequestResponse
    stages:
    - ResponseComplete
  webhook:
    throttle:
      qps: 10
      burst: 15
    clientConfig:
      url: "WEBHOOK-ADDRESS"
      caBundle: CA-BUNDLE
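If you would rather script the two replacements than edit the file by hand, something like the following sketch works, where the first placeholder stands in for the URL you copied from Console and base64-output is the file created in the previous step:

# Substitute the webhook URL copied from Defend > Access > Kubernetes
sed -i 's|WEBHOOK-ADDRESS|<URL-copied-from-Console>|' auditsink.yaml

# Substitute the base64-encoded certificate produced by the openssl command above
sed -i "s|CA-BUNDLE|$(cat base64-output)|" auditsink.yaml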

Now apply the auditsink.yaml to the Kubernetes cluster.

kubectl apply -f auditsink.yaml

If successful, we should see the following output:

auditsink.auditregistration.k8s.io/twistlock-sink created
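As an optional sanity check (assuming the auditregistration API was enabled as shown earlier), you can ask the cluster to list its registered sinks:

kubectl get auditsinks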

Creating a Kubernetes audit rule

Now, back in Console, we must create a rule to “catch” the events that we want to see. Since every action against the Kubernetes API generates an audit event, we don’t want to create an overly general rule. Instead, we want to keep our logs specific and meaningful, targeting only the things we find important and actionable.

Twistlock 19.03 ships with a number of pre-made rules for common scenarios, and more can be delivered via Intelligence Stream updates, allowing you to draw from a library of some of the best and most useful Kubernetes audit rules. The rules are also fully customizable allowing you to capture only the things you want to see in your environment. Let’s set up a simple rule so we can alert when any privileged pods are created in our environment:

jpath("stage") = "ResponseComplete" and jpath("objectRef.resource") = "pods" and jpath("verb") = "create" and jpath("requestObject.spec.containers") contains "privileged:true"

Creating a privileged pod and viewing the audit

To test out our rule, we will return to our Kubernetes cluster to create a simple privileged pod. To do this, I’m creating the priv-pod.yaml file to start up an nginx pod with `privileged: true`.

priv-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    securityContext:
      privileged: true

Now I apply my priv-pod.yaml to start my privileged pod:

kubectl apply -f priv-pod.yaml
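Before heading back to Console, you can optionally verify that the pod is running and really is privileged; for example:

# Confirm the pod started and that its container requested privileged mode
kubectl get pod nginx
kubectl get pod nginx -o jsonpath='{.spec.containers[0].securityContext.privileged}'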

Now, within the next few minutes, we should be able to see logs appear in Twistlock Console! Browse to Monitor > Events and select the option for Kubernetes Audits:

In the screenshot above, we can see a wealth of information about the events on our Kubernetes cluster, including the entire event blob. The rules you create in your environment can use any of the aspects of the event blob to tailor your results down to the exact events that you want to monitor, allowing your logs to be relevant and actionable.

Conclusion

Here we were able to configure API audit log monitoring in Kubernetes and create a simple rule to track the creation of privileged pods. Between the powerful rules that ship in the Twistlock Console and the rules you write yourself, you can consume logs from your orchestrator to gain visibility into this key aspect of your cloud native environment. Keep an eye on the Twistlock Blog for more information on new audit rules that will become available in the Intelligence Stream in the future.
