This post was originally published on The New Stack.
Container security is obviously a multi-layered affair, and many of the layers you need to secure and monitor exist outside the containers themselves. Securing them all means securing the whole stack, including registries and orchestrators. One critical layer consists of the host operating system and the kernel that powers it.
In this article, I take a look at how to secure the container host, with a focus on kernel-level security.
I should also point out that I will focus on Linux container hosts in particular. (Sorry, Windows container fans: if you run containers on Windows, host security is also important, but because Windows is much less customizable than Linux, there is not as much you can do from the host perspective to harden a Windows container host.)
Your Kernel and Your Containers
Containers, of course, do not offer complete isolation between the host and the container application. Instead, containers “share” the kernel and other host resources with the host system, as well as each other. Unlike virtual machines, containers do not run their own kernel.
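This sharing is easy to verify: the kernel release reported inside any container matches the host's. A quick sketch follows (the alpine image is just an example, and the docker line is commented out since it requires Docker to be installed):

```shell
# The host kernel release; every container on this host reports the same value.
echo "host kernel: $(uname -r)"
# With Docker available, compare the output of:
# docker run --rm alpine uname -r
```

Because there is only one kernel, patching it on the host patches it for every container at once.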
From a security perspective, we can make a few important observations via this model:
- Compromising the OS will allow all the containers to be compromised as well;
- We can apply host-based controls and security policies individually on each container;
- Container escapes triggered by bugs in application code can bypass the container engine and reach the host OS and the kernel that underpins all the other applications.
Based on that, we should carefully devise a security strategy for the host OS and the kernel shared by the containers, with the aim of reducing the impact of privilege-escalation attacks. This process involves plenty of trial and error, plus time spent consulting the documentation.
Before we embark on modifying the kernel for hosting containers, it’s best to work in a secure sandbox so that we can roll back our changes in case of a bad configuration. We don’t want to corrupt our own kernel or wipe data. One recommendation is to use VirtualBox and a minimal Linux distro like Alpine. Minimal distributions are useful because the less you have running, the smaller your potential attack surface (although, keep in mind that even small distributions have security issues from time to time, as happened recently with Alpine). Once you load it, make sure you create a snapshot so that you can revert later.
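With VirtualBox’s command-line tool, taking and restoring such a snapshot looks like this (the VM name container-lab is a placeholder, and the VBoxManage lines are commented out because they require VirtualBox to be installed):

```shell
VM="container-lab"   # placeholder VM name
echo "snapshotting ${VM} before kernel changes"
# VBoxManage snapshot "${VM}" take "pre-hardening" --description "clean baseline"
# ...experiment with kernel settings; if something breaks, power the VM off and:
# VBoxManage snapshot "${VM}" restore "pre-hardening"
```

The same workflow is available from the VirtualBox GUI under Machine Tools > Snapshots.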
Best Practices for Securing the Kernel
So let’s see how we can harden the kernel by following those simple rules:
- Get the latest kernel: Update the kernel to the latest version as soon as you create the host. Although the kernel itself is a stable piece of software, container-related vulnerabilities are regularly found and fixed, and some bugs are known to linger for a long time before they are resolved. Keeping the kernel updated minimizes that class of risk. Checking the current kernel version is as simple as executing:
$ uname -a
Linux tserver 4.15.0-48-generic …
As of May 2019, the latest stable kernel version is 5.0.13; we can upgrade to the newest kernel packaged for our distribution by running:
$ sudo apt-get dist-upgrade
- Remove the root user and use only SSH authentication: These two rules should be part of every new server deployment. We don’t need to expose our hosts to the super privileges of the root user for when a container escapes isolation. Additionally, passwords for SSH authentication are insecure by design. We can disable them in /etc/ssh/sshd_config by setting the following line:
PasswordAuthentication no
Make sure you configure SSH-based authentication first, however, so you are able to log in with your SSH keys.
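Putting the pieces together, a minimal hardening fragment for /etc/ssh/sshd_config might look like the following (the ChallengeResponseAuthentication line is an extra precaution against keyboard-interactive password prompts, not mentioned above):

```
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
```

After editing, `sudo sshd -t` checks the file for syntax errors, and `sudo systemctl reload sshd` applies the change without dropping existing sessions.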
- Run container security tools like docker-bench-security: There are several trusted tools that perform automated scans on the machine and give you reports about best practices for securing containers in production. For example, with Docker, we have the docker-bench-security tool, and you can run it like so:
$ docker run -it --net host --pid host --userns host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /etc:/etc \
-v /usr/bin/docker-containerd:/usr/bin/docker-containerd \
-v /usr/bin/docker-runc:/usr/bin/docker-runc \
-v /usr/lib/systemd:/usr/lib/systemd \
-v /var/lib:/var/lib \
-v /var/run/docker.sock:/var/run/docker.sock \
--label docker_bench_security \
docker/docker-bench-security
Pay careful attention to the Host Configuration section, as it may list several improvements you can apply to secure the host for container usage.
- Don’t run --privileged containers: There are lots of security profiles and controls, such as AppArmor, seccomp and others, that we can enforce on running containers. But if we run them in --privileged mode, then we are giving them superpowers that are harmful to the hosting OS. From inside such a container we could wipe the host’s disks, load kernel modules, or do other unsafe things. That is clearly very insecure and discouraged;
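As an illustrative alternative (the image name my-image and the NET_ADMIN capability are placeholders, not from the original post), drop all capabilities and grant back only what the workload actually needs instead of reaching for --privileged:

```shell
# Overly broad -- avoid:
#   docker run --privileged my-image
# Narrower: start from zero capabilities and grant back exactly one.
cmd="docker run --rm --cap-drop ALL --cap-add NET_ADMIN my-image"
echo "$cmd"
```

This keeps the container confined by the default AppArmor and seccomp profiles while still permitting the one privileged operation it requires.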
- Load minimal kernel modules: Kernel modules are plugins that are dynamically loaded into the kernel without rebooting. They make the kernel extensible and offer various services. To list all the loaded kernel modules, run:
$ lsmod
Not all modules are useful in a containerized environment, as some expose services that may be exploitable. Deployed containers see the same list of modules from that kernel, but they cannot install new ones without privileged access. It’s important to keep only the necessary kernel modules, especially the ones responsible for enforcing security policies and permissions (such as AppArmor or TOMOYO).
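Beyond listing, you can keep the module set minimal by blacklisting anything unneeded. A hedged sketch follows (the dccp module is just an example of a rarely needed protocol driver, and the blacklist line is commented out because it requires root):

```shell
# lsmod is a formatted view of /proc/modules; count what is loaded.
if [ -r /proc/modules ]; then
  loaded=$(wc -l < /proc/modules)
else
  loaded=0   # non-Linux or restricted environment
fi
echo "kernel modules loaded: ${loaded}"
# Prevent an unneeded module from ever auto-loading:
# echo "blacklist dccp" | sudo tee /etc/modprobe.d/blacklist-dccp.conf
```

Files under /etc/modprobe.d/ are read at module load time, so a blacklist entry takes effect without rebuilding the kernel.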
- Kernel-hardening patches: You can take advantage of specialized frameworks that harden the Linux kernel and offer extra security controls, with some backwards-compatibility tradeoffs; examples include grsecurity and SELinux. There are a few caveats here, however. These frameworks work in different ways, and each one has its own “opinion” about how best to harden the kernel. A comparison of kernel-hardening frameworks is beyond the scope of this article, but do some research to decide which one will work best for you, and apply it;
- Keep yourself updated: You need to keep an eye on new approaches and technologies that allow enforcement of strict isolation and control between the host and the container layer — for instance the nabla-containers project or Google’s gVisor project where the kernel services run in a sandbox. There is more than one road to follow.
One Final Word
No amount of tweaking or customization at the host level will guarantee that your containers are safe from attack. Monitoring your environments for security issues is also critical. But host security is a good place to start. For more tips on this topic, check out Twistlock’s host protection guide.