One of the main reasons containers have become so popular is that they offer isolation between the host system and containers in terms of processes, permissions and resources. This includes data, which can be strictly isolated within a container to maintain a clear separation of concerns. Security controls can then be applied to protect host data against unauthorized access or tampering.

What does this look like in practice? This article explains by discussing important data security considerations for designing and developing containerized applications. It also evaluates some recommended solutions for mitigating the risks present in those environments, and shows how to proactively prepare for the event of a data breach.

1. Secure Data in Motion

Data in transit is data moving from one location to another, whether between locations on the same host or across a network. Whenever data is moving, essential protection controls must be in place to ensure its confidentiality and integrity.

Effective security perimeters must be established on top of the container networking plane to monitor, and if necessary block, unauthorized movement of data. Notifications must be sent to administrators for actionable incidents, as any data loss can be very damaging for a business, especially if it involves Personally Identifiable Information (PII). Proper transport-layer encryption must be established before any internal or external communication takes place.
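As a sketch of what such a control can look like in Kubernetes, a default-deny NetworkPolicy blocks all egress traffic from a set of pods unless a more specific policy explicitly allows it. The namespace name below is an illustrative placeholder:

```shell
# Generate a default-deny egress policy; "payments" is a hypothetical
# namespace. Because no egress rules are listed, all outbound traffic
# from the selected pods is blocked until another policy allows it.
cat > deny-egress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: payments
spec:
  podSelector: {}      # empty selector matches every pod in the namespace
  policyTypes:
    - Egress
EOF

# Applied with: kubectl apply -f deny-egress.yaml
```

Note that NetworkPolicy objects only take effect when the cluster's network plugin supports them, so this is a building block rather than a complete perimeter on its own.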

On top of that, applications need protection from common attacks that target other private resources. In such cases, web application firewalls have to be configured where required, and they need to be cloud-native aware, as containers often have ephemeral networking topologies.

2. Secure Data at Rest

In a containerized application, data can be stored in two ways. It can be mounted from an external source such as a volume, a tmpfs mount or a bind mount, or it can be stored inside the container's writable layer using a storage driver. Choosing the right storage type is important, as each type has its pros and cons.
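In Docker terms, the external options correspond to mounts declared at run time. A rough sketch, in which the image and path names are illustrative placeholders:

```shell
# Named volume: managed by Docker, survives container removal.
docker volume create app-data
docker run --mount source=app-data,target=/var/lib/app myapp:latest

# tmpfs mount: kept in memory only, discarded when the container stops.
docker run --mount type=tmpfs,destination=/scratch myapp:latest

# Bind mount: maps a host directory directly into the container.
docker run --mount type=bind,source=/srv/config,target=/etc/app,readonly myapp:latest
```

Anything written to the container's own filesystem instead goes through the configured storage driver (typically overlay2 on current Docker releases) and disappears with the container.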

In either case, the same security principles apply. A proper defense-in-depth plan is needed to provide multiple layers of protection, and suitable filesystem-level encryption with a modern crypto suite must be used to protect data from prying eyes. Platforms like Kubernetes also let you enforce role-based access controls, which are useful for granting permissions on a need-to-know basis.
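In Kubernetes, for instance, need-to-know access to stored secrets can be expressed as a namespaced Role. A minimal sketch, with hypothetical resource and namespace names:

```shell
# A Role that lets its subjects read secrets in one namespace only;
# "secret-reader" and "payments" are illustrative names.
cat > secret-reader-role.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]   # read-only: no create, update, or delete
EOF

# Bound to a service account with something like:
#   kubectl create rolebinding secret-reader-binding \
#     --role=secret-reader --serviceaccount=payments:app-sa -n payments
```

Because a Role is namespaced, a workload bound to it cannot read secrets anywhere else in the cluster, which is exactly the need-to-know scoping described above.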

3. Lock the Filesystem

Another option is to lock the filesystem against writes (in other words, to set up a read-only filesystem). This strategy provides added security by mitigating the risk of accidental changes to sensitive files or flooding of the host filesystem. It is particularly useful when your application is not intended to modify or write anything to the filesystem, excluding any volumes created. It's safer to prevent the container from performing any writes, and to provide only a small temporary space for operating system utilities that need a writable location.
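With Docker, this pattern is a single flag at run time; the tmpfs size and image name below are illustrative:

```shell
# The root filesystem becomes read-only; /tmp is the only writable
# location, backed by a small in-memory tmpfs that is size-limited and
# discarded when the container exits.
docker run --read-only \
  --tmpfs /tmp:rw,noexec,nosuid,size=64m \
  myapp:latest
```

The noexec and nosuid options further prevent the one writable location from being used to stage and run payloads.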

4. Keep Sensitive Information Secret

Separate the process of building your containers from the process of configuring them. When defining the steps required to build a container image, never copy keys, secrets or anything else private into it, as they could be accidentally committed to a version control system. It's best to keep your build-time configuration to a minimum and add the sensitive tokens and keys at run time. Then you only need to make sure you provision those secrets through a secure medium, such as a secrets management platform like Vault.
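A rough sketch of run-time injection with Vault, where the secret path and field names are hypothetical:

```shell
# Fetch the credential from Vault at deploy time, never at image build
# time; "secret/myapp/db" is an illustrative path.
export DB_PASSWORD="$(vault kv get -field=password secret/myapp/db)"

# Hand it to the container as an environment variable; the value is
# never baked into an image layer or committed to version control.
docker run -e DB_PASSWORD myapp:latest
```

Orchestrators also offer first-class mechanisms such as `docker secret` and Kubernetes Secrets, which avoid exposing values through the environment at all.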

5. Limit Kernel Capabilities

By default, a container runs with a limited set of kernel capabilities as a means of access control, and many applications do not need all of the capabilities that are enabled by default. For example, you can disable the mounting of filesystems: in the event of a privilege escalation, an intruder who gets into the container would then be unable to mount an external filesystem and exfiltrate sensitive information.
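With Docker, this allow-list approach looks like the following; the retained capability is just an example:

```shell
# Drop every kernel capability, then add back only what the app needs.
# Without CAP_SYS_ADMIN, mount(2) fails inside the container even when
# the process runs as root.
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp:latest
```

Starting from `--cap-drop=ALL` and adding capabilities back one by one is generally safer than dropping individual capabilities from the default set.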

Careful management of those capabilities, however, can become a very detailed process. To ease the application of those requirements, DevOps engineers can apply security profiles provided by kernel security modules like AppArmor, SELinux, GRSEC, or another appropriate hardening system. Those modules can apply additional safety checks and are independent of any container-specific technology, which makes them more broadly applicable.
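Such profiles are attached at run time with `--security-opt`. As a related example using seccomp, which Docker supports alongside AppArmor and SELinux, the following sketch denies the mount-related system calls mentioned earlier; the filename is arbitrary:

```shell
# A minimal seccomp profile: allow everything except mount/umount2,
# which return an error instead of executing.
cat > no-mount.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["mount", "umount2"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
EOF

# Used with: docker run --security-opt seccomp=no-mount.json myapp:latest
# An AppArmor profile is attached similarly:
#   docker run --security-opt apparmor=my-profile myapp:latest
```

A production profile would normally invert this logic (deny by default, allow a known-good list), but the deny-list form keeps the example short.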

6. Reduce Risk with a Compliance Framework

Managing compliance in a containerized environment means meeting several key requirements within strict timeframes. There is a constant need to ensure the ongoing confidentiality, integrity, availability, and resilience of processing systems and services. That alone necessitates a security framework that keeps pace with current legislation and security incidents. In other words, you need an established, up-to-date framework for compliance and risk management that will support your operations.

With Twistlock, you can begin managing the enforcement of over 200 compliance benchmarks, including the Center for Internet Security's benchmarks for Docker and Kubernetes. The Runtime Defense system monitors your entire environment (network, filesystem, processes, and system calls) for potential threats, using advanced machine learning algorithms.

Interested in learning more? Get an evaluation from Twistlock, or learn more about the platform here.

Related Container Security Posts:

  • Container Basics Whitepaper
  • 8 Powerful Tips to Improve Container Runtime Security
  • Integrating Container Security with Google Cloud Platform: Twistlock and Cloud SCC