How do you transition from simply running containers to running them securely? Below, I reflect on my personal experience learning to secure Docker containers in all their complexity.
Although I am an IT professional by trade, I have spent the last three years of my life training to fly as a scientist on a commercial space mission. A few months ago, I took a spacecraft egress course that focused on things like launch aborts, off-nominal reentry, and post-landing recovery operations. I will spare you all of the technical detail, but much of the curriculum focused on the concept of cost versus probability.
For a space launch, only a limited number of recovery assets (such as ships and helicopters) are available. Those resources are therefore usually staged in locations chosen according to the probability of the spacecraft coming down in a given area. If a spacecraft is orbiting the Earth above the equator, for example, then it would not make sense to stage recovery assets at the North Pole.
As strange as it seems, this same basic concept of risk management can also be applied to container security at scale. Every organization has a limited IT security budget, so security spending has to be directed at mitigating the most likely risks. At the same time, risks change as the IT infrastructure evolves, and perhaps nowhere is this more true than with containers.
Most organizations probably get started using containers by containerizing a few applications that are used internally. In the case of a small to medium-sized business, using containers in this way probably does not present a huge risk. There is a low degree of exposure for the containerized applications, because the applications are only accessible to trusted employees. Although such a container environment should adhere to basic security best practices, it probably does not make sense to spend a significant portion of your security budget on a tiny container deployment that only hosts a couple of applications.
Container Scalability and Security
As the organization increases the scale of its container infrastructure, however, the risks tend to increase. Generally speaking, the greater a piece of software’s exposure to potential threats, the greater the security risk. Therefore, as an organization begins hosting an increasing number of containerized applications, it will be all the more important for additional security to be put in place. This is especially true for public-facing web applications that are running in containers.
One of the first things that organizations tend to do as the scale of their container infrastructure increases is to create a cluster. Clusters often utilize additional infrastructure components, such as load balancers or shared storage. (The actual requirements vary considerably depending on the platform that is being used.) Any infrastructure-level component that is used by a cluster deserves extra scrutiny in order to make sure that it is configured securely. Likewise, it is important to make sure that the cluster hosts themselves are configured in a secure manner, and are not running any unnecessary processes.
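As a small illustration of that kind of scrutiny, the sketch below flags a couple of risky settings in a Docker daemon configuration file. The file path and the specific checks are illustrative assumptions on my part, not a complete hardening audit; purpose-built tools go much further.

```shell
# A minimal sketch of host-configuration scrutiny, assuming the Docker
# daemon.json format. The checks shown are illustrative only.
audit_daemon_config() {
    config="$1"   # path to a daemon.json-style file
    status=0

    # An insecure-registries entry allows image pulls over plain HTTP.
    if grep -q '"insecure-registries"' "$config"; then
        echo "WARN: insecure-registries is set"
        status=1
    fi

    # Exposing the daemon on a TCP socket without TLS verification
    # lets anyone who can reach the port control the daemon.
    if grep -q 'tcp://' "$config" && ! grep -q '"tlsverify": *true' "$config"; then
        echo "WARN: daemon listens on TCP without tlsverify"
        status=1
    fi

    return "$status"
}
```

A check like this can run from a configuration-management tool or a cron job on each cluster host, so that drift from the secure baseline is caught early.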
Container Image Management
As the container infrastructure grows, it is also important to be more careful about using trusted container images. Blindly downloading a base image from the Internet poses a huge security risk. As such, it is highly recommended to establish a repository of trusted base images that have been carefully evaluated, and have been verified to comply with the organization’s security requirements. All future containers should be built from the repository of trusted base images.
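One lightweight way to enforce a trusted-image policy is to check every Dockerfile's FROM lines against the organization's registry before a build is allowed. The sketch below does exactly that; the registry name is a placeholder of my own, and a real pipeline would typically also verify image signatures rather than rely on the name alone.

```shell
# A minimal sketch of trusted-base-image enforcement, assuming
# Dockerfiles as input. TRUSTED_REGISTRY is a placeholder name.
TRUSTED_REGISTRY="registry.example.com"

check_base_images() {
    dockerfile="$1"
    status=0
    # Every FROM line must reference the trusted registry, pinned by digest.
    for image in $(grep -i '^FROM' "$dockerfile" | awk '{print $2}'); do
        case "$image" in
            "$TRUSTED_REGISTRY"/*@sha256:*)
                echo "OK: $image" ;;
            *)
                echo "FAIL: $image is not a digest-pinned image from $TRUSTED_REGISTRY"
                status=1 ;;
        esac
    done
    return "$status"
}
```

Pinning by digest rather than by tag matters because a tag such as `latest` can silently point to a different, unvetted image tomorrow.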
Some organizations have also found that they can improve security by building multiple container environments in order to physically isolate certain workloads from one another. You probably would not want to place a container running a public-facing web application onto the same host as a container that runs your most sensitive business application. Even though the containers themselves can be logically isolated from one another, you have to consider what might happen if the container host (or cluster) were compromised. Using one cluster for highly sensitive internal workloads, and a completely separate cluster for public-facing or less sensitive workloads, can help you protect the workloads that really matter.
As the scale of an organization's container infrastructure increases, and the organization begins to run more and more containers, it becomes increasingly important to ramp up the organization's container security efforts. When doing so, you should create a formal policy that establishes security protocols for dealing with containers. It is also important to implement automated enforcement mechanisms wherever possible, to ensure that everyone adheres to the requirements of the security policy.
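To make the idea of automated enforcement concrete, here is one small policy check of the kind that could be wired into a CI pipeline: it assumes, for illustration, a policy requiring every Dockerfile to declare a non-root USER. The policy rule itself is my example, not a universal mandate.

```shell
# A minimal sketch of one automated policy check, assuming a policy
# that requires every Dockerfile to run as a non-root USER.
check_nonroot_user() {
    dockerfile="$1"
    # The last USER directive in a Dockerfile wins, so inspect only that one.
    user=$(grep -i '^USER' "$dockerfile" | tail -n 1 | awk '{print $2}')
    case "$user" in
        ""|root|0)
            echo "FAIL: $dockerfile does not end with a non-root USER"
            return 1 ;;
        *)
            echo "OK: $dockerfile runs as $user"
            return 0 ;;
    esac
}
```

Running checks like this automatically on every commit turns the written security policy into something that is actually enforced, rather than a document that people are merely asked to follow.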
Subscribe to our blog for cloud-native security updates, or get in touch for a demo.