Dockerfiles are the blueprints of your container environment. In order to keep your Docker environment secure, efficient and effective, you should start by creating the best Dockerfiles possible.
After all, if you were building a house, you’d want to make sure you were starting with solid blueprints. If your blueprints fail to include a bathroom, or place your kitchen in the attic, your house is not going to be very nice to live in, no matter how well you execute on the construction.
Dockerfiles function in a similar way. Any mistakes you make in your Dockerfiles will continually hamper your container environment. By exercising forethought, however, you can use Dockerfiles to help make your container environment the best it can be.
This article identifies best practices for writing Dockerfiles. It emphasizes security in particular, but also discusses other important considerations, such as usability and efficiency.
Minimize, Minimize, Minimize
Perhaps the most important strategy when creating Dockerfiles is to take a minimalist approach to what you include in a container.
A minimalist approach means that you include in your Dockerfile only what is strictly necessary for the associated service or application to do its job, and nothing more.
Unnecessary instructions or configuration variables inside a Dockerfile are a problem because they may:
- Make your Dockerfile harder for you and others to read and work with.
- Increase the time that it takes to build your container images.
- Increase the size of your container images, which in turn increases the time it takes for people to download them.
- Add bloat to your Docker environment by starting processes that consume data and/or resources unnecessarily.
- Create a wider attack surface by introducing more potential security vulnerabilities into your environment.
For all of these reasons, avoid the temptation to install packages or run commands within your Dockerfile that are not essential for your containers. Consider every instruction and every apt-get argument very carefully. (If you don’t need an SSH server, for example, don’t install it.)
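As an illustration of the minimalist approach, a Dockerfile for a hypothetical Python service might contain only the following (the file names and the curl package are assumptions for the example, not requirements):

```dockerfile
# Start from a slim parent image rather than a full OS image.
FROM python:3.12-slim

# Install only what the service needs. --no-install-recommends avoids
# pulling in suggested packages, and removing the apt package lists
# keeps the image layer small.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# Copy in just the application code, nothing else.
COPY app.py /app/app.py
WORKDIR /app

# One process per container; no SSH server, no extra daemons.
CMD ["python", "app.py"]
```

Everything in this file exists to serve the application; there is nothing a reader has to puzzle over, and nothing an attacker gets for free.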
Keep Your Dockerfiles Readable
Your Dockerfiles are not disposable chunks of text that you write once and then never look at again. They’re crucial files that you’ll ideally be able to modify and extend on a continuous basis, as your container environment evolves and your needs change.
For that reason, keep readability in mind when writing the files. Break up long command lines using backslashes. Indent when it makes sense to do so. Insert comments (which you can do by prefacing a line with a hash) where helpful to remind yourself or explain to others what a particular chunk of code does.
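For example, a long RUN command becomes far easier to scan when it is broken up with backslash continuations, indented, and commented (the packages here are illustrative):

```dockerfile
FROM debian:bookworm-slim

# Install build dependencies in a single layer. The backslashes break
# the long command into readable lines, and indentation groups the
# package list visually.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        git \
        make && \
    # Remove the package index to keep the image small.
    rm -rf /var/lib/apt/lists/*
```

The same instructions written on one unbroken line would build an identical image, but the next person to edit the file would pay the price.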
Writing Dockerfiles is probably not the most fun part of your job, and you may be tempted to throw them together quickly. But spending a little extra time to optimize readability will do much to ensure that your Dockerfiles are easy for you to keep reusing, and for other people on your team to work with, as long as necessary.
Use a Secure and Efficient Parent Image
In most cases, you’ll include in your Dockerfile a parent image on which your container image will be based. The parent image is the foundation for everything that runs inside your container, so choose it wisely.
You want a parent image that includes as much of the functionality you require as possible, in order to minimize the number of extra packages that need to be installed to create your container image. At the same time, however, you want to avoid a parent image that contains unnecessary packages or services, which (as noted above) add bloat and potential security vulnerabilities to your containers.
It’s easier to add extra packages to a Dockerfile than to remove them from a parent image, so if you can’t find an image that provides the ideal amount of functionality for your container, start with one that has a minimal footprint, such as an Alpine Linux image.
You can also use Docker’s “scratch” image, which essentially means you create an entire image from the ground up. This is a good way to avoid bloat, but it means you’ll have to perform extra steps to create your image, as well as take extra care for security.
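To sketch the two minimal-footprint options just mentioned (the package and binary names are assumptions for illustration):

```dockerfile
# Option 1: a minimal distribution image. Alpine's apk package manager
# can add just the pieces you need; --no-cache avoids storing the
# package index in the image.
FROM alpine:3.19
RUN apk add --no-cache python3

# Option 2 (an alternative, not part of the same build): start from
# Docker's empty "scratch" image and copy in a statically linked
# binary. There is no shell and no package manager in the result, so
# debugging and patching are entirely up to you.
# FROM scratch
# COPY myapp /myapp
# ENTRYPOINT ["/myapp"]
```

The scratch route yields the smallest possible attack surface, but because the image contains nothing beyond your binary, you take on responsibility for everything a base distribution would normally provide, including certificate bundles and timezone data if your application needs them.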
Use EXPOSE Wisely
From a security standpoint, one of the most important Dockerfile directives is EXPOSE, which defines which ports should be open at runtime.
(The operative word in that last sentence is should. Whether a given port is actually open in a running container depends on the -p or -P flags, which control the publishing of ports, passed when the container is started. But because port publication is usually based on the EXPOSE instructions, setting them correctly is crucial.)
If you’re reading this article, you probably already understand why it’s bad from a security perspective to open ports unnecessarily. Still, I note how important the EXPOSE directive is because you may not think about port configuration until you are actually starting your Docker environment. Yet, in reality, port configuration begins in the Dockerfile, so think carefully about which ports in your application need to be open, and define them accordingly. Leave the others closed.
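In the Dockerfile itself, this amounts to declaring only the port your service actually listens on (port 8080 here is an assumption for the example):

```dockerfile
# Document the single port the application listens on. EXPOSE does not
# open the port by itself; it records intent for the person or tool
# that runs the container.
EXPOSE 8080
```

At run time, docker run -P publishes every EXPOSEd port to an ephemeral host port, which is exactly why a stray EXPOSE line can silently widen your attack surface, while docker run -p 8080:8080 publishes an explicit mapping. Declaring only the ports you need keeps both behaviors predictable.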
Your Dockerfiles are one of the most important sets of configuration data for your Docker environment. Writing a good Dockerfile does much to ensure that your container environment will be secure, efficient, and easy to maintain over the long term.