Tips to Navigate Operationalizing DevSecOps
Modern enterprises are implementing both the tools and the cultural changes required to embrace a DevSecOps mindset and approach. Often this means adopting a container security platform, which lets them draw on the experience, knowledge, and resources of the platform vendor while freeing engineering staff to focus on the company’s core mission.
This list of tips was aggregated from discussions with a large group of developers, DevOps practitioners, and security teams.
Do you know where your containers come from? Are your developers downloading container images and libraries from unknown and potentially harmful sources? Do the containers use third party library code that is obsolete or vulnerable?
Establish trusted sources of container images as a policy. Ensure that you have a runtime gate checker that only permits trusted images and containers to run on your hosts.
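As a sketch, a trusted-source policy can be as simple as an allowlist of registry prefixes consulted before any image is admitted to a host. The registry names below are hypothetical placeholders for your organization's approved sources:

```python
# Minimal sketch of a trusted-registry gate; the registry prefixes
# are placeholders for your organization's approved sources.
TRUSTED_REGISTRIES = ("registry.example.com/", "quay.io/myorg/")

def is_trusted(image_ref: str) -> bool:
    """Return True if the image reference comes from an approved registry."""
    return image_ref.startswith(TRUSTED_REGISTRIES)

def admit(image_ref: str) -> str:
    """Decide whether a container image may run on this host."""
    return "allow" if is_trusted(image_ref) else "deny"

print(admit("registry.example.com/payments/api:1.4.2"))  # allow
print(admit("docker.io/random/app:latest"))              # deny
```

A production gate would enforce the same check in the runtime itself (for example, via an admission controller or authorization plugin) rather than in application code.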
Container images are rarely built from scratch; they’re typically built on some base image, which is itself built on top of other base images – consider a Node.js application built on top of Apache, which could be on top of an Ubuntu base image.
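In practice the layering looks like this. The Dockerfile below is an illustrative sketch (image names and tags are examples): the application image inherits every layer of its base images, including any vulnerable libraries they carry.

```dockerfile
# Illustrative sketch only; image names and tags are examples.
# The Node.js base image is itself built on a Debian base image,
# so every layer below it (and any vulnerable library inside) is inherited.
FROM node:18-bullseye
WORKDIR /app
COPY package*.json ./
# Third-party npm libraries become image layers as well.
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```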
A developer typically grabs a base image and other layers from public third party sources. These images and libraries may contain obsolete or vulnerable code, thereby putting your application at risk. Additionally, many existing vulnerability scanning tools may not work with containers.
Use a vulnerability management tool that can parse container image formats and detect vulnerable libraries inside a container image before it progresses to production. Employ an enforcement function at runtime to ensure that vulnerable images and containers are not deployed.
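The enforcement idea can be sketched in a few lines: parse a scanner's report and refuse to promote the image if it contains findings above a severity threshold. The JSON report format here is invented for illustration; real scanners each have their own schema.

```python
import json

# Hypothetical scan report; adapt the parsing to your scanner's schema.
REPORT = """
{
  "image": "registry.example.com/payments/api:1.4.2",
  "findings": [
    {"id": "CVE-2023-0001", "severity": "HIGH", "package": "openssl"},
    {"id": "CVE-2023-0002", "severity": "LOW",  "package": "zlib"}
  ]
}
"""

BLOCKING = {"HIGH", "CRITICAL"}

def gate(report_json: str) -> bool:
    """Return True if the image may progress; False if it must be blocked."""
    report = json.loads(report_json)
    blockers = [f for f in report["findings"] if f["severity"] in BLOCKING]
    for f in blockers:
        print(f"blocking finding: {f['id']} ({f['severity']}) in {f['package']}")
    return not blockers

print("promote" if gate(REPORT) else "block")  # block: one HIGH finding
```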
The Center for Internet Security (CIS) has published a Docker benchmark, which covers configuration and hardening guidelines for containers, images, and hosts that run containers.
For example, one of the best practices is to remove non-essential services from the production host to mitigate potential risks. Another example is restricting kernel capabilities within containers. A recommended practice is to audit kernel capabilities associated with your containers and remove the unnecessary ones.
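Capability restriction can be declared directly in a Compose file, for example. This fragment is a sketch; the service and image names are placeholders:

```yaml
# Sketch of a hardened service definition; names are placeholders.
services:
  web:
    image: registry.example.com/web:1.0
    # Drop all kernel capabilities, then add back only what the
    # container actually needs (here, binding to a privileged port).
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true   # immutable root filesystem
```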
It is imperative that you not only follow but also enforce the hardening practices to mitigate runtime risks. Avoid manual verifications and checks, as that can be labor intensive.
Automate Docker Bench hardening checks and enforcement. Make those practices part of your essential development and deployment processes prior to live production.
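One way to automate the checks is the open-source docker-bench-security project, which scripts the CIS benchmark. Below is a sketch of wiring it into a pipeline stage; the CI syntax is generic, and the invocation is abbreviated (the project's README lists the full set of required mounts and flags):

```yaml
# Generic pipeline-stage sketch; adapt to your CI system.
# The docker-bench-security invocation is abbreviated; see the
# project's README for the complete set of mounts and flags.
harden-check:
  script:
    - docker run --rm --net host --pid host --userns host
        -v /var/run/docker.sock:/var/run/docker.sock:ro
        docker/docker-bench-security
```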
If you use containers, you are probably using Continuous Integration (CI) or Continuous Delivery (CD) pipeline tools. Popular ones include Jenkins, CircleCI, and Codefresh.
The best place to detect and fix security vulnerabilities is during development, as part of the CI/CD workflow. A possible implementation is for the CI/CD tool to initiate security scanning whenever a new image is built and to consume the results in the native CI/CD console.
The integration should also be able to fail builds and force bug fixes before the image progresses through the pipeline.
Ensure your container vulnerability-scanning tool can integrate easily with your CI/CD pipeline tools for both vulnerability detection and build management.
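For instance, a pipeline stage might run an image scanner and fail the build on serious findings. This sketch uses Trivy's exit-code convention; the stage syntax is generic and the image name is a placeholder, so adapt both to your CI tool:

```yaml
# Generic CI-stage sketch; the image name is a placeholder.
scan-image:
  script:
    # --exit-code 1 makes the scanner return non-zero (failing the
    # build) when HIGH or CRITICAL vulnerabilities are found.
    - trivy image --exit-code 1 --severity HIGH,CRITICAL
        registry.example.com/payments/api:1.4.2
```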
When using containers like Docker in production, many organizations have a hard time enforcing role-based policies in the environment. For starters, Docker requires root privileges to run any Docker command, rendering enterprise role-based policies ineffective.
Docker has since amended its privilege management framework with more fine-grained access control capabilities — Twistlock actually contributed that part of the code. To take advantage of that, you will need to write an authorization plugin and add the plugin to the Docker daemon configuration or leverage a platform like Twistlock that provides this capability.
Get familiar with the authorization plugin, but also consider using an access control tool that offers integration with enterprise directories, fine-grained access management, and also logging and auditing for all your Docker daemon accesses.
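Registering an authorization plugin happens in the Docker daemon configuration, typically /etc/docker/daemon.json; the plugin name below is a placeholder for whichever plugin you write or adopt:

```json
{
  "authorization-plugins": ["my-authz-plugin"]
}
```

The daemon must be restarted or reloaded for the change to take effect; after that, every API request to the daemon is passed through the plugin for an allow/deny decision.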
Containers should be minimal, declarative, and immutable. Those characteristics mean that it is actually possible to build a reliable baseline for the containerized application. Using this baseline at runtime, you can detect anomalies and active threats much more accurately than with monolithic applications that change frequently.
However, building the baseline is not a trivial task, especially when you have many containers spinning up and down dynamically. Automating the baselining process, the detection actions, as well as the enforcement, is the only way to scale up runtime security.
Establish automatic behavior profile generation and anomaly detection functions for containers. Ensure that the tool you use does not rely on manual actions, and can correlate, manage, and automate the different controls to give you central visibility, detection, and response.
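The baselining idea can be sketched in a few lines: record which processes a container image is observed running during a learning window, then flag anything outside that set at runtime. A real system baselines far more (syscalls, network flows, file access), and the observation data here is invented for illustration.

```python
from collections import defaultdict

# Invented observation data: (image, process) pairs seen while learning.
LEARNING_WINDOW = [
    ("web:1.0", "nginx"),
    ("web:1.0", "nginx"),
    ("api:2.3", "node"),
]

def build_baseline(observations):
    """Map each image to the set of processes it normally runs."""
    baseline = defaultdict(set)
    for image, process in observations:
        baseline[image].add(process)
    return baseline

def is_anomalous(baseline, image, process) -> bool:
    """Flag any process not seen for this image during learning."""
    return process not in baseline.get(image, set())

baseline = build_baseline(LEARNING_WINDOW)
print(is_anomalous(baseline, "web:1.0", "nginx"))  # False: expected process
print(is_anomalous(baseline, "web:1.0", "xmrig"))  # True: never seen before
```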
Containers are flexible and easy to use, but that also means it’s easy to end up with many instances of containers and images, some of which may be obsolete and risky. We recommend that you perform regular audits to identify nonessential containers and images and eliminate them from your systems.
Docker provides commands to report the number of containers and images on a particular host, and to remove them if sprawl starts to consume too many system resources and threaten system consistency.
Automate image and container audit and management workflows to eliminate unused images and containers from your registries and hosts.
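A minimal sketch of that automation: a scheduled job that prunes stopped containers and images unused for a week. The prune subcommands and the `until` filter are standard Docker CLI; the schedule below is just an example.

```
# crontab fragment (example schedule: Sundays at 03:00).
# Removes stopped containers, then unused images, older than 7 days.
0 3 * * 0  docker container prune -f --filter "until=168h"
5 3 * * 0  docker image prune -af --filter "until=168h"
```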
Twistlock helps hundreds of organizations worldwide operationalize DevSecOps with a central platform that delivers visibility and security across cloud native environments, at every stage of the application lifecycle.
See Twistlock in Action