Compliance is one of those things that most of us don’t like to think about (unless you’re an auditor, I suppose), but that you have to think about if you want to protect your business.

That’s especially true in today’s cloud-native age, when infrastructure is more complex and fast-moving than ever.

To help meet compliance challenges in a cloud-native world, here’s a primer on best practices for compliance in the age of the cloud.

What’s Different About Cloud Native Environments

As a Twistlock contributor outlined in a previous post, 4 Sure-Fire Ways to Achieve Compliance with Microservices, compliance can be difficult to achieve in a modern cloud-native environment. There are more components involved than ever, and to complicate matters, there are more bad actors trying to find and exploit vulnerabilities in the seemingly endless number of components that make up modern environments. At a quick glance, this gives the appearance of an exponentially higher risk profile than that of a traditional app.

In reality, the risk per component is lower than it would be with a traditional monolithic application. The greatest strength of cloud-native application architectures, and of the environments they run in, is that they are composed of individual, fit-for-purpose components. If a security vulnerability is discovered in any one component, that component can be updated without retesting anything beyond its limited scope. This ability to surgically replace components allows for much faster discovery of, and recovery from, any issue that arises.

Shift-Left Security Scanning (Static Analysis) for Compliance Purposes

One of the beautiful concepts behind cloud-native applications is that they leverage CI and CD to build a container image once and then run that same image at every stage of the testing pipeline as it is promoted toward production.

By extending the automation that builds this container image to also scan for security issues, style problems, and other types of compliance, such as licensing, you get a repeatable process that is transparent and easy for an auditor to validate. It also aligns with the many compliance requirements that center on the production-bound artifact, including failing the build if any requirement is violated. An example of a compliance policy would be to require that an application have no dependencies with critical vulnerabilities listed in the CVE database.

Depending on the tool, this can be as easy as adding a new step to your CI configuration. This example uses Twistlock to scan a build in Jenkins.

stage('Scan with Twistlock') {
    steps {
        script {
            twistlockScan ca: '', cert: '', compliancePolicy: 'warn', \
                dockerAddress: 'unix:///var/run/docker.sock', \
                ignoreImageBuildTime: true, key: '', logLevel: 'true', \
                policy: 'warn', repository: BUILTIMAGE, \
                requirePackageUpdate: false, tag: VERSION, \
                timeout: 10
        }
    }
}
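Where a purpose-built plugin like the one above is not available, the same gate can be scripted by hand. The following is a minimal sketch; the JSON report shape, field names, and sample CVE IDs are assumptions for illustration, not any particular scanner's schema:

```python
# Hypothetical sketch of a hand-rolled compliance gate for a CI stage.
# The report shape below is an assumption; real scanners each emit
# their own schema, so map the field names to your tool's output.
import json

# Stand-in for a report file a scanner would write during the build.
SAMPLE_REPORT = json.dumps({
    "image": "registry.example.com/app:1.0.0",
    "vulnerabilities": [
        {"id": "CVE-2018-0001", "severity": "critical", "package": "openssl"},
        {"id": "CVE-2018-0002", "severity": "low", "package": "bash"},
    ],
})

def blocking_vulns(report_json, blocked=("critical", "high")):
    """Return the findings severe enough to fail the build."""
    report = json.loads(report_json)
    return [v for v in report["vulnerabilities"] if v["severity"] in blocked]

def gate(report_json):
    """Exit code for the CI stage: 1 blocks the promotion, 0 allows it."""
    return 1 if blocking_vulns(report_json) else 0

for vuln in blocking_vulns(SAMPLE_REPORT):
    print(f"BLOCKED: {vuln['id']} ({vuln['severity']}) in {vuln['package']}")
print("exit code:", gate(SAMPLE_REPORT))
```

Returning a non-zero exit code from the scan step is what most CI servers, Jenkins included, interpret as a failed build, which is exactly the enforcement behavior an auditor will want to see.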

Private Registries and Repositories for All Artifacts

As part of the build and deployment pipelines in a CI/CD system, applications and container images need to be stored and indexed in a registry after they are built, so they can be retrieved later. Many cloud-native applications built by organizations to support their business are proprietary and often contain trade secrets, so they can't be stored in just any repository. They need a safe, secure location that can be scanned regularly for newly discovered vulnerabilities and that ensures the integrity of the artifacts as they are reused over time.

The best way to do this for containers is to use a private registry. There are Software-as-a-Service offerings for this purpose, including Amazon ECR and Google Container Registry, but many companies prefer to run a private instance of Sonatype Nexus or JFrog Artifactory, both of which also support additional artifact types such as Maven and Node.js dependencies.

As an example, if Maven is used as the build framework for a Java application, add a few lines to the ~/.m2/settings.xml file to identify credentials to connect to the private repository:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <servers>
        <server>
            <id>ourNexus</id>
            <username>jenkins</username>
            <password>SomePassword</password>
        </server>
    </servers>
</settings>
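Note that a plaintext password in settings.xml is itself a finding many auditors will flag. Maven has built-in password encryption for exactly this: run mvn --encrypt-master-password and store the result in ~/.m2/settings-security.xml, then replace the plaintext value in settings.xml with the output of mvn --encrypt-password. A sketch, where the braced strings are placeholder ciphertext, not real values:

```xml
<!-- ~/.m2/settings-security.xml: holds the master key -->
<settingsSecurity>
    <master>{placeholderMasterKey=}</master>
</settingsSecurity>
```

The server entry in settings.xml then carries the encrypted form, e.g. <password>{placeholderEncryptedPassword=}</password>, and Maven decrypts it at build time using the master key.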

Then, in the pom.xml for the individual application modules, a distributionManagement section is needed:

<distributionManagement>
    <repository>
        <id>ourNexus</id>
        <url>http://ip-1-2-3-4.compute.internal:8081/repository/our-releases/</url>
    </repository>
    <snapshotRepository>
        <id>ourNexus</id>
        <url>http://ip-1-2-3-4.compute.internal:8081/repository/our-snapshots/</url>
    </snapshotRepository>
</distributionManagement>

Software-Defined Networking and Compliance Access Control

Within the world of container management in a cloud-native environment, a networking layer is required. This layer should be robust, dedicated, software-defined, and manageable by the container platform. Containers in a proper cloud-native environment do not just leverage the default networking available on the compute hosts; it is especially important to provide additional security controls and isolation for the traffic flowing between services. Policies often require that only encrypted traffic be allowed, so a proper network policy can block all traffic except the ports used for encrypted protocols, such as 443 for HTTPS and 22 for SSH.

There are various options available for software-defined networks designed to support containers and cloud environments, including open source options like Open vSwitch, Tigera Calico, and Juniper Contrail.

If you were to use Calico with Kubernetes, the manifest can be downloaded and applied with two commands:

curl \
https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/calico.yaml \
-O
kubectl apply -f calico.yaml

Network policies for Calico are created as needed by administrators and applied to new applications as they are deployed.

This is a sample policy that denies all outgoing (egress) traffic from every pod in the advanced-policy-demo namespace. Because the empty podSelector matches all pods, it would apply to an application such as NGINX deployed there:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: advanced-policy-demo
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
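With everything denied by default, access is then opened back up selectively. Building on the deny-all policy above, a companion policy could allow egress only on TCP 443, matching the encrypted-traffic-only requirement discussed earlier (the policy name and namespace here are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-https
  namespace: advanced-policy-demo
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: TCP
      port: 443
```

Because NetworkPolicy rules are additive, pods matched by both policies end up with exactly one permitted path out: HTTPS on port 443.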

Conclusion

Leveraging the proper mix of available technologies to build and secure cloud-native environments allows an organization to maintain both environment-wide and component-level security policies, with each layer addressing the areas of security it handles best. There are even products available that can monitor, maintain, and enforce these policies without each one having to be handcrafted. Such products allow compliance to be applied consistently and made readily available to any auditor who comes along to triple-check conformance with an existing regulation like GDPR, or with whatever new policy arrives in the future.
