It’s 2019. Do you know where your applications are?*
Probably not. Today’s applications don’t live in static, isolated server environments that are easy to identify and track. They live in the cloud, they live in shared server instances, they move around between hosts. They could even be serverless functions that don’t really “exist” permanently at all.
When it comes to security, the inability to know exactly where applications live presents a challenge. It means that focusing on application security alone (which addresses only security flaws within the application) no longer works.
Today, security engineers need also to think about the overall context of the application in order to secure it, and focus on developing security solutions that work no matter where the application lives.
Let’s take a look at how to do that by discussing major facets of software security that engineers need to think about today in order to keep applications secure no matter where they live.
*For the uninitiated, there’s a joke in there that involves an old PSA familiar to Americans of a certain vintage.
Container adoption is on the rise! Containers have become one of the most popular deployment methods because they are easy to use and flexible. But as container adoption increases, so should concern about best practices for securing containers.
With containers, there are many software layers: an orchestrator, a container registry, images, and most likely several different microservices within your application. All of these services need to communicate with each other in a way that is secure but flexible, and containers move frequently from development to staging to production environments. All of these factors mean that security engineers need to think about security from the bottom up.
A few red flags to be aware of when thinking about container security are:
- Image vulnerabilities and compliance concerns
A container image bundles a large amount of code: a base OS layer, language runtimes, libraries, and your application itself. Each of these moving parts is prone to vulnerabilities. Before deploying images, the best practice is to scan them for both vulnerabilities and compliance issues. This also helps surface embedded secrets or malware.
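In practice this is done with a dedicated scanner (Trivy, Clair, or a commercial tool), but the core idea can be sketched in a few lines: compare the packages in an image manifest against a feed of known-vulnerable versions. The manifest and feed formats below are simplified stand-ins for illustration, not any real scanner's data model.

```python
# Minimal sketch of image vulnerability scanning: flag any package in an
# image manifest whose (name, version) pair appears in a vulnerability feed.
# Real scanners work from full CVE databases; this feed is illustrative.

VULN_FEED = {
    ("openssl", "1.0.2k"): "CVE-2017-3735",
    ("bash", "4.3"): "CVE-2014-6271",  # Shellshock
}

def scan_manifest(packages):
    """packages: list of (name, version). Returns (name, version, cve) findings."""
    findings = []
    for name, version in packages:
        cve = VULN_FEED.get((name, version))
        if cve:
            findings.append((name, version, cve))
    return findings

image_packages = [("openssl", "1.0.2k"), ("curl", "7.61.0")]
print(scan_manifest(image_packages))
```

The same check runs naturally as a CI gate: fail the build if the findings list is non-empty, so a vulnerable image never reaches the registry.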
- Securing the registry
A container registry is a central place for storing and distributing application images. Since the registry is centralized and communicates directly with your application servers, it is of utmost importance to keep it secure: any vulnerability in your registry can become a vector for compromising your application. Continuously monitoring the registry and restricting access to it is a good start.
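The "monitor and restrict access" advice can be made concrete with a small audit sketch: check registry access-log entries against an allowlist of approved clients. The log tuple format and client names here are assumptions for illustration; real registries (Docker Registry, Harbor, ECR) each have their own log formats and built-in access controls.

```python
# Sketch: audit registry access-log entries and flag any client that is not
# on an approved allowlist. Entry format (client, action, image) is a
# simplified stand-in for a real registry's access log.

ALLOWED_CLIENTS = {"ci-runner", "prod-deployer"}

def audit_log(entries):
    """Return the entries made by clients outside the allowlist."""
    return [entry for entry in entries if entry[0] not in ALLOWED_CLIENTS]

log = [
    ("ci-runner", "push", "app:1.4"),
    ("unknown-host", "pull", "app:1.4"),
]
print(audit_log(log))
```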
- Container runtime protection
Monitoring containers is not an easy task, because traditional security tools were not designed to watch short-lived, constantly moving workloads. What security teams need to do instead is establish a baseline for what a normal, secure state looks like for each container, then monitor running containers for deviations from that baseline, such as unexpected processes or network connections.
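The baseline approach can be sketched as a simple set comparison: record the processes and listening ports a container uses when healthy, then flag anything new at runtime. Real runtime-defense tools automate baseline learning and enforcement; the process and port values below are illustrative.

```python
# Sketch of baseline-based runtime monitoring: anything not seen in the
# learned baseline is treated as an anomaly worth alerting on.

def build_baseline(processes, ports):
    return {"processes": set(processes), "ports": set(ports)}

def detect_anomalies(baseline, processes, ports):
    return {
        "new_processes": set(processes) - baseline["processes"],
        "new_ports": set(ports) - baseline["ports"],
    }

baseline = build_baseline(["nginx", "app-server"], [80, 8080])
# A cryptominer process and an unexpected port appear at runtime:
alerts = detect_anomalies(baseline, ["nginx", "app-server", "xmrig"], [80, 8080, 4444])
print(alerts)
```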
By its very nature, serverless is meant to be more secure, primarily because it removes a number of components that teams would otherwise have to maintain in a traditional software environment. However, this means that security teams need to think differently about serverless security, not less.
Some immediate concerns to address in a serverless environment are:
- Heavier dependency on external resources
Serverless workloads rely heavily on external services to perform even the most basic functions. This requires your security team to stay up to date on additional technologies.
- Denial of Service
While Denial of Service (DoS) attacks are not unique to serverless architectures, they can cause even more damage there, because there is no easy way to monitor logs and kill the specific functions driving the attack.
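One common mitigation is to rate-limit invocations before they fan out, so a flood of requests is rejected cheaply instead of triggering unbounded function executions. Below is a minimal token-bucket sketch; the capacity and refill values are illustrative, and in production this logic would typically live in an API gateway rather than application code.

```python
import time

# Sketch of a token-bucket rate limiter in front of a serverless function.
# Each invocation consumes a token; when the bucket is empty, the request
# is rejected instead of running (and running up a bill).

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=0.5)
print([bucket.allow() for _ in range(3)])  # third burst call is rejected
```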
- Managing Secrets
In a serverless environment, managing secrets can be a bit unintuitive, and it might seem more convenient for developers to hard-code authentication keys or passwords right in the serverless functions. However, it is always best practice to store those keys in the secrets vault provided by the serverless platform itself.
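A minimal version of this pattern is to resolve secrets at runtime rather than baking them into the code. Managed platforms commonly inject secrets through environment variables backed by a vault (AWS Secrets Manager or SSM, Azure Key Vault references, and so on); the variable name below is an assumption for illustration.

```python
import os

# Sketch: resolve a secret from the environment instead of hard-coding it.
# In production the platform's secrets vault populates the variable; the
# function fails loudly if the secret was never configured.

def get_secret(name):
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not configured; refusing to start")
    return value

os.environ["DB_PASSWORD"] = "example-only"  # normally set by the platform
print(get_secret("DB_PASSWORD"))
```

Failing fast on a missing secret is deliberate: a function that silently falls back to a default credential is its own vulnerability.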
VM technologies like VMware vSphere and Microsoft Hyper-V run most of the popular cloud services today. Though VMs are traditionally built to be secure from the ground up, there are situations that security teams should be aware of.
Most tools available on the market to secure VMs are cleverly packaged legacy solutions that do not meet the needs of the cloud. Today’s organizations seek a comprehensive platform that moves beyond legacy protection and is optimized for the statelessness and automation of the cloud.
Core security requirements to protect cloud workloads include:
- Hardening the host OS
The VM is only as secure as the OS it runs upon. If intruders gain access to the host OS, they can create havoc that compromises the entire stack. It is of absolute importance that the host OS be scanned for vulnerabilities and hardened against CIS benchmarks on a regular basis.
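To make "hardening against a benchmark" concrete, here is one CIS-style check sketched in Python: verify that sensitive files are not world-writable. Real benchmark scanners (OpenSCAP, Lynis, and similar) run hundreds of such checks; this single rule is illustrative.

```python
import os
import stat

# Sketch of one CIS-style hardening check: flag files that grant write
# permission to all users ("world-writable"), which an intruder could
# abuse to tamper with system configuration.

def world_writable(path):
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IWOTH)

def audit(paths):
    """Return the subset of paths that fail the check."""
    return [p for p in paths if world_writable(p)]

# Example: /etc/passwd should never be world-writable on a hardened host.
print(audit(["/etc/passwd"]))
```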
- File Integrity Monitoring (FIM)
Improper management of user access control can lead to major vulnerabilities. Though security teams take great care in granting access, there is always human error to account for: users who should have been removed may still have access, or users may be granted the wrong privileges in the access control list. With FIM, teams can monitor the host file system for specific changes to directories by specific users and be alerted whenever a suspicious change is detected.
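The core of FIM is straightforward: snapshot cryptographic hashes of the files under watch, then re-hash later and report anything that changed. Real FIM tools (AIDE, Tripwire, auditd watch rules) also track ownership, permissions, and who made the change; this sketch covers only content changes, and the paths are whatever you choose to watch.

```python
import hashlib

# Sketch of file integrity monitoring: a baseline of SHA-256 hashes is
# recorded, and any file whose current hash differs from the baseline
# (or that was never baselined) is reported as changed.

def snapshot(paths):
    """Map each path to the SHA-256 hex digest of its contents."""
    digests = {}
    for p in paths:
        with open(p, "rb") as f:
            digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests

def changed_files(baseline, paths):
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline.get(p)]
```

In practice the baseline is stored somewhere the monitored host cannot modify, and the comparison runs on a schedule or on file-system events.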
CaaS, or Containers-as-a-Service, is exactly what it sounds like: containers delivered as a service! Using CaaS means DevOps teams spend less time on the setup and management of containers. Instead, you just log in and start deploying. Some example CaaS offerings are Amazon ECS on AWS and AKS on Microsoft Azure.
Though teams spend less time on setup, effort still needs to go into monitoring the service throughout its lifetime. Some simple security measures to consider are:
- Access Control systems
As your solution grows, you will have multiple teams managing different facets of your application architecture. With CaaS, access control systems come pre-built (like IAM on AWS), and they give you fine-grained permission control. Use them to restrict network access or database access for specific users.
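The principle behind systems like IAM is least privilege: each role is granted an explicit set of actions, and anything not granted is denied by default. The sketch below illustrates that evaluation model in plain Python; the role names and action strings are invented for illustration and are not real IAM policy syntax.

```python
# Sketch of least-privilege access control in the spirit of cloud IAM:
# deny by default, allow only what a role's policy explicitly grants.

POLICIES = {
    "ci-deployer": {"ecs:UpdateService", "ecr:PushImage"},
    "read-only-auditor": {"ecs:DescribeServices"},
}

def is_allowed(role, action):
    """An action is permitted only if the role's policy grants it."""
    return action in POLICIES.get(role, set())

print(is_allowed("read-only-auditor", "ecs:UpdateService"))  # denied: False
```

Note the behavior for an unknown role: `POLICIES.get(role, set())` yields an empty grant set, so every action is denied rather than accidentally allowed.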
- Keeping software updated
CaaS services typically make software updates very easy; most often, these services are set to auto-update. Updates contain fixes for newly discovered security vulnerabilities as well as overall performance improvements. Sometimes an update might cause internal services to malfunction, so it is always a good idea to run a thorough check of all services after your CaaS updates.
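That post-update check can be automated as a smoke-test pass: run a list of named health checks and report any that fail. The check names and lambdas below are stand-ins; in practice each check would hit a real service endpoint, such as an HTTP `/healthz` route or a database ping.

```python
# Sketch of a post-update smoke test: after the platform auto-updates,
# every health check is run and failures (including checks that raise)
# are collected for alerting or rollback.

def run_health_checks(checks):
    """checks: dict of name -> zero-arg callable returning True on success."""
    failures = []
    for name, check in checks.items():
        try:
            if not check():
                failures.append(name)
        except Exception:
            failures.append(name)
    return failures

checks = {
    "api-responds": lambda: True,    # e.g. HTTP 200 from /healthz
    "db-reachable": lambda: False,   # simulated failure
}
print(run_health_checks(checks))  # -> ['db-reachable']
```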
Just as it would be easy to keep yourself secure if you never left your house, it would be easy to secure applications if they all lived forever on isolated servers, like they did in the olden days. But the olden days are over, and you can no longer have certainty about where an application lives. That’s why it’s critical to think far beyond application security. Addressing security problems within applications is still important, but you also need to manage security considerations within the various types of environments and architectures that host modern applications.