Containers deliver all three cloud native technology characteristics and provide a balanced set of capabilities and tradeoffs across the continuum. Popularized by, and best known through, the Docker project, containers have existed in various forms for many years and have their roots in technologies like Solaris Zones and BSD Jails. While Docker is the best-known brand, other vendors are adopting its underlying technologies, runc and containerd, to create similar but separate solutions.
Containers balance separation (though not as strong as that of VMs), excellent compatibility with existing apps, and a high degree of operational control with good density potential and easy integration into software development flows. Containers can be complex to operate, primarily due to their broad configurability and the wide variety of choices they present to operational teams. Depending on these choices, containers can be completely stateless, dynamic, and isolated; highly intermingled with the host operating system and stateful; or anywhere in between. This degree of choice is both the greatest strength and the greatest weakness of containers. In response, the market has created systems to their right on the continuum (such as serverless) that make them easier to manage at scale and abstract away some of their complexity by reducing configurability.
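As a hypothetical illustration of that spectrum, the same workload can be launched at very different points between isolation and host intermingling through runtime flags alone. The flags below are standard Docker options; the image name is a placeholder:

```shell
# Fully isolated and stateless: private network namespace, read-only root
# filesystem, and the container is discarded on exit.
docker run --rm --read-only --network=bridge registry.example.com/app:1.0

# Heavily intermingled with the host: shares the host's network and PID
# namespaces and mounts a host directory for persistent state.
docker run --network=host --pid=host \
  -v /var/lib/app-data:/data \
  registry.example.com/app:1.0
```

Both commands run the same image; only operational choices change, which is exactly the configurability the paragraph above describes.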
Containers are differentiated from VM-integrated containers to their left by not using a strict 1:1 mapping of container to VM, nor wrapping the provisioning of the underlying host operating system into the container deployment flow. They’re differentiated from Container-as-a-Service platforms to their right by requiring users to be responsible for deployment and operation of all the underlying infrastructure, including not just hardware and VMs but also the maintenance of the host operating systems within each VM.
Containers as a Service
As containers grew in popularity and diversity of use cases, orchestrators like Kubernetes (and its derivatives like OpenShift), Mesos and Docker Swarm became increasingly important to deploy and operate containers at scale. While abstracting much of the complexity required to deploy and operate large numbers of microservices, composed of many containers and running across many hosts, these orchestrators themselves can be complex to set up and maintain. Additionally, these orchestrators are focused on the container runtime, and do little to assist with the deployment and management of underlying hosts. While sophisticated organizations often use technologies like thin VMs wrapped in automation tooling to address this, even these approaches do not fully unburden the organization from managing the underlying compute, storage and network hardware. CaaS platforms provide all three cloud native technology characteristics by default and, while assembled from many more generic components, are highly optimized for container workloads.
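For context on the level at which orchestrator users operate, a minimal Kubernetes Deployment manifest (names and image are illustrative placeholders) declares desired container state with no mention of the hosts underneath:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3               # the orchestrator schedules these across available hosts
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.example.com/web:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Everything below this abstraction, including the hosts themselves, is what CaaS platforms take on for the user.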
Since major public cloud IaaS providers already have extensive investments in lower level automation and deployment, many have chosen to leverage this advantage to build complete platforms for running containers that strive to eliminate management of the underlying hardware and VMs from users. These Container as a Service platforms include Google Container Engine, Azure Kubernetes Service and Amazon’s EC2 Container Service. These solutions combine the container deployment and management capabilities of an orchestrator with their own platform-specific APIs to create and manage VMs. This integration allows users to more easily provision capacity without the need to manage the underlying hardware or virtualization layer. Some of these platforms, such as Google Container Engine, even use thin VMs running container-focused operating systems, like Container-Optimized OS or CoreOS, to further reduce the need to manage the host operating system.
CaaS platforms are differentiated from containers on their left by providing a more comprehensive set of capabilities that abstract the complexities involved with hardware and VM provisioning. They’re differentiated from on-demand containers to their right by typically still enabling users to directly manage the underlying VMs and host OS. For example, in most CaaS deployments, users can SSH directly to a node and run arbitrary tools as a root user to aid in diagnostics or to customize the host OS.
While CaaS platforms simplify the deployment and operation of containers at scale, they still provide users with the ability to manage the underlying host OS and VMs. For some organizations, this flexibility is highly desirable, but in other use cases it can be an unneeded distraction. Especially for developers, the ability to simply run a container, without any knowledge or configuration of the underlying hosts or VMs can increase development efficiency and agility.
On-demand containers are a set of technologies designed to trade off some of the compatibility and control of CaaS platforms for reduced complexity and easier deployment. On-demand container platforms include Amazon’s Fargate and Azure Container Instances. On these platforms, users may not have any ability to directly access the host OS and must use the platform interfaces exclusively to deploy and manage their container workloads. These platforms provide all three cloud native technology attributes and arguably even require them: apps must typically be built as microservices, the environment can only be managed dynamically, and workloads can only be deployed as containers.
On-demand containers are differentiated from CaaS platforms to their left by the lack of support for direct control of the host OS and VMs and the requirement that typical management occurs through platform-specific interfaces. They’re differentiated from serverless to their right because on-demand containers still run standard container images that could be executed on any other container platform. For example, the same image that a user may run directly in a container on their desktop can be run unchanged on a CaaS platform or in an on-demand container. The consistency of an image format as a globally portable package for apps, including all their underlying OS-level dependencies, is a key difference from serverless environments.
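A sketch of that portability (image and base names are illustrative): a single Dockerfile produces one artifact that runs identically on a laptop, a CaaS cluster, or an on-demand container service:

```dockerfile
# The image bundles the app with all of its OS-level dependencies,
# so the same artifact runs anywhere a container runtime exists.
FROM python:3.11-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

The resulting image, not the Dockerfile, is the unit of portability across the continuum.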
While on-demand containers greatly reduce the ‘surface area’ exposed to end users and, thus, the complexity associated with managing them, some users prefer an even simpler way to deploy their apps. Serverless is a class of technologies designed to allow developers to provide only their app code to a service, which then instantiates the rest of the stack below it automatically. In serverless apps, the developer only uploads the app package itself, without a full container image or any OS components. The platform dynamically packages it into an image, runs the image in a container, and (if needed) instantiates the underlying host OS, VM, and hardware required to run it. In a serverless model, users make the most dramatic tradeoffs of compatibility and control for the simplest and most efficient deployment and management experience.
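A minimal sketch of what ‘only the app code’ means in practice, using the handler signature AWS Lambda expects for Python functions (the event shape here is a simplifying assumption):

```python
# handler.py -- the entire deployable artifact: no Dockerfile, no OS packages.
# The platform supplies the runtime, container image, host OS, VM, and hardware.

def handler(event, context):
    # 'event' carries the invocation payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Locally this is just a function call; in production, the platform invokes it in a container the developer never sees.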
Examples of serverless environments include Amazon’s Lambda and Azure Functions. Arguably, many PaaS platforms such as Pivotal Cloud Foundry are also effectively serverless, even if they have not historically been marketed as such. While, on the surface, serverless may appear to lack the container-specific cloud native attribute, containers are used extensively in the underlying implementations, even if those implementations are not exposed to end users directly. Serverless is differentiated from on-demand containers to its left by the complete inability to interact with the underlying host and container runtime, often to the extent of not even having visibility into the software stack that runs the app.
At Twistlock, we are laser focused on delivering best-in-breed protection across the full application lifecycle and throughout the ever-changing cloud native ecosystem; our mission is to give customers cloud native cybersecurity from top to bottom. Some vendors are already experimenting with combinations of the approaches mentioned in this article, like Amazon’s Elastic Kubernetes Service being integrated with their Fargate technology to effectively create an on-demand CaaS platform. Additionally, as users gain more experience running these platforms, the available options may naturally narrow so that only the top choices remain on the market.
The trend of ‘software eating the world’ is pushing nearly every organization to invest in software as a competitive differentiator for their business. This is driving great demand for platforms that enable developer agility and operational scale, which has led to a wide variety of choice for cloud native topologies. Each brings its own set of advantages and trade offs — but the decision is not which one to use, but rather which ones. Few organizations will find a single option that’s a great fit for all their needs, and instead will find several options, each providing advantages for different workloads and use cases as they change and grow.