Like a new universe, the cloud-native ecosystem has many technologies and projects quickly spinning off and expanding from the initial core of containers. An especially intense area of innovation is in workload deployment and management technologies. While Kubernetes has become the industry-standard general-purpose container orchestrator, other technologies like serverless attempt to abstract away the complexity of managing hardware and operating systems. The differences between these cloud native technologies are often small and nuanced, which makes it challenging to understand the benefits and tradeoffs between them. One of the most common questions we hear from customers is how these different technologies address their scenarios and how to choose between them.
A useful way to think of cloud native technologies is as a continuum spanning from VMs to containers to serverless. On one end are traditional virtual machines (VMs) operated as stateful entities, as we’ve done for over a decade now. On the other are completely stateless, serverless apps that are effectively just bundles of app code without any packaged accompanying operating system (OS) dependencies. In between are things like Docker, AWS’s new Fargate service, Container as a Service platforms and other technologies that try to provide a different balance between compatibility and isolation on one hand, and agility and density on the other. That balance is the reason for such diversity in the ecosystem. Each technology tries to place the fulcrum at a different point, but the ends of the spectrum are consistent: One end prioritizes familiarity and separation while the other trades off some of those characteristics for increased abstraction and less deployment effort.
There’s a place for all these technologies – they’re different tools with different advantages and tradeoffs, and we typically see customers using at least a few of them simultaneously. That heterogeneity is unlikely to change as organizations bring increasingly critical workloads into their cloud native stacks, especially those with deep legacy roots.
As we consider all of these technologies, keep in mind the Cloud Native Computing Foundation’s Charter, which defines cloud native systems as having the following properties:
(a) Container packaged. Running applications and processes in software containers as an isolated unit of application deployment, and as a mechanism to achieve high levels of resource isolation. Improves overall developer experience, fosters code and component reuse and simplifies operations for cloud native applications.
(b) Dynamically managed. Actively scheduled and actively managed by a central orchestrating process. Radically improves machine efficiency and resource utilization while reducing the cost associated with maintenance and operations.
(c) Microservices oriented. Loosely coupled with dependencies explicitly described (e.g. through service endpoints). Significantly increases the overall agility and maintainability of applications. The foundation will shape the evolution of the technology to advance the state of the art for application management, and to make the technology ubiquitous and easily available through reliable interfaces.
While it may be surprising to see VMs discussed in the context of cloud native, the reality is that the vast majority of the world’s workloads today run ‘directly’ (non-containerized) in VMs. Most organizations we work with don’t see VMs as a legacy platform to eliminate, nor simply as a dumb host on which to run containers. Rather, they acknowledge that many of their apps have not yet been containerized and that the traditional VM is still a critical deployment model for them. While a VM not hosting containers doesn’t meet all three attributes of a cloud native system, it nevertheless can be operated dynamically and can run microservices.
VMs provide the greatest levels of isolation, compatibility, and control in the continuum and are suitable for running nearly any type of workload. Examples of VM technologies include VMware’s vSphere, Microsoft’s Hyper-V and the instances provided by virtually every IaaS cloud provider, such as Amazon’s EC2. VMs are differentiated from ‘thin VMs’ to their right on the continuum because they’re often operated in a stateful manner with little separation between OS, app and data.
Less a distinct technology than a different operating methodology, ‘thin’ VMs use the same underlying technology as traditional VMs but are deployed and run in a much more stateless manner. Whereas a VM may be set up and configured by a human operator, a thin VM is typically deployed from a standard image using automation tools like Puppet, Chef or Ansible, with no human involvement. Thin VMs are operated as fleets rather than as individual entities and prioritize separation of OS, app and data: whereas a VM may store app data on the OS volume, a thin VM stores all data on a separate volume that can easily be reattached to another instance. While thin VMs also lack the container attribute of a cloud native system, they place a much stronger emphasis on dynamic management than traditional VMs do.
Thin VMs are differentiated from VMs to their left on the continuum by the intentional focus on data separation, automation and disposability of any given instance. They’re differentiated from VM integrated containers to their right on the spectrum by a lack of a container runtime. Thin VMs have apps installed directly on their OS file system and executed directly by the host OS kernel without any intermediary runtime.
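The fleet-oriented, data-separated model described above can be sketched in a few lines. This is an illustrative model only, not a real provisioning API; the class names, image names and volume IDs are all made up:

```python
# Illustrative sketch: models the thin-VM principles above (immutable
# standard image, disposable instances, app data on a detachable
# volume). Names and identifiers here are invented, not a real API.

from dataclasses import dataclass

@dataclass(frozen=True)
class ThinVM:
    image: str        # standard OS image; never hand-configured
    data_volume: str  # app data lives here, never on the OS volume

def provision_fleet(image: str, data_volumes: list) -> list:
    """One disposable instance per data volume, all from one image."""
    return [ThinVM(image=image, data_volume=v) for v in data_volumes]

def replace_instance(vm: ThinVM, new_image: str) -> ThinVM:
    """Upgrade or recovery: discard the instance entirely, keep the
    data volume and reattach it to a fresh instance."""
    return ThinVM(image=new_image, data_volume=vm.data_volume)
```

Because state lives only on the data volume, swapping the OS image requires no migration step at all – the instance is simply discarded and recreated, which is the disposability that distinguishes thin VMs from their stateful counterparts.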
For some organizations, especially large enterprises, containers provide an attractive app deployment and operational approach but lack sufficient isolation to mix workloads of varying sensitivity levels. Recently discovered hardware flaws like Meltdown and Spectre aside, VMs provide a much stronger degree of isolation, but at the cost of increased complexity and management burden. VM-integrated containers, like Kata Containers and VMware’s vSphere Integrated Containers, seek to combine the two: they provide a developer-friendly container API and abstraction of app from OS, while hiding the underlying complexities of compatibility and security isolation within the hypervisor.
Basically, these technologies seek to provide VMs without users having to know they’re VMs or having to manage them. Instead, users execute typical container commands like docker run, and the underlying platform automatically and invisibly creates a new VM, starts a container runtime within it and executes the command. The end result is that the user has started a container in a separate operating system instance, isolated from all others by a hypervisor. These VM-integrated containers typically run a single container (or a set of closely related containers, akin to a pod in Kubernetes) within a single VM. VM-integrated containers possess all three cloud native system attributes and typically don’t even offer manual configuration as an optional deployment approach.
VM-integrated containers are differentiated from thin VMs to their left because they’re explicitly designed to solely run containers and tightly integrate VM provisioning with container runtime actions. They’re differentiated from pure containers to their right on the continuum by the mapping of a single container per OS instance and the integrated workflow used to instantiate both a new VM and the container it hosts via a singular, container-centric flow.
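The contrast between that launch flow and an ordinary container launch can be sketched as a conceptual model. The step lists below are illustrative only, not any real runtime’s API:

```python
# Conceptual model: what happens beneath the same user-facing
# "docker run"-style command on a plain container host versus a
# VM-integrated container platform. Steps are illustrative.

def plain_container_launch(image: str) -> list:
    # Ordinary container: isolated by namespaces and cgroups, but
    # sharing the host kernel with every other container.
    return [
        f"pull {image}",
        "create namespaces and cgroups on host",
        "exec container process under the host kernel",
    ]

def vm_integrated_launch(image: str) -> list:
    # Same command from the user's perspective, but the platform
    # invisibly boots a guest VM so the container (or pod) gets its
    # own kernel behind a hypervisor boundary.
    return [
        f"pull {image}",
        "boot minimal guest VM",
        "start container runtime inside the guest",
        "exec container process under the guest kernel",
    ]
```

The user-visible command and the first step are identical in both cases; the extra VM-boot and guest-runtime steps are exactly the complexity these platforms hide behind the container workflow.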