“Cloud computing” and “cloud deployment” have been catch-all phrases over the past decade for anything that shifts away from hardware servers. However, the term has become nebulous in recent times, as the number of different ways you can leverage the cloud keeps growing.
We’ve come far from a simplistic separation between on-premises and cloud. Today, it’s on-premises versus a range of different cloud options. Indeed, the cloud can be a confusing place for newcomers and veterans alike, with new options cropping up every few months and the landscape always shifting toward the newer and better.
But how do you choose between good, better, and best? Let’s compare the various cloud deployment technologies available today and find what they have in common and what separates them from each other.
Bare metal in the cloud
A bare metal server in the cloud is the closest alternative to a hardware server. Bare metal cloud delivers the real hardware server experience, but instead of the server being hosted in your own datacenter, it’s in a vendor-provided cloud. The vendor handles maintenance of the server, while you get full control over the configuration and the full capacity of the underlying node. While some providers like IBM and Rackspace offer bare metal cloud solutions, they remain a fringe option, with virtual machines (VMs) still accounting for the majority of the cloud marketplace.
Virtual machines in the cloud
A VM abstracts away the underlying hardware server and presents it as a pool of resources that can be shared across multiple virtualized instances. Because a number of VMs can be packed onto a single hardware server, VMs bring greater density and diversity of applications to each server. The applications are strongly isolated from each other by the hypervisor that is the foundation of every VM. VMs are the most widely used type of cloud computing instance across all major cloud providers, and would have remained unchallenged if not for the rise of containers.
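The density gain comes from packing many VM resource requests onto each physical host. As a purely illustrative sketch (not a real hypervisor scheduler), a first-fit packing of vCPU requests shows the idea:

```python
# Illustrative sketch only: first-fit packing of VM vCPU requests onto
# physical hosts, showing how virtualization raises server density.
# The sizes and capacity below are made-up example numbers.

def pack_vms(vm_sizes, host_capacity):
    """Place each VM (size in vCPUs) on the first host with room."""
    hosts = []  # each host is a list of the VM sizes placed on it
    for size in vm_sizes:
        for host in hosts:
            if sum(host) + size <= host_capacity:
                host.append(size)
                break
        else:
            hosts.append([size])  # no room anywhere: provision a new host

    return hosts

hosts = pack_vms([4, 2, 8, 2, 4, 2], host_capacity=16)
print(len(hosts))  # 6 VMs fit on 2 hosts of 16 vCPUs each
```

Real hypervisors schedule far more cleverly (and overcommit resources), but the consolidation principle is the same.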
The “tweener” hypervisors
Before we get to containers, let’s cover an in-between cloud solution that unfortunately ended up being the misfit—lightweight or hypervisor-based containers. The idea with this option is to provide the best of both worlds: the agility and minimalist design of a container, along with the hardened robust isolation of a hypervisor (a very light one at that). There was considerable interest in this space as it came right on the heels of the container revolution, and many wondered if this was the perfect balance.
Some of the notable options include Canonical’s LXD, VMware’s vSphere Integrated Containers (VIC), Intel’s Clear Containers, and other niche options like Hyper. While these solutions articulated the problem they were trying to solve with clarity, they couldn’t differentiate themselves enough from VMs, as each instance still required its own unshared kernel to support the hypervisor. That meant still paying the hypervisor tax, albeit not as much as with VMs, but a far cry from their more agile cousin, the container.
Containers take over
The world of cloud computing was turned on its head in 2013-14 with the launch of Docker containers. Though containers met with a few initial naysayers, particularly over their security loopholes, the tidal wave of developer adoption made clear that it was only a matter of time until those gaps were plugged. Containers became a force to be reckoned with. Soon, IT began to see the value of containers, and how they enabled the DevOps model in a way that VMs could not.
Over the past few years, organizations of all sizes have been transitioning their workloads from cloud VMs to containers in the cloud. What really took container adoption to the level of running containerized apps in production was the advent of container orchestration tools. Mesosphere was the first, quickly followed by the launch of Kubernetes and, finally, Docker’s own Swarm. In 2017 it became clear that Kubernetes had ascended to the container orchestration throne, and today it enjoys industry-wide support and integration with every major cloud vendor. A range of Containers as a Service (CaaS) solutions have been rebranding themselves as Kubernetes as a Service (KaaS).
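At its heart, an orchestrator like Kubernetes runs a reconciliation loop: compare the state you declared with the state that’s actually running, and act on the difference. A minimal, hypothetical sketch of that loop (app names and counts are made up):

```python
# Hypothetical sketch of the reconciliation loop at the heart of container
# orchestrators like Kubernetes: compare desired replica counts with what
# is actually running, and compute the corrective actions to converge.

def reconcile(desired, observed):
    """Return (action, app, count) tuples to move observed toward desired."""
    actions = []
    for app, want in desired.items():
        have = observed.get(app, 0)
        if have < want:
            actions.append(("start", app, want - have))  # scale up
        elif have > want:
            actions.append(("stop", app, have - want))   # scale down
    return actions

print(reconcile({"web": 3, "worker": 2}, {"web": 1, "worker": 4}))
# [('start', 'web', 2), ('stop', 'worker', 2)]
```

Real orchestrators layer scheduling, health checks, and rolling updates on top, but this declare-and-converge model is what made running containers in production tractable.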
There have been attempts to avoid lock-in with Docker the company in the form of alternative container runtimes like CoreOS’s rkt and the OCI-compatible CRI-O. However, Docker still holds sway as the leading container runtime, and isn’t likely to be dethroned. That said, the rise of Kubernetes itself partly reflects the ecosystem’s aversion to lock-in, as the community has voted to move away from Docker’s own orchestrator. This tension between standardization around Docker and support for a diverse ecosystem is a positive sign, one that points to many more years of growth and maturity for containers as they take over the mantle of leading cloud deployment solution from VMs.
Containers sans servers
With this option, we still remain within the confines of containers. However, the mode of deployment is what changes. As container solutions mature, we now have a new breed of services that can launch containers without requiring you to provision or manage any underlying servers.
AWS Fargate and Azure Container Instances are the frontrunners in this space. While the container experience is uncompromised for developers, from an Ops perspective this option brings a new level of ease. You specify the number of containers you’d like to launch and how many resources to provision for each of them, then pretty much hit a “go” button and leave the rest to the cloud platform.
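To make that concrete, here is a sketch of the kind of request you hand Fargate. The parameter names mirror the ECS RunTask API, but the cluster, task definition, and subnet values are placeholders, not real resources:

```python
# Sketch of a Fargate launch request (parameter names follow the ECS
# RunTask API). All values below are placeholders for illustration.

def build_run_task_params(task_definition, count, subnets):
    return {
        "cluster": "default",
        "taskDefinition": task_definition,  # image, CPU and memory live here
        "launchType": "FARGATE",            # no EC2 instances to manage
        "count": count,                     # how many containers to launch
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,
                "assignPublicIp": "ENABLED",
            }
        },
    }

params = build_run_task_params("my-web-app:1", count=3, subnets=["subnet-abc123"])
# With boto3, this dict could be passed as ecs_client.run_task(**params).
```

Note what is absent: no AMI, no instance type, no cluster capacity planning. That is the “serverless containers” promise in a nutshell.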
You pay per millisecond for the resources you use and don’t have to worry about optimizing resource utilization, as it’s all done for you behind the scenes. This option is still in its infancy, with Azure’s solution being too barebones and not well-integrated with the rest of the platform at the time of this writing. AWS Fargate is more robust as a solution, but it sits amid other competing AWS services like ECS and EKS, and one of AWS’ biggest successes in recent years: Lambda.
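The billing model is simply resources multiplied by duration. As a back-of-the-envelope sketch (the rates below are invented for illustration and are not real Fargate or Lambda pricing):

```python
# Back-of-the-envelope sketch of resource-duration billing. The per-ms
# rates are made up for illustration; real cloud pricing differs.

def usage_cost(vcpus, memory_gb, duration_ms,
               vcpu_rate_per_ms=1e-8, gb_rate_per_ms=1e-9):
    """Cost = duration x (vCPU rate x vCPUs + memory rate x GB)."""
    return duration_ms * (vcpus * vcpu_rate_per_ms + memory_gb * gb_rate_per_ms)

# A 2-vCPU, 4 GB container running for one second under these toy rates:
cost = usage_cost(vcpus=2, memory_gb=4, duration_ms=1000)
```

The point is that idle capacity simply never appears in the bill, which is why utilization tuning stops being your problem.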
Serverless is by no means less
Serverless computing presents the most modern approach to cloud application delivery. It is the most hands-off option, and yet, it packs a punch in terms of what you can do with it. AWS Lambda is the leading serverless computing solution on the market today, but there are competing products from the Azure and Google Cloud stable as well (not forgetting Oracle’s recently acquired Fn).
Serverless platforms let you upload your code in the form of functions and leave it up to the system to provision resources to execute the function. Similar to Fargate, you pay per use. In terms of compute power, there’s plenty available, as AWS Lambda itself is used internally by Amazon to power some of its bigger applications. Having the same power that runs Amazon’s applications available to any startup at a tiny fraction of the cost is a powerful proposition, and serverless computing solutions like Lambda make it a reality.
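The unit of deployment really is just a function. In Python, a Lambda-style function takes an event and a context, and the platform handles everything else; the event payload here is a made-up example:

```python
# A minimal function in the shape AWS Lambda invokes: handler(event, context).
# The platform provisions compute per invocation; the "name" field in the
# event is an invented example payload.
import json

def handler(event, context):
    name = event.get("name", "world")
    return {"message": f"hello, {name}"}

# Locally, you can exercise the exact code path the platform would:
print(json.dumps(handler({"name": "cloud"}, None)))
```

Everything around that function, such as scaling, patching, and capacity, belongs to the platform, which is what makes the model so hands-off.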
Beyond extreme-burst compute and storage, Lambda is deeply integrated with the rest of the AWS ecosystem, and is able to leverage vital AWS services such as CloudWatch for monitoring, IAM for security, and S3 and EBS for storage, among others. One integration that gives Lambda wings is AWS API Gateway. Together they can enable completely new functionality to be added to legacy enterprise applications without having to completely revamp or re-architect the monolith.
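As a hypothetical sketch of that pattern, a function behind API Gateway’s proxy integration receives the HTTP request as its event and must return a response in the statusCode/headers/body shape; the query parameter and “legacy-adapter” label here are invented for illustration:

```python
# Hypothetical sketch of a Lambda behind API Gateway's proxy integration:
# the event carries the HTTP request, and the return value uses the proxy
# response shape (statusCode / headers / body). The "id" query parameter
# and the adapter behavior are invented examples.
import json

def proxy_handler(event, context):
    qs = event.get("queryStringParameters") or {}
    item = qs.get("id", "unknown")
    # In a real deployment, this is where you would call into the legacy system.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"item": item, "source": "legacy-adapter"}),
    }

print(proxy_handler({"queryStringParameters": {"id": "42"}}, None))
```

A thin adapter like this is how new HTTP endpoints get bolted onto a monolith without touching its code.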
Additionally, Lambda is a great option for extremely short-lived workloads that last just a few seconds. Rather than spin up new containers for the purpose, you can use Lambda to run analysis on occasionally ingested data, or offload heavy media processing jobs to it during traffic spikes.
One drawback to keep in mind is the lock-in that’s inevitable if you commit deeply to Lambda in the long run. It can be difficult to simply lift-and-shift to a competing option from another provider, as the components, terminology and integrations would be completely different. It would require a complete refactoring from the ground up to achieve the same functionality. That said, serverless computing is here to stay. It brings a lot of power and capability, and yet is the easiest to use of all cloud deployment solutions.
There are numerous cloud deployment technologies available today. The sheer choice can be overwhelming. Yet, there are clear indicators about which option is ideal for what type of workload. Whether you prefer the familiarity of a VM, the agility of a Docker container, or the hands-off approach of serverless computing, none are right or wrong. The choice depends entirely on what workloads you want to run. What’s clear in all this is that no single one of these options is going to be the only solution you’ll ever need—there’s going to be a mix-and-match approach as organizations look to get the most out of the cloud. So, take your pick of cloud deployment technologies, but understand what you’re buying into beforehand.
Like this subject? Our CTO, John Morello, has recently written a full white paper on this topic, called The Continuum of Cloud-Native Topologies, which we recommend if this subject interests you! It’s also broken out into a two part blog series, here and here.