AWS Fargate is one of the newest services in the world of containers. Announced by Amazon with relatively little fanfare in late 2017, Fargate has so far not received a great deal of attention from DevOps teams.

But that does not mean that Fargate cannot be a useful addition to your containerized, cloud-native infrastructure. Fargate, which is essentially an orchestration tool for the AWS Elastic Container Service (ECS), takes away the need to worry about the underlying infrastructure that containers run on, and handles the scaling of that infrastructure for you automatically. As a result, you don’t need to worry about patching, cluster capacity management, or infrastructure management. This allows you to focus purely on developing your application and deploying it to containers.

That’s Fargate in a nutshell. But let’s dive into its specifics. In this article, we’ll cover these objectives:

  • What are the use cases for Fargate?
  • How does this set the scene for future container usage?
  • Getting started with Fargate
  • Deploying a sample application

What are the use cases for Fargate?

In general, it’s fair to say that the use cases are not all that different from standard container scenarios. You should feel as comfortable deploying standard microservices on Fargate as you would deploying to a vanilla ECS or Docker cluster. Many businesses are now looking at migrating on-premises applications to containers, but have found it difficult to find the expertise to deploy complex container clustering environments and have struggled with capacity predictions for such a scenario. Fargate takes away this headache by allowing you to think purely about building your application and deploying it.

How does this set the scene for future container usage?

What Amazon is trying to achieve with Fargate is the ability to make containers more cloud-friendly, while also reducing the amount of time that engineers need to spend focusing on infrastructure management. The ability to simply choose the amount of CPU and RAM resources that your application requires, then click a button to deploy the service, is far more attractive than having to capacity plan and purchase hardware to build a container-hosting environment. This by its nature will prove to be one of the key attractions of using a service like Fargate as businesses look to move away from paying large sums upfront for hardware they may or may not use and instead rely on a cloud service to autoscale based on their application requirements.
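Those CPU and memory choices are not free-form: Fargate only accepts certain pairings of task CPU and memory. As a rough illustration, the sketch below hard-codes the combination table from the values AWS documented at launch (treat the exact numbers as an assumption, since the list may have expanded) and checks whether a requested task size is one Fargate would accept:

```python
# Sketch of a validity check for Fargate task sizes. CPU is in CPU
# units (1024 = 1 vCPU) and memory is in MiB, matching how ECS task
# definitions express them. The table reflects the launch-era
# combinations AWS documented -- illustrative, not authoritative.

VALID_COMBINATIONS = {
    256:  [512, 1024, 2048],                 # 0.25 vCPU: 0.5, 1, or 2 GB
    512:  [1024 * g for g in range(1, 5)],   # 0.5 vCPU: 1-4 GB
    1024: [1024 * g for g in range(2, 9)],   # 1 vCPU: 2-8 GB
    2048: [1024 * g for g in range(4, 17)],  # 2 vCPU: 4-16 GB
    4096: [1024 * g for g in range(8, 31)],  # 4 vCPU: 8-30 GB
}

def is_valid_task_size(cpu_units, memory_mib):
    """Return True if (cpu_units, memory_mib) is an accepted pairing."""
    return memory_mib in VALID_COMBINATIONS.get(cpu_units, [])
```

So, for example, a quarter-vCPU task can pair with 512 MiB (as the sample app later in this article does), but not with 8 GB.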

Getting started with AWS Fargate

In order to get started with Fargate, you will need an AWS account that is ready to use; you can sign up for one on the AWS website.

Once you’ve created your account, you can use Fargate in an ever-growing list of regions: us-east-1, us-east-2, us-west-2, eu-west-1, ap-northeast-1, ap-southeast-1 (Singapore), ap-southeast-2 (Sydney), and eu-central-1 (Frankfurt).

According to AWS, more regions are on the way. Now that you have an account, we can get started by navigating to the Amazon ECS console.

Deploying a sample application

You should now see the first-run wizard, which walks you through configuring a task definition, a service, and a cluster before a final review step.

For the purposes of this article, we’re going to use the sample-app provided by AWS, so let’s select that and leave the task definition configuration items as they are, then click “Next.”
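Behind the scenes, this step registers an ECS task definition. A minimal Fargate-compatible task definition looks roughly like the JSON below; the family name and image are illustrative guesses rather than the exact values the sample-app wizard uses, but the `requiresCompatibilities`, `awsvpc` network mode, and task-level `cpu`/`memory` fields are what Fargate requires:

```json
{
  "family": "sample-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "sample-app",
      "image": "httpd:2.4",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```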

Next, we can define our service. You can see that our security groups will be created for us automatically, and that we have an option to choose a load balancer. So let’s go ahead and use the Application Load Balancer, then click “Next.”

We then get a reminder that all infrastructure is managed and configured by AWS automatically, with no manual intervention required by us. Great! This page also allows us to set a cluster name and choose a VPC/subnet. For the latter two, I’m going to let AWS create these for me automatically.

Now, click “Next.”

On the final page, we can review our selections from the last three steps. We can see that our sample application will be deployed with Apache 2.4 and 512 MB of RAM, and will listen on port 80 (HTTP). The load balancer we’ve chosen to create will also listen on port 80.

If you’re happy with your selections, click the Create button to proceed.

On the “Launch Status” page, you will see a real-time update on the various backend infrastructure services being created to deploy the sample application. This is a perfect time to observe just how much work Fargate is allowing you to achieve in a short space of time without manual intervention from an engineer.

When the creation has completed, you will be able to click the “View Service” button at the top of the page. This takes you to a page detailing the service we’ve just created, and also allows you to make changes if required.

Once everything is showing ACTIVE, we’re ready to view our sample application. To do this, you’ll need to navigate to the Tasks tab, click the task, and look under the Network section where you’ll find IP addresses for your application. You’ll want to copy the Public IP, then paste this into your web browser.

If everything has finished creating, you should see the sample application’s welcome page when browsing to the IP address in question.


I think that Fargate is going to prove very useful for many businesses, as it allows developers to manage everything from the development of their application through to the final production deployment. Taking away the need to understand infrastructure, and removing the burden of predicting capacity requirements, means that businesses won’t have to hire additional staff to manage deployments or seek out specific expertise to fit the role.

I believe this is an important development in enabling the deployment of applications to containers, as it offers this possibility to smaller businesses that historically may not have had the knowledge to utilize containers effectively. I look forward to seeing how this helps the uptake of containers going forward!
