This post originally appeared on The New Stack.
Containers are a revolutionary technology: they let you run an application and its dependencies in an isolated environment, packaged as a single image, which improves reusability and portability while remaining far more lightweight than virtual machines. In the rush to adopt containers and benefit from these advantages, many companies have moved their software infrastructure into the containerized world.
However, what we’ve learned at my company, Teckro, is that simply moving to containers is not enough on its own to tackle the complexity of scaling distributed applications for a global user base. We have had to work through challenges in container orchestration, service-level agreements, and more in order to see real benefits from containers.
This article explains how my company’s Docker container strategy has evolved over the past several years.
Starting out small
My company started adopting Docker containers relatively early, back in 2015. The application stack was mostly JVM-based, with Angular.js on the front end. We followed a microservices architecture style and used popular database solutions like MySQL, Redis, and Elasticsearch. Our infrastructure components resided on virtual machines, and we followed an Agile development methodology, with a few alterations. We did not use any container orchestration or infrastructure-as-code tools, only Chef and shell scripts.
That worked well in the beginning, but as you can imagine, using container technologies that way introduced more problems than it solved. Configuration management was a recurring headache, and our environments broke multiple times. This cost us development and testing time, as trying to build everything locally was error-prone and slow. We also struggled to build confidence and manage risk when releasing new versions to production. Together, these problems delayed our release pipeline by weeks, slowing down business speed and scale.
The lesson we learned from this early adoption is that although containers are a step forward for infrastructure and applications, they are not enough on their own to tackle modern enterprise software challenges.
A logical step after starting with containers is to make them more stable in practice and easier to use and deploy.
Docker Swarm looked like a good solution, as it didn’t require us to start from scratch, while Kubernetes looked more complicated and riskier to adopt. As the business scaled, we naturally expanded our software dependencies to include Cassandra and third-party monitoring providers. The process definitely improved, but it was not ideal; it felt temporary. In the midst of the shift to containers, we had constant issues with environments going down, coupled with problems with Docker storage drivers and configuration management.
The lesson here was that even if you manage to keep a container topology up and running in your infrastructure, that doesn’t mean all your problems are solved, or that the issues that do arise will be easy to fix. In terms of scalability, we had to look at the big picture and anticipate the changes that would need to happen, not only the ones we were currently facing.
Into the cloud and beyond
Often, when a company grows, scaling issues arise that need to be addressed; they are a direct consequence of providing quality services to customers.
Some of the issues we faced stemmed from exactly that. For example, we needed to offer the best possible response time on the request-response cycle. Although our main server farm was located in Europe, the majority of our customer base was in the US, and studies were being conducted all over the world: China, Japan, and beyond. A temporary solution for us was a new data center in the US, with the possibility of opening more in the future.
However, due to the nature of our industry and our clientele (mostly pharma companies), extra service-level agreements, such as DDoS protection and additional security controls, were and are necessary. These were quite challenging to offer.
So, for the sake of scalability and to satisfy client requests, we decided to move our infrastructure to the cloud, specifically to AWS and Cloudflare.
Handling such a move in the traditional way is a big effort, and with containers involved it’s even trickier.
In our case, there were many obstacles due to infrastructure needs we hadn’t planned for. It’s easy to claim that your infrastructure is not tied to application logic (and vice versa); in practice, everything breaks when you try to move containers into a cloud environment.
However, after a Herculean team effort, particularly from the DevOps teams, we managed, after some delays, to migrate our infrastructure with relative success.
The lessons learned from this experience were:
- Use Kubernetes: K8s is very stable and works great with containers. It is the evolution of years of experience running real, large-scale container environments at Google, and it’s fully documented and supported. If you want an enterprise-ready solution for container orchestration, there is no second option;
- Use 12-Factor apps: Without adhering to the 12-Factor App methodology, your applications become really difficult (almost impossible) to migrate to the cloud. Hard-won best practices really do work in situations like these, so follow them religiously;
- Use infrastructure as code: Modern infrastructure artifacts and deployments need to scale as well, and without good reproducibility and automation that can’t happen easily. Tools like Terraform help streamline container infrastructure deployment in a way that improves collaboration between teams and reduces the risk of invalid configuration or copy-paste mistakes.
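To make the Kubernetes lesson concrete, here is a minimal, hypothetical Deployment manifest for a containerized JVM service. The names, image, and health endpoint are illustrative assumptions, not our actual configuration:

```yaml
# Hypothetical Deployment for a JVM microservice; all names are examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                      # run three copies for availability
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          image: registry.example.com/example-service:1.0.0
          ports:
            - containerPort: 8080
          readinessProbe:          # route traffic only to healthy pods
            httpGet:
              path: /health
              port: 8080
```

Declaring the desired state this way is what lets Kubernetes replace the hand-rolled Chef and shell scripts we started with: the cluster continuously reconciles reality toward the manifest.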
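The 12-Factor point is easiest to see with factor III, storing config in the environment, so the same container image runs unchanged in every environment. This is a minimal sketch in Java; the variable name `DB_URL` and the fallback value are illustrative assumptions:

```java
// Sketch of 12-Factor config (factor III: config in the environment).
// DB_URL and its default are hypothetical examples.
public class AppConfig {

    // Read a setting from the environment, falling back to a default
    // so local development works without any setup.
    static String get(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        String dbUrl = get("DB_URL", "jdbc:mysql://localhost:3306/app");
        System.out.println("Connecting to " + dbUrl);
    }
}
```

Because nothing environment-specific is baked into the image, promoting a container from staging to production becomes a deployment-time decision rather than a rebuild.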
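And to illustrate the infrastructure-as-code lesson, a tiny hypothetical Terraform fragment might declare a container image registry for a service; the region and names are examples only, not our real setup:

```hcl
# Hypothetical Terraform sketch: declare an AWS ECR repository
# so the container registry is versioned and reviewable like code.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_ecr_repository" "app" {
  name = "example-service"
}
```

Because the configuration lives in version control, changes go through code review and `terraform plan` shows exactly what will change before anything is applied, which is where the reduced risk of copy-paste mistakes comes from.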
The future holds many surprises as technologies evolve every day. Technology offers an ongoing learning process and a lot of lessons on how to do things effectively, especially in the containerized world.
Currently at Teckro, we focus on keeping our promises to our customers by improving our automation process and maintaining compliance due to the nature of our industry. I hope that this article gave you some useful insights into the challenges that exist with containers, and that the lessons learned serve as a practical future reference.