
Container Computing and AWS

Big changes in the technology world seem to come about in two ways. Sometimes there’s a big, splashy announcement and a visible public leap into the future. Most of the time, however, change is more subtle. Early adopters find a new technology that makes them more productive and share it amongst themselves. Over time the news spreads to others, and at some point the once-new technology seems (to those who haven’t been paying attention) to have become very popular overnight! This adoption model can be seen in the recent growth in the popularity of container computing, exemplified by the rising awareness of Docker. Containers are lightweight, portable, and self-sufficient. Even better, they can be run in a wide variety of environments. You can, if you’d like, build and test a container locally and then deploy it to Amazon Elastic Compute Cloud (Amazon EC2) for production, as shown in the sketch below.
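To give you a feel for that workflow, here’s a minimal sketch of the local-build-then-deploy loop using the standard Docker CLI. The image name, registry, and ports are hypothetical placeholders, not a prescription:

```
# Build the image from a Dockerfile in the current directory and test it locally.
docker build -t myapp:1.0 .
docker run -p 8080:8080 myapp:1.0

# Push the image to a registry that your EC2 instances can reach
# (the repository name below is a placeholder; you'll need to log in first).
docker tag myapp:1.0 example-registry/myapp:1.0
docker push example-registry/myapp:1.0

# On the EC2 instance, pull the exact same image and run it in production.
docker pull example-registry/myapp:1.0
docker run -d -p 80:8080 example-registry/myapp:1.0
```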

Benefits of Container Computing
Let’s take a closer look at some of the benefits that accrue when you create your cloud-based application as a collection of containers, each specified declaratively and mapped to a single, highly specific aspect of your architecture:

  • Consistency & Fidelity – There’s nothing worse than creating something that works great in a test environment, yet fails or runs inconsistently when moved to production. When you are building and releasing code in an agile fashion, time wasted debugging issues that arise from differences between environments is a huge barrier to productivity. The declarative, all-inclusive packaging model used by Docker gives you the power to enumerate your application’s dependencies; your application will have access to the same libraries and utilities regardless of where it is running (see the Dockerfile sketch after this list).
  • Distributed Application Platform – If you build your application as a set of distributed services, each in its own Docker container running on a host OS such as CoreOS, the services can easily find and connect to each other, perhaps with the aid of a scheduler such as Apache Mesos (the technology behind Mesosphere). This will allow you to deploy, and then easily scale, containers across a “grid” of EC2 instances (see the networking sketch after this list).
  • Development Efficiency – Building your application as a collection of tight, focused containers allows you to build them in parallel with strict, well-defined interfaces. With better interfaces between moving parts, you have the freedom to improve and even totally revise implementations without fear of breaking running code. Because your application’s dependencies are spelled out explicitly and declaratively, less time will be lost diagnosing, identifying, and fixing issues that arise from missing or obsolete packages.
  • Operational Efficiency – Using containers allows you to build components that run in isolated environments (limiting the ability of one container to accidentally disrupt the operation of another) while still being able to cooperatively share libraries and other common resources. This opportunistic sharing reduces memory pressure and leads to increased runtime efficiency. If you are running on EC2 (Docker is directly supported on the Amazon Linux AMI and on AWS Elastic Beanstalk, and can easily be used with AWS OpsWorks), you can achieve isolation without running each component on a separate instance (see the setup sketch after this list). Containers are not a replacement for instances; they are destined to run on them!
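To make the declarative packaging model concrete, here is a minimal, hypothetical Dockerfile. Everything the application depends on, from the base OS image down to individual packages, is enumerated in one place, so the container behaves the same in test and in production (the base image, packages, and file names below are illustrative, not a recommendation):

```
# Start from a known, versioned base image so the OS layer is reproducible.
FROM amazonlinux:2

# Declare every system-level dependency explicitly.
RUN yum install -y python3 python3-pip && yum clean all

# Declare application-level dependencies from a pinned requirements file.
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r /app/requirements.txt

# Copy in the application code itself (server.py is a placeholder).
COPY . /app
WORKDIR /app

# Document the port the service listens on and define how it starts.
EXPOSE 8080
CMD ["python3", "server.py"]
```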
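A full treatment of schedulers and cluster-wide service discovery is beyond the scope of this post, but Docker’s user-defined networks illustrate the basic idea of containers finding and connecting to each other by name. This single-host sketch (the network, container, and image names are placeholders) stands in for what a scheduler would orchestrate across a whole grid of instances:

```
# Create a user-defined network; containers attached to it can
# resolve each other by container name.
docker network create appnet

# Start a backend service container named "api".
docker run -d --network appnet --name api myorg/api:1.0

# The web container can now reach the backend at http://api:8080
# without knowing which host or port the backend landed on.
docker run -d --network appnet --name web -p 80:8080 myorg/web:1.0
```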
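And because Docker is supported directly on the Amazon Linux AMI, running several isolated components side by side on a single EC2 instance takes only a few commands. A quick sketch, assuming an Amazon Linux instance and hypothetical image names:

```
# Install and start the Docker daemon on an Amazon Linux instance.
sudo yum install -y docker
sudo service docker start

# Allow the default user to run docker commands without sudo
# (takes effect on the next login).
sudo usermod -a -G docker ec2-user

# Run multiple isolated components on the same instance; each gets its
# own filesystem and process space, while shared image layers are
# stored on disk only once.
docker run -d --name frontend example/frontend:1.0
docker run -d --name worker example/worker:1.0
```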

Container Computing Resources
To prepare for this post, I spent some time reading up on container computing and Docker. Here are the articles, blog posts, and videos that I liked best:

Moving Forward
I am really excited by container computing and hope that you are as well. Please feel free to share additional resources and success stories with me and I’ll update this post and our new page accordingly.

Jeff;


Jeff Barr is Chief Evangelist for AWS. He started this blog in 2004 and has been writing posts just about non-stop ever since.