AWS Cloud Enterprise Strategy Blog

Container Powered AWS Migration in Action

(Image credit: https://pixabay.com/en/container-container-ship-port-1638068/)

“I had to wait most of the day to deliver the bales, sitting there in my truck, watching stevedores load other cargo. It struck me that I was looking at a lot of wasted time and money. I watched them take each crate off a truck and slip it into a sling, which would then lift the crate into the hold of the ship. Once there, every sling had to be unloaded, and the cargo stowed properly. The thought occurred to me, as I waited around that day, that it would be easier to lift my trailer up and, without any of its contents being touched, put it on the ship.” — Malcolm McLean, inventor of the shipping container

The intermodal shipping container revolutionized transport and international trade because the shipping process was re-engineered around a standard container that could hold many items, rather than dealing with each item individually. In the same way, Docker and container technology revolutionize the way applications are packaged by providing a standard container that envelops all the software and dependencies an application requires.

I think of the container ship as the OS and the software as the shipping items. To take the analogy further, the ship or the items often had to be modified before an item could be loaded. In the same way, the OS is often tailored for a specific application, or vice versa (e.g., runtime engines or libraries). Docker solves that by wrapping up everything the application needs, making it far more portable, isolated, stable, and reliable. Amazon EC2 Container Service (ECS) makes things even easier by providing a scalable managed Docker service.

I’ve asked Aater Suleman, CEO and Co-founder of Flux7, to talk about how containers can help with migrating workloads to AWS. Flux7 is an AWS Partner specializing in DevOps and Migrations. What caught my attention when I was introduced to Flux7 was its specialization in using Docker and containers to migrate workloads to AWS.

Joe
chung@amazon.com
@chunjx
http://aws.amazon.com/enterprise/

Containers are an ideal foundation for migration factories, giving organizations a reproducible workflow that enables teams to easily transition their applications into new environments. Factories can reuse building blocks and containers from other teams. Containers allow this because each migration starts from a fresh environment; there is no chance of picking up stale or sensitive data from another team, because the ephemeral state of that team's containers was destroyed when its migration was completed.

We recently had the opportunity to work with Rent-A-Center to perform a migration from its datacenter to AWS. This successful migration is a great example of application migration and refactoring using container technologies. (In fact, we presented this case study, “Getting Technically Inspired by Container Powered Migrations,” at re:Invent with Mandus Momberg, AWS Partner Solutions Architect.)

As Stephen Orban aptly points out in an earlier blog — “Cloud-Native or Lift-and-Shift?” — there is no one-size-fits-all migration strategy. However, this use case provides some good lessons for any organization looking to use containers to help create repeatability, scalability, efficiency, and agility in its migrations.

For Rent-A-Center, the Flux7 team used Docker with Amazon ECS and Auto Scaling to migrate its ecommerce platform to AWS. With AWS, containers, and a CI/CD pipeline already in use at Rent-A-Center, we leveraged as many existing AWS services as we could in order to maximize agility, speed, and automation. We then containerized SAP Hybris, running it on top of ECS and using AWS WAF, Amazon CloudFront, and Amazon Aurora.

Containerizing Stateful Applications
I often get asked about containerizing SAP Hybris because it is a stateful application, where every node has a fixed IP address and Hybris nodes need to be aware of each other. Given the nature of AWS and autoscaling, we obviously had to find a new way to manage node discovery. Our solution: have each node write its IP address to a database on the fly, from which the other nodes could read it. With the IP addresses of all the nodes available in the database, even as they change, the nodes can still find each other and continue operating.
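Here is a minimal sketch of what this database-backed discovery might look like, assuming a shared MySQL-compatible database (such as Aurora) and the pymysql driver; the table and column names are hypothetical, since the post does not describe the actual schema.

import pymysql

# Sketch of database-backed node discovery. The schema (a cluster_nodes table
# with a unique ip_address column) is hypothetical.
def register_and_discover(conn, my_ip):
    """Record this node's IP address and return the IPs of all known peers."""
    with conn.cursor() as cur:
        # Upsert this node's address so peers can find it even after a replacement.
        cur.execute(
            "INSERT INTO cluster_nodes (ip_address) VALUES (%s) "
            "ON DUPLICATE KEY UPDATE ip_address = VALUES(ip_address)",
            (my_ip,),
        )
        conn.commit()
        cur.execute("SELECT ip_address FROM cluster_nodes")
        return [row[0] for row in cur.fetchall()]

conn = pymysql.connect(host="discovery-db.example.internal",  # hypothetical endpoint
                       user="hybris", password="secret", database="discovery")
peers = register_and_discover(conn, my_ip="10.0.1.23")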

While this solves one problem, Hybris also needs to know the host IP, not the IP address of the container it is running in; and by design, a container does not know the IP address of its host. The solution was a startup script in every container. On startup, the script queried the EC2 instance metadata (which, conveniently, is reachable from inside the container) for the host's IP address and made it available to the Hybris application as part of a config file.
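A minimal sketch of such a startup step, using the classic instance metadata endpoint; the config file path and property name below are hypothetical.

import urllib.request

# The EC2 instance metadata service is reachable from inside the container,
# so the startup script can ask it for the host's private IP address (IMDSv1 shown).
METADATA_URL = "http://169.254.169.254/latest/meta-data/local-ipv4"

def write_host_ip(config_path="/opt/hybris/config/host.properties"):  # hypothetical path
    with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
        host_ip = resp.read().decode().strip()
    with open(config_path, "w") as f:
        f.write(f"cluster.node.host.ip={host_ip}\n")  # hypothetical property name
    return host_ip

if __name__ == "__main__":
    print(write_host_ip())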

Autoscaling the Containers
As you can see in the diagram below, we created two ECS clusters, both of which are multi-AZ. The underlying EC2 instances are part of an Auto Scaling group. As a result, the solution is fully automated, with failover of the underlying nodes in the cluster. Running on top of this substrate of EC2 instances are the Docker containers for each of the services, with auto scaling at each layer. The result: the containers could scale up or down for an individual service depending on its load.

In addition, if the number of containers grew to the point where the existing EC2 instances could no longer support them, the underlying EC2 instances would scale out to create room for additional containers. Similarly, both layers would scale down in coordination with each other.
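As a rough illustration of the service-level half of this two-layer scaling (the EC2 layer is handled by the Auto Scaling group), here is a hedged boto3 sketch using the Application Auto Scaling API; the cluster, service, and threshold values are illustrative, not Rent-A-Center's actual configuration.

import boto3

aas = boto3.client("application-autoscaling", region_name="us-east-1")
resource_id = "service/ecommerce-cluster/hybris-storefront"  # hypothetical names

# Allow the service to run between 2 and 20 tasks.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Track average CPU across the service's tasks; ECS adds or removes tasks to
# hold it near 60%. EC2 capacity for the cluster scales separately through the
# Auto Scaling group underneath it.
aas.put_scaling_policy(
    PolicyName="hybris-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)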

Pipeline Speeds Migrations
Lastly, the separation created by Docker containers makes the pipeline completely reusable at Rent-A-Center. In fact, many teams have used it, because the configuration is externalized from the pipeline and carried with the application itself: each app carries its Dockerfile, the software, the instructions to build the software, and the prerequisites as part of that Dockerfile. As a result, Rent-A-Center has a pipeline to which you can add any app, and the app will go through the same pipeline and be deployed on the same cluster with no changes needed. This approach speeds migrations and increases agility, since multiple applications can be pushed through once the pipeline is established.
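To make that concrete, here is a hedged sketch of a generic build-and-deploy step that works for any application carrying its own Dockerfile; the repository, cluster, and service names are hypothetical, and this is not the actual Rent-A-Center pipeline.

import subprocess
import boto3

def build_and_deploy(app_dir, image_uri, cluster, service, family):
    # Build and push the image using the Dockerfile the app carries with it.
    subprocess.run(["docker", "build", "-t", image_uri, app_dir], check=True)
    subprocess.run(["docker", "push", image_uri], check=True)

    # Register a new task definition revision pointing at the fresh image,
    # then roll the service on the shared ECS cluster onto it.
    ecs = boto3.client("ecs")
    task_def_arn = ecs.register_task_definition(
        family=family,
        containerDefinitions=[{
            "name": family,
            "image": image_uri,
            "memory": 2048,
            "essential": True,
        }],
    )["taskDefinition"]["taskDefinitionArn"]
    ecs.update_service(cluster=cluster, service=service, taskDefinition=task_def_arn)

# Hypothetical invocation; any app directory with a Dockerfile goes through
# the same step unchanged.
build_and_deploy(
    app_dir="./my-app",
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    cluster="ecommerce-cluster",
    service="my-app",
    family="my-app",
)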

In all, our container-powered approach helped deliver the migration quickly, in a secure, highly available, PCI-compliant fashion. And, as proof of the ultimate solution’s flexibility, Rent-A-Center’s e-commerce system saw a 42% increase — with more than nine million hits — over Black Friday, without missing a beat.

Joe Chung

Joe joined AWS as Enterprise Strategist & Evangelist in November 2016. In this role, Joe works with enterprise technology executives to share experiences and strategies for how the cloud can help them increase speed and agility while devoting more of their resources to their customers. Joe earned his bachelor's degree in mechanical engineering from the University of Michigan at Ann Arbor. He also earned his master's in business administration from the Kellogg School of Management at Northwestern University.