Meteor Development Group recently launched Galaxy, a managed cloud service for deploying and managing Meteor apps for its customers. “Because we want Meteor to be a complete platform that addresses the needs of a team developing modern apps, it has to have a runtime in the cloud that manages connected clients. That’s what Galaxy is,” says Matt DeBergalis, cofounder and VP Product of Meteor Development Group. Galaxy is based on containers—deployable images that consist of applications plus all the dependencies, libraries, and configuration files needed to run them. As the number of Meteor apps continued to grow, the company needed a way to manage the increasing number of Galaxy containers. “There are hundreds of thousands of containers, and we want to be able to easily manage them,” says DeBergalis. “We wanted to automate container orchestration, so we could address things like how to replace virtual machines if there’s an error and how those machines are distributed across different Availability Zones.”
Meteor also needed to meet its customers’ production needs. “Many of our customers insist on running their apps in a highly available configuration, so major infrastructure outages don’t impact their customers,” DeBergalis says.
The company also needed the ability to scale Galaxy to meet customer demands. “We’re growing in popularity, and we have thousands of apps being built by our customers each month,” says DeBergalis. “So we have scaling requirements around building Galaxy to meet those customer needs. We needed a cloud environment that could grow along with us.” Additionally, Meteor sought more responsiveness for its customers’ development teams. “If it takes 10 minutes to spin up an infrastructure with virtual machines, that’s not a great experience for our developers,” says DeBergalis.
When Meteor was building Galaxy, it was primarily interested in basing it on the Amazon Web Services (AWS) cloud platform. “We committed to AWS very early in the process,” says DeBergalis. “When we looked at technologies that would enable a world-class developer experience, AWS was a natural choice.”
Meteor knew it wanted to run its customers’ applications in Docker containers. Its next decision was how to orchestrate these containers atop Amazon Elastic Compute Cloud (Amazon EC2) virtual machines. Meteor evaluated several technologies, including Kubernetes, Mesos, and the Amazon EC2 Container Service (Amazon ECS). Amazon ECS was still in beta at the time but quickly matured to support the features Meteor needed, such as the ability to run across multiple Availability Zones. The Meteor team was impressed by the quick enhancements to Amazon ECS, and when they started prototyping Galaxy with it, they made rapid progress.
Using Amazon ECS allowed Meteor to build Galaxy more easily. “Amazon ECS is basically an API we can call in AWS to describe how many containers we want and where we want them to run,” says DeBergalis. “Amazon ECS takes care of all the details around how that happens, which really simplifies container orchestration.” Galaxy also takes advantage of Elastic Load Balancing, which automatically routes incoming traffic from the Internet to the Galaxy environment. In addition, Meteor uses AWS CloudFormation to create and manage its Galaxy compute resources in AWS. “Galaxy is a complex system, with a lot of moving parts. With CloudFormation, we can provision all those parts very quickly,” says DeBergalis.
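The declarative workflow DeBergalis describes — telling ECS how many containers to run and letting it handle the rest — corresponds to registering a task definition and creating a service with a desired count. The sketch below mirrors the request shapes of the ECS API (as exposed by boto3’s `register_task_definition` and `create_service` calls); the cluster, app, and image names are illustrative assumptions, not Meteor’s actual configuration.

```python
# Sketch of the declarative requests an orchestration layer like Galaxy
# sends to Amazon ECS. The dicts mirror the boto3 ECS request shapes;
# all names and counts here are illustrative assumptions.

task_definition = {
    "family": "customer-app",  # hypothetical task family name
    "containerDefinitions": [{
        "name": "meteor-app",
        "image": "example/customer-app:latest",  # hypothetical image
        "memory": 512,
        "portMappings": [{"containerPort": 3000}],
    }],
}

service = {
    "cluster": "galaxy-cluster",      # hypothetical cluster name
    "serviceName": "customer-app-svc",
    "taskDefinition": "customer-app",
    "desiredCount": 3,  # "how many containers we want"
}

# With real AWS credentials, these payloads would be submitted via boto3:
#   ecs = boto3.client("ecs")
#   ecs.register_task_definition(**task_definition)
#   ecs.create_service(**service)
# ECS then keeps three copies of the task running, replacing any that fail.
```

The point of the shape is that the caller states a goal (three running copies of this container) rather than a procedure; ECS owns the details of scheduling, restarts, and placement.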
For Meteor, it is now simple to orchestrate and manage its Galaxy container clusters. “What’s unique about Galaxy isn’t how container management works—it’s the management of the connected clients on top of the platform,” says DeBergalis. “That’s where we want to spend our time, and we can do that because Amazon ECS minimizes the number of technical considerations we need to worry about.” Amazon ECS also integrates well with other AWS technologies, according to DeBergalis. “Amazon ECS integrates with other parts of our AWS stack in ways that save us a lot of time and code,” he says. “For example, if I register a service within ECS, that service is automatically registered within Elastic Load Balancing. That’s a lot of behind-the-scenes functionality that we don’t have to worry about.”
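The load-balancer integration DeBergalis mentions is exposed through the `loadBalancers` parameter of the ECS `create_service` call: when a service is created with a load balancer attached, ECS registers and deregisters container instances with that balancer automatically as tasks start and stop. A hedged sketch of that request shape (all names are illustrative assumptions):

```python
# Sketch: attaching an ECS service to an Elastic Load Balancer at
# creation time. ECS then keeps the balancer's registered targets in
# sync with the running containers; no separate registration code is
# needed. Names below are illustrative assumptions.

create_service_request = {
    "cluster": "galaxy-cluster",
    "serviceName": "customer-app-svc",
    "taskDefinition": "customer-app",
    "desiredCount": 3,
    "loadBalancers": [{
        "loadBalancerName": "galaxy-elb",  # hypothetical ELB name
        "containerName": "meteor-app",     # container to route traffic to
        "containerPort": 3000,             # port the container listens on
    }],
    # An IAM role that authorizes ECS to call the ELB API on the
    # service's behalf.
    "role": "ecsServiceRole",
}
```

This is the “time and code” saved: the glue between the scheduler and the load balancer lives inside ECS rather than in Galaxy.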
As a result of simplified container management, Meteor was able to quickly get Galaxy into customers’ hands. “When we launched Galaxy, we had customers in production within a month, and that was only possible because Amazon ECS helps us run a reliable container orchestration service. It gave us a lot of confidence as far as bringing customer workloads onto Galaxy,” says DeBergalis. “With ECS, we were able to carve out a big technical piece of our infrastructure from the usual development and testing cycles.”
AWS also supports multiple Availability Zones, which was critical to the Galaxy platform. “In order to deliver a highly available platform to our users, we need a container orchestration service that is highly available as well,” DeBergalis says. “We need to be able to run users’ applications across multiple zones, so that if one chunk of the service goes offline, other parts are unaffected. Amazon ECS is unique in that it addresses availability as a turnkey capability, rather than requiring us to write a complex piece of software to do it. With ECS, we have a simple architecture that was faster to build, easier to test, and simpler to maintain than managing those technical details inside of Galaxy itself.”
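The multi-zone behavior DeBergalis describes can be expressed directly in an ECS service definition: a `spread` placement strategy on the instance’s Availability Zone attribute asks the scheduler to balance a service’s tasks across zones, so losing one zone leaves replicas running in the others. A sketch under assumed names (ECS also spreads tasks across zones by default for the service scheduler):

```python
# Sketch: a service definition that spreads tasks evenly across
# Availability Zones for high availability. Cluster and service names
# are illustrative assumptions.

ha_service = {
    "cluster": "galaxy-cluster",
    "serviceName": "customer-app-svc",
    "taskDefinition": "customer-app",
    "desiredCount": 4,
    "placementStrategy": [
        # Balance tasks across zones first, then pack by memory within
        # a zone to use instances efficiently.
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
}

# With desiredCount=4 across two zones, the spread strategy places
# roughly two tasks per zone; if one zone goes offline, the scheduler
# relaunches its tasks on healthy instances in the remaining zone.
```

This is the “turnkey” availability in the quote: the zone-aware scheduling logic is a declaration in the service definition rather than custom failover code inside Galaxy.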
Meteor has also addressed its scalability challenges by using AWS. “There’s a very strong story around scalability in AWS,” says DeBergalis. “Our big questions were, ‘Can we scale the amount of compute resources necessary to run all our customers’ apps?’ and, ‘Can we scale the mechanics of coordinating all those pieces?’ Using AWS, we can answer ‘yes’ to both.”
In addition, Meteor’s own developers can operate faster. “Our engineering team developing Galaxy can move much faster because of the elasticity they get by using AWS,” says DeBergalis. “For provisioning new resources, it really just takes the push of a few buttons, and we have an entire infrastructure that makes it easy for us to build end-to-end tests across the entire system. It’s now easy to do scalability testing as a normal part of our development process, rather than as a huge effort at the end of the process, hoping it all works out.”
To learn more about how AWS can help you simplify cluster management, visit our Amazon EC2 Container Service details page.