Amazon Robotics Uses Amazon SageMaker to Enable ML Inferencing at Scale
“Amazon SageMaker doesn’t just manage the hosts we use for inferencing. It also automatically adds or removes hosts as needed to support the workload.”
Senior Software Manager, Amazon Robotics
Building an ML Model to Replace Manual Scanning
Amazon Robotics uses its software and machinery to automate the flow of inventory in Amazon fulfillment centers. There are three main physical components to the company’s system: mobile shelving units, robots, and employee workstations. The robots deliver mobile shelving units to stations, and employees either put inventory in (stowing) or take it out (picking). “Our existing stow-and-pick workflows can sometimes create a bottleneck for downstream processing,” says Eli Gallaudet, a senior software manager at Amazon Robotics. “In 2017, we kicked off an initiative to figure out how to make some of those workflows simpler.”
Looking to reduce time-consuming bin scanning, Amazon Robotics built the Intent Detection System, a deep-learning-based computer vision system trained on millions of video examples of stowing actions. The company wanted to train the system to automatically identify where associates place inventory items. Knowing it would need cloud compute to deploy the deep-learning models to Amazon fulfillment centers, Amazon Robotics turned to AWS. The team deployed its models to Docker containers, hosting them using Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service.
Once the team had collected enough video examples of stowing actions, it experimented with applying model architectures to the large annotated video dataset. After several iterations, the deployed models were ready to begin automating the process.
Shifting Hosting and Management to Amazon SageMaker
At first the team primarily used Amazon SageMaker to host models. Amazon Robotics adapted its service usage as needed, initially using a hybrid architecture and running some algorithms on premises and some on the cloud. “We built a core set of functionalities that enabled us to deliver the Intent Detection System,” says Tim Stallman, a senior software manager at Amazon Robotics. “And then as Amazon SageMaker features came online, we slowly started adopting those.” For example, the team adopted Amazon SageMaker Experiments—a capability that enabled the team to organize, track, compare, and evaluate ML experiments and model versions.
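As a hedged illustration of what hosting a model on SageMaker involves, the control plane needs three calls: create a model, create an endpoint configuration, and create the endpoint. The sketch below builds the request payloads as plain dictionaries; every name, ARN, and instance type is an illustrative assumption, not a detail from the case study.

```python
# Sketch of the boto3 SageMaker control-plane payloads for hosting a model.
# All names, ARNs, and instance types are illustrative placeholders.

def hosting_requests(model_name, image_uri, model_data_url, role_arn, endpoint_name):
    """Build the three request payloads for create_model, create_endpoint_config,
    and create_endpoint. In practice each dict is passed as **kwargs to the
    corresponding boto3.client("sagemaker") call."""
    create_model = {
        "ModelName": model_name,
        "PrimaryContainer": {
            "Image": image_uri,              # inference Docker image in ECR
            "ModelDataUrl": model_data_url,  # trained model artifact in S3
        },
        "ExecutionRoleArn": role_arn,
    }
    create_endpoint_config = {
        "EndpointConfigName": f"{endpoint_name}-config",
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InitialInstanceCount": 1,
            "InstanceType": "ml.g4dn.xlarge",  # GPU instance for vision inference
        }],
    }
    create_endpoint = {
        "EndpointName": endpoint_name,
        "EndpointConfigName": f"{endpoint_name}-config",
    }
    return create_model, create_endpoint_config, create_endpoint
```

With boto3, each payload would be issued as `client.create_model(**create_model)` and so on; from that point SageMaker provisions and manages the inference hosts itself.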
Amazon Robotics also used Amazon SageMaker automatic scaling. “Amazon SageMaker doesn’t just manage the hosts we use for inferencing,” says Gallaudet. “It also automatically adds or removes hosts as needed to support the workload.” Because it doesn’t need to procure or manage its own fleet of over 500 GPUs, the company has saved close to 50 percent on its inferencing costs.
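Autoscaling of this kind is typically configured for a SageMaker endpoint variant through Application Auto Scaling: register the variant's instance count as a scalable target, then attach a target-tracking policy. A minimal sketch follows; the endpoint name, capacities, and target value are hypothetical, not figures from the case study.

```python
# Sketch: request payloads for scaling a SageMaker endpoint variant with
# Application Auto Scaling. Names, capacities, and the target value are
# illustrative assumptions.

def autoscaling_requests(endpoint_name, variant_name, min_instances,
                         max_instances, invocations_per_instance):
    resource_id = f"endpoint/{endpoint_name}/variant/{variant_name}"
    register_target = {
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "MinCapacity": min_instances,
        "MaxCapacity": max_instances,
    }
    # Target-tracking policy: hosts are added or removed to keep the
    # per-instance invocation rate near the target value.
    scaling_policy = {
        "PolicyName": f"{endpoint_name}-target-tracking",
        "ServiceNamespace": "sagemaker",
        "ResourceId": resource_id,
        "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": float(invocations_per_instance),
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance",
            },
        },
    }
    return register_target, scaling_policy
```

These payloads would be passed to `boto3.client("application-autoscaling")` via `register_scalable_target(**register_target)` and `put_scaling_policy(**scaling_policy)`.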
Reaping the Benefits of a Managed Solution
Amazon Robotics has seen considerable success. The company has used Amazon SageMaker to reduce time spent on management and to balance the ratio of scientists to software development engineers. Amazon SageMaker also enabled the system to scale horizontally during its rollout across the Amazon fulfillment network—and the team is confident that Amazon SageMaker can handle its peak inference demands.
This solution is backed by Amazon Elastic Compute Cloud (Amazon EC2), which provides secure, resizable compute capacity in the cloud and makes it easy to migrate to newer host types as they become available. The Amazon Robotics team took advantage of this, initially choosing Amazon EC2 P2 Instances but then migrating to Amazon EC2 G4 Instances powered by NVIDIA T4 Tensor Core GPUs. “After we figured out the right tuning parameters, we were able to get about 40 percent performance improvement,” says Gallaudet. The team also reported a 20 percent cost reduction resulting from the migration.
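A migration like the one described above can be done on a live endpoint by creating a new endpoint configuration with the new instance type and updating the endpoint in place; SageMaker brings up instances on the new configuration before draining the old ones. The sketch below is a hedged illustration, with all names assumed for the example.

```python
# Sketch: migrating a live SageMaker endpoint from P2 to G4 instances by
# swapping in a new endpoint configuration. All names are illustrative.

def migration_requests(endpoint_name, model_name, instance_count):
    new_config_name = f"{endpoint_name}-g4-config"
    create_endpoint_config = {
        "EndpointConfigName": new_config_name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InitialInstanceCount": instance_count,
            "InstanceType": "ml.g4dn.xlarge",  # G4 instance with an NVIDIA T4 GPU
        }],
    }
    # update_endpoint swaps configurations without taking traffic down:
    # new instances are provisioned before the old ones are retired.
    update_endpoint = {
        "EndpointName": endpoint_name,
        "EndpointConfigName": new_config_name,
    }
    return create_endpoint_config, update_endpoint
```

Each dict would be passed to the `boto3.client("sagemaker")` calls `create_endpoint_config(**create_endpoint_config)` and `update_endpoint(**update_endpoint)`.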
The Amazon SageMaker–powered solution grew rapidly after its initial deployment. The Amazon Robotics team started implementing the solution on a small scale at a fulfillment center in Wisconsin and rapidly expanded to dozens more. As the solution grew, Amazon SageMaker quickly and seamlessly scaled alongside it. “We expect to almost double our volume in 2021,” says Gallaudet.
Continuing a Steady March of Innovation
By experimenting with cutting-edge technology, Amazon Robotics continues to increase efficiency in fulfillment centers and improve the Amazon customer experience. “Many of the techniques that we’ve learned and experiences we’ve had with the Intent Detection System have directly enabled us to move quickly on these projects,” says Stallman.
Benefits of AWS
- Saved nearly 50% on inferencing costs
- Improved inference performance by 40%
- Saved 20% on compute costs by rightsizing Amazon EC2 instances
AWS Services Used
Amazon EC2
Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Amazon EC2 G4 Instances
Amazon EC2 G4 instances are the industry’s most cost-effective and versatile GPU instances for deploying machine learning models such as image classification, object detection, and speech recognition, and for graphics-intensive applications such as remote graphics workstations, game streaming, and graphics rendering.
Amazon ECS
Amazon ECS is a fully managed container orchestration service. Customers such as Duolingo, Samsung, GE, and Cookpad use ECS to run their most sensitive and mission-critical applications because of its security, reliability, and scalability.
Amazon SageMaker
Amazon SageMaker helps data scientists and developers to prepare, build, train, and deploy high-quality machine learning (ML) models quickly by bringing together a broad set of capabilities purpose-built for ML.
To learn more, visit aws.amazon.com/sagemaker.