
2022

Amazon Robotics Uses Amazon SageMaker and AWS Inferentia to Enable ML Inferencing at Scale

Amazon Robotics set out to use machine learning to streamline a bottleneck in its inventory-stowing process. With Amazon SageMaker, the company developed a sophisticated machine learning model that replaced manual scanning in Amazon fulfillment centers, overcame challenges in compute and hosting, and ultimately reduced inferencing costs by nearly 50 percent.

Saved nearly 50%

on inferencing costs

Improved computing performance

by 40%

Saved 20%

on compute costs by rightsizing Amazon EC2 instances

Overview

Amazon Robotics develops sophisticated machinery and software to optimize efficiency in Amazon fulfillment centers. As a purveyor of cutting-edge technologies, Amazon Robotics has long known that using artificial intelligence and machine learning (ML) to automate key aspects of the fulfillment process held extraordinary potential, so in 2017 it devoted teams to accomplishing just that.
 
As the company iterated on its ML project, it turned to Amazon Web Services (AWS) and used Amazon SageMaker, a managed service that helps data scientists and developers prepare, build, train, and deploy high-quality ML models quickly. This freed the Amazon Robotics team from the difficult task of standing up and managing a fleet of GPUs for running inferences at scale across multiple regions. As of January 2021, the solution had saved the company nearly 50 percent on ML inferencing costs and unlocked a 20 percent improvement in productivity with comparable overall savings. Continuing to optimize, the Amazon Robotics team shifted its deployment at the end of 2021 from GPU instances to AWS Inferentia-based Amazon EC2 Inf1 instances, saving an additional 35 percent and achieving 20 percent higher throughput.
Amazon fulfillment center floor

Opportunity | Building an ML Model to Replace Manual Scanning

Amazon Robotics uses its software and machinery to automate the flow of inventory in Amazon fulfillment centers. There are three main physical components to the company’s system: mobile shelving units, robots, and employee workstations. The robots deliver mobile shelving units to stations, and employees either put inventory in (stowing) or take it out (picking). “Our existing stow-and-pick workflows can sometimes create a bottleneck for downstream processing,” says Eli Gallaudet, a senior software manager at Amazon Robotics. “In 2017, we kicked off an initiative to figure out how to make some of those workflows simpler.”

Looking to reduce time-consuming bin scanning, Amazon Robotics built the Intent Detection System, a deep-learning-based computer vision system trained on millions of video examples of stowing actions. The company wanted to train the system to automatically identify where associates place inventory items. Knowing it would need cloud compute to deploy the deep-learning models to Amazon fulfillment centers, Amazon Robotics turned to AWS. The team deployed its models to Docker containers, hosting them using Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service.
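The story does not detail that deployment, but as a rough illustration, a containerized model server can be registered and run on Amazon ECS with a few boto3 calls. The cluster, image, subnet, and role identifiers below are placeholders, not Amazon Robotics' actual configuration.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a task definition for the containerized inference server.
# Image URI, sizing, and role are illustrative placeholders.
task_def = ecs.register_task_definition(
    family="intent-detection-inference",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="4096",
    memory="16384",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "model-server",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/intent-detection:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)

# Run the container as a long-lived service in an existing cluster.
ecs.create_service(
    cluster="robotics-inference",
    serviceName="intent-detection",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)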

Once the team had collected enough video examples of stowing actions, it experimented with applying model architectures to the large annotated video dataset. After several iterations, the team could begin letting the deployed models automate the process.


“Our system will use more than 1000 SageMaker hosts in 2022. AWS Inferentia gives us the opportunity to serve the rapidly growing traffic at 35 percent lower cost and 20 percent higher throughput, without re-training our ML models.”

Pei Wang
Software Engineer, Amazon Robotics

Solution | Shifting Hosting and Management to Amazon SageMaker

Although Amazon Robotics could tap into ample compute resources on AWS, the company still had to handle hosting itself. When AWS announced the release of Amazon SageMaker at AWS re:Invent 2017, Amazon Robotics quickly adopted it, avoiding the need to build a costly hosting solution of its own. Amazon Robotics was the first company to deploy to Amazon SageMaker on a large scale and remains one of the largest deployments as of January 2021.

At first, the team primarily used Amazon SageMaker to host models. Amazon Robotics adapted its service usage as needed, initially using a hybrid architecture and running some algorithms on premises and some on the cloud. “We built a core set of functionalities that enabled us to deliver the Intent Detection System,” says Tim Stallman, a senior software manager at Amazon Robotics. “And then as Amazon SageMaker features came online, we slowly started adopting those.” For example, the team adopted Amazon SageMaker Experiments, a capability for organizing, tracking, comparing, and evaluating ML experiments and model versions.
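As a rough sketch of what that tracking looks like with the current SageMaker Python SDK (the story does not describe which Experiments APIs the team used), a training attempt can be recorded as a run under a named experiment. The experiment, parameter, and metric names below are illustrative.

from sagemaker.experiments.run import Run

# Group related training attempts under one experiment so they can be
# organized, tracked, and compared in SageMaker Studio. All names and
# values here are illustrative.
with Run(
    experiment_name="intent-detection",
    run_name="resnet-backbone-lr-1e-4",
) as run:
    run.log_parameters({"learning_rate": 1e-4, "batch_size": 64})
    # ... train and evaluate the model here ...
    run.log_metric(name="validation_accuracy", value=0.93, step=1)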

Amazon Robotics also used Amazon SageMaker automatic scaling. “Amazon SageMaker doesn’t just manage the hosts we use for inferencing,” says Gallaudet. “It also automatically adds or removes hosts as needed to support the workload.” Because it doesn’t need to procure or manage its own fleet of over 500 GPUs, the company has saved close to 50 percent on its inferencing costs.
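Automatic scaling for a SageMaker endpoint is configured through Application Auto Scaling. The sketch below shows the general pattern; the endpoint name, capacity limits, and target value are placeholders rather than Amazon Robotics' settings.

import boto3

autoscaling = boto3.client("application-autoscaling")

# The endpoint and variant names below are placeholders.
resource_id = "endpoint/intent-detection/variant/AllTraffic"

# Allow the endpoint's instance count to scale within these bounds.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=500,
)

# Add or remove hosts to hold each instance near a target request rate.
autoscaling.put_scaling_policy(
    PolicyName="intent-detection-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 1000.0,  # invocations per instance per minute (illustrative)
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)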

Reaping the Benefits of a Managed Solution and AWS Inferentia  

Amazon Robotics has seen considerable success. The company has used Amazon SageMaker to reduce time spent on management and to balance the ratio of scientists to software development engineers. Amazon SageMaker also enabled the system to scale horizontally during its rollout across the Amazon fulfillment network—and the team is confident that Amazon SageMaker can handle its peak inference demands.

This solution is backed by Amazon Elastic Compute Cloud (Amazon EC2), which provides secure, resizable compute capacity in the cloud and lets users migrate to newer host types as they become available. The Amazon Robotics team reduced its inference costs by 20 percent by migrating from Amazon EC2 P2 Instances to Amazon EC2 G4 Instances. Now using AWS Inferentia, the team has cut inference costs by a further 35 percent compared with G4 Instances (more than 50 percent compared with P2 Instances), and Inferentia has delivered 20 percent higher throughput, letting the team scan more packages a day without requiring more resources. “Our system will use more than 1000 SageMaker hosts in 2022, and AWS Inferentia helps us to serve the rapidly growing traffic at higher throughput without re-training our ML models,” says Pei Wang, a software engineer at Amazon Robotics.
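At deployment time, switching a SageMaker endpoint from GPU instances to Inferentia-based Inf1 instances is largely a matter of changing the instance type, provided the model artifact and serving container support AWS Neuron. The sketch below assumes a model already compiled with the Neuron SDK; all names, ARNs, and URIs are placeholders.

from sagemaker.model import Model

# All names, ARNs, and URIs are placeholders. The model artifact is assumed
# to have been compiled for Inferentia with the AWS Neuron SDK, and the
# serving image to be a Neuron-compatible inference container.
model = Model(
    image_uri="<neuron-compatible-inference-container-uri>",
    model_data="s3://example-bucket/intent-detection/model_neuron.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)

# Moving from GPU instances (such as ml.g4dn.xlarge) to Inferentia-based
# instances is, at deploy time, largely a change of instance_type.
predictor = model.deploy(
    initial_instance_count=2,
    instance_type="ml.inf1.xlarge",
    endpoint_name="intent-detection-inf1",
)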

The Amazon SageMaker–powered solution grew rapidly after its initial deployment. The Amazon Robotics team started implementing the solution on a small scale at a fulfillment center in Wisconsin and rapidly expanded to dozens more. As the solution grew, Amazon SageMaker quickly and seamlessly scaled alongside it. “We expect to almost double our volume in 2022,” says Gallaudet.

Outcome | Continuing a Steady March of Innovation

The team sees many other opportunities to experiment on AWS, including running its models on the edge using Amazon SageMaker Edge Manager, which efficiently manages and monitors ML models across fleets of smart devices. Amazon Robotics also expects to build models that can further automate package tracking and help automate package damage assessment.

By experimenting with cutting-edge technology, Amazon Robotics continues to increase efficiency in fulfillment centers and improve the Amazon customer experience. “Many of the techniques that we’ve learned and experiences we’ve had with the Intent Detection System have directly enabled us to move quickly on these projects,” says Stallman.

About Amazon Robotics

Amazon Robotics develops software and manufactures machinery to automate the flow of inventory in Amazon fulfillment centers.

AWS Services Used

Amazon EC2

Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Learn more »

Amazon EC2 G4 Instances

Amazon EC2 G4 instances are the industry’s most cost-effective and versatile GPU instances for deploying machine learning models such as image classification, object detection, and speech recognition, and for graphics-intensive applications such as remote graphics workstations, game streaming, and graphics rendering.

Learn more »

Amazon ECS

Amazon ECS is a fully managed container orchestration service. Customers such as Duolingo, Samsung, GE, and Cookpad use ECS to run their most sensitive and mission-critical applications because of its security, reliability, and scalability.

Learn more »

Amazon SageMaker

Amazon SageMaker helps data scientists and developers to prepare, build, train, and deploy high-quality machine learning (ML) models quickly by bringing together a broad set of capabilities purpose-built for ML.

Learn more »

Get Started

Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.