Amazon Robotics Uses Amazon SageMaker to Enable ML Inferencing at Scale

2021
Amazon Robotics develops sophisticated machinery and software to optimize efficiency in Amazon fulfillment centers. As a purveyor of cutting-edge technologies, Amazon Robotics has long known that using artificial intelligence and machine learning (ML) to automate key aspects of the fulfillment process held extraordinary potential, so in 2017 it devoted teams to accomplishing just that.
 
As the company iterated on its ML project, it turned to Amazon Web Services (AWS) and used Amazon SageMaker, a managed service that helps data scientists and developers prepare, build, train, and deploy high-quality ML models quickly. This freed the Amazon Robotics team from the difficult task of standing up and managing a fleet of GPUs for running inferences at scale across multiple regions. As of January 2021, the solution saved the company nearly 50 percent on ML inferencing costs and unlocked a 20 percent improvement in productivity with comparable overall savings.
Kiva robots on the floor of an Amazon fulfillment center

“Amazon SageMaker doesn’t just manage the hosts we use for inferencing. It also automatically adds or removes hosts as needed to support the workload.”

Eli Gallaudet
Senior Software Manager, Amazon Robotics

Building an ML Model to Replace Manual Scanning

Amazon Robotics uses its software and machinery to automate the flow of inventory in Amazon fulfillment centers. There are three main physical components to the company’s system: mobile shelving units, robots, and employee workstations. The robots deliver mobile shelving units to stations, and employees either put inventory in (stowing) or take it out (picking). “Our existing stow-and-pick workflows can sometimes create a bottleneck for downstream processing,” says Eli Gallaudet, a senior software manager at Amazon Robotics. “In 2017, we kicked off an initiative to figure out how to make some of those workflows simpler.”

Looking to reduce time-consuming bin scanning, Amazon Robotics built the Intent Detection System, a deep-learning-based computer vision system trained on millions of video examples of stowing actions. The company wanted to train the system to automatically identify where associates place inventory items. Knowing it would need cloud compute to deploy the deep-learning models to Amazon fulfillment centers, Amazon Robotics turned to AWS. The team deployed its models to Docker containers, hosting them using Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service.
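
Before Amazon SageMaker entered the picture, this meant defining and operating the container infrastructure directly. As a rough sketch of what hosting an inference container on Amazon ECS can involve (the cluster, image, ports, and resource values below are illustrative, not Amazon Robotics’ actual configuration), a task definition requests a GPU for the container and a service keeps the desired number of copies running:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a task definition for a GPU-backed inference container.
# All names and sizes here are placeholders.
ecs.register_task_definition(
    family="intent-detection-inference",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "inference",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/intent-detection:latest",
            "cpu": 2048,
            "memory": 8192,
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "resourceRequirements": [{"type": "GPU", "value": "1"}],
        }
    ],
)

# Run the container as a long-lived service on an existing GPU cluster.
ecs.create_service(
    cluster="inference-cluster",
    serviceName="intent-detection",
    taskDefinition="intent-detection-inference",
    desiredCount=2,
    launchType="EC2",
)
```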

Once the team had collected enough video examples of stowing actions, it experimented with applying model architectures to the large annotated video dataset. After several iterations, the team could begin letting the deployed models automate the process.

Shifting Hosting and Management to Amazon SageMaker

Although Amazon Robotics could tap into ample compute resources on AWS, the company still had to handle hosting itself. When AWS announced the release of Amazon SageMaker at AWS re:Invent 2017, Amazon Robotics quickly adopted it, avoiding the need to build a costly hosting solution of its own. Amazon Robotics was the first company to deploy to Amazon SageMaker on a large scale, and its deployment remains one of the largest as of January 2021.

At first the team primarily used Amazon SageMaker to host models. Amazon Robotics adapted its service usage as needed, initially using a hybrid architecture and running some algorithms on premises and some on the cloud. “We built a core set of functionalities that enabled us to deliver the Intent Detection System,” says Tim Stallman, a senior software manager at Amazon Robotics. “And then as Amazon SageMaker features came online, we slowly started adopting those.” For example, the team adopted Amazon SageMaker Experiments—a capability that enabled the team to organize, track, compare, and evaluate ML experiments and model versions.
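
One way to use Amazon SageMaker Experiments is through the SageMaker Python SDK; a minimal sketch is shown below (the experiment, run, parameter, and metric names are illustrative, not taken from the Intent Detection System):

```python
from sagemaker.experiments.run import Run

# Each Run groups the parameters and metrics of one training attempt so that
# attempts can be compared side by side.
with Run(experiment_name="intent-detection", run_name="video-model-candidate-7") as run:
    run.log_parameter("learning_rate", 1e-4)
    run.log_parameter("clip_length_frames", 32)

    # Placeholder metric values standing in for a real training loop.
    for epoch, accuracy in enumerate([0.81, 0.86, 0.90]):
        run.log_metric(name="val_accuracy", value=accuracy, step=epoch)
```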

Amazon Robotics also used Amazon SageMaker automatic scaling. “Amazon SageMaker doesn’t just manage the hosts we use for inferencing,” says Gallaudet. “It also automatically adds or removes hosts as needed to support the workload.” Because it doesn’t need to procure or manage its own fleet of over 500 GPUs, the company has saved close to 50 percent on its inferencing costs.
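
Endpoint automatic scaling of this kind is configured through Application Auto Scaling; a minimal sketch with boto3 follows (the endpoint name, capacity limits, and target value are illustrative assumptions, not the team’s production settings):

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# A SageMaker endpoint variant is the scalable resource; placeholder names.
resource_id = "endpoint/intent-detection/variant/AllTraffic"

# Register the variant so Application Auto Scaling can change its instance count.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=2,
    MaxCapacity=100,
)

# Target tracking: add instances when invocations per instance exceed the target,
# and remove them when traffic drops.
autoscaling.put_scaling_policy(
    PolicyName="invocations-per-instance",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 1000.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 300,
    },
)
```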

Reaping the Benefits of a Managed Solution

Amazon Robotics has seen considerable success. The company has used Amazon SageMaker to reduce time spent on management and to balance the ratio of scientists to software development engineers. Amazon SageMaker also enabled the system to scale horizontally during its rollout across the Amazon fulfillment network—and the team is confident that Amazon SageMaker can handle its peak inference demands.

This solution is backed by Amazon Elastic Compute Cloud (Amazon EC2), which provides secure, resizable compute capacity in the cloud and enables users to quickly migrate to newer host types as they become available. The Amazon Robotics team took advantage of this, initially choosing Amazon EC2 P2 Instances and later migrating to Amazon EC2 G4 Instances powered by NVIDIA T4 Tensor Core GPUs. “After we figured out the right tuning parameters, we were able to get about 40 percent performance improvement,” says Gallaudet. The team also reported a 20 percent cost reduction resulting from the migration.
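
Moving an endpoint between instance families in SageMaker largely comes down to redeploying the model with a different instance_type; a minimal sketch with the SageMaker Python SDK follows (the container image, model artifact, role, and endpoint name are placeholders):

```python
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

model = Model(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/intent-detection:latest",  # placeholder
    model_data="s3://example-bucket/intent-detection/model.tar.gz",  # placeholder
    role=role,
    sagemaker_session=session,
)

# Deploy on G4 (NVIDIA T4) instances instead of the older P2 instance family.
predictor = model.deploy(
    initial_instance_count=2,
    instance_type="ml.g4dn.xlarge",
    endpoint_name="intent-detection-g4",
)
```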

The Amazon SageMaker–powered solution grew rapidly after its initial deployment. The Amazon Robotics team started implementing the solution on a small scale at a fulfillment center in Wisconsin and rapidly expanded to dozens more. As the solution grew, Amazon SageMaker quickly and seamlessly scaled alongside it. “We expect to almost double our volume in 2021,” says Gallaudet.

Continuing a Steady March of Innovation

To continue making its ML solution as effective as possible, the Amazon Robotics team is looking into using AWS Inferentia, custom silicon designed to accelerate deep-learning workloads, to further improve performance. The team sees many other opportunities to experiment on AWS, including running its models on the edge using Amazon SageMaker Edge Manager, which efficiently manages and monitors ML models across fleets of smart devices. Amazon Robotics also expects to build models that can further automate package tracking and help assess package damage.
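
One way to target AWS Inferentia from SageMaker is to compile a model for the inf1 instance family with SageMaker Neo and deploy it there; the sketch below assumes a PyTorch model and uses placeholder paths, shapes, and versions, since the article does not describe the team’s models at that level of detail:

```python
from sagemaker.pytorch import PyTorchModel

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Framework, entry point, and versions are assumptions for illustration only.
model = PyTorchModel(
    model_data="s3://example-bucket/intent-detection/model.tar.gz",
    role=role,
    entry_point="inference.py",
    framework_version="1.8",
    py_version="py3",
)

# Compile for the Inferentia (inf1) instance family with SageMaker Neo,
# then deploy the compiled artifact to an inf1 endpoint.
compiled = model.compile(
    target_instance_family="ml_inf1",
    input_shape={"input0": [1, 3, 224, 224]},  # placeholder input shape
    output_path="s3://example-bucket/intent-detection/compiled/",
    role=role,
    framework="pytorch",
    framework_version="1.8",
)
predictor = compiled.deploy(initial_instance_count=1, instance_type="ml.inf1.xlarge")
```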

By experimenting with cutting-edge technology, Amazon Robotics continues to increase efficiency in fulfillment centers and improve the Amazon customer experience. “Many of the techniques that we’ve learned and experiences we’ve had with the Intent Detection System have directly enabled us to move quickly on these projects,” says Stallman.

About Amazon Robotics

Amazon Robotics develops software and manufactures machinery to automate the flow of inventory in Amazon fulfillment centers.

Benefits of AWS

  • Saved nearly 50% on inferencing costs
  • Improved inference performance by about 40%
  • Saved 20% on compute costs by rightsizing Amazon EC2 instances


AWS Services Used

Amazon EC2

Amazon EC2 is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.


Amazon EC2 G4 Instances

Amazon EC2 G4 instances are the industry’s most cost-effective and versatile GPU instances for deploying machine learning models for tasks such as image classification, object detection, and speech recognition, and for graphics-intensive applications such as remote graphics workstations, game streaming, and graphics rendering.


Amazon ECS

Amazon ECS is a fully managed container orchestration service. Customers such as Duolingo, Samsung, GE, and Cookpad use ECS to run their most sensitive and mission-critical applications because of its security, reliability, and scalability.


Amazon SageMaker

Amazon SageMaker helps data scientists and developers to prepare, build, train, and deploy high-quality machine learning (ML) models quickly by bringing together a broad set of capabilities purpose-built for ML.



Get Started

To learn more, visit aws.amazon.com/sagemaker.