Amazon SageMaker customers

See how leading organizations worldwide are using Amazon SageMaker to build, train, and deploy machine learning (ML) models.

GoDaddy

At GoDaddy, we aim to help everyday entrepreneurs succeed by giving them the tools to establish their business. “We serve customers with diverse needs. They often communicate with the businesses they support around the clock and across multiple channels, including email, chat, and social media,” said Jing Xi, VP of Applied ML and AI, GoDaddy. “Today, generative AI levels the playing field for small businesses by giving them an incredible amount of power and knowledge, normally reserved for large corporations, right at their fingertips. However, one of the biggest challenges that our generative AI development teams face is figuring out which FM is right for their business applications. It is important for us to be able to easily compare models based on the specific criteria that are most important to our customers and to strike the right balance between model cost, latency, accuracy, and performance. Amazon SageMaker’s new model evaluation capability helps us accelerate the time it takes to get from idea to implementation by removing the complexities involved in the model selection process, and lets us easily run experimentation, development, deployment, and management of new versions of these models. We’re excited to expand access to this new capability across more teams so our developers can increase their productivity and further unlock the power of generative AI for customers to grow their business.”
Thomson Reuters

“Thomson Reuters has been at the forefront of AI development for over 30 years, and we are committed to providing meaningful solutions that help our customers deliver results faster, with better access to trusted information. To accelerate our innovation in generative AI, in addition to partnering with LLM providers, we are also exploring training custom models more efficiently with our unique and proprietary content and human expertise. SageMaker HyperPod’s distributed training libraries help us improve large-scale model training performance, and its resiliency features save us time as we monitor and manage infrastructure. Training our foundation models on SageMaker HyperPod will increase our speed to market and help us provide quality solutions for our customers at pace.”

Joel Hron, Head of AI and Labs - Thomson Reuters
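
For readers who want a concrete picture of what provisioning SageMaker HyperPod looks like, the sketch below creates a small cluster with boto3. It is illustrative only: the cluster name, instance group, lifecycle-script S3 location, and IAM role ARN are all hypothetical placeholders, not any customer's actual setup.

```python
import boto3

sm = boto3.client("sagemaker")

# Illustrative sketch: provision a small HyperPod cluster with one worker
# instance group. All names, counts, and ARNs are hypothetical placeholders.
response = sm.create_cluster(
    ClusterName="example-hyperpod-cluster",
    InstanceGroups=[
        {
            "InstanceGroupName": "worker-group-1",
            "InstanceType": "ml.p4d.24xlarge",
            "InstanceCount": 2,
            "LifeCycleConfig": {
                # Lifecycle scripts (for example, Slurm setup) staged in S3
                "SourceS3Uri": "s3://example-bucket/hyperpod-lifecycle/",
                "OnCreate": "on_create.sh",
            },
            "ExecutionRole": "arn:aws:iam::111122223333:role/ExampleHyperPodRole",
        }
    ],
)
print(response["ClusterArn"])
```

Once the cluster is running, training jobs are typically submitted through the cluster's orchestrator (such as Slurm) rather than through this API.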

Hugging Face

"Hugging Face has been using SageMaker HyperPod to create important new open foundation models like StarCoder, IDEFICS, and Zephyr which have been downloaded millions of times. SageMaker HyperPod’s purpose-built resiliency and performance capabilities have enabled our open science team to focus on innovating and publishing important improvements to the ways foundation models are built, rather than managing infrastructure. We especially liked how SageMaker HyperPod is able to detect ML hardware failure and quickly replace the faulty hardware without disrupting ongoing model training. Because our teams need to innovate quickly, this automated job recovery feature helped us minimize disruption during the foundation model training process, helping us save hundreds of hours of training time in just a year.”

Jeff Boudier, head of Product at Hugging Face.

Perplexity AI

“We were looking for the right ML infrastructure to increase productivity and reduce costs in order to build high-performing large language models. After running a few successful experiments, we switched to AWS from other cloud providers in order to use Amazon SageMaker HyperPod. We have been using HyperPod for the last four months to build and fine-tune the LLMs to power the Perplexity conversational answer engine that answers questions along with references provided in the form of citations. Because SageMaker HyperPod automatically monitors cluster health and remediates GPU failures, our developers are able to focus on model building instead of spending time on managing and optimizing the underlying infrastructure. SageMaker HyperPod’s built-in data and model parallel libraries helped us optimize training time on GPUs and double the training throughput. As a result, our training experiments can now run twice as fast, which means our developers can iterate more quickly, accelerating the development of new generative AI experiences for our customers.”

Aravind Srinivas, co-founder and CEO at Perplexity AI

Workday

"More than 10,000 organizations around the world rely on Workday to manage their most valuable assets—their people and their money. We provide responsible and transparent solutions to customers by selecting the best foundation model that reflects our company’s policies around the responsible use of AI. For tasks such as creating job descriptions, which must be high quality and promote equal opportunity, we tested the new model evaluation capability in Amazon SageMaker and are excited about the ability to measure foundation models across metrics such as bias, quality, and performance. We look forward to using this service in the future to compare and select models that align with our stringent responsible AI criteria.”

Shane Luke, vice president of AI and Machine Learning at Workday.
 

Salesforce

"At Salesforce, we have an open ecosystem approach to foundation models, and Amazon SageMaker is a vital component, helping us scale our architecture and accelerate our go-to-market. Using the new SageMaker Inference capability, we were able to put all our models onto a single SageMaker endpoint which automatically handled all the resource allocation and sharing of the compute resources, accelerating performance and reducing deployment cost of foundation models.”

Bhavesh Doshi, vice president of Engineering at Salesforce.
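
The capability described above, packing multiple foundation models onto one endpoint with per-model compute allocation, corresponds to SageMaker inference components. A rough sketch with boto3 follows; the endpoint, variant, model name, and resource numbers are hypothetical placeholders, not Salesforce's actual configuration.

```python
import boto3

sm = boto3.client("sagemaker")

# Illustrative sketch: attach one model as an inference component to an
# existing endpoint so it shares the endpoint's compute with other models.
# Endpoint, variant, and model names are hypothetical placeholders.
sm.create_inference_component(
    InferenceComponentName="example-llm-component",
    EndpointName="example-shared-endpoint",
    VariantName="AllTraffic",
    Specification={
        "ModelName": "example-llm-model",
        "ComputeResourceRequirements": {
            "NumberOfAcceleratorDevicesRequired": 1,
            "MinMemoryRequiredInMb": 8192,
        },
    },
    RuntimeConfig={"CopyCount": 1},  # number of copies to load on the endpoint
)
```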
 

Bain & Co

"One of the biggest challenges for Aura is to extract meaningful insights from a vast pool of unstructured professional data. By employing large language models via Amazon SageMaker Canvas, we've automated the data extraction process, transforming how companies assess workforce competencies and organizational structures. This approach not only helped us scale data analysis, but also bypassed the limitations of traditional data analysis methods like keyword matching. Using SageMaker Canvas’s new data preparation and LLM capabilities, Aura is able to quantitatively score and benchmark companies on the effectiveness of their organization structure, skills of the workforce, and performancein terms of financial results."

Purna Doddapaneni, CTO of Founder’s Studio and partner at Bain & Co.

Wix

“Amazon SageMaker inference helps us deploy models across multiple Availability Zones and run predictions at scale, either online or in batch mode.”

Itamar Keller, Research and Development Team Leader, Wix

Qred

“With a centralized platform using Amazon SageMaker, compliance is simpler. It’s simpler to add sensitive data when we have it centralized and secured.”

Lezgin Bakircioglu, Chief Technology Officer, Qred

Stability AI

“As the leading open-source generative AI company, our goal is to maximize the accessibility of modern AI. We are building foundation models with tens of billions of parameters, which require infrastructure that can deliver optimized training performance at scale. With SageMaker HyperPod’s managed infrastructure and optimization libraries, we can reduce training time and costs by over 50%. It makes our model training more resilient and performant, letting us build state-of-the-art models faster.”

Emad Mostaque, Founder and CEO - Stability AI

iFood
“At iFood, we strive to delight our customers through our services using technology such as machine learning (ML). Building a complete and seamless workflow to develop, train, and deploy models has been a critical part of our journey to scale ML. Amazon SageMaker Pipelines helps us to quickly build multiple scalable automated ML workflows, and makes it easy to deploy and manage our models effectively. SageMaker Pipelines enables us to be more efficient with our development cycle. We continue to emphasize our leadership in using AI/ML to deliver superior customer service and efficiency with all these new capabilities of Amazon SageMaker.”

Sandor Caetano, Chief Data Scientist, iFood
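
As a point of reference, a SageMaker Pipelines workflow of the kind described above can be defined with the SageMaker Python SDK. The sketch below wires a single training step into a pipeline; the container, hyperparameters, and S3 paths are hypothetical and would differ in a real workflow.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes execution inside SageMaker

# Hypothetical example: built-in XGBoost container trained on CSV data in S3.
xgb = Estimator(
    image_uri=sagemaker.image_uris.retrieve(
        "xgboost", session.boto_region_name, version="1.5-1"
    ),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/model-artifacts/",
)
xgb.set_hyperparameters(objective="binary:logistic", num_round=100)

train_step = TrainingStep(
    name="TrainModel",
    estimator=xgb,
    inputs={"train": TrainingInput("s3://example-bucket/train/", content_type="text/csv")},
)

pipeline = Pipeline(name="example-training-pipeline", steps=[train_step])
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # kick off one execution
```

Real pipelines usually add processing, evaluation, and model-registration steps in the same way, which is what makes the workflow repeatable end to end.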

Care.com
“A strong care industry where supply matches demand is essential for economic growth from the individual family up to the nation’s GDP. We’re excited about Amazon SageMaker Pipelines, as we believe it will help us scale better across our data science and development teams, by using a consistent set of curated data that we can use to build scalable end-to-end machine learning (ML) model pipelines from data preparation to deployment. With the newly announced capabilities of Amazon SageMaker, we can accelerate development and deployment of our ML models for different applications, helping our customers make better informed decisions through faster real-time recommendations.”

Clemens Tummeltshammer, Data Science Manager, Care.com

3M
“Using ML, 3M is improving tried-and-tested products, like sandpaper, and driving innovation in several other spaces, including healthcare. As we plan to scale machine learning to more areas of 3M, we see the amount of data and models growing rapidly – doubling every year. We are enthusiastic about the new SageMaker features because they will help us scale. Amazon SageMaker Data Wrangler makes it much easier to prepare data for model training, and Amazon SageMaker Feature Store will eliminate the need to create the same model features over and over. Finally, Amazon SageMaker Pipelines will help us automate data prep, model building, and model deployment into an end-to-end workflow so we can speed time to market for our models. Our researchers are looking forward to taking advantage of the new speed of science at 3M.”

David Frazee, Technical Director at 3M Corporate Systems Research Lab

Canva
“At Canva, we’re on a mission to empower the world to design and make it easy for anyone to create something beautiful on any device. With generative AI, we’re helping users bring their ideas to life with as little friction as possible. Thanks to SageMaker JumpStart, we’re able to empower our teams to get started with generative AI and test various foundation models. In our global hackathon, Canvanauts were able to easily deploy a wide variety of foundation models and get their projects up and running. It was a key part of our hackathon’s success.”

Nic Wittison, Engineering Lead for AI Products, Canva
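
Deploying a JumpStart foundation model for this kind of experimentation generally follows the pattern sketched below with the SageMaker Python SDK; the model ID, instance type, and prompt are hypothetical placeholders, and some models additionally require accepting a license agreement.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Hypothetical model ID; JumpStart catalogs many foundation models by ID.
model = JumpStartModel(model_id="huggingface-llm-example-7b")

# Deploy to a real-time endpoint and run a quick test prompt.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)
response = predictor.predict(
    {"inputs": "Summarize: SageMaker JumpStart lets teams try foundation models quickly."}
)
print(response)

predictor.delete_endpoint()  # clean up once the experiment is done
```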

Dovetail
“At Dovetail, we’re helping organizations improve the quality of their products and services through the power of better understanding their customers. With Amazon SageMaker JumpStart, we’re able to easily access, test, and deploy cutting-edge foundation models. We used AI21 Jurassic-2 Mid to enable enhanced summarization and were able to integrate it into our SaaS application within weeks, instead of taking months to implement. Our customers can now efficiently distill and understand insights from their data while maintaining data privacy and security assurance.”

Chris Manouvrier, Enterprise Architect Manager, Dovetail

Lexitas
“Our clients have thousands of legal documents and the process of parsing through these documents is tedious and time-consuming. Oftentimes, there isn’t a way to quickly get answers, such as understanding who asked a question in a deposition. Now with Amazon SageMaker JumpStart, we can access state-of-the-art foundation models to power our products so customers can address a variety of use cases, such as contradiction detection and semantic searching, through thousands of documents at once. Attorneys can now leverage past transcripts to prepare for future events, while maintaining strict security and compliance needs.”

Jason Primuth, Chief Innovation Officer, Lexitas

Tyson
“At Tyson Foods, we continue to look for new ways to use machine learning (ML) in our production process to improve productivity. We use image classification models to identify products from the production line that require package labels. However, the image classification models need to be retrained with new images from the field on a recurring basis. Amazon SageMaker JumpStart enables our data scientists to share ML models with support engineers so they can train ML models with new data without writing any code. This accelerates the time-to-market of ML solutions, promotes continuous improvements, and increases productivity.”

Rahul Damineni, Specialist Data Scientist, Tyson Foods

Mission Automate
“Thanks to Amazon SageMaker JumpStart, we are able to launch ML solutions within days to fulfill machine learning prediction needs faster and more reliably.”

Alex Panait, CEO, Mission Automate

MyCase
“Thanks to Amazon SageMaker JumpStart, we have better starting points that allow us to deploy an ML solution for our own use cases in 4-6 weeks instead of 3-4 months.”

Gus Nguyen, Software Engineer, MyCase

Pivotree
“With Amazon SageMaker JumpStart, we can build ML applications such as automatic anomaly detection or object classification faster and launch solutions from proof of concept to production within days.”

Milos Hanzel, Platform Architect, Pivotree  

Bundesliga
Bundesliga Match Facts, powered by AWS, provides a more engaging experience during soccer matches for Bundesliga fans around the world. With Amazon SageMaker Clarify, the Bundesliga can now interactively explain which key underlying components led the ML model to predict a certain xGoals value. Knowing the respective feature attributions and explaining outcomes helps with model debugging and increases confidence in ML algorithms, which results in higher-quality predictions.
 
"Amazon SageMaker Clarify seamlessly integrates with the rest of the Bundesliga Match Facts digital platform and is a key part of our long-term strategy of standardizing our ML workflows on Amazon SageMaker. By using AWS’s innovative technologies, such as machine learning, to deliver more in-depth insights and provide fans a better understanding of the split-second decisions made on the pitch, Bundesliga Match Facts enables viewers to gain deeper insights into the key decisions in each match."

Andreas Heyden, Executive Vice President of Digital Innovations, DFL Group

CAPCOM
CAPCOM is a Japanese game company famous for titles such as the Monster Hunter series and Street Fighter. To maintain user satisfaction, CAPCOM needed to assure game quality and identify likely churners and their trends.
 
"The combination of AutoGluon and Amazon SageMaker Clarify enabled our customer churn model to predict customer churn with 94% accuracy. SageMaker Clarify helps us understand the model behavior by providing explainability through SHAP values. With SageMaker Clarify, we reduced the computation cost of SHAP values by up to 50% compared to a local calculation. The joint solution gives us the ability to better understand the model and improve customer satisfaction at a higher rate of accuracy with significant cost savings."

Masahiro Takamoto, Head of Data Group, CAPCOM
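
SHAP-based explanations like the ones CAPCOM mentions are usually produced with a Clarify processing job. The sketch below shows the general shape of such a job in the SageMaker Python SDK; the churn model name, feature columns, baseline values, role, and S3 paths are hypothetical placeholders.

```python
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::111122223333:role/ExampleSageMakerRole"  # hypothetical role

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/churn/validation.csv",
    s3_output_path="s3://example-bucket/clarify-output/",
    label="churned",
    headers=["churned", "tenure_days", "sessions_last_30d", "purchases_last_30d"],
    dataset_type="text/csv",
)

model_config = clarify.ModelConfig(
    model_name="example-churn-model",  # a model already created in SageMaker
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

# SHAP baseline: one row of "typical" feature values (hypothetical numbers).
shap_config = clarify.SHAPConfig(
    baseline=[[30, 5, 1]],
    num_samples=100,
    agg_method="mean_abs",
)

processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```

The resulting report contains per-feature SHAP attributions, which is the kind of explainability output the quote refers to.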

Domo
Domo is the Business Cloud, transforming the way business is managed by delivering Modern BI for All. With Domo, critical processes that took weeks, months, or more can now be done on-the-fly, in minutes or seconds, at unbelievable scale.
 
"Domo offers a scalable suite of data science solutions that are easy for anyone in an organization to use and understand. With Clarify, our customers are enabled with important insights on how their AI models are making predictions. The combination of Clarify with Domo helps to increase AI speed and intelligence for our customers by putting the power of AI into the hands of everyone across their business and ecosystems."

Ben Ainscough, Ph.D., Head of AI and Data Science, Domo

Varo

Varo Bank is a US-based digital bank and uses AI/ML to help make rapid, risk-based decisions to deliver its innovative products and services to customers.

"Varo has a strong commitment to the explainability and transparency of our ML models and we're excited to see the results from Amazon SageMaker Clarify in advancing these efforts."

Sachin Shetty, Head of Data Science, Varo Money

LG AI Research

LG AI Research aims to lead the next era of AI by using Amazon SageMaker to train and deploy ML models faster.

“We recently debuted Tilda, the AI artist powered by EXAONE, a super giant AI system that can process 250 million high-definition image-text pair datasets. The multi-modality AI allows Tilda to create a new image by itself, with its ability to explore beyond the language it perceives. Amazon SageMaker was essential in developing EXAONE, because of its scaling and distributed training capabilities. Specifically, due to the massive computation required to train this super giant AI, efficient parallel processing is very important. We also needed to continuously manage large-scale data and be flexible to respond to newly acquired data. Using Amazon SageMaker model training and distributed training libraries, we optimized distributed training and trained the model 59% faster—without major modifications to our training code.”

Seung Hwan Kim, Vice President and Vision Lab Leader, LG AI Research

AI21 Labs
“At AI21 Labs, we help businesses and developers use cutting-edge language models to reshape how their users interact with text, with no NLP expertise required. Our developer platform, AI21 Studio, provides access to text generation, smart summarization, and even code generation, all based on our family of large language models. Our recent Jurassic-Grande™ model, with 17 billion parameters, was trained using Amazon SageMaker. Amazon SageMaker made the model training process easier and more efficient, and worked perfectly with the DeepSpeed library. As a result, we were able to easily scale our distributed training jobs to hundreds of NVIDIA A100 GPUs. The Grande model provides text generation quality on par with our much larger 178-billion-parameter model, at a much lower inference cost. Consequently, our clients deploying Jurassic-Grande in production are able to serve millions of real-time users on a daily basis and enjoy the advantage of improved unit economics without sacrificing user experience.”

Dan Padnos, Vice President Architecture, AI21 Labs

Torc.ai

With the help of Amazon SageMaker and the Amazon SageMaker distributed data parallel (SMDDP) library, Torc.ai, an autonomous vehicle leader since 2005, is commercializing self-driving trucks for safe, sustained, long-haul transit in the freight industry.

“My team is now able to easily run large-scale distributed training jobs using Amazon SageMaker model training and the Amazon SageMaker distributed data parallel (SMDDP) library, involving terabytes of training data and models with millions of parameters. Amazon SageMaker distributed model training and the SMDDP have helped us scale seamlessly without having to manage training infrastructure. It reduced our time to train models from several days to a few hours, enabling us to compress our design cycle and bring new autonomous vehicle capabilities to our fleet faster than ever.”

Derek Johnson, Vice President of Engineering, Torc.ai
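
The data parallel setup referenced above is typically enabled with a single distribution flag on a SageMaker estimator, roughly as in the sketch below; the training script, instance count, role, and S3 paths are hypothetical placeholders rather than Torc.ai's actual configuration.

```python
from sagemaker.pytorch import PyTorch

# Illustrative sketch: a multi-node PyTorch training job with the SageMaker
# distributed data parallel (SMDDP) library enabled. Script, role, and paths
# are hypothetical placeholders.
estimator = PyTorch(
    entry_point="train.py",
    source_dir="src",
    role="arn:aws:iam::111122223333:role/ExampleSageMakerRole",
    framework_version="2.0.1",
    py_version="py310",
    instance_count=4,
    instance_type="ml.p4d.24xlarge",
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit({"train": "s3://example-bucket/training-data/"})
```

Inside train.py, the script would initialize the SMDDP backend for torch.distributed, which is what lets gradient exchange scale efficiently across nodes.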

Sophos

Sophos, a worldwide leader in next-generation cybersecurity solutions and services, uses Amazon SageMaker to train its ML models more efficiently.

“Our powerful technology detects and eliminates files cunningly laced with malware. Employing XGBoost models to process multiple-terabyte-sized datasets, however, was extremely time-consuming—and sometimes simply not possible with limited memory space. With Amazon SageMaker distributed training, we can successfully train a lightweight XGBoost model that is much smaller on disk (up to 25 times smaller) and in memory (up to five times smaller) than its predecessor. Using Amazon SageMaker automatic model tuning and distributed training on Spot Instances, we can quickly and more effectively modify and retrain models without adjusting the underlying training infrastructure required to scale out to such large datasets.”

Konstantin Berlin, Head of Artificial Intelligence, Sophos

Read the blog »
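
Combining automatic model tuning with Spot-based training, as described in the Sophos quote, generally looks like the sketch below in the SageMaker Python SDK; the XGBoost settings, tuning ranges, metric, and S3 paths are hypothetical placeholders.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes execution inside SageMaker

# Hypothetical XGBoost training job that runs on Spot Instances.
xgb = Estimator(
    image_uri=sagemaker.image_uris.retrieve(
        "xgboost", session.boto_region_name, version="1.5-1"
    ),
    role=role,
    instance_count=1,
    instance_type="ml.m5.4xlarge",
    use_spot_instances=True,
    max_run=3600,   # cap on training time, in seconds
    max_wait=7200,  # cap on training time plus time spent waiting for Spot capacity
    output_path="s3://example-bucket/tuning-output/",
)
xgb.set_hyperparameters(objective="binary:logistic", num_round=200)

tuner = HyperparameterTuner(
    estimator=xgb,
    objective_metric_name="validation:auc",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=20,
    max_parallel_jobs=4,
)

tuner.fit({
    "train": TrainingInput("s3://example-bucket/train/", content_type="text/csv"),
    "validation": TrainingInput("s3://example-bucket/validation/", content_type="text/csv"),
})
```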

Aurora
"Aurora’s advanced machine learning and simulation at scale are foundational to developing our technology safely and quickly, and AWS delivers the high performance we need to maintain our progress. With its virtually unlimited scale, AWS supports millions of virtual tests to validate the capabilities of the Aurora Driver so that it can safely navigate the countless edge cases of real-world driving." 

Chris Urmson, CEO, Aurora

Watch the video »

Hyundai
"We use computer vision models to do scene segmentation, which is important for scene understanding. It used to take 57 minutes to train the model for one epoch, which slowed us down. Using Amazon SageMaker’s data parallelism library and with the help of the Amazon ML Solutions Lab, we were able to train in 6 minutes with optimized training code on 5ml.p3.16xlarge instances. With the 10x reduction in training time, we can spend more time preparing data during the development cycle." 

Jinwook Choi, Senior Research Engineer, Hyundai Motor Company

Read the blog »

Latent Space
“At Latent Space, we're building a neural-rendered game engine where anyone can create at the speed of thought. Driven by advances in language modeling, we're working to incorporate semantic understanding of both text and images to determine what to generate. Our current focus is on utilizing information retrieval to augment large-scale model training, for which we have sophisticated ML pipelines. This setup presents a challenge on top of distributed training since there are multiple data sources and models being trained at the same time. As such, we're leveraging the new distributed training capabilities in Amazon SageMaker to efficiently scale training for large generative models.”

Sarah Jane Hong, Cofounder/Chief Science Officer, Latent Space

Read the blog »

Musixmatch
“Musixmatch uses Amazon SageMaker to build natural language processing (NLP) and audio processing models and is experimenting with Hugging Face with Amazon SageMaker. We choose Amazon SageMaker because it allows data scientists to iteratively build, train, and tune models quickly without having to worry about managing the underlying infrastructure, which means data scientists can work more quickly and independently. As the company has grown, so too have our requirements to train and tune larger and more complex NLP models. We are always looking for ways to accelerate training time while also lowering training costs, which is why we are excited about Amazon SageMaker Training Compiler. SageMaker Training Compiler provides more efficient ways to use GPUs during the training process and, with the seamless integration between SageMaker Training Compiler, PyTorch, and high-level libraries like Hugging Face, we have seen a significant improvement in training time of our transformer-based models going from weeks to days, as well as lower training costs.”

Loreto Parisi, Artificial Intelligence Engineering Director, Musixmatch
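
Enabling SageMaker Training Compiler on a Hugging Face training job is mostly a matter of passing a compiler configuration to the estimator, roughly as sketched below; the training script, hyperparameters, and role are hypothetical, and the framework versions must be a combination the compiler actually supports.

```python
from sagemaker.huggingface import HuggingFace, TrainingCompilerConfig

# Illustrative sketch: a Hugging Face estimator with SageMaker Training
# Compiler enabled. Script, role, and hyperparameters are hypothetical, and
# the versions below should be checked against the supported-combinations list.
estimator = HuggingFace(
    entry_point="train.py",
    source_dir="src",
    role="arn:aws:iam::111122223333:role/ExampleSageMakerRole",
    transformers_version="4.21.1",
    pytorch_version="1.11.0",
    py_version="py39",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    compiler_config=TrainingCompilerConfig(),
    hyperparameters={"epochs": 3, "train_batch_size": 24},
)

estimator.fit({"train": "s3://example-bucket/tokenized-train/"})
```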

AT&T

AT&T Cybersecurity used Amazon SageMaker multi-model endpoints to improve threat detection that requires near-real-time predictions.

“Amazon SageMaker multi-model endpoints are not only cost effective, but they also give us a nice little performance boost from simplification of how we store our models.”

Matthew Schneid, Chief Architect - AT&T

Read more »
Forethought

Forethought Technologies, a provider of generative AI solutions for customer service, reduced costs by up to 80 percent using Amazon SageMaker.

“By migrating to Amazon SageMaker multi-model endpoints, we reduced our costs by up to 66% while providing better latency and better response times for customers.”

Jad Chamoun, Director of Core Engineering - Forethought Technologies

Read more »
Bazaarvoice

Bazaarvoice reduced ML inference costs by 82% using SageMaker Serverless Inference.

“By using SageMaker Serverless Inference, we can do ML efficiently at scale, quickly getting out a lot of models at a reasonable cost and with low operational overhead.”

Lou Kratz, Principal Research Engineer – Bazaarvoice

Read more »
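
Serverless Inference, as referenced above, removes instance management from deployment: the endpoint is configured with memory and concurrency instead of an instance type. A minimal sketch is shown below; the model artifact, inference script, and role are hypothetical placeholders.

```python
from sagemaker.serverless import ServerlessInferenceConfig
from sagemaker.sklearn import SKLearnModel

# Illustrative sketch: deploy a model artifact to a serverless endpoint.
# The artifact path, script, and role are hypothetical placeholders.
model = SKLearnModel(
    model_data="s3://example-bucket/models/model.tar.gz",
    role="arn:aws:iam::111122223333:role/ExampleSageMakerRole",
    entry_point="inference.py",
    framework_version="1.2-1",
)

predictor = model.deploy(
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=2048,  # memory allocated per concurrent invocation
        max_concurrency=10,      # concurrent invocations before throttling
    )
)

print(predictor.predict([[0.5, 1.2, 3.4]]))
```
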
Tapjoy

Tapjoy uses Amazon SageMaker to deploy ML models in days instead of months.

“We've gone from taking about three to six months to train, build, and deploy a model. Now with SageMaker, we can do that within one week, perhaps even shorter.”

Nick Reffitt, Vice President of Data Science and Engineering - Tapjoy

Read more »
Zendesk

Zendesk hosted thousands of ML models in Amazon SageMaker multi-model endpoints (MME) for their Suggested Macros feature and achieved 90% cost savings on inference compared to dedicated endpoints.

“We deployed thousands of ML models, customized for our 100K+ customers, using Amazon SageMaker multi-model endpoints (MME). With SageMaker MME, we built a multi-tenant, SaaS friendly inference capability to host multiple models per endpoint, reducing inference cost by 90% compared to dedicated endpoints.”

Chris Hausler, Head of AI/ML – Zendesk

Read more »
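
Hosting many per-tenant models behind one endpoint, as Zendesk describes, is what SageMaker multi-model endpoints provide: all model artifacts live under one S3 prefix and are loaded on demand. The sketch below illustrates the pattern; the S3 prefix, inference script, and model file names are hypothetical placeholders.

```python
import sagemaker
from sagemaker.multidatamodel import MultiDataModel
from sagemaker.sklearn import SKLearnModel

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes execution inside SageMaker

# Illustrative sketch: host many per-customer model artifacts behind a single
# endpoint. The S3 prefix, script, and model names are hypothetical.
base_model = SKLearnModel(
    model_data="s3://example-bucket/mme/models/customer-0001.tar.gz",
    role=role,
    entry_point="inference.py",
    framework_version="1.2-1",
    sagemaker_session=session,
)

mme = MultiDataModel(
    name="example-suggested-macros-mme",
    model_data_prefix="s3://example-bucket/mme/models/",  # all model tarballs live here
    model=base_model,
    sagemaker_session=session,
)

predictor = mme.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# Route a request to one specific customer's model; others load on demand.
result = predictor.predict(data=[[1.0, 2.0, 3.0]], target_model="customer-0001.tar.gz")
print(result)
```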

Amazon Pharmacy

“With Amazon SageMaker JumpStart, we were able to experiment with several foundation models, select the ones that best fit our needs in healthcare, and quickly launch ML applications using SageMaker’s HIPAA-compliant model deployment. This has allowed us to improve the speed and scale of the data entry process for prescriptions and of customer care.”

Alexandre Alves, Sr. Principal Engineer, Amazon Pharmacy

Intuit

“With Amazon SageMaker, we can accelerate our Artificial Intelligence initiatives at scale by building and deploying our algorithms on the platform. We will create novel large-scale machine learning and AI algorithms and deploy them on this platform to solve complex problems that can power prosperity for our customers.”

Ashok Srivastava, Chief Data Officer - Intuit

GE Healthcare

Harnessing data and analytics across hardware, software, and biotech, GE Healthcare is transforming healthcare by delivering better outcomes for providers and patients. 

“Amazon SageMaker allows GE Healthcare to access powerful artificial intelligence tools and services to advance improved patient care. The scalability of Amazon SageMaker, and its ability to integrate with native AWS services, adds enormous value for us. We are excited about how our continued collaboration between the GE Health Cloud and Amazon SageMaker will drive better outcomes for our healthcare provider partners and deliver improved patient care.”

Sharath Pasupunuti, AI Engineering Leader - GE Healthcare

ADP, Inc.

ADP is a leading global technology company providing human capital management (HCM) solutions. ADP DataCloud leverages ADP's unmatched workforce data from over 30 million employees to deliver actionable insights that can help executives make real-time decisions to better manage their businesses.

“Retaining and attracting talent is difficult, which is why we continue to enhance ADP DataCloud with artificial intelligence capabilities to help employers maintain strong teams. We use AWS machine learning, including Amazon SageMaker, to quickly identify workforce patterns and predict outcomes before they happen—for example, employee turnover or the impact of an increase in compensation. By leveraging AWS as our primary platform for artificial intelligence and machine learning, we have reduced time to deploy machine learning models from 2 weeks to just 1 day.”

Jack Berkowitz, SVP of Product Development – ADP, Inc.

BASF Digital Farming

BASF Digital Farming has a mission to empower farmers to make smarter decisions and contribute to solving the challenge of feeding a growing world population, while also reducing environmental footprint.

“Amazon SageMaker and related AWS Technology support rapid experimentation and provide easy to use functionality and APIs which lower the entry barrier for ML adoption. This way we can unlock the full value potential of ML use cases quickly.”

Dr. Christian Kerkhoff, Manager Data Automation - BASF Digital Farming GmbH

Cerner

Cerner Corporation is a global health and technology company that supplies a variety of health information technology (HIT) solutions, services, devices, and hardware.

“Cerner is proud to drive artificial intelligence and machine learning innovation across a wide range of clinical, financial, and operational experiences. Through new capabilities created by both Cerner’s Machine Learning Ecosystem and Cerner Natural Language Processing, and enabled by our collaboration with AWS, we are accelerating scalable innovation for all our clients. Amazon SageMaker is an important component in enabling Cerner to deliver on our intent to provide value for our clients through AI/ML. Additionally, Amazon SageMaker provides Cerner with the ability to leverage different frameworks like TensorFlow and PyTorch as well as the ability to integrate with various AWS services.”

Sasanka Are, PhD, Vice President - Cerner

Dow Jones

Dow Jones & Co. is a global provider of news and business information, delivering content to consumers and organizations via newspapers, websites, mobile apps, video, newsletters, magazines, proprietary databases, conferences, and radio.

“As Dow Jones continues to focus on integrating machine learning into our products and services, AWS has been a great partner. Leading up to our recent Machine Learning Hackathon, the AWS team provided training to participants on Amazon SageMaker and Amazon Rekognition, and offered day-of support to all the teams. The result was that our teams developed some great ideas for how we can apply machine learning, many of which we’ll continue to develop on AWS. The event was a huge success, and an example of what a great partnership can look like.”

Ramin Beheshti, Group Chief Product and Technology Officer - Dow Jones

Advanced Microgrid Solutions

Advanced Microgrid Solutions (AMS) is an energy platform and services company that aims to accelerate the worldwide transformation to a clean energy economy by facilitating the deployment and optimization of clean energy assets. NEM uses a spot market where all parties bid to consume/supply energy every 5 minutes. This requires predicting demand forecasts and coming up with dynamic bids in minutes, while processing massive amounts of market data. To solve this challenge, AMS built a deep learning model using TensorFlow on Amazon SageMaker. They took advantage of Amazon SageMaker's automatic model tuning to discover the best model parameters and build their model in just weeks. Their model demonstrated improvement in market forecasts across all energy products in net energy metering, which will translate into significant efficiencies.

ProQuest

ProQuest curates the world’s largest collection of journals, ebooks, primary sources, dissertations, news, and video – and builds powerful workflow solutions to help libraries acquire and grow their collections. ProQuest products and services are used in academic, K-12, public, corporate, and government libraries in 150 countries.

“We are collaborating with AWS to build a more appealing video user experience for library patrons, enabling their searches to return more relevant results. By working with the AWS ML Solutions Lab, we tested different algorithms using Amazon SageMaker, tuned the models using hyperparameter optimization, and automated the deployment of machine learning (ML) models. We are pleased with the results thus far and are currently considering ML technologies for other products.”

Allan Lu, Vice President of Research Tools, Services & Platforms - ProQuest

Celgene

Celgene is a global biopharmaceutical company committed to improving the lives of patients worldwide. The focus is on the discovery, development, and commercialization of innovative therapies for patients with cancer, immune-inflammatory, and other unmet medical needs.

“At Celgene, our vision is to deliver truly innovative and life-changing treatments and improve the lives of patients worldwide. With Amazon SageMaker and Apache MXNet, building and training deep learning models to develop solutions and processes has been quicker and easier than before and we’re able to easily scale our efforts to discover treatments and produce drugs. Using SageMaker and Amazon EC2 P3 instances has accelerated our time-to-train models and productivity, allowing our team to focus on groundbreaking research and discovery.”

Lance Smith, Director - Celgene

Atlas Van Lines

Atlas Van Lines is the second-largest van line in North America, formed in 1948 by a group of entrepreneurs in the moving and storage industry. The organization was developed with the single goal of moving from coast to coast while adhering to the golden rule of business. In addition to a robust footprint, Atlas boasts stringent agent quality requirements that surpass those of the industry.

During peak moving seasons, the Atlas agent network works together across markets to meet customer demand. Traditionally, their capacity forecasting was manual and labor-intensive, relying on the wisdom and gut instinct of staff with many years of experience. Atlas had historical data from 2011 onward and wanted to find a way to dynamically adjust capacity and price based on future market demand.

Atlas worked with Pariveda Solutions, an APN Premier Consulting Partner, to help unlock the possibility of proactive capacity and price management in the long-haul moving industry. Pariveda prepared the data, developed and evaluated the machine learning model, and tuned its performance. They used Amazon SageMaker to train and optimize the model and then, taking advantage of Amazon SageMaker’s modular design, exported it to run on Amazon EC2.

Edmunds

Edmunds.com is a car-shopping website that offers detailed, constantly updated information about vehicles to 20 million monthly visitors.

“We have a strategic initiative to put machine learning into the hands of all of our engineers. Amazon SageMaker is key to helping us achieve this goal, making it easier for engineers to build, train, and deploy machine learning models and algorithms at scale. We are excited to see how Edmunds will use SageMaker to innovate new solutions across the organization for our customers.”

Stephen Felisan, Chief Information Officer - Edmunds.com

Hotels.com

Hotels.com is a leading global lodging brand operating 90 localized websites in 41 languages.

“At Hotels.com, we are always interested in ways to move faster, to leverage the latest technologies and stay innovative. With Amazon SageMaker, the distributed training, optimized algorithms, and built-in hyperparameter features should allow my team to quickly build more accurate models on our largest data sets, reducing the considerable time it takes us to move a model to production. It is simply an API call. Amazon SageMaker will significantly reduce the complexity of machine learning, enabling us to create a better experience for our customers, fast.”

Matt Fryer, VP and Chief Data Science Officer - Hotels.com and Expedia Affiliate Network

Formosa Plastics

Formosa Plastics Corporation is a growing, vertically integrated supplier of plastic resins and petrochemicals. Formosa Plastics offers a full line of polyvinyl chloride, polyethylene and polypropylene resins, caustic soda, and other petrochemicals that deliver the consistency, performance, and quality that customers demand.

"Formosa Plastics is one of Taiwan’s top petrochemical companies and ranks among the world's leading plastics manufacturers. We decided to explore machine learning to enable more accurate detection of defects and reduce manual labor costs, and we turned to AWS as our preferred cloud provider to help us do that. The AWS ML Solutions Lab worked with us through every step of the process, from a discovery workshop to define the business use cases to the building and selection of appropriate ML models to the actual deployment. Using Amazon SageMaker, the machine learning solution reduced our employee time spent doing manual inspection in half. With the Solutions Lab’s help, we are now able to optimize the SageMaker model ourselves going forward as conditions change.”

Bill Lee, Assistant Vice President - Formosa Plastics Corporation

Voodoo

Voodoo is a leading mobile gaming company with over 2 billion game downloads and over 400 million monthly active users (MAU). They run their own advertising platform and are using machine learning to improve the accuracy and quality of ad bids that are shown to their users.

"At Voodoo, we need to keep a millions-and-growing player base actively engaged. By standardizing our machine learning and artificial intelligence workloads on AWS, we’re able to iterate at the pace and scale we need to continue growing our business and engaging our gamers. Using Amazon SageMaker, we can decide in real time which ad should be shown to our players and invoke our endpoint over 100 million times by over 30 million users daily, representing close to a billion predictions per day. With AWS machine learning, we were able to put an accurate model into production in less than a week, supported by a small team, and have been able to build on top of it continuously as our team and business grow.”

Aymeric Roffé, Chief Technology Officer – Voodoo

Regit

Formerly Motoring.co.uk, Regit is an automotive tech firm and the UK’s leading online service for motorists. They deliver digital car management services based on a car’s registration plate, and provide drivers with informative reminders such as Ministry of Transport (MOT) tax, insurance, and recalls.

Regit worked with Peak Business Insight, an APN Advanced Consulting Partner, to apply “Categorical Machine Learning models” that handle both category and variable data simultaneously to give predictions about the likelihood of users changing cars, resulting in a sale for Regit.

Peak used AWS services such as Amazon SageMaker for real-time ingestion, modeling, and data output. Amazon SageMaker handles 5,000 API requests a day for Regit, seamlessly scaling and adjusting to relevant data requirements and managing the delivery of lead scoring results. Meanwhile, Amazon Redshift and Amazon Elastic Compute Cloud (Amazon EC2) instances efficiently and continuously optimize model performance and results. With Peak, Regit has been able to predict which of its 2.5 million users are going to change cars and when. This means they can serve customers in a more personalized and targeted way, increasing call center revenues by more than a quarter.

Realtor.com

The Move, Inc. network, which includes realtor.com®, Doorsteps®, and Moving.com™, provides real estate information, tools, and professional expertise across a family of websites and mobile experiences for consumers and real estate professionals.

“We believe that Amazon SageMaker is a transformative addition to the realtor.com® toolset as we support consumers along their homeownership journey. Machine learning workflows that have historically taken a long time, like training and optimizing models, can be done with greater efficiency and by a broader set of developers, empowering our data scientists and analysts to focus on creating the richest experience for our users."

Vineet Singh, Chief Data Officer and Senior Vice President - Move, Inc.

Grammarly

Every day Grammarly’s algorithms help millions of people communicate more effectively by offering writing assistance on multiple platforms across devices, through a combination of natural language processing and advanced machine learning technologies.

“Amazon SageMaker makes it possible for us to develop our TensorFlow models in a distributed training environment. Our workflows also integrate with Amazon EMR for pre-processing, so we can get our data from Amazon S3, filter it with EMR and Spark from a Jupyter notebook, and then train in Amazon SageMaker with the same notebook. SageMaker is also flexible for our different production requirements. We can run inferences on SageMaker itself, or if we need just the model, we download it from S3 and run inferences on our mobile device implementations for iOS and Android customers.”

Stanislav Levental, Technical Lead - Grammarly

Slice Labs

Slice Labs, based in New York with worldwide operations, is the first on-demand insurance cloud platform provider. Slice serves the B2C market with individual on-demand insurance offerings, as well as the B2B market by enabling companies to build intuitive digital insurance products.

“At Slice, we are keenly aware of the ever-changing nature of customers’ insurance needs, and we’ve selected AWS as our go-to cloud platform because of its broad swath of services, flexibility, and strong reputation among insurers. We use a wide variety of AWS services to support our business, including AWS machine learning to help connect customers with the best insurance options given their needs. In our work with insurers and technology companies seeking to build and launch intelligent insurance products, we’ve seen tremendous cost savings and productivity benefits with AWS. For example, we’ve reduced procurement time by 98%, from 47 days to 1 day. We’re excited to continue expanding both geographically and in terms of our cloud use with AWS."

Philippe Lafreniere, Chief Growth Officer - Slice Labs

DigitalGlobe

As the world’s leading provider of high-resolution Earth imagery, data, and analysis, DigitalGlobe works with enormous amounts of data every day.

“As the world’s leading provider of high-resolution Earth imagery, data, and analysis, DigitalGlobe works with enormous amounts of data every day. DigitalGlobe is making it easier for people to find, access, and run compute against our entire 100PB image library, which is stored in the AWS cloud, to apply deep learning to satellite imagery. We plan to use Amazon SageMaker to train models against petabytes of Earth observation imagery datasets using hosted Jupyter notebooks, so DigitalGlobe's Geospatial Big Data Platform (GBDX) users can just push a button, create a model, and deploy it, all within one scalable distributed environment.”

Dr. Walter Scott, Chief Technology Officer - Maxar Technologies and founder of DigitalGlobe

Intercom

Intercom’s messaging-first products integrate seamlessly with other companies' websites and mobile apps to help them acquire, engage, and support customers. Founded in 2011, the company has offices in San Francisco, London, Chicago, and Dublin.

“At Intercom, we have a growing team of data scientists and data-oriented engineers, and we often want to iterate quickly and explore new solutions for data-driven products. Prior to Amazon SageMaker, we tried a lot of different options to build these products, but each one came with challenges – code sharing was hard, testing on big datasets was slow, and provisioning and managing hardware on our own was problematic. SageMaker came along and solved all that for us. We use it in particular to develop algorithms for our search platforms and machine learning features, and we find SageMaker's hosted Jupyter notebooks allow us to build and iterate rapidly. Crucially, the fact that SageMaker is a managed service allows my team to focus on the task at hand. Amazon SageMaker is an extremely valuable service to us at Intercom, and we're excited to continue using it more and more as our company grows.”

Kevin McNally, Senior Data Scientist Machine Learning - Intercom

Kinect Energy Group

Kinect Energy Group is a subsidiary of World Fuel Services, a Fortune 100 company that provides energy procurement advisory services, supply fulfillment, and transaction and payment management solutions to commercial and industrial customers, principally in the aviation, marine, and land transportation industries. Kinect Energy is a key Nordic energy provider and is dependent on the natural power resources enabled by the region’s windy climate.

The business has recently catapulted forward with the introduction of a number of AI / ML services from AWS. With Amazon SageMaker, the company can predict the upcoming weather trends and therefore the prices of future months’ electricity, enabling unprecedented long-range energy trading that represents an industry-leading forward-thinking approach.

“We started using Amazon SageMaker, and with the help of the AWS ML Solutions team and the Solutions Architecture team, we picked up momentum with Innovation Day, and the impact has been tremendous ever since. We’ve grown our own AI team several times to fully exploit the new advantage that AWS technologies provide. We’re profiting in new ways by setting prices based on weather that hasn’t yet happened. We’ve gone 'all in' with AWS, storing our data in Amazon S3 and using AWS Lambda and AWS Step Functions in addition to SageMaker. And, thanks to the AWS ML Solutions Lab’s committed partnership, we are now self-sufficient, able to iterate on the models we’ve built and continue improving our business.”

Andrew Stypa, Lead Business Analyst - Kinect Energy Group

Frame.io

Frame.io is your hub for all things video. The leader in video review and collaboration with 700K+ customers globally, Frame.io is where video professionals across the entire spectrum — from freelance to enterprise — come to review, approve, and deliver video.

“As a cloud-native video review and collaboration platform accessible to users all over the world, it's imperative we provide best-in-class security to our customers. With the anomaly detection model built into Amazon SageMaker, we are able to leverage machine learning to quickly identify, detect, and block any unwanted IP requests to ensure our clients' media remains secure and protected at all times. Getting started with Amazon SageMaker, maintaining it over time, scaling it across our platform, and adjusting to our specific workflows has been simple and straightforward. And, with the help of Jupyter notebooks in SageMaker, we've been able to experiment with different models to improve our precision and recall in ways that make Frame.io even more secure.”

Abhinav Srivastava, VP and Head of Information Security - Frame.io

Cookpad

Cookpad is Japan’s largest recipe-sharing service, with about 60 million monthly users in Japan and about 90 million monthly users globally.

“With the increasing demand for easier use of Cookpad’s recipe service, our data scientists will be building more machine learning models in order to optimize the user experience. Attempting to minimize the number of training job iterations for best performance, we recognized a significant challenge in the deployment of ML inference endpoints, which was slowing down our development processes. To automate the ML model deployment such that data scientists could deploy models by themselves, we used Amazon SageMaker’s inference APIs and proved that Amazon SageMaker would eliminate the need for application engineers to deploy ML models. We anticipate automating this process with Amazon SageMaker in production.” 

Yoichiro Someya, Research Engineer - Cookpad

Fabulyst

Fabulyst is an India-based startup focusing on fashion commerce that enables more positive and personalized experiences for shoppers and better conversions for retailers through AI.

“Fabulyst makes it easier for shoppers to find the perfect purchases by matching inventory items to users’ specific, personalized queries (e.g., suiting their body type or skin tone). At the same time, we help retailers to achieve more effective conversions by using computer vision to forecast monthly trends based on data from social media, search, blogs, etc. and auto-tagging those trends within our retail customers’ catalogs. Fabulyst uses AWS to deliver our best-in-class solutions, including Amazon SageMaker to handle the many predictions that support our offerings. Relying on SageMaker and other AWS services, we are able to guarantee value to our users – such as a 10% boost in incremental revenue for retailers – and have confidence in our ability to deliver super results every time.”

Komal Prajapati, Founder & CEO - Fabulyst

Terragon Group

Terragon Group is a data and marketing technology business that unlocks value for businesses using insight to reach the mobile audience in Africa. Over the years, Terragon Group has become a leader in the mobile space serving local and multi-national brands, spanning across multiple geographies. Delivering the right ad message to the right user at the right moment requires personalization, and Terragon uses data, insights, and artificial intelligence to help businesses reach the right audience in Africa.

“Amazon SageMaker provides an end-to-end machine learning workflow for us without the need for any underlying infrastructure plumbing. Our data science and machine learning teams are able to go quickly from data exploration to model training and production in just a couple of hours. For a business based in Africa with scarce engineering talent, there’s no other way we would have been able to build and deploy ML models solving real-life problems in less than 90 days.”

Deji Balogun, CTO - Terragon Group

SmartNews

SmartNews is the largest news app in Japan, delivering quality information to more than 11 million monthly active users around the world. With machine learning technologies, SmartNews surfaces the most relevant and interesting news stories for its users. The machine learning algorithms at SmartNews evaluate millions of articles, social signals, and human interactions to deliver the top 0.01% of stories that matter most, right now.

"Our mission to discover and deliver quality stories to the world is powered by AWS and particularly Amazon SageMaker, which has helped us accelerate the development cycle to serve our customers. Using Amazon SageMaker has helped us immensely in our news curation methods, including article classification using deep learning, predicting Life Time Value, and composite modeling for text and image. We look forward to achieving greater heights with Amazon SageMaker and other AI solutions from AWS.”

Kaisei Hamamoto, Co-Founder and Co-CEO - SmartNews, Inc.

Pioneer

Pioneer is a multinational corporation that specializes in digital entertainment, including car electronics and mobility services. Pioneer is driven by its corporate philosophy of "Move the Heart and Touch the Soul" and provides its customers with products and services that can help them in their everyday lives.

“Leveraging Amazon SageMaker and the model training features such as Automatic Model Tuning, we were able to develop highly accurate machine learning models, and continue to ensure privacy for our customers. We are also looking forward to leveraging AWS Marketplace for machine learning for both algorithms and pre-trained models to build a monetization platform."

Kazuhiro Miyamoto, General Manager Information Service Engineering Department - Pioneer

Dely

Dely runs Kurashiru, Japan's best cooking video service, and strives every day to build culinary services that impact the world. Kurashiru helps many people every day with cooking videos that introduce a variety of tasty recipes to brighten the dining table. Tens of millions of people in Japan use the recipe service every month.

“We exceeded 15 million downloads of our mobile app in the 2.5 years since we launched the popular Kurashiru service. We believe it is critical to deliver the right content to our users at the right time using advanced technologies such as machine learning. To achieve this, we used Amazon SageMaker, which helped us build and deploy the machine learning models in production in 90 days. We also improved the click-through rate by 15% with content personalization.”

Masato Otake, CTO - Dely, Inc.

Ayla Networks

Ayla Networks is a San Francisco-based IoT platform-as-a-service software company that develops solutions for both the consumer and commercial markets.

“At Ayla Networks, we find our customers primarily run on AWS infrastructure due to its proven scalability and reliability. In particular, we see that commercial manufacturers are leveraging Amazon SageMaker to harness the equipment performance data from the Ayla Cloud. With Amazon SageMaker and our Ayla IQ product, businesses can reveal insights and anomalies that lead to better product and service quality, even insofar as predicting machine failures and remedying them before they can occur. This solution keeps our customers running smoothly so their businesses can keep growing, producing, and scaling without worry.”

Prashanth Shetty, VP of Global Marketing - Ayla Networks

FreakOut

FreakOut is a leading technology company focused on digital advertising. The company offers products for real-time ad inventory transactions in internet advertising as well as data analysis of web browsing. FreakOut leverages machine learning for click-through rate (CTR) and conversion rate (CVR) predictions.

“We are in the process of migrating machine learning training environments from on premises to Amazon SageMaker. Amazon SageMaker offers us a more scalable solution for our business. With the Automatic Model Tuning feature from Amazon SageMaker, we can optimize and estimate highly accurate models for our requirements."

Jiro Nishiguchi, CTO - FreakOut

Wag!

"At Wag!, we have to meet the supply-and-demand needs in a two-sided marketplace. We saw an opportunity to use machine learning — powered by AWS — to predict the dog walking demand of our customers. By standardizing our machine learning applications on AWS, we are able to meet the continued growth of our business needs by iterating at a vastly improved pace and scale despite limited engineering resources. Using Amazon SageMaker, we can speed up our machine learning experimentation, compressing 45 days’ worth of computational time training the model into 3 days.”

Dave Bullock, VP of Technology of Engineering and Operations - Wag Labs Inc.

Infoblox

Infoblox is the leader in secure cloud-managed network services, designed to manage and secure the networking core, namely DNS, DHCP, and IP address management (collectively known as DDI).

"At Infoblox, we built a DNS security analytics service with Amazon SageMaker that detects malicious actors that create homographs to impersonate highly valued domain name targets and use them to drop malware, phish user information, and attack the reputation of a brand. AWS is our enterprise standard for cloud, and we can leverage multiple features offered by SageMaker to accelerate ML model development. Using SageMaker Automatic Model Tuning capabilities, we've scaled our experimentation and improved accuracy to 96.9%. Thanks to SageMaker, our IDN homographs detector, a part of our security analytics service, has identified over 60 million resolutions of homograph domains, and continues to find millions more each month, which helps our customers detect brand abuse faster."

Femi Olumofin, Analytics Architect - Infoblox

NerdWallet

NerdWallet, a personal finance company based in San Francisco, provides reviews and comparisons of financial products, including credit cards, banking, investing, loans, and insurance.

"NerdWallet relies on data science and ML to connect customers with personalized financial products. We chose to standardize our ML workloads on AWS because it allowed us to quickly modernize our data science engineering practices, removing roadblocks and speeding time-to-delivery. With Amazon SageMaker, our data scientists can spend more time on strategic pursuits and focus more energy where our competitive advantage is—our insights into the problems we're solving for our users.”

Ryan Kirkman, Senior Engineering Manager - NerdWallet

Splice

Splice is a creative platform for musicians, built by musicians, to empower artists to unleash their true creative potential. The subscription-based music creation startup was founded in 2013 and now caters to more than 3 million musicians that explore the catalog in search of the perfect sounds.

"As our catalog of sounds and presets grows, so does the challenge of finding the right sound. That’s why Splice has invested in building best-in-class search and discovery capabilities. By standardizing our ML workloads on AWS, we created a newer user-facing offering that aims to make it easier than ever to connect musicians with the sounds they’re looking for. Since the launch of Similar Sounds, we’ve seen nearly a 10 percent increase in search conversions. Leveraging Amazon SageMaker, we’ve created the perfect complement to text-based search, allowing our users to discover and navigate our catalog in ways that weren’t possible before.”

Alejandro Koretzky, Head of Machine Learning & Principal Engineer - Splice

Audeosoft

Audeosoft

"Before we started our machine learning journey, we only had the ability to search text of a curriculum vitae (CV), but our lack of optical character recognition capabilities meant that not every CV was searchable. With Amazon Textract, we can now extract content in every kind of document and we have the competence to index all uploaded files in an Elasticsearch cluster. Now every uploaded document is searchable using Elasticsearch, providing search speeds 10 times faster than the original SQL search. In addition, we implemented word vectoring using Amazon SageMaker to add related keywords to a search query. This process allows us to accurately classify and qualify candidates and helps us eliminate errors caused by synonyms or alternative wordings used in CVs. Using Amazon SageMaker and Amazon Textract, we can deliver smarter and better-quality candidates for recruiters. Stable performance, worldwide availability, and reliability are key success factors for Audeosoft. When we made the decision almost 8 years ago to partner with AWS, we knew that they would be an excellent partner for the future. By selecting AWS as our preferred cloud provider, we have a partner that has the same drive and the same desire to create innovation as we do for years to come.”

Marcel Schmidt, CTO - Audeosoft

Freshworks

Freshworks

Freshworks is a US- and India-based B2B SaaS unicorn serving small and medium-sized businesses (SMBs) and mid-market businesses worldwide. Freshworks offers a portfolio of simple-to-use yet powerful applications for customer engagement and employee engagement workflows.

"At Freshworks, we have built our flagship AI/ML offering, Freddy AI Skills, with hyper-personalized models that help agents address user queries and resolve support tickets successfully, sales and marketing teams prioritize opportunities and quickly close deals, and customer success managers reduce churn risk and grow the business. We chose to standardize our ML workloads on AWS because we could easily build, train, and deploy machine learning models optimized for our customers' use cases. Thanks to Amazon SageMaker, we have built more than 30,000 models for 11,000 customers while reducing training time for these models from 24 hours to under 33 minutes. With SageMaker Model Monitor, we can keep track of data drifts and retrain models to ensure accuracy. Powered by Amazon SageMaker, Freddy AI Skills is constantly evolving with smart actions, deep-data insights, and intent-driven conversations."

Tejas Bhandarkar, Senior Director of Product - Freshworks Platform
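
The data-drift tracking Freshworks mentions maps to SageMaker Model Monitor. A minimal sketch with the SageMaker Python SDK follows; the endpoint name, schedule name, and S3 locations are hypothetical placeholders, not Freshworks' configuration.

```python
# Minimal sketch of SageMaker Model Monitor data-drift detection.
from sagemaker import get_execution_role
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

role = get_execution_role()

monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

# Derive baseline statistics and constraints from the training dataset.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train/train.csv",      # hypothetical path
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/monitor/baseline",
    wait=True,
)

# Hourly schedule comparing captured endpoint traffic against the baseline.
monitor.create_monitoring_schedule(
    monitor_schedule_name="drift-schedule",                  # hypothetical name
    endpoint_input="my-endpoint",                            # hypothetical endpoint
    output_s3_uri="s3://my-bucket/monitor/reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```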

Veolia

Veolia

Veolia Water Technologies is an experienced design company and a specialized provider of technological solutions and services in water and wastewater treatment.

"In eight short weeks, we worked with AWS to develop a prototype that anticipates when to clean or change water filtering membranes in our desalination plants. Using Amazon SageMaker, we built an ML model that learns from previous patterns and predicts the future evolution of fouling indicators. By standardizing our ML workloads on AWS, we were able to reduce costs and prevent downtime while improving the quality of the water produced. These results couldn’t have been realized without the technical experience, trust, and dedication of both teams to achieve one goal: an uninterrupted clean and safe water supply."

Aude GIARD, Chief Digital Officer - Veolia Water Technologies

Sportradar

Sportradar

Sportradar, a leading sports data provider, delivers real-time sports data to over 65 leagues across the globe. In an effort to generate cutting-edge insights, the company collaborated with the Amazon ML Solutions Lab to develop a soccer goal predictor.

“We deliberately threw one of the hardest possible computer vision problems at the Amazon ML Solutions Lab team to test the capabilities of AWS machine learning, and I am very impressed with the results. The team built an ML model to predict soccer goals 2 seconds in advance of the live gameplay using Amazon SageMaker. This model alone has opened doors to many new business opportunities for us. We look forward to standardizing our ML workloads on AWS because we can build, train, and deploy models that promote innovation in our business and meet our cost and latency requirements.”  

Ben Burdsall, CTO - Sportradar

Roche

Roche

F. Hoffmann-La Roche AG (Roche) is a Swiss multinational life science company specializing in pharmaceuticals and diagnostics.

“I wanted to push my teams to systematize our ML workflows in the cloud, so we worked with the Machine Learning Solutions Lab to deliver Amazon SageMaker workshops, demonstrating how SageMaker streamlines the ML production process for data scientists. Since the workshop, 80% of our ML workloads run on AWS, which helps our teams bring ML models to production three times faster. SageMaker and the AWS stack empower us to use compute resources to train on demand without being constrained by on-premises availability.”  

Gloria Macia, Data Scientist - Roche


Guru

“At Guru, we believe the knowledge you need to do your job should find you. We are a knowledge management solution that captures your team's most valuable information and organizes it into a single source of truth. We leverage AI to recommend knowledge to you in real time where you work, ensure it stays verified, and help you better manage your overall knowledge base. Our growing product data science team faces all of the challenges of the modern-day ML team — building, training, and deploying ML systems at scale — and we rely on Amazon SageMaker as a means of overcoming some of these challenges. We currently leverage SageMaker Inference to more quickly deploy our ML models to production, where they help us to meet our number-one goal — provide value to our customers.”  

Nabin Mulepati, Staff ML Engineer - Guru
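
Deploying a model to a real-time endpoint, as Guru describes with SageMaker Inference, takes only a few lines of the SageMaker Python SDK. The sketch below assumes a scikit-learn model artifact; the artifact path, handler script, and endpoint name are hypothetical.

```python
# Minimal sketch of deploying a trained model artifact to a real-time endpoint.
import sagemaker
from sagemaker.sklearn.model import SKLearnModel

role = sagemaker.get_execution_role()

model = SKLearnModel(
    model_data="s3://my-bucket/models/model.tar.gz",  # hypothetical artifact
    role=role,
    entry_point="inference.py",                       # hypothetical handler script
    framework_version="1.2-1",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="recommendations-endpoint",         # hypothetical name
)

# Invoke the endpoint with a sample feature vector.
print(predictor.predict([[0.1, 0.2, 0.3]]))
```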

Amazon Operations

Amazon Operations

As a part of Amazon’s commitment to the safety of their associates during the COVID-19 pandemic, the Amazon Operations team has deployed an ML solution to help maintain social distancing protocols in over 1,000 operations buildings worldwide. Amazon Operations collaborated with the Amazon Machine Learning Solutions Lab to create state-of-the-art computer vision models for distance estimation using Amazon SageMaker.

“By standardizing our ML workloads on AWS and working with the experts at the ML Solutions Lab, we created an innovative set of models that we estimate could save up to 30% of our manual review effort. Using Amazon SageMaker empowers us to spend more time focused on safety and increasing accuracy by reducing the need for hundreds of hours of manual review per day.”

Russell Williams, Director, Software Development - Amazon OpsTech IT

Freddy's

Freddy’s Frozen Custard & Steakburgers

Freddy’s Frozen Custard & Steakburgers is a fast-casual restaurant that offers a unique combination of cooked-to-order steakburgers, Vienna Beef hot dogs, shoestring fries, and other savory items along with freshly churned frozen custard treats. Founded in 2002 and franchised in 2004, Freddy’s currently has nearly 400 restaurants across 32 states.

“Previously, we would simply pick two restaurants that looked similar, but now we have a true understanding of the relationships between our menu items, customers, and locations. Amazon SageMaker Autopilot, which powers Domo’s new ML capability, has been a force multiplier for our marketing and purchasing teams to try new ideas and improve our customers’ experience.”

Sean Thompson, IT Director – Freddy’s
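
The Autopilot capability referenced above can also be driven programmatically through the SageMaker Python SDK's AutoML class. The sketch below is illustrative only; the target column, problem type, and S3 paths are assumptions and do not reflect Domo's or Freddy's actual pipeline.

```python
# Minimal sketch of SageMaker Autopilot via the AutoML class.
import sagemaker
from sagemaker.automl.automl import AutoML

role = sagemaker.get_execution_role()

automl = AutoML(
    role=role,
    target_attribute_name="weekly_sales",        # hypothetical target column
    problem_type="Regression",
    job_objective={"MetricName": "MSE"},
    max_candidates=10,
    output_path="s3://my-bucket/autopilot/output",
)

# Launches feature engineering, candidate generation, training, and tuning.
automl.fit(inputs="s3://my-bucket/autopilot/train.csv", wait=True, logs=True)

# Deploy the best candidate to a real-time endpoint.
predictor = automl.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```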


Vanguard

"We’re excited that our Vanguard data scientists and data engineers can now collaborate in a single notebook for analytics and machine learning. Now that Amazon SageMaker Studio has built-in integrations with Spark, Hive, and Presto all running on Amazon EMR, our development teams can be more productive. This single development environment will allow our teams to focus on building, training, and deploying machine learning models.”

Doug Stewart, Senior Director of Data and Analytics – Vanguard


Provectus

"We have been waiting for a feature to create and manage Amazon EMR clusters directly from Amazon SageMaker Studio so that our customers could run Spark, Hive, and Presto workflows directly from Amazon SageMaker Studio notebooks. We are excited that Amazon SageMaker has now natively built this capability to simplify management of Spark and machine learning jobs. This will help our customers’ data engineers and data scientists collaborate more effectively to perform interactive data analysis and develop machine learning pipelines with EMR-based data transformations."

Stepan Pushkarev, CEO – Provectus
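
Connecting a Studio notebook to an existing EMR cluster, as Vanguard and Provectus describe, is typically done from a notebook cell. The sketch below assumes the SageMaker Studio analytics extension is available in the notebook image; the cluster ID and dataset path are hypothetical, and the exact magic options may vary by Studio image and kernel.

```python
# Minimal notebook-cell sketch (assumption: SageMaker Studio notebook with the
# sagemaker-studio-analytics-extension installed; cluster ID is hypothetical).
%load_ext sagemaker_studio_analytics_extension.magics

# Attach this notebook to an existing EMR cluster (no-auth cluster assumed).
%sm_analytics emr connect --cluster-id j-1ABCDEFGHIJKL --auth-type None

# In a PySpark (SparkMagic) kernel, subsequent cells run on the cluster and a
# `spark` session is available.
df = spark.read.parquet("s3://my-bucket/events/")   # hypothetical dataset
df.groupBy("event_type").count().show()
```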


Climate

"At Climate, we believe in providing the world’s farmers with accurate information to make data driven decisions and maximize their return on every acre. To achieve this, we have invested in technologies such as machine learning tools to build models using measurable entities known as features, such as yield for a grower’s field. With Amazon SageMaker Feature Store, we can accelerate the development of ML models with a central feature store to access and reuse features across multiple teams easily. SageMaker Feature Store makes it easy to access features in real-time using the online store or run features on a schedule using the offline store for different use cases. With the SageMaker Feature Store, we can develop ML models faster.”


Daniel McCaffrey, Vice President, Data and Analytics, Climate
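
The central feature store Climate describes corresponds to SageMaker Feature Store. A minimal sketch with the SageMaker Python SDK follows; the feature group name, columns, and bucket are hypothetical and only illustrate the register-then-ingest flow.

```python
# Minimal sketch of registering and ingesting features with SageMaker Feature Store.
import time
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Hypothetical per-field feature records; an event-time column is required.
df = pd.DataFrame({
    "field_id": ["f-001", "f-002"],
    "predicted_yield": [182.4, 176.9],
    "event_time": [time.time(), time.time()],
})
df["field_id"] = df["field_id"].astype("string")    # Feature Store needs string dtype

feature_group = FeatureGroup(name="field-yield-features", sagemaker_session=session)
feature_group.load_feature_definitions(data_frame=df)

feature_group.create(
    s3_uri="s3://my-bucket/feature-store/offline",  # offline store location
    record_identifier_name="field_id",
    event_time_feature_name="event_time",
    role_arn=role,
    enable_online_store=True,                       # low-latency online lookups
)

feature_group.ingest(data_frame=df, max_workers=2, wait=True)
```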


Experian

“At Experian, we believe it is our responsibility to empower consumers to understand and use credit in their financial lives, and assist lenders in managing credit risk. As we continue to implement best practices to build our financial models, we are looking at solutions that accelerate the production of products that leverage machine learning. Amazon SageMaker Feature Store provides us with a secure way to store and reuse features for our ML applications. The ability to maintain consistency for both real-time and batch applications across multiple accounts is a key requirement for our business. Using the new capabilities of Amazon SageMaker Feature Store enables us to empower our customers to take control of their credit and reduce costs in the new economy.”

Geoff Dzhafarov, Chief Enterprise Architect, Experian Consumer Services


DeNA

"At DeNA, our mission is to deliver impact and delight using the internet and AI/ML. Providing value-based services is our primary goal and we want to ensure our businesses and services are ready to achieve that goal. We would like to discover and reuse features across the organization and Amazon SageMaker Feature Store helps us with an easy and efficient way to reuse features for different applications. Amazon SageMaker Feature Store also helps us in maintaining standard feature definitions and helps us with a consistent methodology as we train models and deploy them to production. With these new capabilities of Amazon SageMaker, we can train and deploy ML models faster, keeping us on our path to delight our customers with the best services.”

Kenshin Yamada, General Manager / AI System Dept System Unit, DeNA


United Airlines

“At United Airlines, we use machine learning (ML) to improve the customer experience by providing personalized offers and enabling customers to prepare for travel using the Travel Readiness Center. Our use of ML also extends to airport operations, network planning, and flight scheduling. As we were coming out of the pandemic, Amazon SageMaker played a critical role in the Travel Readiness Center, allowing us to handle large volumes of COVID test certificates and vaccine cards using document-based model automation. With Amazon SageMaker’s new governance capabilities, we have increased control and visibility over our machine learning models. SageMaker Role Manager simplifies the user setup process dramatically by providing baseline permissions and ML activities for each persona linked to IAM roles. With SageMaker Model Cards, our teams can proactively capture and share model information for review, and using SageMaker Model Dashboard, we are able to search and view models deployed on MARS, our internal ML platform. With all these new governance capabilities, we are saving a significant amount of time and are able to scale up.”

Ashok Srinivas, Director of ML Engineering and Ops, United Airlines
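
The SageMaker Model Cards capability United Airlines mentions can also be exercised through the low-level boto3 API. The sketch below is illustrative only; the card name, owner, and content fields are hypothetical and simplified relative to the full model card schema.

```python
# Minimal sketch of capturing model information with SageMaker Model Cards (boto3).
import json
import boto3

sm = boto3.client("sagemaker")

# Hypothetical, simplified card content.
card_content = {
    "model_overview": {
        "model_description": "Document classifier for travel-readiness paperwork",
        "model_owner": "ml-platform-team",
    },
    "intended_uses": {
        "purpose_of_model": "Route uploaded documents to the correct review queue.",
        "risk_rating": "Low",
    },
}

sm.create_model_card(
    ModelCardName="travel-readiness-classifier-card",   # hypothetical name
    ModelCardStatus="Draft",
    Content=json.dumps(card_content),
)
```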


Capitec

“At Capitec, we have a wide range of data scientists across our product lines, building different ML solutions. Our ML engineers manage a centralized modeling platform built on Amazon SageMaker to empower the development and deployment of all these ML solutions. Without any built-in tools, tracking modeling efforts tends towards disjointed documentation and a lack of model visibility. With SageMaker Model Cards, we can track plenty of model metadata in a unified environment, and SageMaker Model Dashboard affords us visibility into the performance of each model. In addition, SageMaker Role Manager simplifies the process of managing access for data scientists in our different product lines. Each of these contributes towards our model governance being sufficient to warrant the trust that our clients place in us as a financial services provider.”

Dean Matter, ML Engineer, Capitec Bank


Lenovo

Lenovo™, the #1 global PC maker, recently incorporated Amazon SageMaker into its latest predictive maintenance offering.

"The new SageMaker Edge Manager will help eliminate the manual effort required to optimize, monitor, and continuously improve the models after deployment. With it, we expect our models will run faster and consume less memory than with other comparable machine-learning platforms. SageMaker Edge Manager allows us to automatically sample data at the edge, send it securely to the cloud, and monitor the quality of each model on each device continuously after deployment. This enables us to remotely monitor, improve, and update the models on our edge devices around the world and at the same time saves us and our customers' time and costs."

Igor Bergman, Lenovo Vice President, Cloud & Software of PCs and Smart Devices.
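
Packaging a model for SageMaker Edge Manager, as Lenovo describes, starts from a SageMaker Neo compilation job. The boto3 sketch below uses hypothetical job names, a hypothetical role ARN, and a hypothetical S3 output location.

```python
# Minimal sketch of packaging a compiled model for SageMaker Edge Manager.
import boto3

sm = boto3.client("sagemaker")

sm.create_edge_packaging_job(
    EdgePackagingJobName="thermal-model-edge-pkg-001",        # hypothetical
    CompilationJobName="thermal-model-neo-compile-001",       # prior Neo compile job
    ModelName="thermal-anomaly-detector",
    ModelVersion="1.0",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerEdgeRole",
    OutputConfig={"S3OutputLocation": "s3://my-bucket/edge-packages/"},
)

# Poll the packaging job status before distributing the package to device fleets.
status = sm.describe_edge_packaging_job(
    EdgePackagingJobName="thermal-model-edge-pkg-001"
)["EdgePackagingJobStatus"]
print(status)
```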


Basler AG

Basler AG is a leading manufacturer of high-quality digital cameras and accessories for industry, medicine, transportation and a variety of other markets.

“Basler AG delivers intelligent computer vision solutions in a variety of industries, including manufacturing, medical, and retail applications. We are excited to extend our software offering with new features made possible by Amazon SageMaker Edge Manager. To ensure our machine learning solutions are performant and reliable, we need a scalable edge-to-cloud MLOps tool that allows us to continuously monitor, maintain, and improve machine learning models on edge devices. SageMaker Edge Manager allows us to automatically sample data at the edge, send it securely to the cloud, and monitor the quality of each model on each device continuously after deployment. This enables us to remotely monitor, improve, and update the models on our edge devices around the world, and at the same time it saves us and our customers time and costs."

Mark Hebbel, Head of Software Solutions at Basler.


NatWest Group

NatWest Group, a major financial services institution, standardized its ML model development and deployment process across the organization, reducing the turnaround cycle to create new ML environments from 40 days to 2 days and accelerating time to value for ML use cases from 40 to 16 weeks.


AstraZeneca

"Rather than creating many manual processes, we can automate most of the machine learning development process simply within Amazon SageMaker Studio." 

Cherry Cabading, Global Senior Enterprise Architect – AstraZeneca
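
One common way to automate the development steps AstraZeneca alludes to is SageMaker Pipelines, named here as a general mechanism rather than AstraZeneca's specific implementation. The sketch below wires a single processing step into a pipeline; the script name and S3 paths are hypothetical.

```python
# Minimal sketch of automating a model-build step with SageMaker Pipelines.
import sagemaker
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep

role = sagemaker.get_execution_role()

processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

prep_step = ProcessingStep(
    name="PrepareData",
    processor=processor,
    code="preprocess.py",                                   # hypothetical script
    inputs=[ProcessingInput(source="s3://my-bucket/raw/",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(source="/opt/ml/processing/train",
                              output_name="train")],
)

pipeline = Pipeline(name="model-build-pipeline", steps=[prep_step])
pipeline.upsert(role_arn=role)   # create or update the pipeline definition
pipeline.start()                 # run the pipeline on demand or on a schedule
```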


Janssen

Employing AWS services, including Amazon SageMaker, Janssen implemented an automated MLOps process that improved the accuracy of model predictions by 21 percent and increased the speed of feature engineering by approximately 700 percent, helping Janssen to reduce costs while increasing efficiency.

Qualtrics

“Amazon SageMaker improves the efficiency of our MLOps teams with the tools required to test and deploy machine learning models at scale.”

Samir Joshi, ML Engineer – Qualtrics


Deloitte

"Amazon SageMaker Data Wrangler enables us to hit the ground running to address our data preparation needs with a rich collection of transformation tools that accelerate the process of ML data preparation needed to take new products to market. In turn, our clients benefit from the rate at which we scale deployed models, enabling us to deliver measurable, sustainable results that meet the needs of our clients in a matter of days rather than months."

Frank Farrall, Principal, AI Ecosystems and Platforms Leader, Deloitte


NRI

"As an AWS Premier Consulting Partner, our engineering teams are working very closely with AWS to build innovative solutions to help our customers continuously improve the efficiency of their operations. ML is the core of our innovative solutions, but our data preparation workflow involves sophisticated data preparation techniques which, as a result, take a significant amount of time to become operationalized in a production environment. With Amazon SageMaker Data Wrangler, our data scientists can complete each step of the data preparation workflow, including data selection, cleansing, exploration, and visualization, which helps us accelerate the data preparation process and easily prepare our data for ML. With Amazon SageMaker Data Wrangler, we can prepare data for ML faster."

Shigekazu Ohmoto, Senior Corporate Managing Director, NRI Japan


Equilibrium

"As our footprint in the population health management market continues to expand into more health payors, providers, pharmacy benefit managers, and other healthcare organizations, we needed a solution to automate end-to-end processes for data sources that feed our ML models, including claims data, enrollment data, and pharmacy data. With Amazon SageMaker Data Wrangler, we can now accelerate the time it takes to aggregate and prepare data for ML using a set of workflows that are easier to validate and reuse. This has dramatically improved the delivery time and quality of our models, increased the effectiveness of our data scientists, and reduced data preparation time by nearly 50%. In addition, SageMaker Data Wrangler has helped us save multiple ML iterations and significant GPU time, speeding the entire end-to-end process for our clients as we can now build data marts with thousands of features including pharmacy, diagnosis codes, ER visits, inpatient stays, as well as demographic and other social determinants. With SageMaker Data Wrangler, we can transform our data with superior efficiency for building training datasets, generate data insights on datasets prior to running ML models, and prepare real-world data for inference/predictions at scale.”

Lucas Merrow, CEO, Equilibrium Point IoT
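
For the "inference at scale" part of the workflow Equilibrium describes, SageMaker Batch Transform is the usual mechanism for offline scoring of prepared datasets; it is mentioned here as a related capability, not necessarily Equilibrium's exact setup. The model artifact, inference script, and S3 paths below are hypothetical.

```python
# Minimal sketch of scoring a prepared dataset at scale with Batch Transform.
import sagemaker
from sagemaker.sklearn.model import SKLearnModel

role = sagemaker.get_execution_role()

model = SKLearnModel(
    model_data="s3://my-bucket/models/risk-model.tar.gz",  # hypothetical artifact
    role=role,
    entry_point="inference.py",                            # hypothetical handler
    framework_version="1.2-1",
)

transformer = model.transformer(
    instance_count=2,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/predictions/",
    strategy="MultiRecord",        # batch many records per request
    assemble_with="Line",
)

transformer.transform(
    data="s3://my-bucket/prepared/members.csv",            # hypothetical input
    content_type="text/csv",
    split_type="Line",
    wait=True,
)
```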


icare Insurance and Care NSW

iCare is an NSW Government agency that provides workers' compensation insurance to more than 329,000 public and private sector employers in NSW, Australia, and their 3.2 million employees. In addition, iCare insures builders and homeowners, provides treatment and care to people severely injured on NSW roads, and protects more than $266.6 billion of NSW Government assets, including the Sydney Opera House, the Sydney Harbour Bridge, schools, and hospitals.

“At Insurance and care (iCare) NSW, our vision is to change the way people think about insurance and care. Amazon SageMaker has enabled iCare to build and train deep learning models for early identification of patients with long-term dust diseases. This early identification can prevent life-threatening conditions. In previous studies, signs of silicosis were missed or went undetected in 39% of patients. AI-assisted diagnosis has enabled doctors to correctly identify 80% of cases, compared with 71% for unassisted diagnosis. After implementing this project, we are using Amazon SageMaker to develop solutions and processes in other projects, because it has proven quicker and easier than before, and we’re able to easily scale our efforts to provide care to the people of NSW.”

Atul Kamboj, Senior Data Scientist - iCare, NSW Government Insurance and Care agency, Australia