AWS Machine Learning Blog
Build, test, and deploy your Amazon SageMaker inference models to AWS Lambda
Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models at any scale. When you deploy an ML model, Amazon SageMaker uses ML hosting instances to host the model and provides an API endpoint for inferences. It may also […]
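As a rough illustration of that hosting flow, a minimal sketch with the SageMaker Python SDK might look like the following; the bucket, model artifact path, container image URI, and IAM role are placeholders, not values from the post:

```python
import sagemaker
from sagemaker.model import Model

# Hypothetical values -- substitute your own artifact, container image, and role.
session = sagemaker.Session()
model = Model(
    image_uri="<inference-container-image-uri>",
    model_data="s3://<your-bucket>/model/model.tar.gz",
    role="<your-sagemaker-execution-role-arn>",
    sagemaker_session=session,
)

# Deploy the model to a real-time endpoint backed by an ML hosting instance.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

# The endpoint can then be invoked for inferences, e.g. predictor.predict(payload).
```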
Multiregion serverless distributed training with AWS Batch and Amazon SageMaker
Creating a global footprint and access to scale are among the many best practices at AWS. By creating architectures that take advantage of that scale and use data efficiently (in both performance and cost), you can start to see how important access at scale becomes. For example, in autonomous vehicle (AV) development, data is geographically […]
Building a deep neural net–based surrogate function for global optimization using PyTorch on Amazon SageMaker
July 2023: This post was reviewed for accuracy. Optimization is the process of finding the minimum (or maximum) of a function that depends on some inputs, called design variables. Customer X has the following problem: they are about to release a new car model that must be designed for maximum fuel efficiency. In reality, thousands of […]
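To make that definition concrete, here is a minimal, self-contained PyTorch sketch of finding the minimum of a toy objective by gradient descent over its design variables; the objective and settings are illustrative, not the surrogate model from the post:

```python
import torch

# Toy objective: f(x, y) = (x - 3)^2 + (y + 1)^2, minimized at (3, -1).
design_vars = torch.tensor([0.0, 0.0], requires_grad=True)
optimizer = torch.optim.Adam([design_vars], lr=0.1)

for step in range(500):
    optimizer.zero_grad()
    x, y = design_vars
    loss = (x - 3.0) ** 2 + (y + 1.0) ** 2
    loss.backward()
    optimizer.step()

print(design_vars.detach())  # tensor close to [3., -1.]
```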
Launching TensorFlow distributed training easily with Horovod or Parameter Servers in Amazon SageMaker
Amazon SageMaker supports all the popular deep learning frameworks, including TensorFlow. Over 85% of TensorFlow projects in the cloud run on AWS. Many of these projects already run in Amazon SageMaker. This is due to the many conveniences Amazon SageMaker provides for TensorFlow model hosting and training, including fully managed distributed training with Horovod and […]
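As a rough sketch of what enabling Horovod looks like with the SageMaker Python SDK's TensorFlow estimator (the training script, role, instance settings, and framework version below are assumptions, not taken from the post):

```python
from sagemaker.tensorflow import TensorFlow

# Hypothetical training script and role; the distribution block enables
# MPI/Horovod across the requested training instances.
estimator = TensorFlow(
    entry_point="train.py",                      # your Horovod-enabled script
    role="<your-sagemaker-execution-role-arn>",
    instance_count=2,
    instance_type="ml.p3.2xlarge",
    framework_version="2.8",
    py_version="py39",
    distribution={"mpi": {"enabled": True, "processes_per_host": 1}},
)

estimator.fit("s3://<your-bucket>/training-data/")
```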
Performing batch inference with TensorFlow Serving in Amazon SageMaker
After you’ve trained and exported a TensorFlow model, you can use Amazon SageMaker to perform inferences with it. You can either deploy your model to an endpoint to obtain real-time inferences, or use batch transform to obtain inferences on an entire dataset stored in Amazon S3. In the case of batch transform, […]
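For the batch transform path, a minimal sketch with the SageMaker Python SDK might look like this; the model artifact, role, S3 paths, and framework version are placeholder assumptions:

```python
from sagemaker.tensorflow import TensorFlowModel

# Hypothetical SavedModel artifact and execution role.
model = TensorFlowModel(
    model_data="s3://<your-bucket>/model/model.tar.gz",
    role="<your-sagemaker-execution-role-arn>",
    framework_version="2.8",
)

# Create a batch transform job over an entire dataset in Amazon S3.
transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://<your-bucket>/batch-output/",
)
transformer.transform(
    data="s3://<your-bucket>/batch-input/",
    content_type="application/json",
)
transformer.wait()
```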
Optimizing TensorFlow model serving with Kubernetes and Amazon Elastic Inference
Note: Amazon Elastic Inference is no longer available. Please see Amazon SageMaker for similar capabilities. This post offers a deep dive into how to use Amazon Elastic Inference with Amazon Elastic Kubernetes Service (Amazon EKS). When you combine Elastic Inference with EKS, you can run low-cost, scalable inference workloads with your preferred container orchestration system. Elastic Inference […]
Tracking the throughput of your private labeling team through Amazon SageMaker Ground Truth
Launched at AWS re:Invent 2018, Amazon SageMaker Ground Truth helps you quickly build highly accurate training datasets for your machine learning models. Amazon SageMaker Ground Truth offers easy access to public and private human labelers, and provides them with built-in workflows and interfaces for common labeling tasks. Additionally, Amazon SageMaker Ground Truth can lower your […]
Enable smart text analytics using Amazon OpenSearch Service and Amazon Comprehend
September 8, 2021: Amazon Elasticsearch Service has been renamed to Amazon OpenSearch Service. See details. We’re excited to announce an end-to-end solution that leverages natural language processing to analyze and visualize unstructured text in your Amazon OpenSearch Service domain with Amazon Comprehend in the AWS Cloud. You can deploy this solution in minutes with an […]
Build a custom entity recognizer using Amazon Comprehend
Amazon Comprehend is a natural language processing service that can extract key phrases, places, names, organizations, events, sentiment, and more from unstructured text. Customers usually want to add their own entity types unique to their business, like proprietary part codes or industry-specific terms. In November 2018, enhancements to Amazon Comprehend added the ability to […]
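For reference, detecting the built-in entity types in unstructured text with boto3 is a single call, sketched below; custom entity types additionally require training a recognizer. The region and sample text are illustrative only:

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Detect built-in entity types (PERSON, ORGANIZATION, LOCATION, DATE, ...)
# in a snippet of unstructured text. The text below is made up.
response = comprehend.detect_entities(
    Text="Acme Corp opened a new facility in Seattle in November 2018.",
    LanguageCode="en",
)

for entity in response["Entities"]:
    print(entity["Type"], entity["Text"], round(entity["Score"], 3))
```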
Power contextual bandits using continual learning with Amazon SageMaker RL
Amazon SageMaker is a modular, fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Training models is quick and easy using built-in high-performance algorithms, pre-built deep learning frameworks, or your own framework. To help you select your machine learning (ML) algorithm, […]