AWS Machine Learning Blog
Managing conversation flow with a fallback intent on Amazon Lex
Ever been stumped by a question? Imagine you’re in a business review going over weekly numbers and someone asks, “What about expenses?” Your response might be, “I don’t know. I wasn’t prepared to have that discussion right now.” Bots aren’t fortunate enough to have the same comprehension capabilities, so how should they respond when they […]
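A common way to give a bot that graceful "I don't know" is a Lambda code hook that answers the fallback intent with a friendly close message. A minimal sketch, assuming a Lex V1-style code hook; the intent name `FallbackIntent` and the reply text are illustrative, not from the post:

```python
def lambda_handler(event, context):
    """Minimal Lex (V1-style) code hook: when the fallback intent fires,
    close the conversation with a helpful message instead of failing."""
    intent = event["currentIntent"]["name"]
    if intent == "FallbackIntent":  # illustrative intent name
        content = "Sorry, I didn't catch that. Try asking about orders or returns."
    else:
        content = "OK."
    # Lex V1 expects a dialogAction describing how to continue the dialog.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": content},
        }
    }
```

Calling the handler with a fake event is enough to check the response shape locally, before wiring it to a real bot.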
Generating searchable PDFs from scanned documents automatically with Amazon Textract
Amazon Textract is a machine learning service that makes it easy to extract text and data from virtually any document. Textract goes beyond simple optical character recognition (OCR) to also identify the contents of fields in forms and information stored in tables. This allows you to use Amazon Textract to instantly “read” virtually any type […]
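Textract returns its results as a list of `Blocks`, each tagged with a `BlockType` (`PAGE`, `LINE`, `WORD`, and so on); collecting the `LINE` blocks recovers the plain text of the page. A minimal parsing sketch — the sample response below is fabricated and abridged for illustration:

```python
def extract_lines(textract_response):
    """Collect the text of every LINE block from a Textract
    DetectDocumentText / AnalyzeDocument response dict."""
    return [
        block["Text"]
        for block in textract_response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    ]

# Fabricated, abridged response shape for illustration:
sample = {
    "Blocks": [
        {"BlockType": "PAGE"},
        {"BlockType": "LINE", "Text": "Invoice #1234"},
        {"BlockType": "WORD", "Text": "Invoice"},
        {"BlockType": "LINE", "Text": "Total: $56.00"},
    ]
}
print(extract_lines(sample))  # → ['Invoice #1234', 'Total: $56.00']
```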
Transcribe speech to text in real time using Amazon Transcribe with WebSocket
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to applications. In November 2018, we added streaming transcriptions over HTTP/2 to Amazon Transcribe. This enabled users to pass a live audio stream to our service and, in return, receive text transcripts in real time. We […]
Build your ML skills with AWS Machine Learning on Coursera
Machine learning (ML) is one of the fastest-growing areas in technology and a highly sought-after skill set in today’s job market. Today, I am excited to announce a new education course, built in collaboration with Coursera, to help you build your ML skills: Getting started with AWS Machine Learning. You can access the course […]
Build, test, and deploy your Amazon SageMaker inference models to AWS Lambda

Amazon SageMaker is a fully managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models at any scale. When you deploy an ML model, Amazon SageMaker uses ML hosting instances to host the model and provides an API endpoint for serving inferences. It may also […]
Multiregion serverless distributed training with AWS Batch and Amazon SageMaker
Creating a global footprint with access to scale is one of the many best practices at AWS. By creating architectures that take advantage of that scale and of efficient data utilization (in both performance and cost), you can see how important access at scale becomes. For example, within autonomous vehicle (AV) development, data is geographically […]
Building a deep neural net–based surrogate function for global optimization using PyTorch on Amazon SageMaker
July 2023: This post was reviewed for accuracy. Optimization is the process of finding the minimum (or maximum) of a function that depends on some inputs, called design variables. Customer X has the following problem: They are about to release a new car model that must be designed for maximum fuel efficiency. In reality, thousands of […]
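The underlying idea — following a function downhill over its design variables — can be shown in a few lines of plain Python. This is only an illustration of the optimization concept, not the post's PyTorch surrogate model; the function and step size are made up:

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Minimize a 1-D function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Illustrative design variable: minimize f(x) = (x - 3)^2, so f'(x) = 2(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 3))  # → 3.0
```

A surrogate-based optimizer does the same thing, except the gradient comes from a cheap learned approximation of the expensive real objective.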
Launching TensorFlow distributed training easily with Horovod or Parameter Servers in Amazon SageMaker
Amazon SageMaker supports all the popular deep learning frameworks, including TensorFlow. Over 85% of TensorFlow projects in the cloud run on AWS. Many of these projects already run in Amazon SageMaker. This is due to the many conveniences Amazon SageMaker provides for TensorFlow model hosting and training, including fully managed distributed training with Horovod and […]
Performing batch inference with TensorFlow Serving in Amazon SageMaker
After you’ve trained and exported a TensorFlow model, you can use Amazon SageMaker to perform inferences using your model. You can either deploy your model to an endpoint to obtain real-time inferences, or use batch transform to obtain inferences on an entire dataset stored in Amazon S3. In the case of batch transform, […]
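A batch transform job is described by a handful of parameters passed to SageMaker's CreateTransformJob API. A sketch of assembling those parameters, assuming the job name, model name, S3 URIs, content type, and instance type shown here are all placeholders; we only build and inspect the dict locally rather than call AWS:

```python
def build_transform_job(job_name, model_name, input_s3, output_s3):
    """Assemble the parameter dict that would be passed to
    sagemaker_client.create_transform_job(**params)."""
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,  # a model already registered in SageMaker
        "TransformInput": {
            "DataSource": {
                "S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": input_s3}
            },
            "ContentType": "application/x-image",  # placeholder content type
        },
        "TransformOutput": {"S3OutputPath": output_s3},
        "TransformResources": {"InstanceType": "ml.c5.xlarge", "InstanceCount": 1},
    }

params = build_transform_job(
    "tf-batch-demo", "my-tf-serving-model",
    "s3://my-bucket/batch-input/", "s3://my-bucket/batch-output/",
)
```

Handing `params` to a boto3 SageMaker client would launch the job; results land under `S3OutputPath` with one output object per input.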
Optimizing TensorFlow model serving with Kubernetes and Amazon Elastic Inference
This post offers a deep dive into how to use Amazon Elastic Inference with Amazon Elastic Kubernetes Service. When you combine Elastic Inference with EKS, you can run low-cost, scalable inference workloads with your preferred container orchestration system. Elastic Inference is an increasingly popular way to run low-cost inference workloads on AWS. It allows you […]