AWS Machine Learning Blog
Category: Amazon SageMaker Ground Truth
Build an end-to-end MLOps pipeline for visual quality inspection at the edge – Part 1
A successful deployment of a machine learning (ML) model in a production environment heavily relies on an end-to-end ML pipeline. Although developing such a pipeline can be challenging, it becomes even more complex when dealing with an edge ML use case. Machine learning at the edge is a concept that brings the capability of running […]
Build an end-to-end MLOps pipeline for visual quality inspection at the edge – Part 2
In Part 1 of this series, we drafted an architecture for an end-to-end MLOps pipeline for a visual quality inspection use case at the edge. It is architected to automate the entire machine learning (ML) process, from data labeling to model training and deployment at the edge. The focus on managed and serverless services reduces […]
Build an end-to-end MLOps pipeline for visual quality inspection at the edge – Part 3
This is Part 3 of our series where we design and implement an MLOps pipeline for visual quality inspection at the edge. In this post, we focus on how to automate the edge deployment part of the end-to-end MLOps pipeline. We show you how to use AWS IoT Greengrass to manage model inference at the […]
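As a quick orientation to the kind of edge rollout this post automates, the sketch below triggers an AWS IoT Greengrass v2 deployment with boto3. It is a minimal illustration, not the post's pipeline code: the component name, thing group ARN, and region are placeholders.

```python
# Minimal sketch (hypothetical names, not the post's code): deploying a custom
# inference component to a group of edge devices with the AWS IoT Greengrass v2 API.
import boto3

greengrass = boto3.client("greengrassv2", region_name="eu-west-1")  # region is a placeholder

response = greengrass.create_deployment(
    # Placeholder thing group ARN for the fleet of inspection devices.
    targetArn="arn:aws:iot:eu-west-1:123456789012:thinggroup/EdgeInspectionDevices",
    deploymentName="visual-quality-inspection-model",
    components={
        # "com.example.InferenceComponent" is a placeholder for a custom Greengrass
        # component that packages the trained model and inference code.
        "com.example.InferenceComponent": {"componentVersion": "1.0.0"},
    },
)
print("Deployment ID:", response["deploymentId"])
```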
Improving your LLMs with RLHF on Amazon SageMaker
In this blog post, we illustrate how RLHF can be performed on Amazon SageMaker by conducting an experiment with the popular open-source RLHF repo Trlx. Through our experiment, we demonstrate how RLHF can be used to increase the helpfulness or harmlessness of a large language model using the publicly available Helpfulness and Harmlessness (HH) dataset provided by Anthropic. Using this dataset, we conduct our experiment in an Amazon SageMaker Studio notebook running on an ml.p4d.24xlarge instance. Finally, we provide a Jupyter notebook to replicate our experiments.
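For orientation on the data this experiment consumes, the sketch below loads the publicly available Anthropic HH preference dataset with the Hugging Face datasets library. It is a minimal illustration, not the post's notebook, which runs the full Trlx training loop on the SageMaker instance.

```python
# Minimal sketch (not the post's notebook): loading the publicly available
# Anthropic Helpfulness and Harmlessness (HH) dataset used in the experiment.
# Requires: pip install datasets
from datasets import load_dataset

# Each record pairs a "chosen" (preferred) and a "rejected" response to the same
# prompt; these preference pairs are what reward-model training in RLHF consumes.
hh = load_dataset("Anthropic/hh-rlhf")

example = hh["train"][0]
print(example["chosen"][:200])
print(example["rejected"][:200])
```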
How United Airlines built a cost-efficient Optical Character Recognition active learning pipeline
In this post, we discuss how United Airlines, in collaboration with the Amazon Machine Learning Solutions Lab, built an active learning framework on AWS to automate the processing of passenger documents. “In order to deliver the best flying experience for our passengers and make our internal business process as efficient as possible, we have developed […]
Auto-labeling module for deep learning-based Advanced Driver Assistance Systems on AWS
In computer vision (CV), adding tags to identify objects of interest or bounding boxes to locate the objects is called labeling. It’s one of the prerequisite tasks to prepare training data to train a deep learning model. Hundreds of thousands of work hours are spent generating high-quality labels from images and videos for various CV […]
Implement a multi-object tracking solution on a custom dataset with Amazon SageMaker
The demand for multi-object tracking (MOT) in video analysis has increased significantly in many industries, such as live sports, manufacturing, and traffic monitoring. For example, in live sports, MOT can track soccer players in real time to analyze physical performance such as real-time speed and moving distance. Since its introduction in 2021, ByteTrack remains to […]
High-quality human feedback for your generative AI applications from Amazon SageMaker Ground Truth Plus
Amazon SageMaker Ground Truth Plus helps you prepare high-quality training datasets by removing the undifferentiated heavy lifting associated with building data labeling applications and managing the labeling workforce. All you do is share data along with labeling requirements, and Ground Truth Plus sets up and manages your data labeling workflow based on these requirements. From […]
Create high-quality datasets with Amazon SageMaker Ground Truth and FiftyOne
This is a joint post co-written by AWS and Voxel51. Voxel51 is the company behind FiftyOne, the open-source toolkit for building high-quality datasets and computer vision models. A retail company is building a mobile app to help customers buy clothes. To create this app, they need a high-quality dataset containing clothing images, labeled with different […]
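As a small illustration of the FiftyOne side of this workflow, the sketch below loads an unlabeled image folder into FiftyOne for curation before sending it to a SageMaker Ground Truth labeling job. The directory path and dataset name are placeholders, not taken from the post.

```python
# Minimal sketch (hypothetical paths/names, not the post's code): loading an
# unlabeled image folder into FiftyOne to curate it before labeling with
# Amazon SageMaker Ground Truth.
# Requires: pip install fiftyone
import fiftyone as fo

# Build a FiftyOne dataset from a local directory of clothing images
# ("/data/clothing-images" is a placeholder path).
dataset = fo.Dataset.from_dir(
    dataset_dir="/data/clothing-images",
    dataset_type=fo.types.ImageDirectory,
    name="clothing-unlabeled",
)

# Launch the FiftyOne App to visually inspect and filter samples
# (for example, to drop duplicates or low-quality images) before labeling.
session = fo.launch_app(dataset)
```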
Improve multi-hop reasoning in LLMs by learning from rich human feedback
Recent large language models (LLMs) have enabled tremendous progress in natural language understanding. However, they are prone to generating confident but nonsensical explanations, which poses a significant obstacle to establishing trust with users. In this post, we show how to incorporate human feedback on the incorrect reasoning chains for multi-hop reasoning to improve performance on […]