AWS Machine Learning Blog

Tag: Generative AI


Build a domain-aware data preprocessing pipeline: A multi-agent collaboration approach

In this post, we introduce a multi-agent collaboration pipeline for processing unstructured insurance data using Amazon Bedrock, featuring specialized agents for classification, conversion, and metadata extraction. We demonstrate how this domain-aware approach transforms diverse data formats like claims documents, videos, and audio files into metadata-rich outputs that enable fraud detection, customer 360-degree views, and advanced analytics.

Automating complex document processing: How Onity Group built an intelligent solution using Amazon Bedrock

In this post, we explore how Onity Group, a financial services company specializing in mortgage servicing and origination, transformed their document processing capabilities using Amazon Bedrock and other AWS services. The solution helped Onity achieve a 50% reduction in document extraction costs while improving overall accuracy by 20% compared to their previous OCR and AI/ML solution.

Cost-effective AI image generation with PixArt-Sigma inference on AWS Trainium and AWS Inferentia

This post is the first in a series where we will run multiple diffusion transformers on AWS Trainium and AWS Inferentia-powered instances. In this post, we show how to deploy PixArt-Sigma to these instances.


How Hexagon built an AI assistant using AWS generative AI services

Recognizing the transformative benefits of generative AI for enterprises, we at Hexagon’s Asset Lifecycle Intelligence division sought to enhance how users interact with our Enterprise Asset Management (EAM) products. Understanding these advantages, we partnered with AWS to embark on a journey to develop HxGN Alix, an AI-powered digital worker using AWS generative AI services. This blog post explores the strategy, development, and implementation of HxGN Alix, demonstrating how a tailored AI solution can drive efficiency and enhance user satisfaction.

WordFinder app: Harnessing generative AI on AWS for aphasia communication

In this post, we showcase how Dr. Kori Ramajoo, Dr. Sonia Brownsett, Prof. David Copland, from QARC, and Scott Harding, a person living with aphasia, used AWS services to develop WordFinder, a mobile, cloud-based solution that helps individuals with aphasia increase their independence through the use of AWS generative AI technology.

Get faster and actionable AWS Trusted Advisor insights to make data-driven decisions using Amazon Q Business

In this post, we show how to create an application using Amazon Q Business with Jira integration that uses a dataset containing a detailed AWS Trusted Advisor report. This solution demonstrates how to use new generative AI services like Amazon Q Business to get data insights faster and make them actionable.

Responsible AI in action: How Data Reply red teaming supports generative AI safety on AWS

In this post, we explore how AWS services can be seamlessly integrated with open source tools to help establish a robust red teaming mechanism within your organization. Specifically, we discuss Data Reply’s red teaming solution, a comprehensive blueprint to enhance AI safety and responsible AI practices.


Customize Amazon Nova models to improve tool usage

In this post, we demonstrate model customization (fine-tuning) for tool use with Amazon Nova. We first introduce a tool usage use case and give details about the dataset. We then walk through Amazon Nova-specific data formatting and show how to do tool calling through the Converse and Invoke APIs in Amazon Bedrock. After establishing baseline results from Amazon Nova models, we explain the fine-tuning process, hosting fine-tuned models with provisioned throughput, and using the fine-tuned Amazon Nova models for inference.
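As a minimal sketch of the tool-calling flow mentioned in the summary, the snippet below defines a tool schema for the Amazon Bedrock Converse API and parses a `toolUse` block out of a response. The tool name (`get_weather`), its input schema, and the model ID are illustrative assumptions, not details from the post.

```python
# Hedged sketch: tool use via the Amazon Bedrock Converse API (boto3).
# The tool, schema, and model ID below are illustrative assumptions.

def build_tool_config():
    """Return a toolConfig dict in the shape the Converse API expects."""
    return {
        "tools": [
            {
                "toolSpec": {
                    "name": "get_weather",  # hypothetical tool for illustration
                    "description": "Look up the current weather for a city.",
                    "inputSchema": {
                        "json": {
                            "type": "object",
                            "properties": {"city": {"type": "string"}},
                            "required": ["city"],
                        }
                    },
                }
            }
        ]
    }

def extract_tool_use(response):
    """Pull the first toolUse content block out of a Converse response, if any."""
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            return block["toolUse"]
    return None

def call_model(prompt):
    """Invoke the model; requires AWS credentials and Bedrock model access."""
    import boto3  # imported here so the sketch runs without AWS configured
    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId="us.amazon.nova-lite-v1:0",  # assumed model ID; check your region
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        toolConfig=build_tool_config(),
    )
    return extract_tool_use(response)
```

When the model decides to call the tool, the returned `toolUse` block carries the tool name and JSON input; the application executes the tool and sends the result back in a follow-up `converse` call as a `toolResult` message.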

Build an AI-powered document processing platform with open source NER model and LLM on Amazon SageMaker

In this post, we discuss how you can build an AI-powered document processing platform with open source NER and LLMs on SageMaker.

Optimizing Mixtral 8x7B on Amazon SageMaker with AWS Inferentia2

This post demonstrates how to deploy and serve the Mixtral 8x7B language model on AWS Inferentia2 instances for cost-effective, high-performance inference. We walk through model compilation using Hugging Face Optimum Neuron, which provides a set of tools enabling straightforward model loading, training, and inference, and through the Text Generation Inference (TGI) container, which provides the toolkit for deploying and serving LLMs with Hugging Face.
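As a rough sketch of what such a deployment can look like with the SageMaker Python SDK and the Hugging Face TGI Neuron container, the snippet below builds the serving environment and deploys to an Inferentia2 instance. The core count, batch size, and sequence length are illustrative assumptions, not the post's tuned settings.

```python
# Hedged sketch: serving Mixtral 8x7B on Inferentia2 with the Hugging Face
# TGI Neuron container via the SageMaker Python SDK.
# The environment values below are illustrative assumptions, not tuned settings.

def build_serving_env(model_id="mistralai/Mixtral-8x7B-Instruct-v0.1",
                      num_cores=24, batch_size=4, sequence_length=4096):
    """Build the TGI Neuron serving environment (values must be strings)."""
    return {
        "HF_MODEL_ID": model_id,
        "HF_NUM_CORES": str(num_cores),          # NeuronCores to shard across
        "HF_BATCH_SIZE": str(batch_size),        # static batch size for compilation
        "HF_SEQUENCE_LENGTH": str(sequence_length),
        "HF_AUTO_CAST_TYPE": "bf16",             # cast weights to bfloat16
    }

def deploy(role_arn):
    """Deploy a real endpoint; requires AWS credentials, quota, and the
    sagemaker SDK installed (not executed in this sketch)."""
    from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri
    image_uri = get_huggingface_llm_image_uri("huggingface-neuronx")
    model = HuggingFaceModel(image_uri=image_uri, env=build_serving_env(), role=role_arn)
    return model.deploy(initial_instance_count=1, instance_type="ml.inf2.48xlarge")
```

Because Neuron compiles for fixed shapes, the batch size and sequence length chosen here are baked in at compilation time; changing them means recompiling the model.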