AWS Machine Learning Blog

Category: Thought Leadership

From RAG to fabric: Lessons learned from building real-world RAGs at GenAIIC – Part 2

This post focuses on doing RAG on heterogeneous data formats. We first introduce routers and how they can help manage diverse data sources. We then give tips on how to handle tabular data, and we conclude with multimodal RAG, focusing specifically on solutions that handle both text and image data.
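The router pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration: in practice an LLM would perform the classification, but a keyword heuristic stands in here so the control flow is runnable. The function name and data-source labels are assumptions, not APIs from the post.

```python
# Hypothetical minimal router: decides which retriever should
# handle a query. An LLM would normally do this classification;
# a keyword heuristic stands in so the example is self-contained.

def route_query(query: str) -> str:
    """Return the name of the data source best suited to the query."""
    q = query.lower()
    if any(k in q for k in ("total", "average", "revenue", "table")):
        return "tabular"      # route to a structured/SQL retriever
    if any(k in q for k in ("image", "chart", "diagram", "figure")):
        return "multimodal"   # route to an image-capable retriever
    return "text"             # default: plain-text vector search

print(route_query("What was total revenue last quarter?"))
```

Whatever classifier sits behind it, the router's job is the same: map each incoming query to the one retriever whose data format can answer it.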

Multilingual content processing using Amazon Bedrock and Amazon A2I

This post outlines a custom multilingual document extraction and content assessment framework using a combination of Anthropic’s Claude 3 on Amazon Bedrock and Amazon A2I to incorporate human-in-the-loop capabilities.

From RAG to fabric: Lessons learned from building real-world RAGs at GenAIIC – Part 1

In this post, we cover the core concepts behind RAG architectures and discuss strategies for evaluating RAG performance, both quantitatively through metrics and qualitatively by analyzing individual outputs. We outline several practical tips for improving text retrieval, including using hybrid search techniques, enhancing context through data preprocessing, and rewriting queries for better relevance.

Best practices to build robust generative AI applications with Amazon Bedrock Agents – Part 1

In this post, we show you how to create accurate and reliable agents. Agents help you accelerate generative AI application development by orchestrating multistep tasks. Agents use the reasoning capability of foundation models (FMs) to break down user-requested tasks into multiple steps.

AWS recognized as a first-time Leader in the 2024 Gartner Magic Quadrant for Data Science and Machine Learning Platforms

AWS has been recognized as a Leader in the 2024 Gartner Magic Quadrant for Data Science and Machine Learning Platforms. The post highlights how AWS’s continued innovations in services like Amazon Bedrock and Amazon SageMaker have enabled organizations to unlock the transformative potential of generative AI.

How healthcare payers and plans can empower members with generative AI

In this post, we discuss how generative artificial intelligence (AI) can help health insurance plan members get the information they need. The solution presented in this post not only enhances the member experience by providing a more intuitive and user-friendly interface, but also has the potential to reduce call volumes and operational costs for healthcare payers and plans.

Ground truth curation and metric interpretation best practices for evaluating generative AI question answering using FMEval

In this post, we discuss best practices for working with the Foundation Model Evaluations Library (FMEval) in ground truth curation and metric interpretation when evaluating question answering applications for factual knowledge and quality.

Elevate customer experience through an intelligent email automation solution using Amazon Bedrock

In this post, we show you how to use Amazon Bedrock to automate email responses to customer queries. With our solution, you can identify the intent of customer emails and send an automated response if the intent matches your existing knowledge base or data sources. If the intent doesn’t have a match, the email goes to the support team for a manual response.
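The decision logic described in that summary — auto-reply when the intent matches a known source, escalate otherwise — can be sketched as follows. This is an illustrative stand-in, not the post's implementation: `KNOWN_INTENTS`, `handle_email`, and the return shape are all hypothetical, and the intent itself would come from an LLM on Amazon Bedrock rather than being passed in directly.

```python
# Hypothetical sketch of the email-routing decision. The set of
# known intents and the dict return format are illustrative only;
# in the post, intent classification is done by a Bedrock model.

KNOWN_INTENTS = {"order_status", "password_reset", "billing_question"}

def handle_email(intent: str, draft_reply: str) -> dict:
    """Auto-reply if the intent is covered, otherwise escalate."""
    if intent in KNOWN_INTENTS:
        return {"action": "auto_reply", "body": draft_reply}
    # No match against existing knowledge: hand off to a human
    return {"action": "escalate", "queue": "support_team"}

print(handle_email("order_status", "Your order ships Monday."))
```

The key design point is the explicit fallback: any intent outside the known set is routed to the support team rather than answered automatically.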

Improve AI assistant response accuracy using Knowledge Bases for Amazon Bedrock and a reranking model

AI chatbots and virtual assistants have become increasingly popular in recent years thanks to the breakthroughs of large language models (LLMs). Trained on large volumes of data, these models incorporate memory components in their architectural design, allowing them to understand textual context. Most common use cases for chatbot assistants focus on a few […]