AWS Machine Learning Blog
Automate Q&A email responses with Amazon Bedrock Knowledge Bases
In this post, we show how to automate responses to email inquiries using Amazon Bedrock Knowledge Bases and Amazon Simple Email Service (Amazon SES), both fully managed services. Amazon Bedrock Knowledge Bases generates personalized responses by linking user queries to relevant company domain information.
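As a rough sketch of that flow, the snippet below retrieves a grounded answer with the RetrieveAndGenerate API and mails it back with Amazon SES; the knowledge base ID, model ARN, and email addresses are placeholders rather than values from the post.

```python
import boto3

# Minimal sketch: answer an inbound question with a Bedrock knowledge base,
# then send the generated answer back over Amazon SES.
bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")
ses = boto3.client("ses")

KNOWLEDGE_BASE_ID = "YOUR_KB_ID"  # placeholder
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"


def answer_email(question: str, customer_address: str, support_address: str) -> None:
    # Retrieve relevant company documents and generate a grounded answer
    response = bedrock_agent_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    answer = response["output"]["text"]

    # Reply to the original sender with the generated answer
    ses.send_email(
        Source=support_address,  # verified SES identity
        Destination={"ToAddresses": [customer_address]},
        Message={
            "Subject": {"Data": "Re: your inquiry"},
            "Body": {"Text": {"Data": answer}},
        },
    )
```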
Streamline RAG applications with intelligent metadata filtering using Amazon Bedrock
In this post, we explore an innovative approach that uses LLMs on Amazon Bedrock to intelligently extract metadata filters from natural language queries. By combining the capabilities of LLM function calling and Pydantic data models, you can dynamically extract metadata from user queries. This approach can also enhance the quality of retrieved information and responses generated by the RAG applications.
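A minimal sketch of that idea, assuming the Bedrock Converse API with Anthropic's Claude and a hypothetical filter schema (year, product_line): the Pydantic model's JSON schema is exposed as a tool so the model returns structured filter values instead of free text.

```python
from typing import Optional

import boto3
from pydantic import BaseModel, Field


class MetadataFilter(BaseModel):
    # Illustrative filter fields; real applications use their own metadata keys
    year: Optional[int] = Field(None, description="Fiscal year mentioned in the query")
    product_line: Optional[str] = Field(None, description="Product line mentioned in the query")


bedrock = boto3.client("bedrock-runtime")


def extract_filters(query: str) -> MetadataFilter:
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        messages=[{"role": "user", "content": [{"text": query}]}],
        toolConfig={
            "tools": [{
                "toolSpec": {
                    "name": "set_metadata_filter",
                    "description": "Record metadata filters implied by the user query",
                    "inputSchema": {"json": MetadataFilter.model_json_schema()},
                }
            }],
            # Force the model to call the tool so we always get structured output
            "toolChoice": {"tool": {"name": "set_metadata_filter"}},
        },
    )
    # Pull the tool input out of the response and validate it with Pydantic
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            return MetadataFilter(**block["toolUse"]["input"])
    return MetadataFilter()
```

The extracted fields can then be passed as metadata filters to the retrieval step so only matching documents are searched.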
Customize small language models on AWS with automotive terminology
In this post, we guide you through the phases of customizing SLMs on AWS, with a specific focus on automotive terminology for diagnostics as a Q&A task. We begin with the data analysis phase and progress through the end-to-end process, covering fine-tuning, deployment, and evaluation. We compare a customized SLM with a general-purpose LLM, using various metrics to assess vocabulary richness and overall accuracy.
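Purely to illustrate the data preparation step, the snippet below formats diagnostic Q&A pairs into a prompt/completion JSONL file of the kind commonly used for instruction fine-tuning; the field names, file path, and example content are assumptions, not the post's exact format.

```python
import json

# Hypothetical automotive diagnostic Q&A pairs
qa_pairs = [
    {
        "question": "What does DTC P0301 indicate?",
        "answer": "A misfire detected in cylinder 1.",
    },
]

# Write each pair as a prompt/completion record for fine-tuning
with open("automotive_qa_train.jsonl", "w") as f:
    for pair in qa_pairs:
        record = {
            "prompt": (
                "Answer the automotive diagnostic question.\n"
                f"Question: {pair['question']}\nAnswer:"
            ),
            "completion": f" {pair['answer']}",
        }
        f.write(json.dumps(record) + "\n")
```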
Automate emails for task management using Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, and Amazon Bedrock Guardrails
In this post, we demonstrate how to create an automated email response solution using Amazon Bedrock and its features, including Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, and Amazon Bedrock Guardrails.
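For a sense of the invocation at the heart of such a solution, here is a minimal sketch using the InvokeAgent API, assuming the knowledge base and guardrail are already attached to the agent; the agent and alias IDs are placeholders.

```python
import uuid

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")


def handle_email(email_body: str) -> str:
    response = bedrock_agent_runtime.invoke_agent(
        agentId="AGENT_ID",          # placeholder
        agentAliasId="AGENT_ALIAS_ID",  # placeholder
        sessionId=str(uuid.uuid4()),
        inputText=email_body,
    )
    # The agent's reply is streamed back as chunk events
    reply = ""
    for event in response["completion"]:
        if "chunk" in event:
            reply += event["chunk"]["bytes"].decode("utf-8")
    return reply
```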
How MSD uses Amazon Bedrock to translate natural language into SQL for complex healthcare databases
MSD, a leading pharmaceutical company, collaborates with AWS to implement a powerful text-to-SQL generative AI solution using Amazon Bedrock and Anthropic’s Claude 3.5 Sonnet model. This approach streamlines data extraction from complex healthcare databases like DE-SynPUF, enabling analysts to generate SQL queries from natural language questions. The solution addresses challenges such as coded columns, non-intuitive names, and ambiguous queries, significantly reducing query time and democratizing data access.
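A stripped-down sketch of the text-to-SQL call, assuming the Bedrock Converse API with Claude 3.5 Sonnet and a hypothetical schema hint for a DE-SynPUF-style table; the post's actual solution layers on handling for coded columns, non-intuitive names, and ambiguous questions.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# Placeholder schema description with a hint for one coded column
SCHEMA_HINTS = """
Table beneficiary_summary(desynpuf_id, bene_birth_dt, sp_diabetes, ...)
sp_diabetes: 1 = has diabetes, 2 = does not have diabetes
"""


def text_to_sql(question: str) -> str:
    prompt = (
        "You translate analyst questions into SQL for the schema below.\n"
        f"{SCHEMA_HINTS}\n"
        f"Question: {question}\n"
        "Return only the SQL query."
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"]
```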
Considerations for addressing the core dimensions of responsible AI for Amazon Bedrock applications
In this post, we introduce the core dimensions of responsible AI and explore considerations and strategies on how to address these dimensions for Amazon Bedrock applications.
From RAG to fabric: Lessons learned from building real-world RAGs at GenAIIC – Part 2
This post focuses on doing RAG on heterogeneous data formats. We first introduce routers and how they can help manage diverse data sources. We then give tips on how to handle tabular data and conclude with multimodal RAG, focusing specifically on solutions that handle both text and image data.
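As a minimal sketch of a router, the snippet below asks an LLM on Amazon Bedrock to pick a data source for a query; the route names and descriptions are illustrative stand-ins for the data sources in a real application.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# Hypothetical routes; replace with your application's actual data sources
ROUTES = {
    "documents": "unstructured text in the knowledge base",
    "tables": "tabular data queried with SQL",
    "images": "image content handled by the multimodal pipeline",
}


def route(query: str) -> str:
    options = "\n".join(f"- {name}: {desc}" for name, desc in ROUTES.items())
    prompt = (
        "Pick the single best data source for the question below.\n"
        f"{options}\n"
        f"Question: {query}\n"
        "Answer with only the data source name."
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"temperature": 0},
    )
    choice = response["output"]["message"]["content"][0]["text"].strip().lower()
    return choice if choice in ROUTES else "documents"  # fall back to text retrieval
```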
Cohere Embed multimodal embeddings model is now available on Amazon SageMaker JumpStart
The Cohere Embed multimodal embeddings model is now generally available on Amazon SageMaker JumpStart. This model is the newest Cohere Embed 3 model, which is now multimodal and capable of generating embeddings from both text and images, enabling enterprises to unlock real value from the vast amounts of data they hold in image form. In this post, we discuss the benefits and capabilities of this new model with some examples.
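The following sketch shows one way to deploy and invoke the model with the SageMaker Python SDK; the JumpStart model ID and the exact request payload shape are assumptions to verify against the JumpStart listing.

```python
import base64

from sagemaker.jumpstart.model import JumpStartModel

# Deploy from SageMaker JumpStart (model_id and instance type are placeholders)
model = JumpStartModel(model_id="cohere-embed-multimodal")
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.xlarge")

# Text embedding request
text_response = predictor.predict(
    {"texts": ["What does our Q3 revenue chart show?"], "input_type": "search_query"}
)

# Image embedding request: assumed to accept a base64 data URI
with open("revenue_chart.png", "rb") as f:
    image_uri = "data:image/png;base64," + base64.b64encode(f.read()).decode()
image_response = predictor.predict({"images": [image_uri], "input_type": "image"})
```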
Centralize model governance with SageMaker Model Registry Resource Access Manager sharing
We recently announced the general availability of cross-account sharing of Amazon SageMaker Model Registry using AWS Resource Access Manager (AWS RAM), making it easier to securely share and discover machine learning (ML) models across your AWS accounts. In this post, we show you how to use this new cross-account model sharing feature to build your own centralized model governance capability, which supports centralized model approval, deployment, auditing, and monitoring workflows.
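A minimal sketch of the sharing step with AWS RAM, using placeholder ARNs and account IDs:

```python
import boto3

ram = boto3.client("ram")

# Share a model package group from the central account with a consumer account
response = ram.create_resource_share(
    name="central-model-registry-share",
    resourceArns=[
        "arn:aws:sagemaker:us-east-1:111122223333:model-package-group/fraud-models"
    ],
    principals=["444455556666"],      # consumer account, or an organization/OU ARN
    allowExternalPrincipals=False,    # keep sharing inside your AWS Organization
)
print(response["resourceShare"]["resourceShareArn"])
```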
Simplify automotive damage processing with Amazon Bedrock and vector databases
This post explores a solution that uses AWS generative AI capabilities, including Amazon Bedrock and OpenSearch vector search, to perform damage appraisals for insurers, repair shops, and fleet managers.
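As a rough sketch of the retrieval path, the snippet below embeds a damage photo with Amazon Titan Multimodal Embeddings and runs a k-NN query against an OpenSearch index of prior claims; the index name, vector field, and endpoint are placeholders.

```python
import base64
import json

import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime")
opensearch = OpenSearch(
    hosts=[{"host": "my-opensearch-endpoint", "port": 443}],  # placeholder endpoint
    use_ssl=True,
)


def find_similar_damage(image_path: str, k: int = 5):
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    # Generate a multimodal embedding for the damage photo
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-image-v1",
        body=json.dumps({"inputImage": image_b64}),
    )
    embedding = json.loads(response["body"].read())["embedding"]

    # k-NN query against previously appraised claims
    query = {"size": k, "query": {"knn": {"image_vector": {"vector": embedding, "k": k}}}}
    return opensearch.search(index="damage-claims", body=query)["hits"]["hits"]
```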