Posted On: Sep 13, 2023

Today, we are announcing the preview of Knowledge Base for Amazon Bedrock, which lets you connect your organization’s private data sources to foundation models (FMs) for retrieval augmented generation (RAG), delivering more relevant and contextual FM responses in your generative AI applications.

For use cases such as question answering over an organization’s private data, customers typically use RAG: an end-user’s query is used to search the customer’s internal data sources, and the relevant text that is retrieved is passed to the FM along with the query. To make this search semantically accurate, customers first convert their data corpus into embeddings (vectors) using a text-to-embeddings FM and store them in a vector database. Today, implementing RAG requires several undifferentiated steps across different systems. Knowledge Base for Amazon Bedrock eliminates this integration work. Developers specify the location of their documents, such as an Amazon S3 bucket, and Bedrock manages both the ingestion workflow (fetching documents, chunking them, creating embeddings, and storing the embeddings in a vector database) and runtime orchestration (creating an embedding for the end-user’s query, finding relevant chunks in the vector database, and passing them to an FM). Customers can choose from a range of vector databases, including the vector engine for Amazon OpenSearch Serverless, Pinecone, and Redis Enterprise Cloud.
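As a rough sketch of what this runtime orchestration looks like from the application side, the snippet below uses the AWS SDK for Python (boto3) and its bedrock-agent-runtime client. The knowledge base ID, model ARN, region, and question are hypothetical placeholders, and the sketch assumes a knowledge base has already been created and its S3 data source ingested; it is illustrative rather than a definitive reference for the preview interface.

```python
import boto3

# Runtime client for querying a knowledge base. The region is a placeholder;
# use one where the preview is available to you.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Hypothetical identifiers: replace with your knowledge base ID and the ARN
# of the text FM that should generate the final answer.
KB_ID = "EXAMPLEKBID"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"

# One call covers the whole runtime orchestration: Bedrock creates an
# embedding for the query, finds relevant chunks in the vector database,
# and passes them to the FM to generate a response.
response = client.retrieve_and_generate(
    input={"text": "What is our parental leave policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KB_ID,
            "modelArn": MODEL_ARN,
        },
    },
)
print(response["output"]["text"])

# Alternatively, retrieve only the relevant chunks and orchestrate the FM
# call yourself:
chunks = client.retrieve(
    knowledgeBaseId=KB_ID,
    retrievalQuery={"text": "What is our parental leave policy?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)
for result in chunks["retrievalResults"]:
    print(result["content"]["text"])
```

The first call suits applications that want a fully managed answer with citations; the second suits applications that need custom prompting or post-processing of the retrieved chunks.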

Knowledge Base for Amazon Bedrock is currently available in preview to all customers who have access to agents for Amazon Bedrock. To learn more, see the Knowledge Base for Amazon Bedrock blog post and the Amazon Bedrock product detail page.