Posted On: Nov 28, 2023
Knowledge Bases for Amazon Bedrock, a fully managed capability, is now generally available. It securely connects foundation models (FMs) to internal company data sources for Retrieval Augmented Generation (RAG), delivering more relevant, context-specific, and accurate responses. A knowledge base extends an FM's capabilities, making the model more knowledgeable about your business, customers, and offerings.
To equip an FM with up-to-date and proprietary information, organizations use RAG, a technique that fetches data from company data sources and enriches the prompt to produce more relevant and accurate responses.
Implementing RAG yourself requires several cumbersome steps: converting data into embeddings (vectors), storing the embeddings in a specialized vector database, and building custom integrations with the database to search for and retrieve text relevant to the user's query. This can be time-consuming and inefficient.
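The manual workflow described above can be sketched with a toy in-memory example. This is an illustration only: a simple bag-of-words vector stands in for a real FM embedding model, and a Python list stands in for the vector database.

```python
# Toy sketch of the manual RAG plumbing: convert documents to embeddings
# (vectors), store them, and search for the text most relevant to a query.
# A real pipeline would use an FM embedding model and a vector database.
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return [w.strip(".,?!") for w in text.lower().split()]

def embed(text: str, vocab: list[str]) -> list[float]:
    """Stand-in embedding: normalized word-count vector over a fixed vocabulary."""
    counts = Counter(tokenize(text))
    vec = [float(counts[w]) for w in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, store: list[tuple[str, list[float]]], vocab: list[str]) -> str:
    """Return the stored document whose vector is most similar to the query's."""
    q = embed(query, vocab)
    return max(store, key=lambda doc: sum(a * b for a, b in zip(q, doc[1])))[0]

docs = ["Returns are accepted within 30 days.", "Shipping takes 5 business days."]
vocab = sorted({w for d in docs for w in tokenize(d)})
store = [(d, embed(d, vocab)) for d in docs]  # the "vector database"
print(retrieve("How long do returns take?", store, vocab))  # most relevant document
```

Even in this toy form, each step (embedding, storage, similarity search, and the glue between them) is custom code you must write and maintain, which is the burden the managed capability removes.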
Knowledge Bases for Amazon Bedrock is a fully managed RAG capability that allows you to customize FM responses with contextual and relevant company data. Simply point to the location of your data in Amazon S3, and Knowledge Bases for Amazon Bedrock takes care of the entire ingestion workflow into your vector database. If you do not have an existing vector database, Amazon Bedrock creates an Amazon OpenSearch Serverless vector store for you.
You can use the new Retrieve API to retrieve relevant results for a user query from a knowledge base. The new RetrieveAndGenerate API goes a step further: it uses the retrieved results to augment the FM prompt and returns the generated response.
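As a sketch, the two APIs might be called through the AWS SDK for Python (boto3) roughly as follows. The knowledge base ID and model ARN are placeholders, and the payload shapes follow the boto3 `bedrock-agent-runtime` request conventions as I understand them; check the current API reference before relying on them.

```python
# Sketch of request payloads for the Retrieve and RetrieveAndGenerate APIs.
# KB_ID and MODEL_ARN below are placeholders, not real identifiers.

def build_retrieve_request(kb_id: str, query: str, max_results: int = 5) -> dict:
    """Build a Retrieve request: fetch the top results for a user query."""
    return {
        "knowledgeBaseId": kb_id,
        "retrievalQuery": {"text": query},
        "retrievalConfiguration": {
            "vectorSearchConfiguration": {"numberOfResults": max_results}
        },
    }

def build_retrieve_and_generate_request(kb_id: str, model_arn: str, query: str) -> dict:
    """Build a RetrieveAndGenerate request: retrieve, augment the prompt, generate."""
    return {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

# With AWS credentials configured, the calls would look like:
#   import boto3
#   client = boto3.client("bedrock-agent-runtime")
#   results = client.retrieve(**build_retrieve_request("KB_ID", "What is our refund policy?"))
#   answer = client.retrieve_and_generate(
#       **build_retrieve_and_generate_request("KB_ID", "MODEL_ARN", "What is our refund policy?"))
```

The difference between the two calls mirrors the description above: Retrieve returns raw search results for your own prompt assembly, while RetrieveAndGenerate performs the retrieval, prompt augmentation, and generation in a single request.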
Knowledge Bases for Amazon Bedrock supports popular databases for vector storage, including Amazon OpenSearch Serverless, Pinecone, and Redis Enterprise Cloud.
Knowledge Bases for Amazon Bedrock is available in the US East (N. Virginia) and US West (Oregon) AWS Regions.
To learn more about using RAG with Amazon Bedrock, see Knowledge Bases for Amazon Bedrock.