Posted On: Apr 30, 2024

Amazon Titan Text Embeddings V2, a new embeddings model in the Amazon Titan family of models, is now generally available in Amazon Bedrock. Using Titan Text Embeddings V2, customers can perform various natural language processing (NLP) tasks by representing text data as numerical vectors, known as embeddings. These embeddings capture the semantic and contextual relationships between words, phrases, or documents in a high-dimensional vector space. The model is optimized for Retrieval-Augmented Generation (RAG) use cases and is also well suited for a variety of other tasks such as information retrieval, question-and-answer chatbots, classification, and personalized recommendations.
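As a rough illustration, here is a minimal sketch of generating embeddings with Titan Text Embeddings V2 through the Amazon Bedrock Runtime API using the AWS SDK for Python (boto3), then comparing two pieces of text with cosine similarity. The region, example sentences, and helper function names are placeholders chosen for this example, not part of the announcement.

```python
import json
import math

import boto3

# Bedrock Runtime client; the region is a placeholder, use one where the model is offered.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")


def embed(text: str) -> list[float]:
    """Return the Titan Text Embeddings V2 vector for a piece of text."""
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Semantically related sentences score higher than unrelated ones.
query = embed("How do I reset my account password?")
candidate = embed("Steps for recovering a forgotten password")
print(cosine_similarity(query, candidate))
```

In a RAG workflow, the same pattern is typically used to embed both the documents at indexing time and the user query at retrieval time, with a vector store handling the similarity search at scale.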

Amazon Titan Text Embeddings V2 is a lightweight, efficient model ideal for high-accuracy retrieval tasks at different dimensions. The model supports flexible embedding sizes (256, 512, and 1,024 dimensions) and maintains accuracy at the smaller sizes, helping to reduce storage costs without compromising quality. When reducing from 1,024 to 512 dimensions, Titan Text Embeddings V2 retains approximately 99% retrieval accuracy, and when reducing from 1,024 to 256 dimensions, the model maintains 97% accuracy. Additionally, Titan Text Embeddings V2 includes multilingual support for 100+ languages in pre-training, as well as unit vector normalization to improve the accuracy of vector similarity measurements.
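As a sketch under the same assumptions as above (boto3, placeholder region and text), a smaller, unit-normalized vector can be requested by passing the `dimensions` and `normalize` fields in the request body:

```python
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Ask for a 512-dimensional, unit-normalized embedding instead of the default 1,024,
# roughly halving vector storage while retaining ~99% retrieval accuracy.
request = {
    "inputText": "Amazon Bedrock is a fully managed service for building with foundation models.",
    "dimensions": 512,   # supported values: 256, 512, 1024
    "normalize": True,   # return a unit-length vector for similarity search
}

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=json.dumps(request),
)
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # 512
```

With normalization enabled, all vectors have length 1, so cosine similarity reduces to a dot product, which many vector stores can compute more cheaply.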

Amazon Titan Text Embeddings V2 is available in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, read the AWS News launch blog, Amazon Titan product page, and documentation. To get started with Titan Text Embeddings V2 in Amazon Bedrock, visit the Amazon Bedrock console.