Posted On: Nov 29, 2023

Amazon Titan Multimodal Embeddings helps customers power more accurate and contextually relevant multimodal search, recommendation, and personalization experiences for end users. You can now access the Amazon Titan Multimodal Embeddings foundation model in Amazon Bedrock.

Using Titan Multimodal Embeddings, you can generate embeddings for your content and store them in a vector database. When an end user submits any combination of text and image as a search query, the model generates embeddings for the query and matches them against the stored embeddings to return relevant search and recommendation results. For example, a stock photography company with hundreds of millions of images can use the model to power its search functionality, so users can search for images using a phrase, an image, or a combination of the two. You can further customize the model with image-text pairs for fine-tuning to enhance its understanding of your unique content and provide more meaningful results. By default, the model generates 1,024-dimensional vectors, which you can use to build search experiences that offer a high degree of accuracy and speed. You can also generate embeddings with fewer dimensions to optimize for speed and performance.
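
The model is invoked through the Amazon Bedrock runtime API. The following is a minimal sketch of the query flow described above; it assumes the boto3 "bedrock-runtime" client, the "amazon.titan-embed-image-v1" model ID, and a request body with inputText, inputImage, and embeddingConfig fields, so treat the exact field names as illustrative rather than definitive.

```python
import base64
import json

import boto3

# Bedrock runtime client in one of the supported Regions.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def embed(text=None, image_path=None, dimensions=1024):
    """Generate a multimodal embedding for text, an image, or both.

    The model ID and request/response field names are assumptions based on
    the Titan Multimodal Embeddings documentation.
    """
    body = {"embeddingConfig": {"outputEmbeddingLength": dimensions}}
    if text:
        body["inputText"] = text
    if image_path:
        with open(image_path, "rb") as f:
            body["inputImage"] = base64.b64encode(f.read()).decode("utf-8")

    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-image-v1",  # assumed model ID
        body=json.dumps(body),
        accept="application/json",
        contentType="application/json",
    )
    return json.loads(response["body"].read())["embedding"]


def cosine_similarity(a, b):
    """Score a stored embedding against a query embedding."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)


# Example: a combined text + image query against previously stored image embeddings.
query_vector = embed(text="red leather handbag", image_path="query.jpg")
# stored = [(image_id, embedding), ...] previously generated and persisted
# results = sorted(stored, key=lambda item: cosine_similarity(query_vector, item[1]), reverse=True)
```

In practice, the stored embeddings would live in a vector database or vector-capable search index rather than being scanned in application code as shown here; the database performs the nearest-neighbor matching at scale.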

The Amazon Titan Multimodal Embeddings foundation model in Amazon Bedrock is now available in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, read the AWS News launch blog, Amazon Titan product page, and documentation. To get started with Titan Multimodal Embeddings in Amazon Bedrock, visit the Amazon Bedrock console.