Posted On: Dec 18, 2023

Amazon OpenSearch Service adds multimodal support on Neural Search for OpenSearch 2.11 deployments. This empowers builders to create and operationalize multimodal search applications with significantly reduced undifferentiated heavy lifting. For years, customers have been building vector search applications on OpenSearch k-NN, but they’ve been burdened with building middleware to integrate text embedding models into search and ingest pipelines. OpenSearch builders can now enable multimodal search through out-of-the-box integrations with Amazon Bedrock text and image multimodal APIs, powering ingest and search pipelines that run on-cluster.
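As a minimal sketch of the on-cluster integration, the Python snippet below registers an ingest pipeline whose text_image_embedding processor calls a Bedrock-backed multimodal embedding model, and creates a k-NN index to hold the generated embeddings. The endpoint, credentials, model_id, index name, field names, and the 1024-dimension embedding size are illustrative assumptions, not values from this announcement.

```python
from opensearchpy import OpenSearch

# Connect to the OpenSearch Service domain (endpoint and credentials are placeholders).
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("admin", "admin-password"),
    use_ssl=True,
)

# Ingest pipeline: the text_image_embedding processor invokes the connected
# Bedrock multimodal model and writes a combined embedding for each document.
client.ingest.put_pipeline(
    id="multimodal-ingest-pipeline",
    body={
        "description": "Generate multimodal embeddings with a Bedrock-backed model",
        "processors": [
            {
                "text_image_embedding": {
                    "model_id": "<bedrock-multimodal-model-id>",  # placeholder
                    "embedding": "vector_embedding",
                    "field_map": {
                        "text": "product_description",
                        "image": "product_image",
                    },
                }
            }
        ],
    },
)

# k-NN index that stores the embedding produced by the pipeline.
client.indices.create(
    index="products",
    body={
        "settings": {
            "index.knn": True,
            "default_pipeline": "multimodal-ingest-pipeline",
        },
        "mappings": {
            "properties": {
                "vector_embedding": {"type": "knn_vector", "dimension": 1024},  # assumed model output size
                "product_description": {"type": "text"},
                "product_image": {"type": "binary"},  # base64-encoded image
            }
        },
    },
)
```

Documents indexed into "products" then pick up their embeddings automatically at ingest time, so no separate middleware is needed to call the embedding model.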

Multimodal support enables builders to run search queries via OpenSearch APIs using images, text, or both. This empowers customers to find images by describing their visual characteristics, to use an image to discover other visually similar images, or to use image and text pairs to match on both semantic and visual similarity, as in the sketch below.
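As an illustrative sketch of such a query (reusing the client connection from the earlier example; the image path, index, field names, and model ID are placeholders), a single neural query can supply query_text, a base64-encoded query_image, or both, and rank matches on the combined embedding:

```python
import base64

# Load a reference image and base64-encode it for the query (path is a placeholder).
with open("reference-shoe.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Neural query against the embedding field: text only, image only, or both may be supplied.
response = client.search(
    index="products",
    body={
        "size": 5,
        "query": {
            "neural": {
                "vector_embedding": {
                    "query_text": "red running shoes with white soles",
                    "query_image": image_b64,
                    "model_id": "<bedrock-multimodal-model-id>",  # placeholder
                    "k": 5,
                }
            }
        },
    },
)

# Print the top matches by combined semantic and visual similarity.
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("product_description"))
```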

This new feature is available in all AWS Regions that support OpenSearch 2.11+ on Amazon OpenSearch Service and Amazon Bedrock. To learn more about multimodal support for Neural Search, refer to the documentation.