AWS announces the general availability of vector search for Amazon MemoryDB

Posted on: Jul 10, 2024

Vector search for Amazon MemoryDB, an in-memory database with Multi-AZ durability, is now generally available. This capability lets you store, index, retrieve, and search vectors. Amazon MemoryDB delivers the fastest vector search performance at the highest recall rates among popular vector databases on AWS. Vector search for MemoryDB supports storing millions of vectors with single-digit millisecond query and update latencies at the highest levels of throughput with >99% recall. You can generate vector embeddings using AI/ML services, such as Amazon Bedrock and Amazon SageMaker, and store them in MemoryDB.
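
As a sketch of that flow, the snippet below generates an embedding with Amazon Bedrock and writes it to MemoryDB through a standard Redis OSS-compatible client. The model ID, cluster endpoint, and key names are placeholders chosen for illustration, not values from this announcement.

```python
import json

import boto3
import numpy as np
import redis

# Generate an embedding with Amazon Bedrock (Titan Text Embeddings shown here;
# an embedding model hosted on Amazon SageMaker would work the same way).
bedrock = boto3.client("bedrock-runtime")
response = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v1",  # assumed model; swap in your own
    body=json.dumps({"inputText": "MemoryDB now supports vector search"}),
)
embedding = json.loads(response["body"].read())["embedding"]

# Store the embedding in MemoryDB as a hash field. MemoryDB is Redis OSS-compatible,
# so any Redis client works; the endpoint and key below are placeholders.
r = redis.Redis(
    host="clustercfg.my-memorydb.xxxxxx.memorydb.us-east-1.amazonaws.com",
    port=6379,
    ssl=True,
)
r.hset("doc:1", mapping={
    "text": "MemoryDB now supports vector search",
    "embedding": np.asarray(embedding, dtype=np.float32).tobytes(),
})
```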

With vector search for MemoryDB, you can develop real-time machine learning (ML) and generative AI applications that need the highest throughput and recall rates at the lowest latency, using the MemoryDB API or orchestration frameworks such as LangChain. For example, a bank can use vector search for MemoryDB to detect anomalies such as fraudulent transactions during periods of high transaction volume, with minimal false positives.
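
For illustration, here is a minimal sketch of that pattern with the Python redis client: it creates an HNSW vector index and runs a KNN similarity query over stored transaction embeddings using MemoryDB's Redis OSS-compatible FT.* search commands. The index name, field names, and 1536-dimension size are assumptions for the example; check the documentation for the exact query options MemoryDB accepts.

```python
import numpy as np
import redis
from redis.commands.search.field import TagField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

r = redis.Redis(
    host="clustercfg.my-memorydb.xxxxxx.memorydb.us-east-1.amazonaws.com",
    port=6379,
    ssl=True,
)

# Create an HNSW vector index over hashes with the "txn:" prefix (FT.CREATE).
r.ft("idx:txn").create_index(
    fields=[
        TagField("account"),
        VectorField("embedding", "HNSW", {
            "TYPE": "FLOAT32",
            "DIM": 1536,  # must match your embedding model's output size
            "DISTANCE_METRIC": "COSINE",
        }),
    ],
    definition=IndexDefinition(prefix=["txn:"], index_type=IndexType.HASH),
)

# Find the 5 stored transactions most similar to a new one (FT.SEARCH with KNN).
query_vec = np.random.rand(1536).astype(np.float32).tobytes()  # stand-in for a real embedding
query = (
    Query("*=>[KNN 5 @embedding $vec AS score]")
    .sort_by("score")
    .return_fields("account", "score")
    .dialect(2)
)
results = r.ft("idx:txn").search(query, query_params={"vec": query_vec})
for doc in results.docs:
    print(doc.account, doc.score)
```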

Vector search for MemoryDB is available in all AWS Regions where MemoryDB is available, at no additional cost.

To get started, create a new MemoryDB cluster using MemoryDB version 7.1 and enable vector search through the AWS Management Console or AWS Command Line Interface (CLI). To learn more, check out the vector search for MemoryDB documentation.
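
If you prefer to script cluster creation, a hedged sketch with boto3 is shown below. Vector search is enabled through the cluster's parameter group; the parameter group name, node type, ACL, and subnet group here are assumed values to adapt to your account (the console and documentation show the exact options).

```python
import boto3

memorydb = boto3.client("memorydb")

# Create a MemoryDB cluster on engine version 7.1 with vector search enabled
# via its parameter group. All names below are illustrative placeholders.
memorydb.create_cluster(
    ClusterName="vector-search-demo",
    NodeType="db.r7g.xlarge",
    EngineVersion="7.1",
    ParameterGroupName="default.memorydb-redis7.search",  # assumed vector-search parameter group
    ACLName="open-access",
    NumShards=1,
    NumReplicasPerShard=1,
    SubnetGroupName="my-subnet-group",
    TLSEnabled=True,
)
```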