Posted On: Nov 29, 2023
Today, AWS announces the general availability of the vector engine for Amazon OpenSearch Serverless. The vector engine for OpenSearch Serverless is a simple, scalable, and high-performing vector database that makes it easier for developers to build machine learning (ML)–augmented search experiences and generative artificial intelligence (AI) applications without having to manage the underlying vector database infrastructure. Developers can rely on the vector engine's cost-efficient, secure, and mature serverless platform to seamlessly transition from application prototyping to production.
Generative AI models represent data as vector embeddings, which are numerical representations of customers’ text, image, audio, or video data that capture its semantic meaning. Vector representations place semantically similar data close together, which enables the vector engine to return contextually relevant results. With the vector engine, developers can store, update, and search billions of vector embeddings with thousands of dimensions in milliseconds. Its highly performant similarity search can be combined with lexical search to deliver accurate and reliable results in applications, with consistent millisecond response times. The vector engine is compatible with OpenSearch clients and open source tools like LangChain, so you can use a variety of technologies to build generative AI applications.
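Because the vector engine is accessed through standard OpenSearch clients, a typical workflow is to create an index with a knn_vector field, ingest embeddings, and run k-NN queries. The sketch below illustrates this with the opensearch-py client; the collection endpoint, index name, field names, and 1536-dimension embeddings are illustrative assumptions rather than values from this announcement.

```python
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

REGION = "us-east-1"
# Hypothetical collection endpoint; replace with your collection's endpoint.
HOST = "your-collection-id.us-east-1.aoss.amazonaws.com"

# Sign requests with SigV4; "aoss" is the service name for OpenSearch Serverless.
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, REGION, "aoss")

client = OpenSearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# Create an index with a knn_vector field sized to the embedding model's output
# (1536 dimensions here is an illustrative assumption).
client.indices.create(
    index="products",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "knn_vector",
                    "dimension": 1536,
                    "method": {"name": "hnsw", "engine": "faiss", "space_type": "l2"},
                },
                "description": {"type": "text"},
            }
        },
    },
)

# Index a document; in practice the embedding comes from your embedding model.
doc_embedding = [0.1] * 1536  # placeholder vector
client.index(
    index="products",
    body={"embedding": doc_embedding, "description": "Lightweight trail running shoe"},
)

# k-NN query: return the 3 documents whose embeddings are closest to the query vector.
query_embedding = [0.1] * 1536  # placeholder for the embedded search query
response = client.search(
    index="products",
    body={
        "size": 3,
        "query": {"knn": {"embedding": {"vector": query_embedding, "k": 3}}},
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["description"])
```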
The vector engine is now available in eight AWS Regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland).
Vector engine for OpenSearch Serverless is powered by the k-nearest neighbor (k-NN) search feature in the open source OpenSearch Project. To get started, see the following list of resources:
- Vector Engine for Amazon OpenSearch Serverless
- Amazon OpenSearch Serverless in the Amazon OpenSearch Service Developer Guide
- OpenSearch Project
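As noted above, the vector engine also works with open source tools such as LangChain. The sketch below shows one way a LangChain vector store might be pointed at an OpenSearch Serverless collection; the collection endpoint, index name, and the choice of Bedrock embeddings are illustrative assumptions, and the imports reflect the langchain package as of late 2023.

```python
import boto3
from opensearchpy import RequestsHttpConnection, AWSV4SignerAuth
from langchain.embeddings import BedrockEmbeddings  # any LangChain Embeddings class works
from langchain.vectorstores import OpenSearchVectorSearch

REGION = "us-east-1"
auth = AWSV4SignerAuth(boto3.Session().get_credentials(), REGION, "aoss")

# Embed and store a few documents in a (hypothetical) serverless collection.
embeddings = BedrockEmbeddings(region_name=REGION)
vectorstore = OpenSearchVectorSearch.from_texts(
    texts=[
        "OpenSearch Serverless removes the need to manage clusters.",
        "The vector engine stores embeddings for similarity search.",
    ],
    embedding=embeddings,
    opensearch_url="https://your-collection-id.us-east-1.aoss.amazonaws.com",
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
    index_name="docs",
    engine="faiss",  # engine supported by the serverless vector engine
)

# Retrieve the documents most similar to a natural-language question.
results = vectorstore.similarity_search("How do I avoid managing clusters?", k=2)
for doc in results:
    print(doc.page_content)
```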