Posted On: Jul 26, 2023
Amazon OpenSearch Service now offers a simple, scalable, and high-performing vector engine for Amazon OpenSearch Serverless. Your developers can use this vector engine to build machine learning (ML)–augmented search experiences and generative artificial intelligence (AI) applications without having to manage the vector database infrastructure. You can rely on the vector engine for a cost-efficient, secure serverless environment, which will help your developers seamlessly transition from application prototyping to production.
Use the vector engine to generate contextually relevant responses in milliseconds by querying billions of vector embeddings (numerical representations of your data) and combining them with text-based keywords in a single hybrid search request. You can add, update, and delete vector embeddings in near real time without re-indexing the data or degrading query performance, so your generative AI applications scale efficiently. The vector engine is compatible with OpenSearch clients and open-source tools such as LangChain, so you can use a broad set of technologies to build generative AI applications.
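As a concrete illustration, here is a minimal sketch, assuming a hypothetical vector collection endpoint, index name, and field names, of how an application might connect to the vector engine with the opensearch-py client, declare a knn_vector mapping, and add a document along with its embedding. The embedding values are placeholders for output from your ML model, and AWS credentials are assumed to be available in the environment.

```python
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

region = "us-east-1"
host = "my-collection-id.us-east-1.aoss.amazonaws.com"  # hypothetical collection endpoint

# Sign requests with SigV4 for the OpenSearch Serverless ("aoss") service.
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, region, "aoss")

client = OpenSearch(
    hosts=[{"host": host, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# Declare a knn_vector field alongside a regular text field.
client.indices.create(
    index="products",  # hypothetical index inside the vector collection
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "description": {"type": "text"},
                "description_vector": {
                    "type": "knn_vector",
                    "dimension": 768,  # must match your embedding model's output size
                    "method": {"name": "hnsw", "engine": "nmslib", "space_type": "cosinesimil"},
                },
            }
        },
    },
)

# Add a document with its embedding; updates and deletes work the same way,
# with no separate re-indexing step.
client.index(
    index="products",
    body={
        "description": "wireless noise-cancelling headphones",
        "description_vector": [0.02] * 768,  # placeholder embedding
    },
)
```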
The vector engine is powered by the k-nearest neighbor (k-NN) search feature in the open-source OpenSearch Project and is now available in preview in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland).
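To show what a hybrid request can look like against such an index, the sketch below combines a keyword clause with a k-NN clause in a single query. It reuses the hypothetical client, index, and field names from the previous example, and the query vector is again a placeholder for a real embedding.

```python
query_vector = [0.01] * 768  # placeholder embedding of the user's query

response = client.search(
    index="products",
    body={
        "size": 5,
        "query": {
            "bool": {
                "should": [
                    # Text relevance on the keyword side...
                    {"match": {"description": "wireless headphones"}},
                    # ...combined with approximate k-NN similarity on the vector side.
                    {"knn": {"description_vector": {"vector": query_vector, "k": 5}}},
                ]
            }
        },
    },
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["description"])
```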
To get started, see the following list of resources:
- Amazon OpenSearch Serverless in the Amazon OpenSearch Service Developer Guide
- OpenSearch Project on GitHub
- Amazon OpenSearch Service overview
- Amazon OpenSearch Serverless overview