Guidance for Custom Search of an Enterprise Knowledge Base with Amazon OpenSearch Service
Overview
How it works
This Guidance includes an architecture diagram that illustrates its key components and their interactions, providing a step-by-step overview of the architecture's structure and functionality.
Deploy with confidence
Ready to deploy? Review the sample code on GitHub for detailed deployment instructions; you can deploy the Guidance as-is or customize it to fit your needs.
Well-Architected Pillars
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of these best practices as possible.
Operational Excellence
All the services used in this Guidance, including Lambda and API Gateway, publish Amazon CloudWatch metrics that you can use to monitor individual components. API Gateway and Lambda also support publishing new versions through an automated pipeline. CloudWatch additionally covers Amazon Connect, Amazon Lex, and Amazon Kendra, enabling monitoring, metric collection, and performance analysis for these services.
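As a sketch of that monitoring, the snippet below builds the kind of GetMetricData request you might send to CloudWatch (for example, via boto3's `cloudwatch.get_metric_data`) to track errors and tail latency for one of the Guidance's Lambda functions. The function name and metric choices are illustrative assumptions.

```python
import json

def build_lambda_metric_queries(function_name):
    """Build MetricDataQueries for a Lambda function's Errors and Duration."""
    dimensions = [{"Name": "FunctionName", "Value": function_name}]
    return [
        {
            "Id": "errors",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Lambda",
                    "MetricName": "Errors",
                    "Dimensions": dimensions,
                },
                "Period": 300,   # 5-minute buckets
                "Stat": "Sum",
            },
        },
        {
            "Id": "duration_p95",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Lambda",
                    "MetricName": "Duration",
                    "Dimensions": dimensions,
                },
                "Period": 300,
                "Stat": "p95",   # tail latency, not just the average
            },
        },
    ]

# "search-handler" is a hypothetical function name for this sketch
queries = build_lambda_metric_queries("search-handler")
print(json.dumps(queries, indent=2))
```

Tracking a percentile such as p95 alongside error counts surfaces latency regressions that an average would hide.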
Security
AWS Identity and Access Management (IAM) controls access to resources and data in this Guidance. API Gateway strengthens security by providing a protection layer in front of the backend services it invokes. Acting as a gateway, or proxy, between the client and the backend services, it lets you control access and implement security measures.
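The IAM side of this can be sketched as a least-privilege policy that allows only the query actions a search component needs. The action set and the ARN below are illustrative assumptions; scope them to your own index.

```python
import json

def least_privilege_policy(kendra_index_arn):
    """Return a minimal IAM policy allowing only Kendra query/retrieve."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowKendraQueryOnly",
                "Effect": "Allow",
                # query-time actions only: no write or admin permissions
                "Action": ["kendra:Query", "kendra:Retrieve"],
                "Resource": kendra_index_arn,
            }
        ],
    }

# Example ARN for illustration only
policy = least_privilege_policy(
    "arn:aws:kendra:us-east-1:123456789012:index/example-index-id"
)
print(json.dumps(policy, indent=2))
```

Granting only the actions a component invokes at runtime limits the blast radius if its credentials are compromised.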
Reliability
This Guidance uses Lambda, DynamoDB, Amazon S3, and SageMaker. These services provide high availability within a Region and support deployment of highly available SageMaker endpoints. Together they enable a reliable application-level architecture: loosely coupled dependencies, throttling handled with retries, and stateless compute.
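The throttling-and-retry behavior mentioned above follows the exponential-backoff-with-jitter pattern the AWS SDKs apply to throttled calls. The sketch below illustrates that pattern in plain Python; the exception type and delay constants are assumptions for this example, not SDK internals.

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.01, cap=0.5):
    """Retry `operation` on throttling, backing off exponentially with jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except RuntimeError:  # stand-in for a throttling error
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # full jitter: sleep a random amount up to the capped backoff
            time.sleep(random.uniform(0, min(cap, base_delay * 2 ** attempt)))

# Simulated dependency that is throttled twice before succeeding
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("ThrottlingException")
    return "ok"

result = call_with_backoff(flaky)
```

Jitter spreads retries out in time, so a burst of throttled clients does not retry in lockstep and re-trigger the throttle.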
Performance Efficiency
This Guidance requires near real-time inference and high concurrency, criteria that Lambda, DynamoDB, and API Gateway are designed to meet. SageMaker hosts the LLM as an endpoint. Amazon Kendra and OpenSearch are well suited to Retrieval Augmented Generation (RAG), which combines retrieval with language generation to improve generated text: relevant knowledge is retrieved efficiently and then used to produce more accurate, contextually relevant answers.
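The RAG step can be sketched as prompt assembly: retrieved passages (stubbed here; in the Guidance they would come from Amazon Kendra or OpenSearch) are stitched into a prompt for the LLM hosted on the SageMaker endpoint. The template and function name are assumptions for this sketch.

```python
def build_rag_prompt(question, passages, max_passages=3):
    """Combine retrieved passages and a question into an LLM prompt."""
    # number each passage so the answer can be traced back to its source
    context = "\n\n".join(
        f"[{i + 1}] {p}" for i, p in enumerate(passages[:max_passages])
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Stubbed retrieval results; in practice these come from Kendra/OpenSearch
passages = [
    "OpenSearch Service supports k-NN vector search.",
    "Kendra returns ranked document excerpts for a natural-language query.",
]
prompt = build_rag_prompt("How can I retrieve relevant passages?", passages)
print(prompt)
```

Capping the passage count keeps the prompt within the model's context window and bounds inference latency.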
Cost Optimization
This Guidance uses Lambda for all compute components of search and question answering, so billing is pay-per-millisecond. The data store is built on DynamoDB and Amazon S3, providing a low total cost of ownership for storing and retrieving data. The Guidance also uses API Gateway, which reduces API development time and ensures you pay only when an API is invoked.
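A back-of-envelope calculation shows what pay-per-use pricing means here. The per-GB-second and per-request rates below are example figures only; check current AWS pricing for your Region before relying on them.

```python
def lambda_monthly_cost(invocations, avg_ms, memory_mb,
                        gb_second_rate=0.0000166667,  # example rate, not current pricing
                        request_rate=0.20e-6):        # example: $0.20 per 1M requests
    """Estimate monthly Lambda cost from invocation count, duration, and memory."""
    # compute charge is billed in GB-seconds: memory (GB) x duration (s)
    gb_seconds = invocations * (memory_mb / 1024) * (avg_ms / 1000)
    cost = gb_seconds * gb_second_rate + invocations * request_rate
    return gb_seconds, cost

# Hypothetical workload: 1M queries/month, 100 ms average, 128 MB function
gb_seconds, cost = lambda_monthly_cost(1_000_000, 100, 128)
print(f"{gb_seconds:.0f} GB-seconds, ~${cost:.2f}/month")
```

Because the idle cost is zero, a spiky search workload costs the same as a steady one with the same total invocations.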
Sustainability
This Guidance uses the scaling behaviors of Lambda, a SageMaker inference endpoint, and API Gateway to avoid over-provisioning resources. The serverless services, such as Lambda and API Gateway, are invoked only when there is a user query. Managed AWS services maximize resource utilization and reduce the energy needed to run a given workload. Amplify, Amazon Connect, and Amazon Lex use auto scaling to continually match load and allocate resources accordingly; by adjusting resource levels dynamically based on demand, these services ensure that only the minimum necessary resources are used, optimizing efficiency and cost-effectiveness.