Overview
QnABot on AWS is a generative artificial intelligence (AI)-enabled, multi-channel, multi-language conversational interface (chatbot) that responds to your customers' questions, answers, and feedback. It allows you to deploy a fully functional chatbot across multiple channels, including chat, voice, SMS, and Amazon Alexa.
Benefits
Provide personalized tutorials and question and answer support with intelligent multi-part interaction. Easily import and export questions from your QnABot setup.
Use Amazon Kendra natural language processing (NLP) capabilities to better understand human questions. Build conversational applications using Amazon Bedrock, a managed service offering high-performance foundation models.
Automate customer support workflows. Realize cost savings and serve your customers better so they can get accurate answers and help quickly.
Utilize intent and slot matching for diverse Q&A workflows. Leverage natural language understanding, context management, and multi-turn dialogues through large language models (LLMs) and retrieval augmented generation (RAG).
Technical details
You can automatically deploy this architecture using the implementation guide and the appropriate AWS CloudFormation template. If you want to deploy within a VPC, first deploy a VPC with two private and two public subnets across two Availability Zones, and then use the QnABot VPC AWS CloudFormation template. Otherwise, use the QnABot Main AWS CloudFormation template.
Step 1
The admin deploys the solution into their AWS account, opens the content designer user interface (UI) or Amazon Lex web client, and uses Amazon Cognito to authenticate.
Step 2
After authentication, Amazon API Gateway and Amazon Simple Storage Service (Amazon S3) deliver the contents of the content designer UI.
Step 3
The admin configures questions and answers in the content designer and the UI sends requests to API Gateway to save the questions and answers.
Step 4
The Content designer AWS Lambda function saves the input in Amazon OpenSearch Service in a question bank index. If text embeddings are enabled, these requests first pass through LLMs hosted on Amazon SageMaker or Amazon Bedrock to generate embeddings before being saved into the question bank in OpenSearch Service.
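Steps 3 and 4 can be sketched as follows. This is a minimal illustration, not the solution's implementation: the item fields (`qid`, `q`, `a`) mirror QnABot's question format, a plain dict stands in for the OpenSearch question bank index, and `embed()` is a toy stand-in for the embeddings LLM hosted on SageMaker or Amazon Bedrock.

```python
# Sketch of Steps 3-4: a content-designer item is given a text embedding
# before it is indexed into the question bank. embed() is a deterministic
# stand-in for the SageMaker- or Amazon Bedrock-hosted embeddings model.
import hashlib

def embed(text: str, dims: int = 4) -> list[float]:
    """Toy deterministic embedding; a real deployment calls an LLM endpoint."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dims]]

def index_item(question_bank: dict, item: dict) -> None:
    """Attach an embedding and save the item, keyed by its qid."""
    item = dict(item, q_vector=embed(" ".join(item["q"])))
    question_bank[item["qid"]] = item

bank: dict = {}
index_item(bank, {"qid": "Greeting.001",
                  "q": ["What are your hours?"],
                  "a": "We are open 9am-5pm, Monday to Friday."})
```

In the deployed solution, the save request travels through API Gateway to the Content designer Lambda function, which calls the embeddings model and writes the item to the OpenSearch Service index.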
Step 5
Chatbot users interact with Amazon Lex through the web client UI, Amazon Alexa, or Amazon Connect.
Step 6
Amazon Lex forwards requests to the Bot fulfillment Lambda function. Users can also send requests to this Lambda function using Amazon Alexa devices.
Step 7
The user and chat information is stored in Amazon DynamoDB to disambiguate follow-up questions using the context of previous questions and answers.
Step 8
The Bot fulfillment Lambda function takes the user's input and uses Amazon Comprehend and Amazon Translate (if necessary) to translate requests in other languages into the native language selected during deployment, and then looks up the answer in OpenSearch Service.
If using LLM features such as text generation and text embeddings, these requests first pass through various LLMs hosted on SageMaker or Amazon Bedrock to generate the search query and embeddings and compare with those saved in the question bank on OpenSearch.
Step 9
If no match is returned from the OpenSearch question bank, then the Bot fulfillment Lambda function forwards the request as follows:
Step 9A
If an Amazon Kendra index is configured as a fallback, the Bot fulfillment Lambda function forwards the unanswered request to Amazon Kendra. The text generation LLM can optionally be used to create the search query and to synthesize a response from the returned document excerpts.
Step 9B
If a Knowledge Base for Amazon Bedrock ID is configured, then the Bot fulfillment Lambda function forwards the request to the Knowledge Base for Amazon Bedrock. The Bot fulfillment Lambda function uses the RetrieveAndGenerate API to fetch the relevant results for a user query, augment the foundation model's prompt, and return the response.
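The fallback decision in Steps 9, 9A, and 9B can be sketched as a simple routing function. The config keys (`kendra_index_id`, `bedrock_kb_id`) and the threshold value are illustrative names for this sketch, not the solution's actual parameter names.

```python
# Sketch of Step 9: when the OpenSearch question bank returns no confident
# match, route the request to whichever fallback source is configured.
def route_fallback(match_score: float, config: dict,
                   threshold: float = 0.7) -> str:
    if match_score >= threshold:
        return "opensearch"                # answered from the question bank
    if config.get("kendra_index_id"):
        return "kendra"                    # Step 9A: Amazon Kendra fallback
    if config.get("bedrock_kb_id"):
        return "bedrock_knowledge_base"    # Step 9B: RetrieveAndGenerate
    return "no_hits"                       # logged for the content team

route = route_fallback(0.42, {"bedrock_kb_id": "KB123"})
```

Unmatched utterances that reach no fallback are still valuable: they surface in the logs (Step 10) as "no hits" so admins can add missing questions in the content designer.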
Step 10
User interactions with the Bot fulfillment Lambda function generate logs and metrics data, which are sent to Amazon Data Firehose and then delivered to Amazon S3 for later data analysis.
Step 11
OpenSearch Dashboards can be used to view usage history, logged utterances, no-hits utterances, and positive and negative user feedback, and also provides the ability to create custom reports.