Overview

QnABot on AWS is a multi-channel, multi-language conversational interface (chatbot) that responds to your customers' questions and feedback. It allows you to deploy a fully functional chatbot across multiple channels, including chat, voice, SMS, and Amazon Alexa.
Benefits

Provide personalized tutorials and question-and-answer support with intelligent multi-part interaction.
Use the command line interface (CLI) to import and export questions from your QnABot setup.
Use Amazon Kendra natural language processing (NLP) capabilities to better understand human questions.
Import questions, answers, and session attributes from an Excel file.
Automate customer support workflows.
Create engaging, human-like interactions for chatbots. Use intent and slot matching to implement different types of question-and-answer workflows.
Technical details

This architecture shows how the QnABot on AWS components are deployed with an AWS CloudFormation template.
Step 1
Deploy this AWS Solution into your AWS account. Open the Content Designer user interface (UI) or the Amazon Lex web client, and use Amazon Cognito to authenticate.
Step 2
After authentication, Amazon API Gateway and Amazon Simple Storage Service (Amazon S3) deliver the contents of the Content Designer UI.
Step 3
Configure questions and answers in the Content Designer. The UI sends requests to Amazon API Gateway to save the questions and answers.
Step 4
The Content Designer AWS Lambda function saves the input in a question bank index in Amazon OpenSearch Service. If text embeddings are enabled, these requests first pass through a machine learning (ML) model hosted on Amazon SageMaker to generate embeddings before being saved into the question bank on OpenSearch.
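As a rough sketch of this save path, the Lambda function attaches an embedding to the question before indexing it. The field names and the stubbed embedding below are illustrative assumptions, not QnABot's actual index schema; in the deployed stack the embedding would come from a SageMaker endpoint (for example, via `sagemaker-runtime`'s `invoke_endpoint`).

```python
import json

def build_question_doc(qid, question, answer, embedding=None):
    # Illustrative document shape for the OpenSearch question bank;
    # the real QnABot index mapping is richer than this.
    doc = {"qid": qid, "q": [question], "a": answer}
    if embedding is not None:
        # Hypothetical kNN vector field, populated from the SageMaker
        # embeddings model when text embeddings are enabled.
        doc["q_vector"] = embedding
    return doc

# Stub embedding in place of a real SageMaker endpoint response.
doc = build_question_doc("sunrise.1", "What time is sunrise?",
                         "Around 6 AM.", [0.1, 0.2, 0.3])
print(json.dumps(doc))
```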
Step 5
Chatbot users interact with Amazon Lex through the web client UI or Amazon Connect.
Step 6
Amazon Lex forwards requests to the Bot Fulfillment Lambda function. Chatbot users can also send requests to this Lambda function through Amazon Alexa devices.
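A minimal Amazon Lex V2 fulfillment handler has the general shape below; the fixed reply text is a placeholder, and QnABot's actual Bot Fulfillment function performs the lookup described in the following steps before building its response.

```python
def lambda_handler(event, context):
    """Minimal Lex V2 fulfillment sketch: return a fixed answer and
    close the dialog. Only the event fields used here are shown."""
    intent_name = event["sessionState"]["intent"]["name"]
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent_name, "state": "Fulfilled"},
        },
        "messages": [
            {"contentType": "PlainText", "content": "Here is your answer."}
        ],
    }

# Fragment of the event Lex sends to the fulfillment function.
event = {"sessionState": {"intent": {"name": "FallbackIntent"}}}
print(lambda_handler(event, None)["sessionState"]["intent"]["state"])
```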
Step 7
The Bot Fulfillment Lambda function takes the user's input, uses Amazon Comprehend and Amazon Translate (if necessary) to translate non-English requests into English, and then looks up the answer in OpenSearch. If large language model (LLM) features such as text generation and text embeddings are enabled, these requests first pass through various ML models hosted on SageMaker, which generate the search query and the embeddings to compare with those saved in the question bank on OpenSearch.
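This request path can be sketched as follows. The Comprehend and Translate calls are stubbed out (the real calls would be `detect_dominant_language` and `translate_text`), and the `q_vector` field name is an assumption rather than QnABot's actual mapping.

```python
def to_english(text, detected_lang):
    # Stand-in for Amazon Comprehend (language detection) plus
    # Amazon Translate; here we only mark that the step happened.
    if detected_lang == "en":
        return text
    return f"[translated from {detected_lang}] {text}"

def build_knn_query(vector, k=5):
    # Illustrative OpenSearch k-NN query comparing the request
    # embedding (from SageMaker) against stored question embeddings.
    return {"size": k,
            "query": {"knn": {"q_vector": {"vector": vector, "k": k}}}}

print(to_english("Bonjour", "fr"))
print(build_knn_query([0.1, 0.2], k=3)["size"])
```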
Step 8
If an Amazon Kendra index is configured as a fallback and no matches are returned from the OpenSearch question bank, the Bot Fulfillment Lambda function forwards the request to Amazon Kendra. The text generation LLM can also be used to create the search query and to synthesize a response from excerpts of the returned documents.
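The fallback decision itself is simple to sketch. Here the `kendra_query` callable stands in for the real `kendra.query(IndexId=..., QueryText=...)` API call, so the example runs without AWS credentials.

```python
def answer_with_fallback(opensearch_hits, question, kendra_query):
    # Prefer a match from the OpenSearch question bank; only fall
    # back to the configured Amazon Kendra index when nothing matched.
    if opensearch_hits:
        return opensearch_hits[0]
    return kendra_query(question)

# Stubbed Kendra call for illustration.
answer = answer_with_fallback([], "What is QnABot?",
                              lambda q: f"Kendra excerpt for: {q}")
print(answer)
```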
Step 9
User interactions with the Bot Fulfillment function generate logs and metrics data, which are sent to Amazon Kinesis Data Firehose, then to Amazon S3 for later data analysis.
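Each interaction could be serialized as a newline-delimited JSON record of the kind Kinesis Data Firehose buffers and delivers to Amazon S3; the field names below are illustrative, not QnABot's actual log schema.

```python
import json

def build_metric_record(qid, utterance, matched):
    # One log/metrics event. Firehose's put_record expects
    # {"Data": bytes}; the trailing newline keeps the resulting S3
    # objects line-delimited for later analysis (e.g. with Athena).
    payload = {"qid": qid, "utterance": utterance, "matched": matched}
    return {"Data": (json.dumps(payload) + "\n").encode("utf-8")}

record = build_metric_record("sunrise.1", "when is sunrise", True)
print(record["Data"].decode().strip())
```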