QnABot on AWS is a generative artificial intelligence (AI) solution that responds to customer inquiries across multiple languages and platforms, enabling conversations through chat, voice, SMS, and Amazon Alexa. This versatile assistant helps organizations improve customer service through instant, consistent responses across a variety of communication channels with no coding required.
Benefits
Enhance your customer’s experience
Provide personalized tutorials and question and answer support with intelligent multi-part interaction. Easily import and export questions from your QnABot setup.
Leverage natural language and semantic understanding
Use Amazon Kendra natural language processing (NLP) capabilities to better understand human questions. Build conversational applications using Amazon Bedrock, a managed service offering high-performance foundation models.
Reduce customer support wait times
Automate customer support workflows. Realize cost savings and serve your customers better so they can get accurate answers and help quickly.
Implement the latest generative AI technology
Utilize intent and slot matching for diverse Q&A workflows. Leverage natural language understanding, context management, and multi-turn dialogues through large language models (LLMs) and retrieval augmented generation (RAG).
Step 1 The admin deploys the solution into their AWS account, opens the content designer user interface (UI) or Amazon Lex web client, and uses Amazon Cognito to authenticate.
Step 3 The admin configures questions and answers in the content designer, and the UI sends requests to API Gateway to save them.
Step 4 The Content designer AWS Lambda function saves the input in Amazon OpenSearch Service in a question bank index. If using text embeddings, these requests pass through LLMs hosted on Amazon SageMaker or Amazon Bedrock to generate embeddings before being saved into the question bank in OpenSearch Service.
In addition, the Content designer saves the default and custom configuration settings in Amazon DynamoDB.
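To make Step 4 concrete, the following minimal sketch shows the shape of an embedding request for a Bedrock-hosted model and the resulting question-bank document. The model ID, index field names, and document schema are illustrative assumptions, not the solution's actual schema.

```python
import json

# Assumed Titan embeddings model ID; the solution can use other
# SageMaker- or Bedrock-hosted embedding models.
EMBED_MODEL_ID = "amazon.titan-embed-text-v2:0"

def build_embedding_request(text: str) -> dict:
    """Payload shape for a bedrock-runtime invoke_model call (assumed)."""
    return {"modelId": EMBED_MODEL_ID,
            "body": json.dumps({"inputText": text})}

def build_question_doc(qid: str, question: str, answer: str,
                       embedding: list) -> dict:
    """Question-bank document with the embedding stored for k-NN search;
    field names here are illustrative."""
    return {"qid": qid, "q": question, "a": answer,
            "q_vector": embedding}

req = build_embedding_request("How do I reset my password?")
doc = build_question_doc("pwd.reset", "How do I reset my password?",
                         "Choose 'Forgot password' on the sign-in page.",
                         [0.12, -0.08, 0.33])
```

With boto3, `req` would be unpacked into `bedrock_runtime.invoke_model(**req)` and the returned vector stored in `doc` before indexing into OpenSearch Service.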
Step 5 Users of the assistant interact with Amazon Lex through the web client UI, Amazon Alexa, or Amazon Connect.
Step 6 Amazon Lex forwards requests to the Bot Fulfillment Lambda function. Users can also send requests to this Lambda function through Amazon Alexa devices.
NOTE: When streaming is enabled, the chat client uses the Amazon Lex session identifier (sessionId) to establish WebSocket connections through API Gateway V2.
Step 7 User and chat information is stored in DynamoDB so that follow-up questions can be disambiguated using the previous question and answer context.
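The kind of per-user record this implies, and how it can disambiguate a terse follow-up, can be sketched as follows. The attribute names and topic-prefixing logic are assumptions for illustration, not the solution's actual DynamoDB schema.

```python
# Illustrative per-user chat record (assumed attribute names).
def build_chat_record(user_id: str, question: str, answer: str,
                      topic: str = "") -> dict:
    """Item keyed by user, holding the previous Q&A turn and its topic."""
    return {"userId": user_id, "previousQuestion": question,
            "previousAnswer": answer, "topic": topic}

def expand_followup(followup: str, record: dict) -> str:
    """Prefix a terse follow-up with the stored topic so search has context."""
    topic = record.get("topic")
    return f"{topic}: {followup}" if topic else followup

rec = build_chat_record("user-1", "What is QnABot?",
                        "A generative AI assistant.", topic="QnABot")
expanded = expand_followup("how do I deploy it?", rec)
```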
Step 8 The Bot Fulfillment Lambda function uses Amazon Comprehend and, if necessary, Amazon Translate to translate requests in other languages into the native language selected during deployment. The function then queries OpenSearch Service to retrieve the appropriate answer.
Step 9 If using large language model (LLM) capabilities such as text generation and text embeddings, these requests will first pass through various foundational models hosted on Amazon Bedrock. This will generate the search query and embeddings, which will then be compared against those saved in the question bank on the OpenSearch Service.
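Conceptually, the comparison in Step 9 scores the query embedding against the embeddings stored in the question bank and keeps the best match above a threshold. OpenSearch Service performs this as a k-NN search; plain cosine similarity is shown here only to illustrate the idea, and the threshold value is an assumption.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_match(query_vec, bank, threshold=0.7):
    """Return the highest-scoring question-bank item above threshold, else None."""
    scored = [(cosine_similarity(query_vec, item["q_vector"]), item)
              for item in bank]
    score, item = max(scored, key=lambda s: s[0])
    return item if score >= threshold else None

# Toy 2-dimensional question bank; real embeddings have hundreds of dimensions.
bank = [{"qid": "greet", "q_vector": [1.0, 0.0]},
        {"qid": "reset", "q_vector": [0.0, 1.0]}]
match = best_match([0.9, 0.1], bank)
```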
Step 9A If pre-processing guardrails are enabled, they scan and block potentially harmful user inputs before they reach the QnABot application. This acts as the first line of defense to prevent malicious or inappropriate queries from being processed.
Step 9B If Amazon Bedrock Guardrails are configured for the LLMs or Amazon Bedrock Knowledge Bases, they apply contextual grounding and safety controls during LLM inference to ensure appropriate answer generation.
Step 9C If post-processing guardrails are enabled, they scan, mask, or block potentially harmful content in the final responses before they are sent to the client through the fulfillment Lambda. This serves as the last line of defense to ensure that sensitive information, such as personally identifiable information (PII), is properly masked and inappropriate content is blocked.
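The guardrail stages in Steps 9A–9C follow a simple pattern: block harmful input before processing, and mask sensitive output before delivery. The solution does this with Amazon Bedrock Guardrails; the sketch below is only an illustrative stand-in, and the denylist and PII pattern are examples, not actual guardrail policies.

```python
import re

BLOCKED_PHRASES = ("ignore previous instructions",)   # assumed denylist
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")     # simple PII pattern

def pre_process(user_input: str):
    """Step 9A pattern: return the input if allowed, or None to block it."""
    lowered = user_input.lower()
    return None if any(p in lowered for p in BLOCKED_PHRASES) else user_input

def post_process(response: str) -> str:
    """Step 9C pattern: mask PII in the final response before it is returned."""
    return EMAIL_RE.sub("{EMAIL}", response)
```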
Step 10 If no match is returned from the OpenSearch Service question bank or text passages, the Bot fulfillment Lambda function forwards the request as follows:
Step 10A If an Amazon Kendra index is configured for fallback, the Bot Fulfillment Lambda function forwards the request to Amazon Kendra. Optionally, the text generation LLM can be used to create the search query and to synthesize a response from the returned document excerpts.
Step 10B If an Amazon Bedrock Knowledge Base ID is configured, the Bot Fulfillment Lambda function forwards the request to the Amazon Bedrock Knowledge Base. It then uses the RetrieveAndGenerate or RetrieveAndGenerateStream APIs to fetch the relevant results for the user's query, augment the foundation model's prompt, and return the response.
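A minimal sketch of the request shape for the RetrieveAndGenerate call in Step 10B, assuming the standard Bedrock Agents runtime API. The knowledge base ID and model ARN are placeholders; with boto3 the dict would be unpacked into `boto3.client("bedrock-agent-runtime").retrieve_and_generate(**request)`.

```python
def build_rag_request(query: str, kb_id: str, model_arn: str) -> dict:
    """RetrieveAndGenerate request: retrieve from the knowledge base,
    then generate an answer with the given foundation model."""
    return {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

request = build_rag_request(
    "What is the refund policy?",
    kb_id="KBEXAMPLE123",  # placeholder knowledge base ID
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/"
              "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model
)
```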
Step 11 When streaming is enabled, the LLM responses from text passages or external data sources are enhanced by Retrieval-Augmented Generation (RAG). Responses are streamed through WebSocket connections using the same Amazon Lex sessionId, while the final response is processed through the fulfillment Lambda function.
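The streaming behavior in Step 11 amounts to pushing partial LLM output over the WebSocket connection associated with the Amazon Lex sessionId while the full answer is still being assembled. The frame shape below is an assumption for illustration only.

```python
def stream_frames(session_id: str, text: str, chunk_size: int = 16):
    """Yield WebSocket frames carrying successive chunks of the response,
    tagged with the Lex sessionId so the client can route them."""
    for start in range(0, len(text), chunk_size):
        yield {"sessionId": session_id,
               "chunk": text[start:start + chunk_size],
               "done": start + chunk_size >= len(text)}

frames = list(stream_frames("lex-session-1", "QnABot streams partial answers."))
# Reassembling the chunks reproduces the full response.
full = "".join(f["chunk"] for f in frames)
```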
Step 12 User interactions with the Bot Fulfillment function are logged, and the resulting metrics data is sent to Amazon Data Firehose, then forwarded to Amazon S3 for later data analysis.
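The interaction record sent to Amazon Data Firehose in Step 12 might look like the sketch below; the field names are illustrative, not the solution's actual log schema. With boto3, the encoded record would go to `boto3.client("firehose").put_record(DeliveryStreamName=..., Record={"Data": data})`.

```python
import json
from datetime import datetime, timezone

def build_metrics_record(session_id: str, question: str, answer_id: str,
                         matched: bool) -> bytes:
    """Newline-delimited JSON record, a common shape for Firehose-to-S3
    delivery so that downstream analytics can split records by line."""
    record = {"timestamp": datetime.now(timezone.utc).isoformat(),
              "sessionId": session_id,
              "question": question,
              "answerId": answer_id,
              "matched": matched}
    return (json.dumps(record) + "\n").encode("utf-8")

data = build_metrics_record("lex-session-1", "What are your hours?",
                            "store.hours", True)
```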
Step 13 The OpenSearch Dashboards can be used to view a variety of analytics, including usage history, logged utterances, no-hit utterances, positive user feedback, negative user feedback, and the ability to create custom reports.
Step 14 Using Amazon CloudWatch, admins can monitor service logs and use the CloudWatch dashboard created by QnABot to monitor the deployment's operational health.