[SEO Subhead]
This Guidance shows how you can improve operational efficiency in the travel & hospitality (T&H) industry by integrating generative artificial intelligence (AI) into customer service applications. Amazon Bedrock, a fully managed service for building generative AI applications, offers foundation models (FMs) to power chatbots that understand natural language queries and provide instant, contextual responses. Generative AI technology, such as large language models (LLMs), helps your chatbots understand common inquiries and offer personalized assistance, freeing your support agents to focus on complex issues.
Please note: [Disclaimer]
Architecture Diagram
[Architecture diagram description]
Step 1
T&H companies index FAQs, data, and documents stored in Amazon Simple Storage Service (Amazon S3) using Amazon Kendra, Amazon OpenSearch Service, or Amazon Bedrock Knowledge Bases to build vector knowledge stores for enterprise data.
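As one illustration of this step, the sketch below builds the S3 data source configuration you might pass to the Amazon Bedrock `CreateDataSource` API (through boto3's `bedrock-agent` client) when connecting an S3 bucket to a knowledge base. The bucket name, prefix, and knowledge base ID are hypothetical placeholders, not values from this Guidance's sample code.

```python
def build_s3_data_source(bucket_arn: str, prefixes: list[str]) -> dict:
    """Build the dataSourceConfiguration payload for an S3-backed
    Amazon Bedrock knowledge base data source (CreateDataSource shape)."""
    return {
        "type": "S3",
        "s3Configuration": {
            "bucketArn": bucket_arn,
            # Only objects under these prefixes (for example, FAQs) are ingested.
            "inclusionPrefixes": prefixes,
        },
    }

# Hypothetical bucket holding FAQs and policy documents:
config = build_s3_data_source("arn:aws:s3:::th-enterprise-docs", ["faqs/"])

# With boto3 (not executed here), you would attach it to a knowledge base:
#   boto3.client("bedrock-agent").create_data_source(
#       knowledgeBaseId="HYPOTHETICALKB", name="faq-docs",
#       dataSourceConfiguration=config)
```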
Step 2
Use Amazon Cognito or your custom authentication service to authenticate the guest or traveler. Initiate the conversation using the chat user interface (UI) from your existing web and mobile app.
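If you use Amazon Cognito for this step, a sign-in from the chat UI can go through Cognito's `InitiateAuth` API. The sketch below builds that request using the `USER_PASSWORD_AUTH` flow (which must be enabled on the app client); the client ID and credentials are placeholders.

```python
def build_initiate_auth_request(client_id: str, username: str, password: str) -> dict:
    """Build the parameters for Amazon Cognito's InitiateAuth call
    using the USER_PASSWORD_AUTH flow."""
    return {
        "ClientId": client_id,
        "AuthFlow": "USER_PASSWORD_AUTH",
        "AuthParameters": {"USERNAME": username, "PASSWORD": password},
    }

params = build_initiate_auth_request(
    "hypothetical-app-client-id", "guest@example.com", "placeholder-password"
)
# With boto3 (not executed here):
#   tokens = boto3.client("cognito-idp").initiate_auth(**params)
#   id_token = tokens["AuthenticationResult"]["IdToken"]  # pass to your chat backend
```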
Step 3
Amazon Lex identifies and understands the intent of the guest or traveler query to process it further.
Step 4
If the intent is recognized, Amazon Lex invokes an AWS Lambda function to process the guest or traveler intent.
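To make this step concrete, here is a minimal Lambda fulfillment handler using the Lex V2 event and response formats. The `BookRoom` intent and `City` slot are illustrative assumptions, not intents defined by this Guidance.

```python
def lambda_handler(event: dict, context=None) -> dict:
    """Fulfill a Lex V2 intent; echoes the resolved slot value back."""
    intent = event["sessionState"]["intent"]
    slots = intent.get("slots") or {}

    if intent["name"] == "BookRoom":
        # Slot values live under slots[name]["value"]["interpretedValue"].
        city = (slots.get("City") or {}).get("value", {}).get(
            "interpretedValue", "your destination"
        )
        message = f"Your room in {city} is booked."
        intent["state"] = "Fulfilled"
    else:
        message = "Sorry, I can't handle that request yet."
        intent["state"] = "Failed"

    # Lex V2 expects the updated session state plus any messages to show.
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": intent,
        },
        "messages": [{"contentType": "PlainText", "content": message}],
    }
```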
Step 5
If the intent is not recognized, Amazon Lex calls the pre-built QnAIntent, which uses FMs offered on Amazon Bedrock to answer guest or traveler queries from the enterprise knowledge store using a Retrieval Augmented Generation (RAG) workflow.
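Behind the scenes, this RAG step is equivalent to calling the Amazon Bedrock `RetrieveAndGenerate` API against the knowledge base. The sketch below builds that request shape; the knowledge base ID and model ARN are hypothetical placeholders.

```python
def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Build a RetrieveAndGenerate request (bedrock-agent-runtime):
    retrieve relevant chunks from the knowledge base, then have the
    FM generate a grounded answer."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

request = build_rag_request(
    "What is the hotel's late checkout policy?",
    kb_id="HYPOTHETICALKB",
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
)
# With boto3 (not executed here):
#   answer = boto3.client("bedrock-agent-runtime").retrieve_and_generate(**request)
#   print(answer["output"]["text"])
```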
Step 6
Amazon Bedrock FMs return the response from the knowledge store in natural language, and Amazon Lex serves it to the guest or traveler through the chat UI.
Step 7
If the guest or traveler asks for a customer support representative at any point, then Amazon Lex identifies the intent and passes the conversation to a human agent using an Amazon Connect chat.
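The handoff in this step can be initiated with the Amazon Connect `StartChatContact` API. The sketch below builds those parameters, passing bot context to the agent through contact attributes; the instance ID, contact flow ID, and attribute name are assumptions for illustration.

```python
def build_chat_escalation(instance_id: str, contact_flow_id: str,
                          guest_name: str, transcript_summary: str) -> dict:
    """Build the StartChatContact parameters used to hand the
    conversation to a human agent in Amazon Connect."""
    return {
        "InstanceId": instance_id,
        "ContactFlowId": contact_flow_id,
        "ParticipantDetails": {"DisplayName": guest_name},
        # Contact attributes give the agent context from the bot session.
        "Attributes": {"botTranscriptSummary": transcript_summary},
    }

params = build_chat_escalation(
    "hypothetical-instance-id",
    "hypothetical-contact-flow-id",
    "Guest",
    "Traveler asked to change a reservation; bot could not resolve.",
)
# With boto3 (not executed here):
#   session = boto3.client("connect").start_chat_contact(**params)
```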
Get Started
Deploy this Guidance
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
-
Operational Excellence
Amazon CloudWatch proactively monitors your applications and infrastructure, tracks key metrics, and sets alarms. This service can help you identify anomalies, rapidly troubleshoot issues, and ensure your architecture operates at peak efficiency.
-
Security
Amazon Cognito provides secure and simplified management of user identities and access for web and mobile applications. This minimizes the need for custom identity management code and enhances security through multi-factor authentication.
-
Reliability
OpenSearch Service is a managed service that automates software patching, failure detection, and failover procedures. This service allows you to automatically detect and replace failed nodes, reducing the overhead associated with self-managed clusters and improving data durability for mission-critical workloads.
-
Performance Efficiency
Amazon Kendra provides highly accurate and efficient natural language search capabilities. It uses machine learning (ML) optimizations to deliver fast, relevant search results on your data and handles large search volumes at scale.
-
Cost Optimization
Through Amazon Bedrock, you gain access to fully managed generative AI FMs through a simple API. This helps you optimize costs, as you pay only for the tokens used during inference and avoid the overhead of maintaining the underlying infrastructure.
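As a back-of-the-envelope illustration of the pay-per-token model (the per-1,000-token rates below are hypothetical, not published Amazon Bedrock prices):

```python
def estimate_inference_cost(input_tokens: int, output_tokens: int,
                            input_rate_per_1k: float,
                            output_rate_per_1k: float) -> float:
    """Estimate on-demand inference cost: you pay per token processed,
    with input and output tokens typically priced differently."""
    return (input_tokens / 1000) * input_rate_per_1k \
         + (output_tokens / 1000) * output_rate_per_1k

# Hypothetical rates: $0.003 per 1K input tokens, $0.015 per 1K output tokens.
cost = estimate_inference_cost(1200, 400, 0.003, 0.015)  # dollars per request
```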
-
Sustainability
Lambda, a serverless compute service, contributes to a more sustainable architecture by automatically scaling to meet demand and reusing execution environments. Using Lambda, you can optimize resource utilization and minimize energy consumption.
Implementation Resources
A detailed guide is provided to experiment with and use within your AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup, so you can prepare it for deployment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.