This Guidance helps you build and manage an artificial intelligence (AI)-based conversational workflow using a single user interface (UI) through the Conversational Designer by NLX. NLX implemented an AWS Cloud-based architecture that offers human-like interactions through AI-driven voice and text conversations. Airlines and hotels can use this Guidance to help travelers book reservations, check in, confirm their flight status, and more—all through conversational AI. The reference architecture can integrate with existing third-party booking systems and passenger service systems (PSS).
Architecture Diagram
Step 1
Configure your conversational AI workflow with the Conversational Designer by NLX, which is built using Amazon Simple Storage Service (Amazon S3), Amazon API Gateway, Amazon DynamoDB, and AWS Lambda functions.
Amazon Kinesis performs real-time analytics, and Amazon Timestream stores time-series data generated during conversations. Amazon Translate and Amazon Polly support conversational designs.
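The analytics path in this step can be illustrated with a short sketch. The following Python example is a minimal illustration, not part of this Guidance's code: the stream, database, and table names are hypothetical. It publishes a conversation event to Kinesis and records a response-latency measurement in Timestream using boto3.

import time
import json
import boto3

kinesis = boto3.client("kinesis")
timestream = boto3.client("timestream-write")

def record_conversation_event(conversation_id: str, intent: str, latency_ms: float) -> None:
    # Stream the raw event for real-time analytics (stream name is hypothetical).
    kinesis.put_record(
        StreamName="conversation-events",
        Data=json.dumps({"conversationId": conversation_id, "intent": intent}),
        PartitionKey=conversation_id,
    )
    # Store a time-series measurement for later analysis (database and table names are hypothetical).
    timestream.write_records(
        DatabaseName="conversations",
        TableName="turn_metrics",
        Records=[{
            "Dimensions": [{"Name": "intent", "Value": intent}],
            "MeasureName": "response_latency_ms",
            "MeasureValue": str(latency_ms),
            "MeasureValueType": "DOUBLE",
            "Time": str(int(time.time() * 1000)),
            "TimeUnit": "MILLISECONDS",
        }],
    )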
Step 2
Customer calls are directed to Amazon Connect, and the call progresses through a contact flow. Amazon Lex supports intelligent conversational chatbots to automate responses for a high volume of user contacts without compromising the customer’s experience.
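To illustrate how a Lambda function can fulfill an Amazon Lex intent within the contact flow, the following sketch returns a closing response for a hypothetical CheckFlightStatus intent. The intent and slot names are assumptions for illustration, not part of this Guidance.

# Minimal Amazon Lex V2 fulfillment handler (intent and slot names are hypothetical).
def lambda_handler(event, context):
    intent = event["sessionState"]["intent"]
    slots = intent.get("slots") or {}
    flight_slot = slots.get("FlightNumber")
    flight_number = (
        flight_slot["value"]["interpretedValue"] if flight_slot else "unknown"
    )

    # In a real flow, look up the flight status in a system of record here.
    message = f"Flight {flight_number} is on time."

    intent["state"] = "Fulfilled"
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": intent,
        },
        "messages": [{"contentType": "PlainText", "content": message}],
    }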
Step 3
Customer requests are sent to NLX’s natural language understanding (NLU) engine, which is built using Amazon Elastic Container Service (Amazon ECS), Amazon DynamoDB, Amazon Translate, and Amazon ElastiCache.
The engine analyzes input text to determine the meaning behind the customer request. Its scalability helps minimize response times and makes it resilient to failure.
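The NLU engine's API is specific to NLX, so the following Python sketch is purely illustrative: the endpoint, payload, and cache host are hypothetical. It shows a cache-aside pattern in which an interpretation is served from ElastiCache (Redis) when available and otherwise fetched over HTTPS, which is one way such an engine can keep response times low.

import json
import urllib.request
import redis  # redis-py client for ElastiCache (Redis)

# Endpoint and cache host are hypothetical placeholders.
NLU_ENDPOINT = "https://nlu.example.internal/interpret"
cache = redis.Redis(host="my-elasticache-endpoint", port=6379)

def interpret(utterance: str) -> dict:
    cached = cache.get(utterance)
    if cached:
        return json.loads(cached)

    request = urllib.request.Request(
        NLU_ENDPOINT,
        data=json.dumps({"text": utterance}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())

    # Cache the interpretation briefly to reduce repeated NLU calls.
    cache.setex(utterance, 300, json.dumps(result))
    return result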
Step 4
Use an API Gateway and Lambda integration to create HTTPS-based API requests that access data from systems of record, such as a booking engine, central reservation system, loyalty system, and PSS.
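For example, a Lambda function behind API Gateway might call a downstream reservation API over HTTPS. The sketch below assumes an API Gateway proxy integration and a hypothetical booking endpoint; it is not the actual integration code for any specific PSS.

import json
import urllib.request

# Hypothetical system-of-record endpoint.
BOOKING_API = "https://booking.example.com/reservations"

def lambda_handler(event, context):
    # The API Gateway proxy integration delivers the request body as a JSON string.
    payload = json.loads(event.get("body") or "{}")

    request = urllib.request.Request(
        BOOKING_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        reservation = json.loads(response.read())

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(reservation),
    }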
Step 5
Use AWS Identity and Access Management (IAM) and AWS Security Token Service (AWS STS) to create roles and temporary tokens that securely authorize access to various services.
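As a sketch of how temporary credentials can be issued, the following Python example uses AWS STS to assume a narrowly scoped role and then creates a client from the returned credentials. The role ARN and session name are placeholders.

import boto3

sts = boto3.client("sts")

# Assume a narrowly scoped role (the role ARN is a placeholder).
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/BookingReadOnlyRole",
    RoleSessionName="conversational-ai-session",
    DurationSeconds=900,
)

credentials = assumed["Credentials"]

# Use the temporary credentials to access only what the role permits.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)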
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
The reference architecture can be scripted using AWS CloudFormation, added to your own development pipeline, and deployed in your cloud environment. Use Amazon CloudWatch to increase your observability with application and service-level metrics, personalized dashboards, and logs.
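For example, a component can publish an application-level metric to CloudWatch with a few lines of Python; the namespace, metric name, and dimension below are hypothetical.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a custom application metric (namespace and metric name are hypothetical).
cloudwatch.put_metric_data(
    Namespace="ConversationalAI",
    MetricData=[{
        "MetricName": "BookingIntentFulfilled",
        "Dimensions": [{"Name": "Channel", "Value": "Voice"}],
        "Value": 1,
        "Unit": "Count",
    }],
)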
Security
With AWS STS, you can give IAM users temporary, limited-privilege credentials, which helps you protect resources through restricted access. The permissions for each user are controlled through IAM roles.
All data is encrypted both in-transit and at rest. You can use customer-controlled AWS Key Management Service (AWS KMS) keys for encryption. Although the solution is serverless, the Lambda components can run within a customer’s virtual private cloud (VPC), accessing external services (such as Amazon Lex) only through a customer’s approved endpoints.
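As an illustration of encrypting data with a customer-managed key, the following sketch uses AWS KMS through boto3; the key alias and sample payload are placeholders.

import boto3

kms = boto3.client("kms")

# Encrypt a payload with a customer-managed key (the alias is a placeholder).
encrypted = kms.encrypt(
    KeyId="alias/conversational-ai-data",
    Plaintext=b"passenger-record-identifier",
)

# The ciphertext can be stored or transmitted; decrypt it when needed.
decrypted = kms.decrypt(CiphertextBlob=encrypted["CiphertextBlob"])
plaintext = decrypted["Plaintext"]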
Reliability
The serverless components in this architecture provide fault tolerance through automatic retries and high availability, which is achieved by deploying the components across multiple Availability Zones.
Performance Efficiency
Because the serverless components of this architecture are scalable, the architecture can scale up to handle the concurrent processing of potentially thousands of calls or scale down when there are no pending calls to process.
Cost Optimization
When you use serverless services, you do not have to maintain servers. Instead of paying to run servers, you are only charged for the resources you consume, such as CPU or memory. Code only runs when the serverless application needs back-end functions, and the code automatically scales up as needed.
Sustainability
By using managed services and dynamic scaling, you can minimize the environmental impact of back-end services. Instances that run behind some of the services are powered by AWS Graviton3 Arm-based processors, which use up to 60% less energy than comparable x86-based instances.
Implementation Resources
A detailed guide is provided to experiment and use within your AWS account. It walks through each stage of working with this Guidance, including deployment, usage, and cleanup, so you can prepare it for your environment.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.