Guidance for Personalized Experiences with NLX Conversational AI on AWS

Overview

This Guidance helps you build and manage an artificial intelligence (AI)-based conversational workflow using a single user interface (UI) through the Conversational Designer by NLX. NLX implemented an AWS Cloud-based architecture that offers human-like interactions through AI-driven voice and text conversations. Airlines and hotels can use this Guidance to help travelers book reservations, check in, confirm their flight status, and more—all through conversational AI. The reference architecture can integrate with existing third-party booking systems and passenger service systems (PSS).

How it works

This reference architecture helps airlines and hotels deliver a secure, integrated experience for booking and managing travel.

Well-Architected Pillars

The architecture diagram above is an example of a solution designed with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

The reference architecture can be scripted using AWS CloudFormation, added to your own development pipeline, and deployed in your cloud environment. Use Amazon CloudWatch to increase your observability with application and service-level metrics, personalized dashboards, and logs.
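As a sketch of what scripting this architecture could look like, the snippet below builds a minimal CloudFormation template as a Python dictionary: a hypothetical Lambda function (resource names and the inline handler are illustrative, not part of the actual Guidance) plus a CloudWatch alarm on its `Errors` metric, combining the deployment and observability points above.

```python
import json

# Minimal CloudFormation template sketch (hypothetical resource names):
# a Lambda function, its execution role, and a CloudWatch alarm that
# fires when the function reports any errors in a 5-minute period.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "BookingHandlerRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"Service": "lambda.amazonaws.com"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "ManagedPolicyArns": [
                    "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
                ],
            },
        },
        "BookingHandler": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Runtime": "python3.12",
                "Handler": "index.handler",
                "Role": {"Fn::GetAtt": ["BookingHandlerRole", "Arn"]},
                "Code": {"ZipFile": "def handler(event, context):\n    return {}"},
            },
        },
        "BookingHandlerErrorAlarm": {
            "Type": "AWS::CloudWatch::Alarm",
            "Properties": {
                "Namespace": "AWS/Lambda",
                "MetricName": "Errors",
                "Dimensions": [
                    {"Name": "FunctionName", "Value": {"Ref": "BookingHandler"}}
                ],
                "Statistic": "Sum",
                "Period": 300,
                "EvaluationPeriods": 1,
                "Threshold": 1,
                "ComparisonOperator": "GreaterThanOrEqualToThreshold",
            },
        },
    },
}

# This JSON body could be passed to CloudFormation from a CI/CD pipeline.
template_body = json.dumps(template, indent=2)
```

Templates like this one can be version-controlled and deployed from your existing pipeline, so the whole stack is reproducible across environments.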

Read the Operational Excellence whitepaper 

With AWS Security Token Service (AWS STS), you can give IAM users temporary, limited-privilege credentials, which helps you protect resources through restricted access. The permissions for each user are controlled through IAM roles.
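The temporary-credential pattern can be sketched as follows. The role ARN, session name, and Lex-only session policy are hypothetical illustrations; the point is that an inline session policy can only narrow, never expand, what the assumed role allows, and the short duration limits exposure if credentials leak.

```python
import json

def build_assume_role_request(role_arn: str, session_name: str) -> dict:
    """Build the parameters for an sts.assume_role call that issues
    temporary, limited-privilege credentials. The inline session
    policy further restricts whatever the role itself permits
    (a hypothetical Lex-runtime-only scope is shown here)."""
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["lex:RecognizeText", "lex:RecognizeUtterance"],
            "Resource": "*",
        }],
    }
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "DurationSeconds": 900,  # short-lived: credentials expire after 15 minutes
        "Policy": json.dumps(session_policy),
    }

# In a real deployment these parameters would be passed to
# boto3.client("sts").assume_role(**params); shown unexecuted here.
params = build_assume_role_request(
    "arn:aws:iam::123456789012:role/ConversationRole",  # hypothetical role
    "nlx-session",
)
```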

All data is encrypted both in transit and at rest. You can use customer-controlled AWS Key Management Service (AWS KMS) keys for encryption. Although the solution is serverless, the Lambda components can run within a customer’s virtual private cloud (VPC), accessing external services (such as Amazon Lex) only through a customer’s approved endpoints.
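One way the "approved endpoints only" access could be expressed is with a VPC interface endpoint, sketched below as a CloudFormation resource built in Python. All IDs are hypothetical placeholders; the service name follows the documented `com.amazonaws.<region>.runtime-v2-lex` pattern for the Amazon Lex V2 runtime, which keeps Lambda-to-Lex traffic off the public internet.

```python
# Sketch (hypothetical VPC, subnet, and security group IDs) of a VPC
# interface endpoint that lets Lambda functions inside a customer VPC
# reach the Amazon Lex V2 runtime privately, so the functions never
# need a route to the public internet.
lex_endpoint = {
    "Type": "AWS::EC2::VPCEndpoint",
    "Properties": {
        "VpcEndpointType": "Interface",
        "ServiceName": "com.amazonaws.us-east-1.runtime-v2-lex",
        "VpcId": "vpc-0123example",
        "SubnetIds": ["subnet-0aaaexample", "subnet-0bbbexample"],
        "SecurityGroupIds": ["sg-0cccexample"],
        "PrivateDnsEnabled": True,  # resolve the Lex hostname to the endpoint
    },
}
```

Security groups on the endpoint give the customer a single, auditable control point for which resources may call Lex at all.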

Read the Security whitepaper 

The serverless components in this architecture provide fault tolerance through automatic retries and achieve high availability by running across multiple Availability Zones.

Read the Reliability whitepaper 

Because the serverless components of this architecture are scalable, the architecture can scale up to handle the concurrent processing of potentially thousands of calls or scale down when there are no pending calls to process.

Read the Performance Efficiency whitepaper 

When you use serverless services, you do not have to maintain servers. Instead of paying to run servers, you are only charged for the resources you consume, such as CPU or memory. Code only runs when the serverless application needs back-end functions, and the code automatically scales up as needed.

Read the Cost Optimization whitepaper 

By using managed services and dynamic scaling, you can minimize the environmental impact of back-end services. Instances that run behind some of the services are powered by AWS Graviton3 processors, which are Arm-based and use up to 60% less energy than comparable x86-based instances.

Read the Sustainability whitepaper 

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.