Amazon Lex features

Overview

Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models to design, build, test, and deploy voice and text conversational interfaces in applications. Lex integrates with AWS Lambda, so you can trigger functions that run your back-end business logic for data retrieval and updates. Once built, your bot can be deployed directly to contact centers, chat and text platforms, and IoT devices. Lex provides rich insights and pre-built dashboards to track metrics for your bot.


How Lex works

Powered by the same technology as Alexa, Amazon Lex provides you with the tools to tackle challenging deep learning problems, such as speech recognition and language understanding, through an easy-to-use, fully managed service. Amazon Lex integrates with AWS Lambda, which you can use to trigger functions that run your back-end business logic for data retrieval and updates. Once built, your bot can be deployed directly to chat platforms, mobile clients, and IoT devices. You can also use the reports provided to track metrics for your bot. Amazon Lex provides a scalable, secure, easy-to-use, end-to-end solution to build, publish, and monitor your bots.
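
As a minimal illustration of the runtime side, the sketch below uses the AWS SDK for Python (boto3) to send a text utterance to a deployed bot and print its replies; the bot ID, alias ID, and session ID are placeholders for values from your own bot.

import boto3

# Lex V2 runtime client; region and credentials come from your AWS configuration.
lex = boto3.client("lexv2-runtime", region_name="us-east-1")

# Placeholder identifiers -- replace with your bot's actual values.
response = lex.recognize_text(
    botId="BOT_ID",
    botAliasId="BOT_ALIAS_ID",
    localeId="en_US",
    sessionId="user-123",  # any stable ID that identifies the conversation
    text="I'd like to book a hotel in Seattle",
)

# Lex returns the matched intent, resolved slots, and the next prompt for the user.
for message in response.get("messages", []):
    print(message["content"])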

Amazon Lex + Generative AI

Amazon Lex leverages the power of Generative AI and Large Language Models (LLMs) to enhance the builder and customer experience. As the demand for conversational AI continues to grow, developers are seeking ways to enhance their chatbots with human-like interactions. Large language models can be highly useful in this regard by providing automated responses to frequently asked questions, analyzing customer sentiment and intents to route calls appropriately, generating summaries of conversations to help agents, and even automatically generating emails or chat responses to common customer inquiries. This new generation of AI-powered assistants provides seamless self-service experiences that delight customers.

Amazon Lex is committed to infusing Generative AI into all parts of the builder and end-user experiences to help increase containment while resolving increasingly complex use cases with confidence. Amazon Lex has launched the following features to empower developers and users alike:

Generative AI

Conversational FAQs

Builders can easily enable conversational answers to commonly asked customer questions by leveraging a new intent type that queries an authorized knowledge source and utilizes foundation models from Amazon Bedrock to provide an accurate response. This retrieval-augmented generation (RAG) based solution provides a customized, conversational request and response framework that allows customers to get automated answers quickly and reliably, and improve self-service further.
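
As a rough sketch of how this might be wired up programmatically, the example below attaches the built-in QnA intent to a bot draft and points it at an Amazon Bedrock knowledge base using boto3. The ARNs and bot identifiers are placeholders, and the configuration field names are assumptions that should be checked against the current CreateIntent API reference.

import boto3

lexm = boto3.client("lexv2-models")

# Sketch: attach the built-in QnA intent to an existing bot draft.
# Knowledge base ARN, model ARN, and bot identifiers are placeholders; verify
# the configuration field names against the CreateIntent reference before use.
lexm.create_intent(
    intentName="FAQIntent",
    parentIntentSignature="AMAZON.QnAIntent",
    botId="BOT_ID",
    botVersion="DRAFT",
    localeId="en_US",
    qnAIntentConfiguration={
        "dataSourceConfiguration": {
            "bedrockKnowledgeStoreConfiguration": {
                "bedrockKnowledgeBaseArn": "arn:aws:bedrock:us-east-1:123456789012:knowledge-base/KB_ID"
            }
        },
        "bedrockModelConfiguration": {
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
        },
    },
)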

Assisted slot resolution

This feature leverages large language models (LLMs) to resolve slot values when the native NLU is unable to. Customers can provide free-form responses to basic questions, and the LLM's enhanced reasoning capabilities will resolve the response to a specified format understood by the slot definitions.

Descriptive bot builder

This generative AI-based feature allows bot developers to create a full bot based on a user's prompt. Simply provide a description in natural language to generate a baseline bot, which can be further refined.

Utterance generation

Builders can now automatically generate variations of sample utterances to improve intent classification accuracy with minimal effort. Using LLMs, Lex provides sample utterances for training an intent based on the name, description, and existing utterances present in the intent.

Natural conversations

Amazon Lex provides automatic speech recognition and natural language understanding technologies to create a Speech Language Understanding system. Amazon Lex is able to learn the multiple ways users can express their intent based on a few sample utterances provided by the bot builder. The speech language understanding system takes natural language speech and text input, understands the intent behind the input, and fulfills the user intent by invoking the appropriate response. 

As the conversation develops, being able to classify utterances accurately requires managing context across multi-turn conversations. Amazon Lex supports context management natively, so you can manage the context directly without the need for custom code. As initial prerequisite intents are filled, you can create “contexts” to invoke related intents. This simplifies bot design and expedites the creation of conversational experiences. With Amazon Lex, you can disambiguate user requests based on the preceding conversation exchange.
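
A minimal sketch of context management with boto3, assuming a hypothetical travel bot: the first intent sets an output context when it completes, and a follow-up intent declares that context as an input so it is only considered while the context is active. All identifiers are placeholders.

import boto3

lexm = boto3.client("lexv2-models")

# Intent that activates a context once it has been fulfilled.
lexm.create_intent(
    intentName="BookHotel",
    sampleUtterances=[{"utterance": "Book a hotel"}],
    outputContexts=[{"name": "booking_confirmed", "timeToLiveInSeconds": 300, "turnsToLive": 5}],
    botId="BOT_ID", botVersion="DRAFT", localeId="en_US",
)

# Follow-up intent that is only eligible while "booking_confirmed" is active.
lexm.create_intent(
    intentName="AddRentalCar",
    sampleUtterances=[{"utterance": "Add a rental car"}],
    inputContexts=[{"name": "booking_confirmed"}],
    botId="BOT_ID", botVersion="DRAFT", localeId="en_US",
)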

Amazon Lex bots support multi-turn conversations. Once an intent has been identified, users are prompted for the information required to fulfill it (for example, if “Book hotel” is the intent, the user is prompted for the location, check-in date, number of nights, etc.). Amazon Lex gives you an easy way to build multi-turn conversations for your chatbots. You simply list the slots/parameters you want to collect from your bot users, as well as the corresponding prompts, and Amazon Lex takes care of orchestrating the dialogue by prompting for the appropriate slot.
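
The client-side view of that orchestration can be sketched with boto3 as below, assuming a hypothetical “BookHotel” bot: each recognize_text call returns the dialog action Lex plans next, including which slot it is currently eliciting. The identifiers and utterances are placeholders.

import boto3

lex = boto3.client("lexv2-runtime")

# Placeholder identifiers for an illustrative "BookHotel" bot.
params = dict(botId="BOT_ID", botAliasId="BOT_ALIAS_ID",
              localeId="en_US", sessionId="user-123")

# Each turn, Lex tracks which slots are still missing and prompts for the next one.
for user_text in ["Book a hotel", "Seattle", "next Friday", "2 nights"]:
    response = lex.recognize_text(text=user_text, **params)
    dialog_action = response["sessionState"].get("dialogAction", {})
    if dialog_action.get("type") == "ElicitSlot":
        print("Lex is asking for slot:", dialog_action["slotToElicit"])
    for message in response.get("messages", []):
        print("Bot:", message["content"])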

The Amazon Lex speech recognition engine has been trained on telephony audio (8 kHz sampling rate), providing increased speech recognition accuracy for telephony use-cases. When building a conversational bot with Amazon Lex, the 8 kHz support allows for higher fidelity with telephone speech interactions, such as through a contact center application or help desk. Lex also supports advanced capabilities like wait and continue, barge-in support, and intelligent pausing for more natural sounding conversations. 

Builder productivity

Visual Conversation Builder 

The Visual Conversation Builder in the Amazon Lex console is a drag-and-drop conversation builder that accelerates bot building. Simply connect conversation nodes and easily iterate and test conversation designs in a no-code environment. It empowers any user to quickly build sophisticated and natural automated interactions, view conversation intent at a glance, and get visual feedback as changes are made. 

Streaming conversations

Natural conversations are punctuated with pauses and interruptions. For example, a caller may ask to pause the conversation or hold the line while looking up the necessary information before answering a question (for example, retrieving credit card details when making a bill payment). With streaming conversation APIs, you can pause a conversation and handle interruptions directly as you configure the bot. You can quickly enhance the conversational capability of virtual contact center agents or smart assistants.

Automated Chatbot Designer

Amazon Lex V2 offers an Automated Chatbot Designer that simplifies bot design by utilizing existing conversation transcripts. The designer analyzes the transcripts and proposes an initial bot design with intents and slot types. You can then customize the design by adding prompts, testing the bot, and deploying it.
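
A hedged sketch of kicking off that analysis with boto3 is shown below; the S3 bucket, bot identifiers, and transcript format value are placeholders and should be verified against the StartBotRecommendation API reference for the transcript type your contact center produces.

import boto3

lexm = boto3.client("lexv2-models")

# Sketch: point the designer at conversation transcripts stored in S3.
recommendation = lexm.start_bot_recommendation(
    botId="BOT_ID",
    botVersion="DRAFT",
    localeId="en_US",
    transcriptSourceSetting={
        "s3BucketTranscriptSource": {
            "s3BucketName": "my-conversation-transcripts",  # placeholder bucket
            "transcriptFormat": "Lex",
        }
    },
)

# The job runs asynchronously; poll its status before reviewing proposed intents.
print("Recommendation job:", recommendation["botRecommendationId"])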

Test Workbench 

Test Workbench enables you to author and execute test sets to measure bot performance as you add new use cases and updates. Lex can automatically generate audio and text test sets from previous user interactions, so after making changes you can verify that your bot still meets your performance criteria. Lex then provides aggregated results and presents detailed insights into speech transcription, intent matching, and slot resolution.

Analytics 

Lex Analytics gives you access to prebuilt dashboards detailing key metrics such as the total number of conversations and intent recognition rates. Analytics helps you understand where in a conversation users run into trouble and how they navigate across intents.
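
Beyond the console dashboards, the same metrics can be queried programmatically. The sketch below is an assumption-laden example using the ListIntentMetrics operation via boto3; the metric name, statistic, and response shape should be confirmed against the API reference.

import boto3
from datetime import datetime, timedelta

lexm = boto3.client("lexv2-models")

# Sketch: pull a simple intent metric for the last week (bot ID is a placeholder,
# and the metric/statistic names are assumptions used to illustrate the call shape).
end = datetime.utcnow()
result = lexm.list_intent_metrics(
    botId="BOT_ID",
    startDateTime=end - timedelta(days=7),
    endDateTime=end,
    metrics=[{"name": "Count", "statistic": "Sum"}],
)
for item in result.get("results", []):
    print(item)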

Powerful Lifecycle Management Capabilities

Amazon Lex lets you apply versioning to the Intents, Slot Types, and Bots that you create. Versioning and rollback mechanisms enable you to easily maintain code as you test and deploy in a multi-developer environment. You can create multiple aliases for each Amazon Lex bot and associate a different version with each, such as “production,” “development,” and “test.” This allows you to continue making improvements and changes to the bot and release them as new versions under one alias, removing the need to update all the clients when a new version of the bot is deployed.
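
For example, a release step might snapshot the DRAFT locale into a numbered version and repoint an existing “production” alias at it, roughly as sketched below with boto3; the bot and alias IDs are placeholders.

import boto3

lexm = boto3.client("lexv2-models")

# Snapshot the current DRAFT locale into a new numbered version.
# Version creation is asynchronous; wait for it to become available before routing traffic.
version = lexm.create_bot_version(
    botId="BOT_ID",
    botVersionLocaleSpecification={"en_US": {"sourceBotVersion": "DRAFT"}},
)["botVersion"]

# Point the "production" alias at that version; clients keep calling the alias,
# so they pick up the new version without any change on their side.
lexm.update_bot_alias(
    botId="BOT_ID",
    botAliasId="PROD_ALIAS_ID",
    botAliasName="production",
    botVersion=version,
)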

One-click deployment to multiple platforms

Amazon Lex allows you to easily publish your bot to chat services directly from the Amazon Lex console, reducing multi-platform development efforts. Rich formatting capabilities provide an intuitive user experience tailored to chat platforms like Facebook Messenger, Slack, and Twilio SMS.

AWS service integrations

Amazon Bedrock is a service that makes foundation models from Amazon and leading AI startups available through an API, so you can choose from various models to power generative-AI capabilities. Amazon Lex leverages Bedrock to call upon these foundation models to improve the builder experience and the end user experience. 

Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk, and build entirely new categories of speech-enabled products. You can use Polly to respond to your users in speech interactions. In addition to Standard TTS voices, Amazon Polly offers Neural Text-to-Speech (NTTS) voices that deliver advanced improvements in speech quality through a new machine learning approach.
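
As a small illustration, the boto3 sketch below synthesizes a bot reply with a neural voice; the voice, text, and output file are arbitrary examples.

import boto3

polly = boto3.client("polly")

# Synthesize a reply with a Neural TTS voice (voice ID is an example).
speech = polly.synthesize_speech(
    Text="Your reservation is confirmed for two nights in Seattle.",
    VoiceId="Joanna",
    Engine="neural",
    OutputFormat="mp3",
)

# Write the returned audio stream to a file for playback.
with open("reply.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())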

Amazon Lex natively supports integration with AWS Lambda for data retrieval, updates, and business logic execution. The serverless compute capacity allows effortless execution of business logic at scale while you focus on developing bots. You can use AWS Lambda to easily integrate with your existing enterprise applications and databases: you just write your integration code, and AWS Lambda automatically runs it when needed to send or retrieve data from any external system. You can also access various AWS services, such as Amazon DynamoDB for persisting conversation state and Amazon SNS for notifying end users.
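
A minimal sketch of a Lex V2 fulfillment code hook in Python is shown below; the DynamoDB table name and confirmation message are hypothetical, and a real handler would add validation and error handling.

import boto3

# Hypothetical table for storing fulfilled requests.
table = boto3.resource("dynamodb").Table("BookingsTable")

def lambda_handler(event, context):
    intent = event["sessionState"]["intent"]
    slots = intent["slots"]

    # Persist the interpreted slot values collected by Lex.
    table.put_item(Item={
        "sessionId": event["sessionId"],
        "slots": {name: value["value"]["interpretedValue"]
                  for name, value in slots.items() if value},
    })

    # Tell Lex the intent is fulfilled and return a closing message.
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent["name"], "state": "Fulfilled"},
        },
        "messages": [
            {"contentType": "PlainText", "content": "All set. Your request has been recorded."}
        ],
    }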

Customer service conversations often involve finding specific information to answer certain questions. Amazon Kendra provides you with a highly accurate and easy-to-use intelligent search service powered by machine learning. You can add a Kendra search intent to find the most accurate answers from unstructured documents and FAQs. You simply define the search index parameters in the intent as part of the bot definition to expand its informational capabilities.
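
One way this could look with boto3 is sketched below, adding the built-in Kendra search intent to a bot draft; the index ARN and bot identifiers are placeholders.

import boto3

lexm = boto3.client("lexv2-models")

# Sketch: add the built-in Kendra search intent so questions that don't match
# other intents can be answered from indexed documents and FAQs.
lexm.create_intent(
    intentName="FallbackSearch",
    parentIntentSignature="AMAZON.KendraSearchIntent",
    kendraConfiguration={
        "kendraIndex": "arn:aws:kendra:us-east-1:123456789012:index/INDEX_ID",
        "queryFilterStringEnabled": False,
    },
    botId="BOT_ID",
    botVersion="DRAFT",
    localeId="en_US",
)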

Contact center integrations

Amazon Lex is natively integrated with Amazon Connect, AWS's omnichannel cloud contact center, enabling developers to build conversational bots that can handle customer queries over chat or phone. You can also integrate Amazon Lex into any call center application using the APIs. Please visit Amazon Connect integration to learn more.

Genesys Cloud CX is a cloud contact center solution that unifies customers and agent experiences across multiple channels such as phone, text and chat. You can deploy your voice and text bots on the Genesys Cloud platform to enable self-service experiences and improve customer engagement. Please refer to Genesys Cloud integration for more information.

The Amazon Chime SDK is a set of real-time communications components that developers can use to quickly add audio calling, video calling, and screen sharing capabilities to their own web, mobile or telephony applications. Amazon Chime SDK integrates with Amazon Lex so you can easily enable conversational experiences powered by Amazon Lex in contact centers that use Session Initiation Protocol (SIP) for voice communication.

Amazon Lex is used by several AWS CCI partners, so you can seamlessly create self-service customer service virtual agents, informational bots, or application bots. Amazon Lex partners include Infosys, Quantiphi, and Xapp.ai. To learn more, please visit the AWS CCI and AWS CCI Partners pages.