AWS Contact Center

Enabling generative AI for better customer experience can be easy with Amazon Connect

Generative artificial intelligence (AI) was the hot topic for most of 2023 and has not slowed down in 2024, with new innovations and creative use cases for this remarkable technology coming forward every day. The promise of generative AI for customer experience (CX) use cases is clear and exciting. With any exciting new idea, however, there are considerations on how to use it responsibly and safely, and how to maximize the results. In part one of our three-part blog post series, we talked about how generative AI can immediately impact three major parts of a contact center’s operations: improve agent efficiency, improve analytics and quality monitoring, and improve customer self-service. In part two of the series, we dove deeper into those three areas, the challenges that can come with leveraging generative AI, and how you can approach them to mitigate risk while maximizing value. Now, in part three, we will share how you can quickly and easily enable generative AI for customer service use cases.

Some of the questions we get asked most when discussing generative AI with our CX customers are, “How do I implement generative AI without having to train my own model?” and “How do I make sure my responses aren’t hallucinations or biased?” These are very appropriate concerns that need to be addressed and mitigated in any generative AI implementation. The inevitable follow-on concern is, “This sounds hard!” Let’s set the table a bit and talk about the different ways generative AI “gets done,” and then focus on the best part: Amazon Connect, the AI-powered contact center from Amazon Web Services (AWS), makes it easy!

The generative AI technology stack

There are three layers of the stack to consider when adopting generative AI. First comes infrastructure (the compute power to run the complex algorithms generative AI uses to create responses), then the building of the foundation models (FMs) and large language models (LLMs) themselves, and at the top, the applications that consume them. AWS offers solutions at each of the three layers of the generative AI stack.

Applications that leverage foundation models

The first deployment option is pre-built applications. Applications like Amazon Q and Amazon Q in Connect use pre-built models, which significantly reduces the time and effort required for development and deployment. They come ready-made with trained weights and configurations, allowing you to integrate them quickly into your workflows. Pre-built models from reputable sources often come with demonstrated performance and accuracy on benchmark datasets or real-world applications. This can provide confidence in their capabilities without needing to conduct extensive validation and testing from scratch.

Developing a high-quality generative AI model from scratch is resource-intensive, requiring expertise, computational resources, and time. Using a pre-built model can be far more cost-effective, especially for smaller teams or organizations without extensive resources for model development, and offers significant benefits in time savings, performance assurance, and ease of integration. This allows your team to focus on developing application-specific features, user interfaces, or business logic rather than investing time in foundational model development and optimization. It is particularly advantageous for applications where rapid deployment and proven performance are paramount, while still providing access to advanced AI capabilities without the need for extensive internal development resources. However, pre-built models may not adequately address specific needs or adapt well to unique datasets or tasks, as their training is geared toward more “generic” use cases. Additionally, you need to be aware of what data you are sending to the model and whether it’s being used for further training.

Tools to build with foundation models and large language models

The next option is using a managed AI service like Amazon Bedrock. This can handle large-scale deployments and varying workloads, with AWS managing the infrastructure, updates, and scaling, allowing you to focus on training and customizing. If you’re curious about building applications with Amazon Bedrock, check out this post: Build generative AI applications on Amazon Bedrock — the secure, compliant, and responsible foundation.

Infrastructure for foundation model training and inference

The last option is to build your own model using a service such as Amazon SageMaker, or a bespoke model using AWS infrastructure and something like PyTorch or TensorFlow. Building your own model allows you to tailor it precisely to your use case, whether it’s generating specific types of content, handling unique data formats, or incorporating specialized features. If your application requires knowledge or features that are specific to your industry or problem domain, a custom-built model can be more effective. This is particularly relevant in fields like healthcare, financial services, or manufacturing where specific regulations, data types, or performance requirements must be met. Services like SageMaker integrate well with other AWS services and third-party tools, facilitating a more comprehensive AI workflow. While this is a heavier lift than the pre-built models, it can be appropriate when you have some level of customization required that a pre-built model cannot meet. These also come at a potentially higher cost both in development and usage as you scale. Overall, using a managed AI service like SageMaker for generative AI allows you to focus more on developing and improving your models rather than managing infrastructure and operational tasks.

Responsible generative AI

When companies consider generative AI use cases, there are some terms that come up that cause no small amount of concern. Many stories about generative AI “hallucinations” and “machine learning bias” have worked their way through the media. So, what are they and what causes them? Hallucinations happen when the underlying model generates responses that sound plausible but are based on misinterpreted or nonexistent data. For example, if you ask a generative AI chatbot to give you five options for a right-handed gadget, but only three exist, it might make up data about two more to satisfy your request. It will sound reasonable, plausible, and have just enough data to make it look right…but it’s not. Generative AI foundation models get their bias from training data. All of the data used to train generative AI models has to come from somewhere, and by and large, it comes from people and the things we write. People, of course, aren’t perfect. We have biases, conscious and unconscious, and these biases pop up in those writings. As a result, AI models can inadvertently discriminate against particular demographic groups. This bias can manifest in different forms. The most common among them include:

  1. Biases that result from stereotypes. In this case, systems adjust to the existing perceptions and stereotypes that are present in the training data.
  2. Racial bias. This type is a subset of stereotypical generative AI biases, yet one of the most alarming ones. Analyzing the present situation and views on different races, algorithms may provide racially-biased content.
  3. Cultural bias. Another subset of stereotypical bias, this type demonstrates unfair treatment and flawed outputs toward particular cultures and nationalities.

Okay, so these sound really scary, right? Fortunately, they are not that hard to mitigate, including for a customer experience use case. When thinking about how to reduce the probability of hallucination, make sure you AREN’T using open-ended prompts, like having it write poems or generate content. Reducing the possibility of bias is a matter of controlling the data from which the model gets its responses. By providing the model a comprehensive, monitored, and tested dataset, YOU control what the model can use as an output. Unlike a public generative AI model, in the managed model you have the ultimate authority over what it can ingest as a source of truth for generating the responses. Therefore, you have the power to reduce the undesired behavior. Additionally, all of the new generative AI features announced for Amazon Connect are powered by Amazon Bedrock, where AWS has implemented automated abuse detection.

Using built-in generative AI capabilities in Amazon Connect

Over the past six months, Amazon Connect launched several new generative AI features that make it easy for you to leverage this exciting new technology within Amazon Connect, without the overhead of deploying, managing, or training your own generative AI models. These new features focus on the three areas we talked about in Part 1 and Part 2 of this blog series, and can be quickly and easily turned on in any existing Amazon Connect deployment, without any specialized training or knowledge required.

Creating agent efficiencies

Let’s take a look at how generative AI fosters agent empowerment. One of the first reactions we saw when generative AI first hit the public consciousness was, “I want to replace my agents with this.” After a bit of soul-searching and understanding of the capabilities and limitations of where the technology stands today, the better viewpoint is, “I can use this to make my agents better!” By enabling your agents to have fast, accurate, reliable information to relay to your customers, you can achieve new levels of agent efficiency and customer satisfaction. Average Handle Time can go down, and First Call Resolution can go up. Amazon Q in Connect, an evolution of Amazon Connect Wisdom, uses generative AI to deliver suggested responses and actions to agents so they can address customer questions. Amazon Q in Connect leverages the real-time conversation with the customer, along with relevant company content, to automatically recommend what to say or what actions an agent should take to better assist customers, with quick access to relevant knowledge articles and documents. Agents can also use natural language to chat directly with Amazon Q to receive generated responses, recommended actions, and links to more information.

With Amazon Q in Connect, you tell it where to get the data it can use for responses, specifically your knowledge base (or bases), and thus you are in complete control of the data used by the model. This means you need to make sure your input data is accurate! You control your input data, knowledge base domains, and all the data it can use to provide responses to customer inquiries. Optimizing your content is key for making the most out of Amazon Q in Connect. If you’ve policed your knowledge base articles well, it can greatly reduce the chance of hallucinations or Amazon Q using untrustworthy sources and biased material. Setting up Amazon Q in Connect is easy, as well. With pre-built integrations to popular document stores like Zendesk, Salesforce, SharePoint, and Amazon Simple Storage Service (Amazon S3), it’s as easy as creating a domain and pointing at your data store in the Amazon Connect Console.
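The console flow is the easiest path, but if you prefer infrastructure-as-code, the same setup can be sketched with the boto3 `qconnect` client (the API formerly known as Wisdom). This is a sketch under assumptions: the parameter shapes shown follow the Amazon Q in Connect API, but verify them against the current API reference before relying on this, and note that the calls require AWS credentials and appropriate permissions.

```python
def assistant_request(name):
    """Request body for creating an agent-facing assistant.

    The 'type' value AGENT comes from the Amazon Q in Connect
    (formerly Wisdom) API; treat the overall shape as an assumption."""
    return {"name": name, "type": "AGENT"}


def create_q_in_connect_assistant(name, knowledge_base_name):
    """Create an assistant and a custom knowledge base, then associate them.

    Sketch only: requires AWS credentials, and the parameter shapes
    should be verified against the current qconnect API reference."""
    import boto3  # imported lazily so assistant_request() works without the SDK

    client = boto3.client("qconnect")
    assistant = client.create_assistant(**assistant_request(name))["assistant"]
    kb = client.create_knowledge_base(
        name=knowledge_base_name,
        knowledgeBaseType="CUSTOM",  # CUSTOM = you supply the content yourself
    )["knowledgeBase"]
    client.create_assistant_association(
        assistantId=assistant["assistantId"],
        associationType="KNOWLEDGE_BASE",
        association={"knowledgeBaseId": kb["knowledgeBaseId"]},
    )
    return assistant["assistantId"], kb["knowledgeBaseId"]
```

Once the association exists, Amazon Q in Connect draws only on the content you ingested into that knowledge base, which is exactly the control over source data described above.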

Amazon Q in Connect does not need to be limited to knowledge base articles either. When using Amazon Connect Contact Lens, you can also have transcripts of every contact. Using evaluations, you can identify customer interactions that were a shining example of how to solve a problem. These transcripts can then be fed into the overall pool of resources that generative AI can draw upon to help solve customer issues. With Amazon Q in Connect, in addition to having the best library of resources at your disposal, you can include your best interactions as well, so every agent can have the lessons from your best agents helping them on every call.

Analytics and quality monitoring

We have always been good at collecting data from customer service interactions in contact centers. However, actually getting insights from that data has historically been challenging. In recent years, AI has helped us with transcription, categorization, and better “trend” visibility with existing data. Generative AI takes us another leap forward in understanding what is going on in the contact center by quickly, easily, and accurately summarizing and categorizing contacts. And, as with Amazon Q in Connect, it’s a managed, pre-trained model that’s set up in just a few clicks.

To improve customer interactions and make sure details are available for future reference, contact center managers often have to rely on the notes that agents manually create after every customer interaction. These notes include details on how a customer issue was addressed, key moments of the conversation, and any pending follow-up items. While helpful, these notes take time to write and validate, often reducing agent availability and increasing customer hold times.

Amazon Connect Contact Lens now provides generative AI-powered post-contact summaries, and enables contact center managers to more efficiently monitor and help improve contact quality and agent performance. For example, you can use summaries to track commitments made to customers and monitor prompt completion of follow-up actions. Moments after a customer interaction, Contact Lens condenses the conversation into a concise and coherent summary. Supervisors can use these summaries to quickly assess the reason and outcomes of calls without having to read long transcripts or listen to lengthy recordings. Agents can also use the summaries for repeat contacts to quickly get a view of past interactions and necessary follow-up tasks.

Supervisors and quality analysts can also use the Ask AI option available in Amazon Connect agent evaluations to generate AI-powered responses to questions without having to listen to or read the entire conversation. These questions can escape the rigid limits of yes/no responses like, “Was the customer satisfied?” and provide more insightful responses to questions such as, “How has this conversation potentially impacted the customer’s opinion of our company?” Contact Lens will review the conversation in its entirety and provide a response, with justification and references, reducing the time that the evaluator needs to spend with any given call. Of course, these responses can be overridden and revised as needed.

End-customer self-service

Generative AI often gets the most (and arguably misplaced) attention for end-customer self-service, but it certainly has some valuable uses. The misplacement comes from the belief that generative AI will be a replacement for all chatbots, and that it MUST be better. That’s not usually the case, although it certainly can be where ambiguity is likely. Think about what a chatbot is doing in a particular use case. For example, if my chatbot is simply checking my account balance, generative AI is overkill for the simple task of repeating a data point. Where generative AI can help is by making these conversations more dynamic, human, and personalized. In the past, we may have been able to provide a chatbot with a list of different greetings to select from, possibly adding a bit of personalization, such as a first name. With generative AI, we can take this even further by providing some basic information about the customer, such as name, location, and whether they have purchased from us in the past, and asking for a customized greeting.

For example, imagine a fictitious company, AnyCompany Fitness, has a customer named Susan, who lives in Seattle and has purchased items from them in the past. When leveraging Amazon Bedrock, we can include system prompts such as “You are answering customer calls.”, “Our company is AnyCompany Fitness.”, “Be personal and friendly.”, or “Respond as if you were a Millennial.”. Then, provide customer-specific information and a request, such as “A returning customer, Susan, is calling our company, AnyCompany Fitness. Susan lives in Seattle. Find out why they are calling.” This prompt, depending on other parameters that you set, could produce responses such as:

  • “Hello, this is Jean, the AI assistant for AnyCompany Fitness. How can I help you today, Susan?”
  • “Yo, Susan! What’s up, Seattle? This is Jean, the AI assistant at AnyCompany Fitness. Welcome back, girl! How can I help you today?”
  • “Good afternoon, this is Jean, an AI assistant at AnyCompany Fitness. Welcome back, Susan! How can I assist you on this fine Wednesday?”
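The greeting example above can be sketched as a call to the Amazon Bedrock Converse API via boto3. This is a minimal sketch, not a production implementation: the model ID, persona wording, and customer fields are illustrative assumptions, and the actual call requires AWS credentials and Bedrock model access in your account.

```python
def build_greeting_request(name, city, returning, persona="personal and friendly"):
    """Assemble the system instructions and the customer-specific message
    for a Bedrock Converse call. Field values here are illustrative."""
    system = [
        {"text": "You are answering customer calls for AnyCompany Fitness."},
        {"text": f"Be {persona}."},
    ]
    status = "a returning customer" if returning else "a new customer"
    user_text = (
        f"{status.capitalize()}, {name}, is calling our company, AnyCompany Fitness. "
        f"{name} lives in {city}. Greet them briefly and find out why they are calling."
    )
    messages = [{"role": "user", "content": [{"text": user_text}]}]
    return system, messages


def generate_greeting(name, city, returning=True):
    """Send the assembled prompt to Bedrock and return the generated greeting.

    Requires AWS credentials and access to the (assumed) model ID below."""
    import boto3  # imported lazily so the prompt builder works without the SDK

    client = boto3.client("bedrock-runtime")
    system, messages = build_greeting_request(name, city, returning)
    resp = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        system=system,
        messages=messages,
        inferenceConfig={"maxTokens": 200, "temperature": 0.8},
    )
    return resp["output"]["message"]["content"][0]["text"]
```

Because the persona and customer details are plain parameters, the same builder can produce the formal, casual, or generational variations shown above just by swapping the `persona` string.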

This can be applied where desired based on the tone and image of your company or brand, providing the ability to more dynamically personalize messaging than ever before. These personalizations do not need to be limited to messaging. The more you know about your customer, the more personalization you can provide. Have a large library of hold music and want to dynamically choose a song that might most resonate with Susan? Do you know when she was born? Maybe ask AI to select a song from your library that was popular during Susan’s junior year of high school and play that as the hold music. Depending on your business and customer relationship, the possibilities are almost endless.

As users become more and more comfortable with AI chatbots responding to human inputs, they are starting to talk to them more like a person. Amazon Lex now supports generative AI-assisted slot resolution, which can take an ambiguous input from a contact and map it to an intent. For example, if I am making a dinner reservation, the chatbot might ask me how many people are in my party. If I respond “two,” traditional AI will handle that perfectly fine. But if I say “my wife and I,” a traditional model will not map that to the correct value. The generative AI-assisted slot will be able to extrapolate the right response and return “two” to the reservation engine. This is all done with managed generative AI models that require no tuning or building.
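The managed Lex feature needs no code at all, but as a rough illustration of the mapping problem it solves, here is a deterministic sketch that resolves free-form party-size utterances to a number. This is a simplified, rule-based stand-in: the real feature infers the value with an LLM rather than with word lists like these, and the person-word list below is purely an assumption for demonstration.

```python
import re

NUMBER_WORDS = {
    "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
    "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
}
# Words that each refer to one person in phrases like "my wife and I".
PERSON_WORDS = {"i", "me", "wife", "husband", "partner", "friend",
                "son", "daughter", "mom", "dad", "colleague"}


def resolve_party_size(utterance):
    """Map a free-form party-size utterance to an integer, or None.

    Digits and number words resolve directly; otherwise, person
    references are counted, mimicking what generative slot
    resolution infers from context."""
    text = utterance.lower()
    digits = re.search(r"\d+", text)
    if digits:
        return int(digits.group())
    for word, value in NUMBER_WORDS.items():
        if re.search(rf"\b{word}\b", text):
            return value
    people = [w for w in re.findall(r"[a-z']+", text) if w in PERSON_WORDS]
    return len(people) or None
```

The brittleness of this rule-based approach (it would miss “the whole soccer team,” for instance) is precisely why handing the resolution to a managed LLM is attractive.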

Another new feature is a Descriptive Bot Builder that uses foundation models (FMs) from Amazon Bedrock to quickly create a bot structure in minutes. Developers simply describe the bot’s intended tasks and queries in natural language. Amazon Lex then generates sample intents, slots, and utterances to build the bot. For example, a developer could explain that the bot should handle customer food orders by capturing menu items, quantities, and sizes. It should also check order status and cancel orders. Amazon Lex would then create an initial bot to fit those parameters for a developer to review and further customize. To use Descriptive Bot Builder, sign up for Amazon Bedrock and log into the Amazon Lex Console. Click “Create Bot” and select the option for “Generative AI – Descriptive Bot Builder.” Descriptive Bot Builder is available in the US East (N. Virginia) and US West (Oregon) regions for Amazon Lex V2. This feature is only available for English-speaking locales to start. To learn more, please see the documentation for Descriptive Bot Builder.

Finally, Amazon Lex introduced a new built-in intent that can detect customer inquiries and leverage Amazon Bedrock FMs to provide answers during bot conversations. This intent utilizes the generative AI capabilities of Amazon Bedrock to identify customer questions and search for responses from various knowledge sources (e.g., “What are the baggage restrictions for my international flight?”). This feature simplifies the process of configuring questions and answers using task-oriented dialogue within Amazon Lex V2 intents. Additionally, this intent can recognize follow-up questions (e.g., “What about domestic flights?”) based on conversation history and provide appropriate responses. The native integration of Amazon Lex with Amazon Connect allows you to leverage Lex chatbots throughout your customer experience.

Explore how easy it is to leverage generative AI to improve your customer and agent experience

The best part of all of the contact center features we discussed is that they require no special knowledge or training, nor extensive development, to take advantage of. For example, with Amazon Q in Connect, simply add your knowledge base domain to Amazon Connect in the console. Generative AI-powered contact summaries are available by enabling Contact Lens with a few clicks. You can do the same with generative AI for end-customer self-service experiences. Simply activate these features and you can begin using generative AI to create and augment your chatbots. With Amazon Connect, the value gained by adding generative AI to your customer experience is as close as a few clicks away, so start experimenting today!

In case you missed it, check out parts 1 and 2 of this three-part generative AI for CX blog series:

Ready to transform your customer service experience with generative AI-powered Amazon Connect? Contact us.

About the authors:

Mike Wallace leads the Americas Solution Architecture Practice for Customer Experience at AWS.
Jason Douglas is a Principal Solutions Architect for Customer Experience at AWS.