AWS Contact Center

Generative AI unlocks 60% faster Japanese VoC analysis with Amazon Connect Contact Lens

Elevating customer experiences is paramount in today’s competitive landscape. By harnessing the power of generative AI, ANA X has transformed its approach to Voice of Customer (VoC) analysis, unlocking unprecedented insights and driving strategic improvements across its contact center operations with Amazon Connect. This innovative journey showcases how cutting-edge technology can revolutionize the way businesses understand and cater to their customers’ evolving needs.

Key outcomes:

  • Used generative AI to gain granular insights that were previously overlooked
  • 60% reduction in analysis time

About ANA X and ANA Systems

ANA X is the platform business company of the ANA Group. It utilizes the group’s customer base from aviation and travel to create services that enhance customers’ lives and address regional and social issues through human mobility.

ANA Systems, the IT professional arm of the ANA Group, supports the organization with advanced technological capabilities. Their CX Management Department Customer Communication Team (ASY) operates the contact center system that supports ANA X’s business.

Introduction: Why do we need VoC Analysis?

In today’s interconnected world, ANA X bridges the gap between physical and digital realms, offering travel products and mileage program services through various touchpoints. However, the true value of our offerings extends far beyond the products themselves. It encompasses the entire customer journey – from pre-purchase research to post-purchase support, website and app usability, and every interaction in between.

Our contact center serves as a crucial nexus for customer inquiries, providing invaluable insights into the customer experience. By meticulously analyzing these inquiries through VoC analysis, we gain a comprehensive understanding of our customers’ needs, pain points, and expectations. This deep insight drives our decision-making process, enabling us to implement improvements that not only enhance contact center efficiency but also elevate the overall user-friendliness of our services – a key factor in our business success.

Traditionally, we leveraged the Amazon Connect Contact Lens feature and text mining techniques to analyze handling times and inquiry reasons for workforce management improvements. While these methods provided valuable insights, we recognized an opportunity to further deepen our understanding of customer needs and further improve operational efficiency.

Recognizing these limitations, we turned our attention to the transformative potential of generative AI technology. This innovative approach has empowered us to accurately identify and analyze subtle trends and patterns that often eluded conventional analysis methods. In this blog post, we’ll explore how generative AI, particularly the latest large language model (LLM) technology, is revolutionizing our VoC analysis in contact centers.

Challenges in VoC analysis for contact centers

Since introducing Amazon Connect in 2022, we have been analyzing the transcribed data from the Contact Lens feature with text mining tools for VoC analysis, but several challenges remained.

Verbatim transcription data

Verbatim transcription involves transcribing audio as is, without omitting meaningless words such as “um,” “er,” and “uh.” While this conveys the atmosphere and nuance of the conversation, analyzing the data directly with text mining tools can result in meaningless words appearing as top words, hindering the analysis.

Call reason classification

Call reasons are classified based on predetermined categories by the call handlers’ subjective assessment after the call, leading to a disconnect from actual inquiry trends. With text mining, we performed detailed classification based on frequently occurring words. Because this approach ignored the words’ context, meaning, and relevance to business processes, the classifications often required revision. Ultimately, the text mining approach still produced classifications that depended heavily on the subjective perspective of the analyst interpreting the results.

Time-consuming analysis

To drive service improvements, we needed to conduct more detailed analyses and visually confirm call contents. Confirming the contents of around 100 calls took approximately three days, and the overall analysis, including classification, took one week.

Solution: A new VoC analysis method augmenting Contact Lens with generative AI using Amazon Bedrock

We conducted VoC analysis using Amazon Bedrock (Anthropic’s Claude 3 Sonnet).

The call analysis method we implemented is as follows:

  1. Use the LLM to infer and extract the customer’s inquiry reason from each call.
  2. Use the LLM to classify the extracted inquiry reasons and create categories.
  3. Categorize each call’s inquiry into the created categories.
  4. Aggregate the classified call reasons.
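The four steps above can be sketched in code. The authors ran Amazon Bedrock from Google Sheets via Apps Script; the following is a hypothetical, roughly equivalent sketch in Python using the AWS SDK (boto3) and the Bedrock Runtime API for Claude 3 Sonnet. The prompts, function names, and region are illustrative assumptions, not the production implementation.

```python
import json
from collections import Counter

try:
    import boto3  # AWS SDK for Python; needed only for the live Bedrock calls
except ImportError:
    boto3 = None

# Model ID for Anthropic's Claude 3 Sonnet on Amazon Bedrock
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def ask_claude(prompt: str, region: str = "ap-northeast-1") -> str:
    """Send one prompt to Claude 3 Sonnet via the Bedrock Runtime API.

    The region (Tokyo here) is an assumption for illustration.
    """
    client = boto3.client("bedrock-runtime", region_name=region)
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    })
    response = client.invoke_model(modelId=MODEL_ID, body=body)
    return json.loads(response["body"].read())["content"][0]["text"]

def extraction_prompt(transcript: str) -> str:
    """Step 1: infer the caller's inquiry reason, ignoring filler words."""
    return (
        "The following is a verbatim contact center transcript. "
        "Ignoring filler words, state the customer's reason for calling "
        "in one short sentence.\n\n" + transcript
    )

def category_prompt(reasons: list[str]) -> str:
    """Step 2: group the extracted reasons into business-relevant categories."""
    return (
        "Group the following inquiry reasons into a small set of "
        "business-relevant categories and list the category names:\n"
        + "\n".join(f"- {r}" for r in reasons)
    )

def classification_prompt(reason: str, categories: list[str]) -> str:
    """Step 3: map one extracted reason onto exactly one category."""
    return (
        "Classify this inquiry reason into exactly one of the categories "
        f"{categories}. Answer with the category name only.\n\nReason: " + reason
    )

def aggregate(labels: list[str]) -> Counter:
    """Step 4: tally how often each call-reason category occurs."""
    return Counter(labels)
```

In a pipeline along these lines, each transcript would pass through `ask_claude(extraction_prompt(...))`, the resulting reasons would seed the category list, and `aggregate` would produce the final counts for reporting.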

Figure 1. Analysis flow using LLM

To efficiently analyze the vast amount of call transcription data, we invoked Amazon Bedrock (Anthropic’s Claude 3 Sonnet) from Google Sheets via Apps Script.

Figure 2. Call reason analysis for approximately 200 calls at ANA X

Effects and advantages

Automatic call reason classification with LLM

Conventionally, analysts had to determine the classification method and categorize call reasons based on predefined items. With an LLM, call reasons can be classified automatically. Additionally, the LLM can recognize and classify data containing the frequent filler words and repetitions of verbatim transcriptions by considering their context.

Visualization of multiple call reasons within a single call

After a call, the handler selects one call reason from a predetermined list for aggregation. In reality, multiple issues are often addressed in a single call, but information beyond the main call content is not aggregated. While the system supports multiple selections, we adopted a single-selection approach to prioritize handler efficiency and keep call times short.

Using LLM for analysis allows visualizing detailed call reasons that were previously overlooked, enabling more granular analysis.

Reduced time for manual classification and detailed analysis (60% reduction)

Previously, analysts had to manually revise classifications and visually confirm call transcription data for detailed analysis. With LLM-based call reason classification, reviewing summaries can significantly reduce the workload.

Reduced workload for communicators

As a secondary effect, call handlers no longer need to input call reasons after each call, potentially reducing after-call work (ACW) time.

LLM implementation and learning process

For those considering implementing generative AI, deciding where to start can be challenging. We faced the same dilemma but were able to proceed smoothly by following the steps outlined below.

Exploring LLM possibilities

To explore the possibilities of LLM, we started by participating in a prompt contest hosted by Amazon Web Services (AWS). ANA X (the user department) and ASY (the system department) participated together in this contest, where we used an AI workbook integrating Amazon Bedrock and Google Spreadsheets to create prompts and gain valuable experience in understanding generative AI fundamentals and prompt writing.

Internal prompt workshop

Building on the learnings from the AWS prompt contest, we organized an internal prompt workshop as the next step. In this workshop, we followed a format similar to the AWS prompt contest but used anonymized real call data, creating a more practical setting. Participants from different departments formed teams, combining diverse perspectives to create prompts.

Figure 3. AWS prompt contest

Figure 4. AI Workbook used in the prompt contest

Success points and lessons learned

Importance of real data

Using actual call data helped bridge the gap between theory and practice, enabling the creation of more practical prompts. Real data resonates better as the content is relatable. We could concretely understand how LLM-based analysis could contribute to solving actual business challenges. Having “real data” is crucial for generative AI analysis.

Diversity and collaboration unlocking LLM potential

The collaboration between the user department and system department members brought new perspectives to LLM-based VoC analysis.

In the prompt contest and workshop, the challenge was to summarize call contents. During the final presentations, the leader-class teams presented “summaries of a few lines,” while the frontline teams presented “summaries following a chronological order” and “summaries based on the 5W1H principle.” This highlighted that even within the user department, desired data can vary among users. Additionally, digging deeper into each team’s prompts revealed that keywords like “subject” or “chronological order” proposed by the user department had a strong influence, indicating that mere technique listings could not solve the problem.

This diversity led to the development of effective analysis methods and a multi-faceted understanding of LLM’s potential. Cross-functional collaboration fostered innovative solutions and valuable insights, demonstrating that collaborative efforts across departments can expand the possibilities of LLM utilization.

Focus on prompt engineering

To maximize an LLM’s performance, prompt engineering is essential. Instead of spending excessive time selecting an LLM, we focused on designing appropriate prompts. We prioritized creating practical prompts that could yield outputs useful in real-world operations. Additionally, collaborating with members of the user department, who intimately understand the challenges, was more efficient than working solely within the system department.

Focus on what existing tools can do

While exploring new tools and solutions, we emphasized utilizing existing tools. Specifically, we invoked Amazon Bedrock from Google Sheets via Apps Script to validate our analysis methods. This approach allowed us to conduct experiments quickly while keeping costs down.

Conclusion

Rather than agonizing over which generative AI model to choose, it is more important to consider how to design and streamline operations with generative AI as a premise, and to put that into practice. Moving forward, how AI and humans collaborate to produce more valuable outcomes will become a crucial factor in determining a company’s competitiveness. There are still many areas within the ANA Group’s operations where generative AI can drive process improvements. By leveraging generative AI technology and integrating the real and digital worlds across our diverse touchpoints, we will provide personalized service experiences for each customer, whether they interact with us in person or through our digital platforms.

Special Thanks

To those in our team who cooperated with this writing:

  • Yasuyuki Misawa, Leader, Planning and Promotion Team, Customer Relation Department, ANA X Inc.
  • Kentaro Higuchi, Manager, Planning and Promotion Team, Customer Relation Department, ANA X Inc.
  • Sakiko Fukada, Leader, Life Solution Team, Research & Development Department, ANA X Inc.
  • Ayako Sunayama, Passenger Service Systems Team, Passenger Service Solutions, Digital Transformation, All Nippon Airways Co., Ltd.
  • Tatsuo Kido, Manager, Customer Communication Team, CX Management Department, ANA Systems Co., Ltd.

Author bio

Emi Nochi
Customer Communication Team, CX Management Department, ANA Systems Co., Ltd.
Former airport staff (ANA Narita Airport Service Co., Ltd.), 3 years of IT experience

Company information


ANA Systems Co., Ltd.
https://www.anasystems.co.jp/
This post is a contributed article from a customer.