AWS for Industries

Iterable closes the Last Mile gap with Amazon Bedrock

The phrase “last mile” is frequently used across industries to refer to the critical final step in delivering value. In the world of analytics, this ‘last mile’ means bridging the gap between generating insights and taking action, and explaining how specific actions correspond to the larger picture.

As an artificial intelligence (AI)-powered customer communication platform, Iterable supports 1,200+ organizations, such as Volvo, Priceline, Calm, and Box, in delivering personalized cross-channel communications for over seven billion user profiles worldwide. Iterable’s customers come from a variety of industries, each with their own unique datasets and business goals.

The diversity in data often presents a common hurdle: how do you strike a balance between standardization and flexibility when classifying and translating bespoke datasets? Within Iterable’s AI and Data Intelligence Team, our ‘last mile’ challenges often revolve around data classification and integrating relevant information to support in-the-moment action and decision-making.

Enter Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies through a single API, along with a broad set of capabilities for building generative AI applications with security, privacy, and responsible AI. These cutting-edge tools are helping Iterable innovate and find ways to further close this gap.

AI Gardens: A Game Changer for Rapid Innovation

Sinead Cheung, Group Product Manager at Iterable, shares, “Amazon Bedrock makes it easy for us to test and refine prompts as our models and datasets evolve.” Amazon Bedrock provides access to managed large language models (LLMs) from industry players like AI21 Labs, Amazon, Anthropic, Cohere, Meta, and Stability AI. This enables quick and straightforward switching between LLMs without the need to build or deploy any infrastructure for prompt testing.
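To illustrate that flexibility, here is a minimal sketch (our own simplified example, not Iterable’s production code) that sends the same request to two different foundation models through the boto3 Converse API; switching providers is just a change of the modelId string. The prompt and model IDs are illustrative, and the models must be enabled in your account and Region.

import boto3

# Minimal sketch: the same request sent to two different foundation models.
# Model IDs are examples; use whichever models are enabled in your account.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_IDS = [
    "anthropic.claude-3-sonnet-20240229-v1:0",  # Anthropic Claude 3 Sonnet
    "meta.llama3-70b-instruct-v1:0",            # Meta Llama 3 70B Instruct
]

prompt = "Summarize this audience segmentation query in one sentence: ..."

for model_id in MODEL_IDS:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    print(model_id, "->", response["output"]["message"]["content"][0]["text"])

Because the request shape is the same for every model behind the Converse API, comparing prompts across providers becomes a loop over model IDs rather than a new integration per vendor.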

Iterable explored a range of capabilities with Amazon Bedrock, but for illustrative purposes, let’s focus on Anthropic’s Claude Sonnet. Iterable’s product and engineering teams often need to evaluate proof-of-concepts quickly, especially when dealing with diverse customer datasets that resist easy analysis due to variance in topic, format, and quality.

Nick Ma, Iterable’s Staff Machine Learning Engineer, reflects, “One significant advantage of using Amazon Bedrock was our ability to integrate and iterate on different models directly within our AWS account and Amazon Bedrock instance. This environment made it easy to incorporate various tools while maintaining the security and control necessary for our customers’ data.”

Amazon Bedrock enabled Iterable’s AI and Data Intelligence Team to primarily rely on existing agreements with AWS, rather than needing to have new agreements drafted and approved in a typical vendor procurement process.

Incorporating AI in Segmentation Queries

Iterable’s Segmentation tool enables any marketer to build audience lists based on user characteristics, behavioral insights, and preferences in a no-code experience. The result is hyper-targeted customer communications. However, what starts out as a quick audience list with a handful of attributes can grow in complexity as marketers refine and expand their targeting, making list management time intensive and prone to errors.

To solve this problem, Iterable set out to build a ‘universal list translator’ to help their marketers easily understand who each audience list is targeting without having to make sense of all of the different attributes being used. The proof-of-concept involved building a ‘universal translator’ tool for querying customer-specific events to support segmentation. Cheung shares, “Claude Sonnet enabled us to efficiently explore solutions, even when data manipulation was difficult.”

The Iterable AI and Data Intelligence Team applied this proof-of-concept in three different scenarios:

  1. Code-to-Text with ‘List Query Summarization’
  2. Text-to-Code with ‘User Requested AI Generated Queries’
  3. Code-to-Code by converting Segmentation Queries into ‘Predictive Goal Queries’

Helping marketers with List Query Summarization (Code-to-Text)

One of the primary challenges in creating dynamic audience lists is the complexity of segmentation queries. They can become lengthy and difficult to parse manually (especially when Iterable’s customers share these lists between team members or across departments). To simplify this, Iterable explored whether Claude Sonnet could assist in summarizing these queries into more understandable, concise descriptions for marketers.

For example, a query to segment users who made purchases over a certain threshold, interacted with email campaigns, and exhibited high engagement scores would typically involve multiple conditions. Claude Sonnet was able to translate this complex code into plain language: “Users who have made purchases over $100, opened more than 50% of emails in the last 30 days, and have an engagement score of 80 or higher.”

This summary will make it more straightforward for Iterable’s marketers to understand their audience composition. It should also speed up decision-making processes by reducing the time spent decoding lengthy queries.
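A minimal sketch of how that Code-to-Text step could look on Amazon Bedrock is shown below. The JSON schema, field names, and prompt wording are illustrative assumptions for this post, not Iterable’s actual segmentation format.

import boto3
import json

bedrock = boto3.client("bedrock-runtime")

# Illustrative segmentation query; Iterable's real schema differs.
segmentation_query = {
    "combinator": "And",
    "conditions": [
        {"field": "total_purchases_usd", "op": "GreaterThan", "value": 100},
        {"field": "email_open_rate_30d", "op": "GreaterThan", "value": 0.5},
        {"field": "engagement_score", "op": "GreaterThanOrEqual", "value": 80},
    ],
}

prompt = (
    "Summarize the following audience segmentation query in one plain-English "
    "sentence a marketer can read at a glance:\n\n"
    + json.dumps(segmentation_query, indent=2)
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 200, "temperature": 0},
)
print(response["output"]["message"]["content"][0]["text"])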

Figure 1: A JSON query translated into a natural-language output

Figure 2: Segmentation List/Query Summarization Design Mock Up

User Requested Queries generated by AI (Text-to-Code)

In the next test, Iterable reversed the process. Could Claude Sonnet generate accurate, complex queries directly from user input in plain language? They tasked the model with creating SQL-like segmentation queries based on marketing requirements described by a user.

For example, when a user provided input such as, “Show me all users who have abandoned their shopping carts in the last seven days and have viewed more than three product pages,” Claude Sonnet generated the corresponding segmentation query without error. This was a critical breakthrough because it demonstrated that non-technical users could generate complex queries without needing in-depth knowledge of the tool or query language. Users didn’t need to use statements such as ‘contains’ or ‘and/or’, which unlocks new possibilities for customer segmentation at scale.
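Reversing the direction, a sketch of the Text-to-Code flow might look like the following. The system prompt and target JSON structure are hypothetical stand-ins; a production version would supply the real segmentation grammar and validate the generated query before it is ever executed.

import boto3

bedrock = boto3.client("bedrock-runtime")

# Hypothetical schema description; a real implementation would supply
# Iterable's actual segmentation grammar and validate the model's output.
system_prompt = (
    "You translate marketing requests into segmentation queries. "
    "Respond with JSON only, using fields: combinator, conditions "
    "(each with field, op, value). Do not add commentary."
)

user_request = (
    "Show me all users who have abandoned their shopping carts in the last "
    "seven days and have viewed more than three product pages."
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    system=[{"text": system_prompt}],
    messages=[{"role": "user", "content": [{"text": user_request}]}],
    inferenceConfig={"maxTokens": 400, "temperature": 0},
)
generated_query = response["output"]["message"]["content"][0]["text"]
print(generated_query)  # JSON query, to be validated before use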

Figure 3: Iterable’s Current Segmentation Query Builder

Segmentation Queries into Predictive Goal Queries (Code-to-Code)

Taking the concept one step further, Iterable explored whether Claude Sonnet could translate segmentation queries into predictive queries. This meant evaluating existing segmentation parameters and converting them into a more advanced, goal-oriented format that would inform predictive marketing campaigns.

In a research debrief meeting with Iterable, Sinead and Nick observed, “This type of crosswalk between two different data schemas has traditionally been difficult and required a lot of manual intervention. However, with Amazon Bedrock and Claude Sonnet, our prototype easily handled this use case on the first day of development with zero-shot learning (ZSL).”

For example, a query that segmented users based on past behavior was reinterpreted into a predictive query aiming to forecast future purchases or recommend personalized marketing strategies. In this scenario, LLMs have the potential to reduce friction, helping Iterable’s marketers move beyond understanding historical data to proactively influencing future behavior through predictive analytics.
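A rough sketch of that Code-to-Code crosswalk is shown below. Both the input and target formats are simplified stand-ins for Iterable’s internal schemas; the key idea is that the model performs the mapping zero-shot, guided only by a description of the target format rather than labeled training examples.

import boto3
import json

bedrock = boto3.client("bedrock-runtime")

# Simplified stand-in for a historical segmentation query.
segmentation_query = {
    "combinator": "And",
    "conditions": [
        {"field": "purchases_90d", "op": "GreaterThan", "value": 2},
        {"field": "category_viewed", "op": "Equals", "value": "outerwear"},
    ],
}

# Describe the (hypothetical) predictive goal format and ask for a conversion.
prompt = (
    "Convert this historical segmentation query into a predictive goal query.\n"
    "Target format: {\"goal_event\": str, \"horizon_days\": int, "
    "\"audience_filter\": <same condition structure as the input>}.\n"
    "Return JSON only.\n\n" + json.dumps(segmentation_query)
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 400, "temperature": 0},
)
print(response["output"]["message"]["content"][0]["text"])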

Figure 4: Translate a Segmentation Query Code to a Predictive Goal Code to support custom predictions

Conclusion: Moving the Needle with generative AI

Generative AI and LLMs, particularly those offered through Amazon Bedrock, are helping Iterable tackle the ‘last mile’ in segmentation. They reduce the effort involved in generating and summarizing queries for users, and they make it possible to convert those outputs into the in-house predictive capabilities Iterable already offers across its platform.

AWS has not only enabled faster experimentation and reduced manual intervention at Iterable, but has also helped scale solutions to meet the diverse needs of Iterable’s customers. According to Sinead at Iterable, “We saw a 50% reduction in turn-around time in our R&D development and ideation timeline. Looking forward as we ‘productionalize’ this solution, we are encouraged by the initial results observed and the viability of the technology.”

Iterable’s exploration with generative AI is just the beginning of what’s possible as they close the gap between insights and action in a rapidly evolving landscape.

Contact an AWS Representative to learn how we can help accelerate your business.

Praachee Gokhale

Praachee Gokhale is a Solutions Architect at Amazon Web Services (AWS). She has an M.S. in Electrical Engineering from San Jose State University. She is based in the San Francisco Bay Area and is passionate about containers and generative AI.

Nick Ma

Nicholas Ma is a Staff Machine Learning Engineer at Iterable, blending his expertise in AI and data science to build innovative solutions that supercharge customer engagement. With a PhD in Mathematics and a knack for crafting scalable, AI-powered marketing strategies, he’s helping brands connect with their audiences in smarter ways.

Sinead Cheung

Sinead is the Group Product Manager on Iterable’s Data Intelligence Product Team. She has 10+ years of product management experience delivering enterprise-grade AI/ML solutions and data intelligence products that intersect with human-centric design.