AWS Database Blog

Schneider Electric automates Salesforce account hierarchy management with generative artificial intelligence (AI) using Amazon Aurora and Amazon Bedrock

This post was co-written with Anthony Medeiros, Manager of Solutions Engineering and Architecture for North America Artificial Intelligence at Schneider Electric; and Somik Chowdhury, Solution Architect at Schneider Electric.

Schneider Electric is a leader in digital transformation in energy management and industrial automation. To best serve customers’ needs, Schneider Electric needs to keep track of the connections between related customer accounts in its customer relationship management (CRM) systems. As its customer base grows, new customers are added daily, and Schneider Electric account teams have to manually sort through these new customers and link them to the proper existing parent entity. Creating an accurate hierarchy of customer accounts that clearly describes the relationships between different companies, their subsidiaries, branches, and geographies is an essential part of sales operations. An accurate hierarchy is necessary to effectively target and manage sales efforts, optimize resource allocation, and enhance customer relationship management.

To effectively manage customer account hierarchies in its CRM at scale, Schneider Electric started applying advances in generative artificial intelligence (AI) large language models (LLMs) in April 2023. The team created a solution that makes timely updates to the customer account hierarchies in the CRM by linking customer account information to the correct parent company based on the latest information retrieved from the internet and proprietary datasets. Without automation, such research typically takes a person seven minutes per account; with 1.1 million accounts in scope, the work requires dedicated full-time resources. The first implementation relied on a Flan-T5 LLM in Amazon SageMaker JumpStart. It proved the feasibility of the overall project and provided the team with valuable insights and lessons. For more information about the original solution, refer to “Schneider Electric leverages Retrieval Augmented LLMs on SageMaker to ensure real-time updates in their CRM systems.”

In this post, we explore further iterations of this project and how the team applied what they learned to the Salesforce CRM system using Amazon Aurora and Amazon Bedrock.

Before diving deeper, let’s cover this solution’s critical services and features.

Amazon Bedrock

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API. It also provides the broad set of capabilities needed to build generative AI applications with security, privacy, and responsible AI.
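For illustration, the following is a minimal sketch of invoking Anthropic Claude 3 Sonnet through the Amazon Bedrock runtime API with boto3; the Region and prompt are placeholders, not the values used in this solution.

```python
import json

import boto3

# Bedrock runtime client; the Region is a placeholder
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude 3 models on Amazon Bedrock use the Anthropic Messages request format
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "What is the parent company of Globex Inc.?"}],
        }
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```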

Amazon Aurora

Amazon Aurora is a modern relational database service offering performance and high availability at scale, full MySQL and PostgreSQL compatibility, and a range of developer tools for building serverless and machine learning (ML)-driven applications; Aurora Serverless v2 is an on-demand, autoscaling configuration of Aurora. Amazon Aurora PostgreSQL-Compatible Edition supports the pgvector extension to store embeddings from ML models in your database for efficient similarity searches. The pgvector extension enables Aurora to efficiently store and quickly retrieve high-dimensional vector data with low latency, so you can build ML capabilities into your database system to support use cases such as finding similar items within a catalog or providing movie recommendations.
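As a minimal sketch of the pattern (not Schneider Electric’s actual schema), here is how an Aurora PostgreSQL database with pgvector can store and search embeddings from Python, assuming the open-source pgvector and psycopg2 client libraries; the connection details, table name, and 384-dimension vectors are illustrative.

```python
import numpy as np
import psycopg2
from pgvector.psycopg2 import register_vector

# Connection details are placeholders
conn = psycopg2.connect(
    host="my-aurora-cluster.cluster-example.us-east-1.rds.amazonaws.com",
    dbname="accounts", user="app", password="example",
)
cur = conn.cursor()

# Enable the extension, then register the vector type with psycopg2
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
conn.commit()
register_vector(conn)

# A table holding one embedding per account name (384 dimensions matches
# common sentence-transformers models; adjust to your embedding model)
cur.execute("""
    CREATE TABLE IF NOT EXISTS account_embeddings (
        id bigserial PRIMARY KEY,
        account_name text NOT NULL,
        embedding vector(384)
    )
""")
cur.execute(
    "INSERT INTO account_embeddings (account_name, embedding) VALUES (%s, %s)",
    ("Globex Inc.", np.random.rand(384)),  # stand-in for a real embedding
)
conn.commit()

# Retrieve the five nearest accounts by cosine distance (the <=> operator)
query_embedding = np.random.rand(384)  # stand-in for the query embedding
cur.execute(
    "SELECT account_name FROM account_embeddings ORDER BY embedding <=> %s LIMIT 5",
    (query_embedding,),
)
print(cur.fetchall())
```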

Schneider Electric leverages pgvector in its account hierarchy application to efficiently store embeddings generated through LangChain, an open-source framework for building LLM applications that integrates embedding models and vector stores from various providers.
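A sketch of what that integration can look like with LangChain’s community PGVector vector store and a Hugging Face sentence-transformers embedding model follows; the connection string, collection name, and model choice are assumptions for illustration.

```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import PGVector

# The embedding model name is illustrative
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

store = PGVector(
    connection_string="postgresql+psycopg2://app:example@my-aurora-endpoint:5432/accounts",
    collection_name="account_names",
    embedding_function=embeddings,
)

# Index existing account names once...
store.add_texts(["Acme Ltd. (US)", "Acme Ltd. (UK)", "Globex Inc."])

# ...then look up the closest matches for a new account name
for doc, score in store.similarity_search_with_score("GlobexInc", k=3):
    print(doc.page_content, score)
```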

Solution overview

Mapping account names to the correct parent account name cannot be accomplished using exact search techniques, because different possible spellings (or typos at data entry) like “Globex Inc.” and “GlobexInc,” and geographic variants like “Acme Ltd. (US)” and “Acme Ltd. (UK),” must be taken into account to produce the correct mapping. This process, which relies on finding the most similar information, can be accomplished using generative AI LLMs. With the Hugging Face sentence-transformers framework, both the account names to be mapped to parents and the existing account names are converted into numerical representations, or vectors, known as embeddings. These embeddings are compared using a similarity search in a database to find the most mathematically similar parent and child account names. Reliable, high-performance, searchable storage for the embeddings is therefore critical for this LLM application.
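To make this concrete, here is a minimal sketch using the sentence-transformers framework; the specific model choice is illustrative.

```python
from sentence_transformers import SentenceTransformer, util

# A small general-purpose embedding model (illustrative choice)
model = SentenceTransformer("all-MiniLM-L6-v2")

names = ["Globex Inc.", "GlobexInc", "Acme Ltd. (UK)"]
embeddings = model.encode(names)

# Spelling variants of the same company score far higher than unrelated names
print(util.cos_sim(embeddings[0], embeddings[1]))  # "Globex Inc." vs. "GlobexInc": high
print(util.cos_sim(embeddings[0], embeddings[2]))  # "Globex Inc." vs. "Acme Ltd. (UK)": low
```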

Schneider Electric was already using Aurora Serverless v2 as the primary data store for the Account Hierarchy project, taking advantage of the lean and agile serverless architecture while using the widely adopted PostgreSQL engine. When AWS introduced support for pgvector, it opened up a natural opportunity to store embeddings directly in the existing database. This approach offered several key benefits, including reduced operational overhead by eliminating the need to maintain a separate vector database. It also simplified the technology stack, making it simpler for developers and operations engineers to work with a single data store. Moreover, consolidating data storage into a single database allowed for cost optimization, because the team no longer had to provision and manage additional resources for a dedicated vector database.

The following diagram illustrates the solution architecture and workflow.

Solution Architecture

Let’s look at the architecture step by step:

  1. The process begins with a daily AWS Batch job that pulls Salesforce account information from the corporate data lake, prepares it, and flags accounts that don’t have a clear position in the account hierarchy (no parent company and no confirmation that they are a standalone company or a parent themselves).
  2. A second AWS Batch job iterates through the flagged accounts in Aurora and creates recommendations for an account hierarchy based on inference from Anthropic Claude 3 on Amazon Bedrock and calls to third-party APIs that contain corporate entity information. (We will cover this process in detail shortly.)
  3. The second batch job also sends the resulting recommendation and reasoning to the Aurora database and an Amazon Simple Storage Service (Amazon S3) bucket.
  4. A custom Streamlit application, hosted on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster, displays recommendations to end users, using Aurora Serverless v2 and Amazon S3 as the backend (see the sketch after this list).
  5. Users review the recommendation and its detailed reasoning and can either accept or decline the proposed placement of the account.
  6. At the end of the working day, users download the accepted recommendations and send them to the data steward to upload back to Salesforce.
  7. From there, the updated dataset goes back to the corporate data lake, capturing relationships between the accounts: Ultimate Parent, Parent, and Children.
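As referenced in step 4, a review UI of this kind could look roughly like the following Streamlit sketch; the in-memory DataFrame stands in for the real Aurora and Amazon S3 backend, and all column names are hypothetical.

```python
import pandas as pd
import streamlit as st

# Hypothetical stand-in for recommendations loaded from Aurora and Amazon S3
recommendations = pd.DataFrame({
    "account": ["GlobexInc", "Acme Ltd. (UK)"],
    "proposed_parent": ["Globex Inc.", "Acme Ltd."],
    "reasoning": ["Name variant of an existing parent", "Geographic subsidiary"],
})

st.title("Account hierarchy recommendations")

for i, row in recommendations.iterrows():
    st.subheader(row["account"])
    st.write(f"Proposed parent: {row['proposed_parent']}")
    st.caption(row["reasoning"])
    accept, decline = st.columns(2)
    if accept.button("Accept", key=f"accept-{i}"):
        st.session_state[f"decision-{i}"] = "accepted"
    if decline.button("Decline", key=f"decline-{i}"):
        st.session_state[f"decision-{i}"] = "declined"
```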

From the LLM perspective, the most critical part happens in the second AWS Batch job. During this step, the job runs sequential reasoning to create a final prompt, as illustrated in the following figure. Prompts are a critical part of efficiently adding company-specific data to the knowledge base of an LLM.

Prompt flow

First, the job queries a search engine results page (SERP) API for the latest information about candidates for the account parent. Next, it passes the search results and the original question to Anthropic Claude 3 to interpret the results and infer the parent’s name. It then calls the Dun & Bradstreet API to find a match and evidence of a relationship between the target account and its parent. That data is used for a vector search in Aurora to find the correct parent entity in the existing hierarchy or confirm that there is no parent entity to which this company can link. The output of all three operations is then used to create a final prompt for Anthropic Claude 3 to formulate the recommendation and provide a chain of thought for its reasoning. If an account has no parent and is not found in the hierarchy, it is marked as a standalone account.

In the latest iteration of this architecture, the team uses Claude 3 Sonnet because it offers a good balance between speed, cost, and inference quality. Throughout this project, Amazon Bedrock also made it possible to rapidly upgrade to the latest and best models on the market as they became available (from Anthropic Claude 2 to Claude 2.1 and, eventually, to Claude 3). Furthermore, it supported mixing and matching models depending on the task at hand, speed, cost, and performance (in early versions, the team used Anthropic Claude Instant for fast interpretations of results from API calls and Anthropic Claude 2/2.1 for handling the final prompt).
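A highly simplified sketch of that sequential flow follows; every helper passed in (SERP search, Bedrock invocation, Dun & Bradstreet lookup, Aurora vector search) is a hypothetical placeholder for the team’s actual integrations.

```python
def recommend_parent(account_name, serp_search, bedrock_invoke, dnb_lookup, vector_search):
    """Hypothetical orchestration of the sequential reasoning chain.

    The callables stand in for the real SERP API, Amazon Bedrock,
    Dun & Bradstreet API, and Aurora pgvector integrations.
    """
    # 1. Pull the latest public information about the account's likely parent
    search_results = serp_search(f"parent company of {account_name}")

    # 2. Ask the LLM to interpret the search results and infer the parent's name
    parent_guess = bedrock_invoke(
        f"Based on these search results, what is the parent company of "
        f"{account_name}?\n\n{search_results}"
    )

    # 3. Corroborate the relationship with corporate entity data
    dnb_evidence = dnb_lookup(account_name, parent_guess)

    # 4. Vector search in Aurora for matching parents in the existing hierarchy
    candidates = vector_search(parent_guess, k=5)

    # 5. Combine the evidence into a final prompt asking for a recommendation
    #    with step-by-step reasoning (or a standalone designation)
    final_prompt = (
        f"Account: {account_name}\n"
        f"Inferred parent: {parent_guess}\n"
        f"D&B evidence: {dnb_evidence}\n"
        f"Hierarchy candidates: {candidates}\n"
        "Recommend where this account belongs in the hierarchy, or mark it "
        "standalone, and explain your reasoning step by step."
    )
    return bedrock_invoke(final_prompt)
```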

Key decisions and benefits

Through testing, the team confirmed that using Amazon Bedrock for generative AI and Aurora as the vector store resulted in a 60% cost reduction in the account hierarchy management process for Schneider Electric. Amazon Bedrock also brought unique advantages to the system.

First, it simplified using two separate LLMs: Anthropic Claude 2, a robust model good at complex reasoning, to identify where an account fits in the existing hierarchy and produce the detailed rationale for the decision; and Anthropic Claude Instant, lighter and faster, to provide quick responses to targeted questions such as “What is the parent company for company X?”

Second, Amazon Bedrock gives your application critical flexibility to choose, combine, or change LLMs to achieve the best fit of performance and cost for your use case. The team used this flexibility to upgrade quickly from Anthropic Claude 2 to Claude 2.1, and then to Claude 3, significantly reducing hallucinations. Finally, the overall strategy of using managed and serverless AWS services helped minimize management overhead and keep costs low while staying agile and supporting virtually any scale.
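One way to express that mix-and-match pattern is a per-task model map. The sketch below uses the Bedrock Converse API, which normalizes the request format across model families; the task names and Region are illustrative.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative mapping: a fast, inexpensive model for quick lookups and a
# stronger model for the final recommendation
MODELS = {
    "quick_lookup": "anthropic.claude-instant-v1",
    "final_recommendation": "anthropic.claude-3-sonnet-20240229-v1:0",
}

def ask(task, prompt):
    # The Converse API accepts the same request shape for different models
    response = bedrock.converse(
        modelId=MODELS[task],
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.0},
    )
    return response["output"]["message"]["content"][0]["text"]

print(ask("quick_lookup", "What is the parent company for company X?"))
```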

Overall, the solution reduced the time spent researching an account and placing it into the hierarchy by 57% (from seven minutes per account to three minutes). This metric is expected to improve even further as the project continues to improve data quality with an increasingly accurate account hierarchy.

Conclusion

Schneider Electric’s success story highlights how customers continue to innovate using AWS services. Aurora Serverless v2 with pgvector and Anthropic Claude 3 Sonnet on Amazon Bedrock enabled Schneider Electric to build and optimize an efficient and cost-effective account hierarchy solution. The team was able to leverage the autoscaling capabilities of Aurora Serverless v2 to absorb usage spikes automatically and the latest extensions, like pgvector, to minimize operational overhead. The pgvector extension allowed tight integration between the LLMs, embeddings, inferences, and Schneider Electric’s relational data. Amazon Bedrock makes it straightforward to use the latest machine learning models and pick the right one for your use case, balancing performance and cost without compromising quality.


About the Authors

Anton Gridin is a Principal Solutions Architect supporting Global Industrial Accounts based out of New York City. He has over 17 years of experience building secure applications and leading engineering teams.

Anthony Medeiros is a Manager of Solutions Engineering and Architecture at Schneider Electric. He specializes in delivering high-value AI/ML initiatives to many business functions within North America. With 17 years of experience at Schneider Electric, he brings a wealth of industry knowledge and technical expertise to the team.

Somik Chowdhury is a seasoned Solution Architect at Schneider Electric with 12 years of industry experience, working for the North America AI team. He specializes in designing and implementing AI/ML-driven solutions for businesses, particularly leveraging AWS technologies. Somik’s expertise lies in building and deploying innovative artificial intelligence applications that address complex business challenges and drive organizational growth.