This Guidance uses machine learning (ML) to help you build a churn prediction model from structured and unstructured data. Customer churn, or customer attrition, measures the number of customers who stop using one of your products or services. A model that forecasts churn identifies the behaviors and patterns that indicate churn probability for a set of customers, so you can take preventative action before those customers leave. This Guidance can help business-to-business (B2B) organizations that use customer feedback and relationships to better understand customer satisfaction.

Please note: see the Disclaimer at the end of this page.

Architecture Diagram


Well-Architected Pillars

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

  • This Guidance incorporates text data to enrich the dataset used to create a SageMaker model to predict a customer’s risk of churn. Amazon S3 stores the churn results as CSV files, and Athena queries the results without requiring additional operational overhead. Amazon SNS sends automated analysis reports to decision-makers so they can quickly act and reduce the likelihood of customer churn. 

    Read the Operational Excellence whitepaper 
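As a minimal sketch of the reporting step described above, the function below summarizes churn scores read from the CSV results in Amazon S3 and assembles an Amazon SNS publish payload for decision-makers. The threshold, topic ARN, and CSV column names are illustrative assumptions, not part of the Guidance; in a real deployment you would pass the returned dictionary to `boto3.client("sns").publish(**payload)`.

```python
import csv
import io

CHURN_THRESHOLD = 0.7  # assumed cutoff for flagging a customer as high risk

def build_churn_report(csv_text, topic_arn="arn:aws:sns:us-east-1:123456789012:churn-alerts"):
    """Return the kwargs for an SNS publish call summarizing at-risk customers."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # Keep only customers whose predicted churn probability exceeds the threshold
    at_risk = [r for r in rows if float(r["churn_probability"]) >= CHURN_THRESHOLD]
    lines = [f"{r['customer_id']}: {float(r['churn_probability']):.0%}" for r in at_risk]
    message = (
        f"{len(at_risk)} of {len(rows)} customers are at high risk of churn:\n"
        + "\n".join(lines)
    )
    return {
        "TopicArn": topic_arn,
        "Subject": "Customer churn analysis report",
        "Message": message,
    }
```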
  • AWS Identity and Access Management (IAM) controls access to data, ML models, and churn insights through granular permissions based on roles. Additionally, SageMaker can only access data through Amazon Virtual Private Cloud (Amazon VPC) endpoints. This means that data does not travel across the public internet, limiting potential points of data exposure. 

    Read the Security whitepaper 
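The access pattern above can be sketched as a least-privilege IAM policy: the SageMaker execution role may read training data only under the churn data prefix, and only when the request arrives through a specific Amazon VPC endpoint. The bucket name, prefix, and endpoint ID below are hypothetical placeholders for your own resources.

```python
def churn_data_policy(bucket="example-churn-bucket", vpce_id="vpce-0123456789abcdef0"):
    """Build an illustrative IAM policy document restricting S3 access
    to the churn data prefix via a single VPC endpoint."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "ReadChurnDataViaVpcEndpointOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/churn-data/*",
            ],
            # Requests must arrive via the VPC endpoint, so the data
            # never traverses the public internet.
            "Condition": {"StringEquals": {"aws:SourceVpce": vpce_id}},
        }],
    }
```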
SageMaker uses distributed training libraries to reduce training time and optimize model scaling. SageMaker also initiates batch transformation tasks across multiple Availability Zones to reduce the risk of failure during training; if one Availability Zone fails, training can continue in another. Additionally, Athena, QuickSight, and AWS Glue are serverless services, so you can scale data queries and visualizations without provisioning additional infrastructure. 

    Read the Reliability whitepaper 
  • SageMaker batch inference allows you to process batches of data so you can run churn analysis on a set of customers at a time, rather than requiring you to have an endpoint up and running at all times. To support spikes in batch inference workloads, Lambda provides serverless compute that automatically scales based on demand. 

    Read the Performance Efficiency whitepaper 
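A hypothetical sketch of that event-driven flow: a Lambda handler reacts to a new batch of customer records landing in S3 and assembles the batch transform request for SageMaker. The model name, instance type, and output prefix are assumptions; in a real deployment you would pass the returned request to `boto3.client("sagemaker").create_transform_job(**request)`.

```python
import time

def build_transform_request(bucket, key, model_name="churn-model"):
    """Assemble an illustrative SageMaker CreateTransformJob request
    for batch churn inference over a CSV batch in S3."""
    job_name = f"churn-batch-{int(time.time())}"
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/{key}",
            }},
            "ContentType": "text/csv",
        },
        "TransformOutput": {"S3OutputPath": f"s3://{bucket}/churn-results/"},
        "TransformResources": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
    }

def handler(event, context):
    """Lambda entry point for an S3 object-created event."""
    record = event["Records"][0]["s3"]
    return build_transform_request(record["bucket"]["name"], record["object"]["key"])
```

Because the compute exists only for the duration of each transform job, spikes in batch workloads scale out without a persistent endpoint.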
To help reduce costs, AWS Glue jobs perform extract, transform, and load (ETL) operations on batches of user data rather than on individual records. Additionally, Lambda processes events to start batch transformation analysis, so you spin up compute capacity only as needed rather than keeping a server running at all times. AWS Glue, Athena, and QuickSight together consume churn insights, providing a cost-effective way to read batched data stored in Amazon S3.

    Read the Cost Optimization whitepaper 
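To make the query path concrete, here is a hedged sketch of the parameters for an Athena `StartQueryExecution` call that aggregates churn results directly from the CSV files in S3. The database, table, column, and bucket names are assumptions about how you might catalog the results with AWS Glue; you would pass the returned dictionary to `boto3.client("athena").start_query_execution(**params)`.

```python
def athena_churn_query(database="churn_db", results_bucket="example-athena-results"):
    """Build illustrative StartQueryExecution parameters that aggregate
    churn probabilities per customer segment."""
    sql = """
        SELECT segment,
               COUNT(*)               AS customers,
               AVG(churn_probability) AS avg_churn_probability
        FROM churn_results
        WHERE churn_probability >= 0.5
        GROUP BY segment
        ORDER BY avg_churn_probability DESC
    """
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": f"s3://{results_bucket}/queries/"},
    }
```

Because Athena bills per data scanned and holds no idle infrastructure, this pattern keeps query costs proportional to actual use.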
By extensively using serverless services, such as Lambda, AWS Glue, Athena, and QuickSight, you maximize overall resource utilization, as compute is used only when needed. These serverless services scale to meet demand, reducing the overall energy required to operate the workload. You can also use the customer carbon footprint tool in the AWS Billing console to calculate and track the environmental impact of the workload over time at an account, Region, and service level.

    Read the Sustainability whitepaper 

Implementation Resources

A detailed guide is provided to experiment with and use within your AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup, to prepare it for deployment.

The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.


Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.

References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.
