
Guidance for Recency, Frequency & Monetization (RFM) Analysis on Amazon Pinpoint

Uncover RFM scores to reveal customer behavior and monetary value

Overview

This Guidance demonstrates how to implement a customer data pipeline on AWS using recency, frequency, and monetization (RFM) analysis. It ingests behavioral data from Amazon S3, uses Amazon SageMaker to calculate RFM scores and segments, then uploads the segments to Amazon Pinpoint through AWS Glue data preparation jobs and AWS Lambda functions. Amazon Pinpoint can then run targeted messaging campaigns based on the automatically generated RFM segments. By using these services, you can extract valuable customer insights to drive personalized marketing experiences at scale.
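As a concrete illustration of the scoring step, the following sketch computes quintile-based RFM scores with pandas. It is not the Guidance's SageMaker implementation; the column names (customer_id, event_timestamp, order_value) and the 1-5 quintile scheme are assumptions for illustration only.

```python
# A hypothetical RFM scoring step using pandas. Column names and the
# 1-5 quintile scheme are illustrative assumptions, not the schema or
# algorithm used by the Guidance's SageMaker job.
import pandas as pd

def compute_rfm(transactions: pd.DataFrame, as_of: pd.Timestamp) -> pd.DataFrame:
    """Score each customer 1-5 on recency, frequency, and monetary value."""
    rfm = transactions.groupby("customer_id").agg(
        recency=("event_timestamp", lambda ts: (as_of - ts.max()).days),
        frequency=("event_timestamp", "count"),
        monetary=("order_value", "sum"),
    )
    # Rank first so qcut always sees unique bin edges; a lower recency
    # (more recent activity) earns a higher score.
    rfm["r_score"] = pd.qcut(rfm["recency"].rank(method="first"), 5,
                             labels=[5, 4, 3, 2, 1]).astype(int)
    rfm["f_score"] = pd.qcut(rfm["frequency"].rank(method="first"), 5,
                             labels=[1, 2, 3, 4, 5]).astype(int)
    rfm["m_score"] = pd.qcut(rfm["monetary"].rank(method="first"), 5,
                             labels=[1, 2, 3, 4, 5]).astype(int)
    # Concatenate into a segment label such as "555" for top customers.
    rfm["segment"] = (rfm["r_score"].astype(str) + rfm["f_score"].astype(str)
                      + rfm["m_score"].astype(str))
    return rfm
```

Each resulting segment label can then map to an Amazon Pinpoint segment for targeted campaigns.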

How it works

This section features an architecture diagram that illustrates how to use this solution effectively. The diagram shows the key components and their interactions, providing a step-by-step overview of the architecture's structure and functionality.

Well-Architected Pillars

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of the Well-Architected Framework's best practices as possible.

Operational Excellence

This Guidance uses serverless services that reduce operational overhead and provide automated scaling capabilities. For example, the compute workloads run on a fully managed infrastructure that is highly available across multiple Availability Zones, minimizing administration efforts. Integrated logging and monitoring tools provide observability into the application's health and performance. And lastly, the infrastructure as code templates enable consistent, repeatable deployments through standardized continuous integration and continuous delivery (CI/CD) workflows.
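As an illustration of the integrated logging mentioned above, the sketch below shows a Lambda handler that emits structured JSON log lines, which CloudWatch Logs Insights can query by field. The event shape and field names are assumptions, not the Guidance's actual handler.

```python
# A minimal observability sketch, assuming a Python Lambda handler: each
# invocation emits one structured JSON log line to CloudWatch Logs.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # One queryable JSON record per invocation.
    logger.info(json.dumps({
        "message": "rfm pipeline step invoked",
        "request_id": context.aws_request_id,
        "records": len(event.get("Records", [])),
    }))
    return {"status": "ok"}
```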

Read the Operational Excellence whitepaper 

Security

This Guidance adopts security best practices by implementing robust access controls, data encryption, and adherence to least-privilege principles. AWS Identity and Access Management (IAM) provides temporary, rotated credentials through roles to securely grant access across multiple services. All data stored in Amazon S3 is encrypted at rest and in transit, with policies enforcing authenticated access only. Amazon Pinpoint handles your data securely by encrypting data in motion and at rest while preventing the exposure of personally identifiable information (PII). By using the security capabilities of AWS services, you can protect sensitive data assets, restrict unauthorized access, and meet compliance requirements throughout the customer segmentation workflow.
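To make these controls concrete, the following sketch applies default encryption at rest and a TLS-only bucket policy with boto3. The bucket name is a placeholder, and this is one possible way to enforce the controls described, not the Guidance's exact configuration.

```python
# A hedged sketch of the controls described above: default encryption at
# rest plus a bucket policy that denies non-TLS access. The bucket name
# is a placeholder, not part of the Guidance.
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-rfm-data-bucket"  # placeholder

# Server-side encryption by default for every object written to the bucket.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Deny any request that does not arrive over TLS (encryption in transit).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```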

Read the Security whitepaper 

Reliability

Through fault-tolerant services, decoupled compute workflows, and robust deployment processes, the capabilities in this Guidance enable your workloads to perform their intended functions correctly and consistently. Lambda provides highly available and automatically scalable compute capabilities, while Step Functions orchestrates the end-to-end workflow across multiple stateless services like SageMaker and AWS Glue. This decoupled model enables independent scaling and retries for each processing step. In addition, asynchronous invocations and queuing mechanisms prevent request losses, while integrated logging captures errors for analysis. Finally, the AWS Serverless Application Model (AWS SAM) simplifies application deployments through infrastructure as code and offers testing and rollback capabilities.
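The sketch below illustrates the per-step retry pattern described here with a hypothetical Amazon States Language definition, expressed as a Python dict. The job and function names are placeholders, not the Guidance's actual workflow.

```python
# An illustrative Amazon States Language definition showing how each
# decoupled step retries independently. Job and function names are
# hypothetical placeholders.
import json

definition = {
    "Comment": "Decoupled RFM pipeline with per-step retries",
    "StartAt": "PrepareData",
    "States": {
        "PrepareData": {
            "Type": "Task",
            # Run an AWS Glue job and wait for it to finish (.sync).
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "rfm-data-prep"},  # placeholder job
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 30,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,  # exponential backoff between attempts
            }],
            "Next": "ScoreCustomers",
        },
        "ScoreCustomers": {
            "Type": "Task",
            # Invoke a Lambda function that launches the scoring job.
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:score-rfm",
            "Retry": [{"ErrorEquals": ["States.ALL"], "MaxAttempts": 2}],
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))
```

In the Guidance itself, a definition like this would be deployed through the AWS SAM template mentioned above.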

Read the Reliability whitepaper 

Performance Efficiency

The elastic and configurable scaling capabilities of AWS serverless services, such as Lambda and Step Functions, can scale compute horizontally to process multiple file uploads in parallel. SageMaker training and processing jobs, along with AWS Glue jobs, allow you to configure resource sizing based on projected data volumes. This flexibility enables right-sizing compute capacity for optimal performance. The decoupled, orchestrated workflow empowers you to experiment by adding, removing, or modifying individual processing stages without impacting the entire pipeline. By taking advantage of these scalable and modular architectures, you can optimize performance while only utilizing the resources needed.
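As an example of this right-sizing, the following hedged sketch launches a SageMaker processing job with explicit instance type, count, and volume size. The job name, role ARN, image URI, and sizing values are all illustrative assumptions.

```python
# A hedged right-sizing sketch: a SageMaker processing job with explicit
# capacity parameters. Every name and value below is a placeholder.
import boto3

sagemaker = boto3.client("sagemaker")
sagemaker.create_processing_job(
    ProcessingJobName="rfm-scoring-example",  # placeholder name
    RoleArn="arn:aws:iam::123456789012:role/SageMakerProcessingRole",  # placeholder
    AppSpecification={
        # Placeholder container image holding the scoring script.
        "ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/rfm-scoring:latest",
    },
    ProcessingResources={
        "ClusterConfig": {
            "InstanceCount": 2,              # parallelize across instances
            "InstanceType": "ml.m5.xlarge",  # size to projected data volume
            "VolumeSizeInGB": 50,
        }
    },
)
```

AWS Glue jobs expose the same kind of knobs through their WorkerType and NumberOfWorkers parameters.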

Read the Performance Efficiency whitepaper 

Cost Optimization

Lambda, SageMaker, and AWS Glue provision compute capacity on-demand, billing only for the duration of actual job run time. This serverless approach eliminates costs from persistently overprovisioned infrastructure. Furthermore, the deployment allows for the configuration of optimal resource sizing parameters to match workload demands. By avoiding underutilized resources and using the consumption-based pricing model of AWS, you can optimize costs while accessing high-performance analytics capabilities. This event-driven Guidance ensures that batch processing jobs run only when invoked by new data in Amazon S3, minimizing unnecessary compute spend.
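The sketch below illustrates this event-driven pattern: a Lambda handler, assumed to be wired to an S3 ObjectCreated notification, starts a Step Functions execution only when new data arrives. The handler and its environment variable are illustrative, not the Guidance's code.

```python
# A minimal event-driven sketch, assuming an S3 ObjectCreated trigger is
# wired to this Lambda: the pipeline runs only when new data lands, so no
# compute is billed between uploads. The ARN comes from the deployment.
import json
import os
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ["STATE_MACHINE_ARN"]  # set by the deployment

def handler(event, context):
    # One workflow execution per uploaded object; nothing runs otherwise.
    for record in event["Records"]:
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({
                "bucket": record["s3"]["bucket"]["name"],
                "key": record["s3"]["object"]["key"],
            }),
        )
    return {"started": len(event["Records"])}
```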

Read the Cost Optimization whitepaper 

Sustainability

This Guidance helps you minimize unnecessary data movement, allowing for the efficient use of hardware resources and avoiding overprovisioning. For example, Amazon S3 acts as a centralized data lake, avoiding redundant copies and reducing data transfers. Data partitioning in Amazon S3 enables lifecycle policies to automatically transition aging data to lower-cost storage tiers.
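As an example of such a lifecycle policy, the following sketch transitions objects under a prefix to lower-cost storage classes as they age. The bucket name, prefix, and transition days are illustrative assumptions.

```python
# A hedged sketch of the lifecycle policy described above. The bucket
# name, prefix, and day thresholds are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-rfm-data-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-aging-behavioral-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "events/"},  # partitioned behavioral data
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 90, "StorageClass": "GLACIER"},      # archival tier
            ],
        }]
    },
)
```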

Additionally, SageMaker provisions compute resources elastically to match workload demands, preventing overprovisioning. Its managed infrastructure intelligently selects compute instance types optimized for the machine learning algorithms used. The AWS Cloud model allows for efficient decommissioning of hardware once jobs are complete. By using these cloud capabilities, you can reduce the environmental impact associated with idle resources, unoptimized data storage, and unnecessary data transfers involved in analytics workloads.

Read the Sustainability whitepaper 

Deploy with confidence

Ready to deploy? Review the sample code on GitHub for detailed deployment instructions, then deploy it as-is or customize it to fit your needs.

Go to sample code

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
