This Guidance demonstrates how nonprofit associations and membership organizations can proactively understand which members are likely to let their memberships lapse, and why, using a data lake on AWS and artificial intelligence/machine learning (AI/ML) services.
Architecture Diagram
Step 1
Donor and member data is collected from multiple data sources across the organization. Demographic information, history of past donations and engagement with the nonprofit, and data found in Donor Relationship Management or Customer Relationship Management (DRM/CRM) software may all be useful indicators.
Step 2
Depending on the type, location, and format of the data source, AWS Database Migration Service (AWS DMS), AWS DataSync, and/or Amazon AppFlow are used to ingest the data into a data lake in AWS.
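As a minimal sketch of one ingestion path, the following uses boto3 to create and start an AWS DataSync task that copies CRM exports into the data lake. The location ARNs are hypothetical placeholders for locations you would create beforehand (for example, an on-premises NFS share and the S3 bucket).

```python
import boto3

datasync = boto3.client("datasync")

# Hypothetical location ARNs: the source file share holding CRM exports
# and the destination S3 data lake bucket.
response = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-source",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-dest",
    Name="crm-export-to-data-lake",
)

# Kick off the transfer; in practice this would typically run on a schedule.
datasync.start_task_execution(TaskArn=response["TaskArn"])
```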
Step 3
Amazon Simple Storage Service (Amazon S3) is used for data lake storage.
Step 4
AWS Glue can be used to extract, transform, catalog, and ingest data across multiple data stores. AWS Glue DataBrew or Amazon SageMaker Data Wrangler could also be used for visual data preparation. Use AWS Lambda for enrichment and validation.
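A minimal sketch of the Lambda enrichment and validation step might look like the following. The required fields, the email-domain enrichment, and the cleaned/ output prefix are illustrative assumptions, not part of this Guidance.

```python
import json
import boto3

s3 = boto3.client("s3")

# Fields a record must contain to be useful for churn modeling (assumed).
REQUIRED_FIELDS = {"member_id", "join_date", "last_donation_date"}

def handler(event, context):
    """Validate and enrich raw member records landed in the data lake.

    Assumes records arrive as newline-delimited JSON via an S3 event
    notification; bucket and key names are hypothetical.
    """
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()

    cleaned = []
    for line in body.splitlines():
        member = json.loads(line)
        if not REQUIRED_FIELDS.issubset(member):
            continue  # drop incomplete records
        # Example enrichment: derive the email domain as a feature.
        member["email_domain"] = member.get("email", "").split("@")[-1]
        cleaned.append(json.dumps(member))

    s3.put_object(
        Bucket=bucket,
        Key=f"cleaned/{key}",
        Body="\n".join(cleaned).encode(),
    )
```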
Step 5
After processing, the data is placed back into Amazon S3 for consumption.
Step 6
Amazon SageMaker Canvas is an optional no-code solution for visualizing features and training an ML model. SageMaker Studio can be used independently or in tandem with SageMaker Canvas to further build, tune, and deploy the model.
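For the model itself, one common starting point for tabular churn prediction (an assumption here, not a prescription of this Guidance) is the built-in XGBoost algorithm trained through the SageMaker Python SDK. The role ARN and bucket below are placeholders.

```python
import sagemaker
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role

# Resolve the built-in XGBoost container image for the current region.
container = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")

estimator = sagemaker.estimator.Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/churn/model",  # hypothetical bucket
    sagemaker_session=session,
)
# Binary classification: will this member churn or not?
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

# Training data: CSV with the churn label in the first column (assumed layout).
estimator.fit({"train": TrainingInput("s3://example-bucket/churn/train.csv", content_type="text/csv")})
```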
Step 7
Batches of members or donors can be uploaded to Amazon S3, triggering a Lambda function that runs inference using SageMaker Batch Transform and generates a list of the members most likely to churn. The output is then written back to Amazon S3.
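A sketch of that Lambda function, using boto3 to start a Batch Transform job when a batch file lands in S3; the model name, bucket, and output path are hypothetical.

```python
import os
import uuid
import boto3

sm = boto3.client("sagemaker")

def handler(event, context):
    """Triggered by an S3 upload of a member batch; starts a SageMaker
    Batch Transform job against the trained churn model."""
    record = event["Records"][0]["s3"]
    input_uri = f"s3://{record['bucket']['name']}/{record['object']['key']}"

    sm.create_transform_job(
        TransformJobName=f"churn-scoring-{uuid.uuid4().hex[:8]}",
        ModelName=os.environ["MODEL_NAME"],  # name of the deployed churn model (assumed env var)
        TransformInput={
            "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": input_uri}},
            "ContentType": "text/csv",
        },
        TransformOutput={"S3OutputPath": "s3://example-bucket/churn/predictions"},  # hypothetical
        TransformResources={"InstanceType": "ml.m5.large", "InstanceCount": 1},
    )
```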
Step 8
Amazon QuickSight can be used to visualize this list and target individuals for next steps.
Step 9
Use Amazon Pinpoint for member engagement, defining segments of members and reaching out to them with proactive, personalized messages through the channel of their choice.
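As an illustration, the following boto3 call sends a personalized email through Amazon Pinpoint. The application ID and address are placeholders, and in practice you would typically define a segment and campaign rather than message a single member directly.

```python
import boto3

pinpoint = boto3.client("pinpoint")

# Application ID and recipient address are hypothetical placeholders.
pinpoint.send_messages(
    ApplicationId="example-app-id",
    MessageRequest={
        "Addresses": {"member@example.com": {"ChannelType": "EMAIL"}},
        "MessageConfiguration": {
            "EmailMessage": {
                "SimpleEmail": {
                    "Subject": {"Data": "We miss you!", "Charset": "UTF-8"},
                    "TextPart": {"Data": "Renew your membership today.", "Charset": "UTF-8"},
                }
            }
        },
    },
)
```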
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
This guidance can be deployed with infrastructure as code and automation for fast iteration and consistent deployments. Use Amazon CloudWatch for application and infrastructure monitoring. Use Amazon SageMaker Model Monitor and Amazon SageMaker Clarify to track bias and model drift.
Security
Use AWS Identity and Access Management (IAM) to ensure users and services have least-privilege access, especially to sensitive donor or member data in Amazon S3. Use Amazon Macie to identify possible sensitive data, and obfuscate or remove irrelevant data before using it in SageMaker. Use Amazon Virtual Private Cloud (Amazon VPC) to allow connectivity to resources only from the services and users that need it.
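For example, a least-privilege inline policy might restrict an analytics role to read-only access on the curated prefix of the data lake. The bucket, prefix, and role name below are hypothetical.

```python
import json
import boto3

iam = boto3.client("iam")

# Least-privilege policy: the analytics role may only read objects under
# the curated "cleaned/" prefix of the (hypothetical) data lake bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-data-lake/cleaned/*",
        }
    ],
}

iam.put_role_policy(
    RoleName="AnalyticsRole",  # hypothetical role name
    PolicyName="ReadCuratedMemberData",
    PolicyDocument=json.dumps(policy),
)
```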
Reliability
Most services used in the architecture are serverless and are deployed with high availability by default. Use continuous integration/continuous delivery (CI/CD) practices and SageMaker Pipelines to automate model development, deployment, and management. Collect metrics in CloudWatch and automate actions based on them.
Performance Efficiency
Use CloudWatch to generate alarm-based notifications from monitoring data, and adjust resources accordingly. Use SageMaker Experiments to optimize algorithms and features.
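As one example, the following boto3 call creates a CloudWatch alarm on a SageMaker endpoint's ModelLatency metric. The endpoint name and SNS topic ARN are placeholders; note that ModelLatency is reported in microseconds.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alert when the (hypothetical) churn-scoring endpoint's average model
# latency exceeds one second over a five-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="churn-endpoint-high-latency",
    Namespace="AWS/SageMaker",
    MetricName="ModelLatency",
    Dimensions=[
        {"Name": "EndpointName", "Value": "churn-endpoint"},
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1_000_000,  # ModelLatency is reported in microseconds
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # hypothetical topic
)
```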
Cost Optimization
Use SageMaker Studio auto-shutdown to avoid paying for unused resources. Start training with small quantities of data. Use Amazon S3 storage classes appropriately, based on data access patterns.
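A sketch of an S3 lifecycle configuration that tiers aging raw data to cheaper storage classes; the bucket, prefix, and retention periods are illustrative assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Move raw landing-zone data (hypothetical bucket/prefix) to cheaper
# storage classes as it ages, and expire it after two years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-lake",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-raw-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 730},
            }
        ]
    },
)
```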
Sustainability
Use managed services when possible, to shift the responsibility of optimizing hardware to AWS. Shut down resources when not in use.
Sample Code
Start building with this sample code. Learn how to automate business processes that presently rely on manual input and intervention across various file types and formats.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.