AWS Messaging & Targeting Blog

Build Better Engagement Using the AWS Community Engagement Flywheel: Part 2 of 3

Introduction

Part 2 of 3: From Cohorts to Campaigns

Businesses are constantly looking for better ways to engage with customer communities, but it’s hard to do when profile data is limited to user-completed form input or messaging campaign interaction metrics. Neither of these data sources tells a business much about its customers’ interests or preferences when they’re engaging with that community.

To bridge this gap for their community of customers, AWS Game Tech created the Cohort Modeler: a deployable solution for developers to map out and classify player relationships and identify like behavior within a player base. Additionally, the Cohort Modeler allows customers to aggregate and categorize player metrics by leveraging behavioral science and customer data. In our first blog post, we talked about how to extend Cohort Modeler’s functionality.

In this post, you’ll learn how to:

  1. Use the extension we built to create the first part of the Community Engagement Flywheel.
  2. Process the user extract from the Cohort Modeler and import the data into Amazon Pinpoint as a messaging-ready Segment.
  3. Send email to the users in the Cohort via Pinpoint’s powerful and flexible Campaign functionality.

Use Case Examples for The Cohort Modeler

For this example, we’re going to retrieve a cohort of individuals from our Cohort Modeler who we’ve identified as at risk:

  • Maybe they’ve triggered internal alarms by sharing potential PII with others in cleartext.
  • Maybe they’ve joined chat channels known to be frequented by some of the game’s less upstanding citizens.

Either way, we want to make sure they understand the risks of what they’re doing and who they’re dealing with.

Pinpoint provides various robust methods to import user contact and personalization data in specific formats, and once Pinpoint has ingested that data, you can use Campaigns or Journeys to send customized and personalized messaging to your cohort members – either via automation, or manually via the Pinpoint Console.

Architecture overview

In this architecture, you’ll create a simple Amazon DynamoDB table that mimics a game studio’s database of record for its customers. You’ll then create an AWS Lambda function, triggered by an Amazon Simple Storage Service (Amazon S3) bucket, that ingests the Cohort Modeler extract (created in the prior blog post) and converts it into a CSV file that Pinpoint can ingest. Lastly, once the CSV is generated, the Lambda function will prompt Pinpoint to automatically ingest it as a static segment.

Once the automation is complete, you’ll use Pinpoint’s console to quickly and easily create a Campaign, including an HTML mail template, targeted at the imported segment of players you identified as at risk via the Cohort Modeler.

Prerequisites

At this point, you should have completed the steps in the prior blog post, Extending the Cohort Modeler. This is all you’ll need to proceed.

Walkthrough

Messaging your Cohort

Now that we’ve extended the Cohort Modeler and built a way to extract cohort data into an S3 bucket, we’ll transform that data into a Segment in Pinpoint, and use the Pinpoint Console to send a message to the members of the Cohort via a Pinpoint Campaign. In this walkthrough, you’ll:

  • Create a Pinpoint Project to import your Cohort Segments.
  • Create a Dynamo table to emulate your database of record for your players.
  • Create an S3 bucket to hold the cohort contact data CSV file.
  • Create a Lambda trigger to respond to Cohort Modeler export events and kick off Pinpoint import jobs.
  • Create and send a Pinpoint Campaign using the imported Segment.

Create the Pinpoint Project

You’ll need a Pinpoint Project (sometimes referred to as an “App”) to send messaging to your cohort members, so navigate to the Pinpoint console and click Create a Project.

  • Sign in to the AWS Management Console and open the Pinpoint Console.
  • If this is your first time using Amazon Pinpoint, you will see a page that introduces you to the features of the service. In the Get started section, you’ll need to enter the name you want to call your project. We used ‘CohortModelerPinpoint’, but you can use whatever you’d like.
  • On the following screen, the Configure features page, you’ll want to choose Configure in the Email section.
    • Pinpoint will ask you for an email address to verify; this address will be used as the FROM address on the email you send. Enter the email address you want to use as your sending address, and choose Verify email address.
    • Check the inbox of the address that you entered and look for an email from no-reply-aws@amazon.com. Open the email and click the link in the email to complete the verification process for the email address.
    • Note: Once you have verified your email identity, you may receive an alert prompting you to update your email address’ policy. If so, highlight your email under All identities, and choose Update policy. To complete this update, enter confirm where requested, and choose Update.

  • Later on, when you’re asked for your Pinpoint Project ID, you can find it by choosing All projects from the Pinpoint navigation pane; the Project ID appears next to your project name. Alternatively, you can look it up programmatically, as sketched below.
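If you’d rather look the Project ID up programmatically, here’s a minimal sketch using boto3 (assuming your AWS credentials and region are already configured):

# List Pinpoint projects ("apps") in this account/region and print their IDs.
import boto3

pinpoint = boto3.client('pinpoint')

# get_apps returns the Pinpoint projects visible to your credentials.
apps = pinpoint.get_apps()['ApplicationsResponse'].get('Item', [])

for app in apps:
    print(f"{app['Name']}: {app['Id']}")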

Create the Dynamo Table

For this step, you’re emulating a game studio’s database of record for its players, and therefore the Lambda function that you’re creating (to merge Cohort Modeler data with the database of record) is also an emulation.

In a real-world situation, you would use the same ingestion method as the S3TriggerCohortIngest.py example created further below. However, instead of using placeholder data, you would use the playerId information extracted from the Cohort Modeler to formulate a specific query against your main database, whether that requires a SQL statement or some other type of database query.

Creating the Table

Navigate to the DynamoDB Console. You’re going to create a table with playerId as the primary key, and four additional attributes: email, favorite role, first name, and last name. (If you’d rather script the table creation, see the sketch after these steps.)

  • In the navigation pane, choose Tables. On the next page, in the Tables section, choose Create table.
  • In the Table details section, we entered userdata for our Table name. (In order to maintain simple compatibility with the scripts that follow, it is recommended that you do the same.)
  • For Partition key, enter playerId and leave the data type as String.
  • Intentionally leave the Sort key blank and the data type as String.
  • Below, in the Table settings section, leave Default settings selected.
  • Scroll to the end of the page and choose Create table.
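If you’d prefer to script the table creation, the following is a minimal sketch using boto3. On-demand billing is an assumption made here for simplicity; the console’s Default settings may differ.

# Create the userdata table with playerId as the partition key.
import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.create_table(
    TableName='userdata',
    AttributeDefinitions=[{'AttributeName': 'playerId', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'playerId', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST',  # assumption: on-demand capacity
)

# Wait until the table is ACTIVE before writing items to it.
dynamodb.get_waiter('table_exists').wait(TableName='userdata')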
Adding Synthetic Data

You’ll need some synthetic data in the database, so that your Cohort Modeler-Pinpoint integration can query the database, retrieve contact information, and then import that contact information into Pinpoint as a Segment.

  • From the DynamoDB Tables section, choose your newly created table by selecting its name (preferably userdata).
  • In the DynamoDB navigation pane, choose Explore items.
  • From the Items returned section, choose Create item.
  • Once on the Create item page, ensure that Form view is selected rather than JSON view. You’re going to create a new entry in the table. Cohort Modeler creates the same synthetic information each time it’s built, so all you need to do is create three entries.
    • For the first entry, enter wayne96 as the Value for playerId.
    • Select the Add new attribute dropdown, and choose String.
    • Enter email as the Attribute name, and the Value should be your own email address since you’ll be receiving this email. This should be the same email used to configure your Pinpoint project from earlier.
    • Again, select the Add new attribute dropdown, and choose String.
    • Enter favoriteRole as the Attribute name, and enter Tank as the attribute’s Value.
    • Again, select the Add new attribute dropdown, and choose String.
    • Enter firstName as the Attribute name, and enter Wayne as the attribute’s Value.
    • Finally, select the Add new attribute dropdown, and choose String.
    • Enter lastName as the Attribute name, and enter Johnson as the attribute’s Value.

  • Repeat the process for the following two users, using the values in the table below. You’ll be using the SES Mailbox Simulator for these player IDs – one will simulate a successful delivery (but no opens or clicks), and the other will simulate a bounce notification, which represents an unknown user response code. (If you’d rather seed all three rows with a script, see the sketch after the table.)

 

playerId   email                             favoriteRole   firstName   lastName
xortiz     success@simulator.amazonses.com   Healer         Tristan     Nguyen
msmith     bounce@simulator.amazonses.com    DPS            Brett       Ezell

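If you’d rather seed all three synthetic players with a script instead of the console form, here’s a minimal sketch using boto3 (replace the first email address with your own verified address):

# Seed the userdata table with the three synthetic players.
import boto3

table = boto3.resource('dynamodb').Table('userdata')

players = [
    {'playerId': 'wayne96', 'email': 'you@example.com',  # your verified address
     'favoriteRole': 'Tank', 'firstName': 'Wayne', 'lastName': 'Johnson'},
    {'playerId': 'xortiz', 'email': 'success@simulator.amazonses.com',
     'favoriteRole': 'Healer', 'firstName': 'Tristan', 'lastName': 'Nguyen'},
    {'playerId': 'msmith', 'email': 'bounce@simulator.amazonses.com',
     'favoriteRole': 'DPS', 'firstName': 'Brett', 'lastName': 'Ezell'},
]

for player in players:
    table.put_item(Item=player)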
Now that the table’s populated, you can build the integration between Cohort Modeler and your new “database of record,” allowing you to use the cohort data to send messages to your players.

Create the Pinpoint Import S3 Bucket

Pinpoint requires a CSV or JSON file stored on S3 to run an Import Segment job, so we’ll need a bucket (separate from our Cohort Modeler Export bucket) to facilitate this.

  • Navigate to the S3 Console, and inside the Buckets section, choose Create Bucket.
  • In the General configuration section, enter a bucket name, remembering that it must be unique across all of AWS.
  • You can leave all other settings at their default values, so scroll down to the bottom of the page and choose Create bucket. Remember the name – we’ll refer to it as your “Pinpoint import bucket” from here on. (If you prefer to script the bucket creation, see the sketch below.)
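If you prefer to script the bucket creation, here’s a minimal sketch using boto3. The bucket name is a placeholder, and must be globally unique.

# Create the Pinpoint import bucket. Outside us-east-1 you must also pass
# CreateBucketConfiguration={'LocationConstraint': '<your-region>'}.
import boto3

s3 = boto3.client('s3')

s3.create_bucket(Bucket='your-pinpoint-import-bucket-name')  # placeholder: must be globally unique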
Create a Pinpoint Role for the S3 Bucket

Before creating the Lambda function, we need to create a role that allows the Cohort Modeler data to be imported into Amazon Pinpoint in the form of a segment.

For more details on how to create an IAM role that allows Amazon Pinpoint to import endpoints from an S3 bucket, refer to the Amazon Pinpoint documentation. Otherwise, you can follow the instructions below:

  • Navigate to the IAM Dashboard. In the navigation pane, under Access management, choose Roles, followed by Create role.
  • Once on the Select trusted entity page, highlight and select AWS service, under the Trusted entity type section.
  • In the Use case section dropdown, type or select S3. Once selected, ensure that S3 is highlighted, and not S3 Batch Operations. Choose Next.
  • From the Add permissions page, enter AmazonS3ReadOnlyAccess in the search box. Select the associated checkbox and choose Next.
  • Once on the Name, review, and create page, for Role name, enter PinpointSegmentImport.
  • Scroll down and choose Create role.
  • From the navigation pane, and once again under Access management, choose Roles. Select the name of the role just created.
  • In the Trust relationships tab, choose Edit trust policy.
  • Paste the following JSON trust policy. Remember to replace accountId, region and application-id with your AWS account ID, the region you’re running Amazon Pinpoint from, and the Amazon Pinpoint project ID respectively. (A scripted version of this step is sketched after the policy.)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Effect": "Allow",
            "Principal": {
                "Service": "pinpoint.amazonaws.com"
            },
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "accountId"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:mobiletargeting:region:accountId:apps/application-id"
                }
            }
        }
    ]
}
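If you’d rather script this step, here’s a minimal sketch that applies the same trust policy with boto3. The account ID, region, and application ID values are placeholders.

# Update the PinpointSegmentImport role's trust policy so Pinpoint can assume it.
import json
import boto3

ACCOUNT_ID = '111111111111'                   # placeholder: your AWS account ID
REGION = 'us-east-1'                          # placeholder: your Pinpoint region
APPLICATION_ID = 'your-pinpoint-project-id'   # placeholder: your Pinpoint project ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Action": "sts:AssumeRole",
        "Effect": "Allow",
        "Principal": {"Service": "pinpoint.amazonaws.com"},
        "Condition": {
            "StringEquals": {"aws:SourceAccount": ACCOUNT_ID},
            "ArnLike": {"aws:SourceArn": f"arn:aws:mobiletargeting:{REGION}:{ACCOUNT_ID}:apps/{APPLICATION_ID}"}
        }
    }]
}

iam = boto3.client('iam')
iam.update_assume_role_policy(
    RoleName='PinpointSegmentImport',
    PolicyDocument=json.dumps(trust_policy),
)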

Build the Lambda

You’ll need to create a Lambda function that S3 triggers when Cohort Modeler drops its export files into the export bucket, and then connect that function to the Cohort Modeler export bucket so the trigger fires. The steps below take you through the process.

Create the Lambda

Head to the Lambda service menu, and from Functions page, choose Create function. From there:

  • On the Create function page, select Author from scratch.
  • For Function Name, enter S3TriggerCohortIngest for consistency.
  • For Runtime, choose Python 3.8 (or a later Python runtime).
  • No other complex configuration options are needed, so leave the remaining options at their defaults and choose Create function.
  • In the Code tab, replace the sample code with the code below.
import json
import os
import uuid
import urllib.parse

import boto3
from botocore.exceptions import ClientError

### S3TriggerCohortIngest

# We get activated once we're triggered by an S3 file getting Put.
# We then:
# - grab the file from S3 and ingest it.
# - negotiate with a DB of record (Dynamo in our test case) to pull the corresponding player data.
# - transform that record data into a format Pinpoint will interpret.
# - Save that CSV into a different S3 bucket, and
# - Instruct Pinpoint to ingest it as a Segment.


# save the CSV file to a random unique filename in S3
def save_s3_file(content):
    
    # generate a random uuid csv filename.
    fname = str(uuid.uuid4()) + ".csv"
    
    print("Saving data to file: " + fname)
    
    try:
        # grab the S3 bucket name
        s3_bucket_name = os.environ['S3BucketName']
        
        # Set up the S3 boto client
        s3 = boto3.resource('s3')
        
        # Lob the body into the object.
        object = s3.Object(s3_bucket_name, fname)
        object.put(Body=content)
        
        return fname
        
    # If we fail, say why and exit.
    except ClientError as error:
        print("Couldn't store file in S3: %s", json.dumps(error.response))
        return {
            'statuscode': 500,
            'body': json.dumps('Failed access to storage.')
        }
        
# Given a list of users, query the user dynamo db for their account info.
def query_dynamo(userlist):
    
    # set up the dynamo resource.
    ddb_client = boto3.resource('dynamodb')
    
    # The table name comes from the DynamoUserTableName environment variable.
    table_name = os.environ.get('DynamoUserTableName', 'userdata')
    
    # Set up the RequestItems object for our query.
    batch_keys = {
        table_name: {
            'Keys': [{'playerId': user} for user in userlist]
        }
    }

    # Query for the keys. Note: batch_get_item accepts at most 100 keys per
    # call, and unprocessed keys are not retried here.
    try:
        db_response = ddb_client.batch_get_item(RequestItems=batch_keys)
        return db_response
        
    # If we fail, say why and exit.
    except ClientError as error:
        print("Couldn't access data in DynamoDB: %s" % json.dumps(error.response))
        return {
            'statuscode': 500,
            'body': json.dumps('Failed access to db.')
        }
        
def ingest_pinpoint(filename):
    
    s3url = "s3://" + os.environ.get('S3BucketName') + "/" + filename
    
    
    try:
        pinClient = boto3.client('pinpoint')
        
        response = pinClient.create_import_job(
            ApplicationId=os.environ.get('PinpointApplicationID'),
            ImportJobRequest={
                'DefineSegment': True,
                'Format': 'CSV',
                'RegisterEndpoints': True,
                'RoleArn': os.environ.get('PinpointS3RoleARN'),
                'S3Url': s3url,
                'SegmentName': filename
            }
        )
        
        return {
            'ImportId': response['ImportJobResponse']['Id'],
            'SegmentId': response['ImportJobResponse']['Definition']['SegmentId'],
            'ExternalId': response['ImportJobResponse']['Definition']['ExternalId'],
        }
        
    # If we fail, say why and exit.
    except ClientError as error:
        print("Couldn't create Import job for Pinpoint: %s", json.dumps(error.response))
        return {
            'statuscode': 500,
            'body': json.dumps('Failed segment import to Pinpoint.')
        }
        
# Lambda entry point GO
def lambda_handler(event, context):
    
    # Get the bucket + obj name from the incoming event
    incoming_bucket = event['Records'][0]['s3']['bucket']['name']
    filename = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    
    # light up the S3 client
    s3 = boto3.resource('s3')
    
    # grab the file that triggered us
    try:
        content_object = s3.Object(incoming_bucket, filename)
        file_content = content_object.get()['Body'].read().decode('utf-8')
        
        # and turn it into JSON.
        json_content = json.loads(file_content)
        
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(filename, incoming_bucket))
        raise e

    # The extract is already parsed JSON, so use it directly.
    record_json = json_content
    
    # Initialize an empty list for names
    namelist = []
    
    # Iterate through the records in the list
    for record in record_json:
        # Check if "playerId" key exists in the record
        if "playerId" in record:
            # Append the first element of "playerId" list to namelist
            namelist.append(record["playerId"][0])

    # use the name list and grab the corresponding users from the dynamo table
    userdatalist = query_dynamo(namelist)
    
    # grab just what we need to create our import file
    userdata_responses = userdatalist["Responses"][os.environ.get('DynamoUserTableName', 'userdata')]
    
    csvlist = "ChannelType,Address,User.UserId,User.UserAttributes.FirstName,User.UserAttributes.LastName\n"
    
    for user in userdata_responses:
        newString = "EMAIL," + user["email"] + "," + user["playerId"] + "," + user["firstName"] + "," + user["lastName"] + "\n"
        csvlist += newString
        
    # Dump it to S3 with a unique filename. 
    csvFile = save_s3_file(csvlist)

    # and tell Pinpoint to import it as a Segment.
    pinResponse = ingest_pinpoint(csvFile)
    
    return {
        'statusCode': 200,
        'body': json.dumps(pinResponse)
    }
Configure the Lambda

First, you’ll need to raise the function timeout, because importing large Pinpoint segments can take time. To do so, navigate to the Configuration tab, then General configuration, and change the Timeout value to the maximum of 15 minutes.

Next, select Environment variables beneath General configuration in the navigation pane. Choose Edit, followed by Add environment variable, for each Key and Value below.

  • Create a key – DynamoUserTableName – and give it the name of the DynamoDB table you built in the previous step. (If you followed our recommendations, this is userdata.)
  • Create a key – PinpointApplicationID – and give it the Project ID (not the name), of the Pinpoint Project you created in the first step.
  • Create a key – S3BucketName – and give it the name of the Pinpoint Import S3 Bucket.
  • Finally, create a key – PinpointS3RoleARN – and paste the ARN of the Pinpoint S3 role you created during the Import Bucket creation step.
  • Once all Environment Variables are entered, choose Save.

In a production build, you could store this information in AWS Systems Manager Parameter Store instead, to improve portability and resilience.
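For example, a minimal sketch of what that lookup could look like inside the Lambda, assuming hypothetical parameter names under a /cohort-modeler/ prefix:

# Read configuration from Systems Manager Parameter Store instead of environment variables.
import boto3

ssm = boto3.client('ssm')

def get_param(name):
    return ssm.get_parameter(Name=name)['Parameter']['Value']

pinpoint_application_id = get_param('/cohort-modeler/pinpoint-application-id')  # hypothetical parameter name
import_bucket_name = get_param('/cohort-modeler/import-bucket-name')            # hypothetical parameter name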

While still in the Configuration tab, from the navigation pane, choose the Permissions menu option.

  • Note that just beneath Execution role, AWS has created an IAM Role for the Lambda. Select the role’s name to view it in the IAM console.
  • On the Role’s page, in the Permissions tab and within the Permissions policies section, you should see one policy attached to the role: AWSLambdaBasicExecutionRole
  • You will need to give the Lambda access to your Pinpoint import bucket, so select the Add permissions dropdown and choose Create inline policy – since this policy won’t be needed anywhere else, an inline policy is the simplest option.
  • On the next screen, click the JSON tab.
    • Paste the following IAM Policy JSON:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR-PINPOINT-BUCKET-NAME-HERE/*",
                "arn:aws:s3:::YOUR-PINPOINT-BUCKET-NAME-HERE",
                "arn:aws:s3:::YOUR-CM-BUCKET-NAME-HERE/*",
                "arn:aws:s3:::YOUR-CM-BUCKET-NAME-HERE"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "dynamodb:BatchGetItem",
            "Resource": "arn:aws:dynamodb:region:accountId:table/userdata"
        },
        {
            "Effect": "Allow",
            "Action": "mobiletargeting:CreateImportJob",
            "Resource": "arn:aws:mobiletargeting:region:accountId:apps/application-id"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::accountId:role/PinpointSegmentImport"
        }
    ]
}
    • Replace the placeholder YOUR-CM-BUCKET-NAME-HERE with the name of the S3 bucket you created in the previous blog post to store Cohort Modeler exports, and YOUR-PINPOINT-BUCKET-NAME-HERE with the name of the Pinpoint import bucket you created earlier in this post.
    • Remember to replace accountId, region and application-id with your AWS account ID, the region you’re running Amazon Pinpoint from, and the Amazon Pinpoint project ID respectively.
    • Choose Review Policy.
    • Give the policy a name – we used S3TriggerCohortIngestPolicy.
    • Finally, choose Create Policy.
Trigger the Lambda via S3

The goal is for the Lambda to be triggered when Cohort Modeler drops the extract file into its designated S3 delivery bucket. Fortunately, setting this up is a simple process (a scripted alternative is sketched after these steps):

  • Navigate back to the Lambda Functions page and select the S3TriggerCohortIngest function. Choose + Add trigger from the Function overview section.
    • From the Trigger configuration dropdown, select S3 as the source.
    • Under Bucket, enter or select the bucket you’ve chosen for Cohort Modeler extract delivery. (Created in the previous blog.)
    • Leave Event type as “All object create events”.
    • Leave both Prefix and Suffix blank.
    • Check the box that acknowledges that using the same bucket for input and output is not recommended, as it can increase Lambda usage and thereby costs.
    • Finally, choose Add.
    • Lambda will add the appropriate permissions to invoke the trigger when an object is created in the S3 bucket.
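If you ever need to wire up the same trigger outside the console, here’s a hedged sketch of the two underlying calls. The bucket name, account ID, and function ARN are placeholders; the console performs both of these steps for you.

# Allow the Cohort Modeler export bucket to invoke the Lambda, then register
# the bucket's object-created events as a trigger.
import boto3

CM_EXPORT_BUCKET = 'your-cohort-modeler-export-bucket'   # placeholder
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:111111111111:function:S3TriggerCohortIngest'  # placeholder

# 1. Grant S3 permission to invoke the function.
boto3.client('lambda').add_permission(
    FunctionName='S3TriggerCohortIngest',
    StatementId='AllowCohortModelerBucketInvoke',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn=f'arn:aws:s3:::{CM_EXPORT_BUCKET}',
)

# 2. Point the bucket's object-created events at the function.
boto3.client('s3').put_bucket_notification_configuration(
    Bucket=CM_EXPORT_BUCKET,
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': FUNCTION_ARN,
            'Events': ['s3:ObjectCreated:*'],
        }]
    },
)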
Test the Lambda

The best way to test the end-to-end process is to connect to the endpoint you created in the first step of the process and send it a valid query. We used Postman, but you can use curl or any other HTTP tool to send the request.

Again, refer back to your prior work to determine the HTTP API endpoint for your Cohort Modeler’s cohort extract endpoint, and then send it the following query:

https://YOUR-ENDPOINT.execute-api.YOUR-REGION.amazonaws.com/Prod/data/cohort/ea_atrisk?threshold=2

You should receive back a response that looks something like this:

{'statusCode': 200, 'body': 'export/ea_atrisk_2_2023-09-12_13-57-06.json'}

The status code confirms that the request was successful, and the body provides the name of the export file that was created.
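If you’d rather script the test than use Postman or curl, here’s a minimal sketch using the requests library (swap in your own endpoint and region):

# Send the cohort extract query to the Cohort Modeler API.
import requests

url = 'https://YOUR-ENDPOINT.execute-api.YOUR-REGION.amazonaws.com/Prod/data/cohort/ea_atrisk'
response = requests.get(url, params={'threshold': 2}, timeout=30)

print(response.status_code)
print(response.text)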

  • From the AWS console, navigate to the S3 Dashboard, and select the S3 Bucket you assigned to Cohort Modeler exports. You should see a JSON file corresponding to the response from your API call in that bucket.
  • Still in S3, navigate back and select the S3 bucket you assigned as your Pinpoint Import bucket. You should find a CSV file with the same file prefix in that bucket.
  • Finally, navigate to the Pinpoint dashboard and choose your Project.
  • From the navigation pane, select Segments. You should see a segment name which directly corresponds to the CSV file which you located in the Pinpoint Import bucket.

If these three steps are complete, then the outbound arm of the Community Engagement Flywheel is functional. All that’s left now is to test the Segment by using it in a Campaign.

Create an email template

In order to send your message recipients a message, you’ll need a message template. In this section, we’ll walk you through creating one. The Pinpoint Template Editor is a simple HTML editor, but third-party visual design tools can also integrate directly with Pinpoint for a more seamless design workflow. (If you’d rather create the template programmatically, see the sketch after these steps.)

  • From the navigation pane of the Pinpoint console, choose Message templates, and then select Create template.
  • Leave the Channel set to Email, and under Template name, enter a unique and memorable name.
  • For Subject, we used ‘Happy Video Game Day!’, but you can enter whatever you’d like.
  • Locate and copy the contents of EmailTemplate.html, and paste the contents into the Message section of the form.
  • Finally, choose Create, and your Template will now be available for use.
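If you’d rather create the template programmatically, here’s a minimal sketch using boto3. The template name is a placeholder, and html_body stands in for the contents of EmailTemplate.html.

# Create an email message template in Pinpoint.
import boto3

pinpoint = boto3.client('pinpoint')

html_body = '<html><body><h1>Happy Video Game Day!</h1></body></html>'  # placeholder for EmailTemplate.html

pinpoint.create_email_template(
    TemplateName='CohortModelerEmailTemplate',  # placeholder name
    EmailTemplateRequest={
        'Subject': 'Happy Video Game Day!',
        'HtmlPart': html_body,
    },
)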

Create & Send the Pinpoint Campaign

For the final step, you will create and send a campaign to the endpoints included in the Segment that the Community Engagement Flywheel created. Earlier, you mapped three email addresses to the identities that Cohort Modeler generated for your query: your own email, and two test addresses from the SES Mailbox Simulator. As a result, once you’ve completed this process you should receive one email at the address you verified, as well as events that indicate the status of all campaign activities.

  • In the navigation bar of the Pinpoint console, choose All projects, and select the project you’ve created for this exercise.
  • From the navigation pane, choose Campaigns, and then Create a campaign at the top of the page.
  • On the Create a campaign page, give your campaign a name, highlight Standard campaign, and choose Email for the Channel. To proceed, choose Next.
  • On the Choose a segment page, highlight Use an existing segment, and from the Segment dropdown, select the segment created earlier (its name matches the CSV file in your Pinpoint import bucket). Once selected, choose Next.
  • On the Create your message page, you have two tasks:
    • You’re going to use the email template you created in the prior step, so in the Email template section, under Template name, select Choose a template, followed by the template you created, and finally Choose template.
    • In the Email settings section, ensure you’ve selected the sender email address you verified previously when you initially created the Pinpoint project.
    • Choose Next.
  • On the Choose when to send the campaign page, ensure Immediately is highlighted for when you want the campaign to be sent. Scroll down and choose Next.
  • Finally, on the Review and launch page, verify your selections as you scroll down the page, and then choose Launch campaign.

Check your inbox! You will shortly receive the email, confirming that the Campaign was successfully sent.
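In a later iteration you could launch the same campaign programmatically. Here’s a hedged sketch using boto3; the project ID, segment ID, and template name are placeholders, and depending on your project’s email channel defaults you may also need to supply a MessageConfiguration with a FromAddress.

# Create and immediately send a one-time campaign to the imported segment.
import boto3

pinpoint = boto3.client('pinpoint')

pinpoint.create_campaign(
    ApplicationId='your-pinpoint-project-id',     # placeholder
    WriteCampaignRequest={
        'Name': 'at-risk-cohort-campaign',
        'SegmentId': 'your-imported-segment-id',  # placeholder: the segment created by the import job
        'Schedule': {'StartTime': 'IMMEDIATE', 'Frequency': 'ONCE'},
        'TemplateConfiguration': {
            'EmailTemplate': {'Name': 'CohortModelerEmailTemplate'}  # placeholder: your template name
        },
    },
)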

Conclusion

So far you’ve extended the Cohort Modeler to report on the cohorts it’s built for you, operated on that extract and built an ETL machine to turn that cohort into relevant contact and personalization data, imported the contact data into Pinpoint as a static Segment, and created a Pinpoint Campaign with that Segment to send messaging to that Cohort.

In the next and final blog post, we’ll show how to respond to events that result from your cohort members interacting with the messaging they’ve been sent, and how to enrich the cohort data with those events so you can understand in deeper detail how your messaging works – or doesn’t work – with your cohort members.

About the Authors

Tristan (Tri) Nguyen

Tristan (Tri) Nguyen is a Senior Specialist Solutions Architect at AWS. With deep expertise in data science, martech and customer data platforms, Tristan specialises in utilising Machine Learning and Generative AI to craft scalable customer engagement strategies and architectural solutions for customers across the Asia Pacific Region. He holds a Master of Computer Science from the Georgia Institute of Technology, and has extensive hands-on experience with AWS technologies, achieving all 12 AWS certifications. In his spare time, Tristan dabbles in triathlon, hiking big mountains and climbing big rocks.

Brett Ezell

Brett Ezell is an Amazon Pinpoint and Amazon Simple Email Service Specialist Solutions Architect at AWS. As a Navy veteran, he joined AWS in 2020 through an AWS technical military apprenticeship program. When he isn’t deep diving into solutions for customer challenges, Brett spends his time collecting vinyl, attending live music, and training at the gym. An admitted comic book nerd, he feeds his addiction every Wednesday by combing through his local shop for new books.