AWS Machine Learning Blog

Large-scale feature engineering with sensitive data protection using AWS Glue interactive sessions and Amazon SageMaker Studio

Organizations are using machine learning (ML) and AI services to enhance customer experience, reduce operational cost, and unlock new possibilities to improve business outcomes. Data underpins ML and AI use cases and is a strategic asset to an organization. As data is growing at an exponential rate, organizations are looking to set up an integrated, cost-effective, and performant data platform in order to preprocess data, perform feature engineering, and build, train, and operationalize ML models at scale. To achieve that, AWS offers a unified modern data platform that is powered by Amazon Simple Storage Service (Amazon S3) as the data lake with purpose-built tools and processing engines to support analytics and ML workloads. For a unified ML experience, you can use Amazon SageMaker Studio, which offers native integration with AWS Glue interactive sessions to perform feature engineering at scale with sensitive data protection. In this post, we demonstrate how to implement this solution.

Amazon SageMaker is a fully managed ML service that enables you to build, train, and deploy models at scale for a wide range of use cases. For model training, you can use any of the built-in algorithms within SageMaker to get started on training and deploying ML models quickly.

A key component of the model building and development process is feature engineering. AWS Glue is one of the recommended options to achieve feature engineering at scale. AWS Glue enables you to run data integration and transformation in a distributed fashion on a serverless Apache Spark infrastructure, and makes it easy to use the popular Spark ML library for feature engineering and model development. In addition, you can use AWS Glue for incremental data processing through job bookmarks, ingest data from over 100 sources using connectors, and run spiky or unpredictable workloads using auto scaling.

Another important requirement for ML-based applications is data security and access control. It's common to require tighter control over who can access the most sensitive data during feature engineering and model building, following the principle of least privilege. To achieve this, you can use the AWS Glue integration with AWS Lake Formation for increased governance and management of data lake assets. With Lake Formation, you can configure fine-grained data access control and security policies on top of your Amazon S3 data lake. The policies are defined in a central location, allowing multiple analytics and ML services, such as AWS Glue, Amazon Athena, and SageMaker, to interact with data stored in Amazon S3.

AWS Glue includes a personally identifiable information (PII) detection transform that provides the ability to detect, mask, or remove entities as required, for increased compliance and governance. With the PII transform, you can detect PII data in datasets and automatically apply fine-grained access control using Lake Formation to restrict sensitive data for different user groups.

Use case

We focus on a propensity model use case that includes a customer marketing dataset and involves two user personas: a data engineer and data scientist. The dataset contains per-customer information, including lead source, contact notes, job role, some flags, page views per visit, and more. The dataset also includes sensitive information like personal phone numbers.

The data engineer is responsible for building the end-to-end data processing pipeline, including data preparation, preprocessing, and access control. The data scientist is responsible for feature engineering, and training and deploying the ML model. Note that the data scientist is not allowed to access any PII sensitive data for feature engineering or training the ML model.

As part of this use case, the data engineer builds a data pipeline to preprocess the dataset, scans it for any PII, and restricts access to the PII column for the data scientist user. As a result, when the data scientist uses the dataset to perform feature engineering and build ML models, they don't have access to the sensitive PII column (phone numbers, in this case). The feature engineering process involves converting columns of type string to a format that is optimal for ML models. As an advanced use case, you can extend this access pattern to implement row-level and cell-level security using Lake Formation.
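
For example, row-level restrictions can be expressed with the same Lake Formation data cell filter API that we use later in this post for column-level control. The following is a minimal sketch; the filter name, filter expression, and granted principal are illustrative and not part of this solution:

    import boto3
    
    lakeformation = boto3.client('lakeformation')
    account_id = boto3.client('sts').get_caller_identity()['Account']
    
    # Sketch: expose only rows where region = 'US' on the web_marketing table
    lakeformation.create_data_cells_filter(
        TableData={
            'TableCatalogId': account_id,
            'DatabaseName': 'demo',
            'TableName': 'web_marketing',
            'Name': 'us_rows_only',
            'RowFilter': {'FilterExpression': "region = 'US'"},
            'ColumnWildcard': {}
        }
    )
    # Then grant SELECT on the filter to the restricted principal, as shown later in this post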

Solution overview

The solution contains the following high-level steps:

  1. Set up resources with AWS CloudFormation.
  2. Preprocess the dataset, including PII detection and fine-grained access control, on an AWS Glue interactive session.
  3. Perform feature engineering on an AWS Glue interactive session.
  4. Train and deploy an ML model using the SageMaker built-in XGBoost algorithm.
  5. Evaluate the ML model.

The following diagram illustrates the solution architecture.

Architecture diagram

Prerequisites

To complete this tutorial, you must have the following prerequisites:

  • An AWS account
  • A SageMaker Studio domain (you need its domain ID when you launch the CloudFormation stack)

Set up resources with AWS CloudFormation

This post includes a CloudFormation template for a quick setup. You can review and customize it to suit your needs. If you prefer setting up resources on the AWS Management Console and the AWS CLI rather than AWS CloudFormation, see the instructions in the appendix at the end of this post.

The CloudFormation template generates the following resources:

  • S3 buckets with a sample dataset
  • An AWS Lambda function to load the dataset
  • AWS Identity and Access Management (IAM) group, users, roles, and policies
  • Lake Formation data lake settings and permissions
  • SageMaker user profiles

To create your resources, complete the following steps:

  1. Sign in to the console.
  2. Choose Launch Stack:
    Launch button
  3. Choose Next.
  4. For DataEngineerPwd and DataScientistPwd, enter your own password for the data engineer and data scientist users.
  5. For GlueDatabaseName, enter demo.
  6. For GlueTableName, enter web_marketing.
  7. For S3BucketNameForInput, enter blog-studio-pii-dataset-<your-aws-account-id>.
  8. For S3BucketNameForOutput, enter blog-studio-output-<your-aws-account-id>.
  9. For SageMakerDomainId, enter your SageMaker domain ID that you prepared in the prerequisite steps.
  10. Choose Next.
  11. On the next page, choose Next.
  12. Review the details on the final page and select I acknowledge that AWS CloudFormation might create IAM resources.
  13. Choose Create.

Stack creation can take up to 10 minutes. The stack creates IAM roles and SageMaker user profiles for two personas: data engineer and data scientist. It also creates a database demo and table web_marketing with a sample dataset.

At the time of stack creation, the data engineer persona has complete access to the table, but the data scientist persona doesn’t have any access to the table yet.
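
Optionally, you can confirm from the AWS CLI that the stack completed and that the sample table exists (a quick check; <stack-name> is whatever you named the stack at launch):

    aws cloudformation describe-stacks --stack-name <stack-name> --query "Stacks[0].StackStatus"
    aws glue get-table --database-name demo --name web_marketing --query "Table.Name"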

Preprocess the dataset

Let’s start preprocessing data on an AWS Glue interactive session. The data engineer persona wants to verify whether the data contains any sensitive information and grant minimal access permissions to the data scientist persona. You can download the notebook from this location.

  1. Sign in to the console using the data-engineer user.
  2. On the SageMaker console, choose Users.
  3. Select the data-engineer user and choose Open Studio.
  4. Create a new notebook and choose SparkAnalytics 1.0 for Image and Glue PySpark for Kernel.
  5. Start an interactive session with the following magic to install a newer version of Boto3 (this is required for using the create_data_cells_filter method):
    %additional_python_modules boto3==1.24.82
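    Optionally, you can set other session configuration magics in the same cell before the session starts, for example (values are illustrative):
    %idle_timeout 60
    %glue_version 3.0
    %number_of_workers 5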
  6. Initialize the session:
    import boto3
    import sys
    from awsglue.transforms import *
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job
    
    sc = SparkContext.getOrCreate()
    glueContext = GlueContext(sc)
    spark = glueContext.spark_session
    job = Job(glueContext)
  7. Create an AWS Glue DynamicFrame from the newly created table, and resolve choice types based on catalog schema, because we want to use the schema defined in the catalog instead of the automatically inferred schema based on data:
    dyf_marketing = glueContext.create_dynamic_frame.from_catalog(
        database="demo",
        table_name="web_marketing"
    )
    
    dyf_marketing_resolved = dyf_marketing.resolveChoice(
        choice="match_catalog",
        database="demo",
        table_name="web_marketing"
    )
    
    dyf_marketing_resolved.printSchema()
  8. Check whether the table contains any PII data by using the AWS Glue PII detection transform:
    from awsglueml.transforms import EntityDetector
    
    entities_filter = [
        "EMAIL",
        "CREDIT_CARD",
        "IP_ADDRESS",
        "MAC_ADDRESS",
        "PHONE_NUMBER"
    ]
    entity_detector = EntityDetector()
    classified_map = entity_detector.classify_columns(dyf_marketing_resolved, entities_filter, 1.0, 0.1)
    print(classified_map)
  9. Verify whether the columns classified as PII contain sensitive data or not (if not, update classified_map to drop the non-sensitive columns):
    from pyspark.sql.functions import col
    dyf_marketing_resolved.toDF().select(*[col(c) for c in classified_map.keys()]).show()
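    # classified_map maps column names to the entity types detected in them. If a
    # detected column turns out not to be sensitive, drop it before creating the
    # filter, for example (hypothetical column name):
    # classified_map.pop('not_actually_sensitive_column', None)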
  10. Set up Lake Formation permissions by creating a data cell filter for the automatically detected columns, and restrict access to those columns for the data scientist persona:
    lakeformation = boto3.client('lakeformation')
    sts = boto3.client('sts')
    
    account_id = sts.get_caller_identity().get('Account')
    
    # Create a data cell filter for excluding phone_number column
    lakeformation.create_data_cells_filter(
        TableData={
            'TableCatalogId': account_id,
            'DatabaseName': 'demo',
            'TableName': 'web_marketing',
            'Name': 'pii',
            'RowFilter': {
                'AllRowsWildcard': {}
            },
            'ColumnWildcard': {
                'ExcludedColumnNames': list(classified_map.keys())
            }
        }
    )
    
    # Grant permission on the data cell filter
    lakeformation.grant_permissions(
        Principal={
            'DataLakePrincipalIdentifier': f'arn:aws:iam::{account_id}:role/SageMakerStudioExecutionRole_data-scientist'
        },
        Resource={
            'DataCellsFilter': {
                'TableCatalogId': account_id,
                'DatabaseName': 'demo',
                'TableName': 'web_marketing',
                'Name': 'pii'
            }
        },
        Permissions=[
            'SELECT'
        ]
    )
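    
    # Optional check (sketch): list the data cell filters on the table to confirm
    # that the 'pii' filter was created
    response = lakeformation.list_data_cells_filter(
        Table={
            'CatalogId': account_id,
            'DatabaseName': 'demo',
            'Name': 'web_marketing'
        }
    )
    print([f['Name'] for f in response['DataCellsFilters']])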
  11. Log in to Studio as the data-scientist user to verify that the PII columns are not visible. You can download the notebook from this location.
  12. Create a new notebook and choose SparkAnalytics 1.0 for Image and Glue PySpark for Kernel:
    import boto3
    import sys
    from awsglue.transforms import *
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job
    
    sc = SparkContext.getOrCreate()
    glueContext = GlueContext(sc)
    spark = glueContext.spark_session
    job = Job(glueContext)
    
    dyf_marketing = glueContext.create_dynamic_frame.from_catalog(
        database="demo",
        table_name="web_marketing"
    )
    
    dyf_marketing.printSchema()

Perform feature engineering

We use the Apache Spark ML library to perform feature engineering as the data-scientist user and then write back the output to Amazon S3.

  1. In this cell, we apply the following transformers from the Apache Spark ML library:
    • StringIndexer maps a string column of labels to a column of label indexes.
    • OneHotEncoder maps a categorical feature, represented as a label index, to a binary vector with at most a single one-value that indicates the presence of a specific categorical feature. This transform is used for ML algorithms that expect continuous features.
    • VectorAssembler is a transformer that combines a given list of columns into a single vector column, which is then used in training ML models for algorithms such as logistic regression and decision trees.
    #feature engineering by using string indexer and one hot encoding from spark ML library
    from pyspark.ml.feature import StringIndexer, VectorIndexer, OneHotEncoder, VectorAssembler
    from pyspark.ml import Pipeline
    
    cols = ['lastcampaignactivity','region','viewedadvertisement','usedpromo','jobrole']
    
    int_cols = ['pageviewspervisit','totaltimeonwebsite','totalwebvisits',]
    
    indexers = [
        StringIndexer(inputCol=c, outputCol="{0}_indexed".format(c))
        for c in cols
    ]
    
    encoders = [
        OneHotEncoder(
            inputCol=indexer.getOutputCol(),
            outputCol="{0}_encoded".format(indexer.getOutputCol())
        )
        for indexer in indexers
    ]
    
    assembler = VectorAssembler(
        inputCols=[encoder.getOutputCol() for encoder in encoders] + int_cols,
        outputCol="features"
    )
  2. The final transformed DataFrame can be created using the Pipeline library. A pipeline is specified as a sequence of stages. These stages are run in order and the input DataFrame is transformed as it passes through each stage.
    df_marketing = dyf_marketing.toDF()
    pipeline = Pipeline(stages=indexers + encoders + [assembler])
    df_tfm=pipeline.fit(df_marketing).transform(df_marketing)
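    
    # Optional check (sketch): inspect a few transformed rows
    df_tfm.select('converted', 'features').show(5, truncate=False)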
    
  3. Next, we split the dataset into training, validation, and test DataFrames and save them in the S3 bucket to train the ML model (provide your AWS account ID in the following code):
    from pyspark.ml.functions import vector_to_array
    
    #set s3 output location for feature engineering output
    bucket='blog-studio-output-<your-aws-account-id>'
    
    #convert sparse to dense vector
    df_tfm=df_tfm.select('converted',vector_to_array("features").alias("features_array"))
    
    #split features array into individual columns
    df_tfm=df_tfm.select([df_tfm.converted] + [df_tfm.features_array[i] for i in range(17)])
    
    #split the overall dataset into 70-20-10 training , validation and test
    (train_df, validation_df, test_df) = df_tfm.randomSplit([0.7,0.2,0.1])
    
    #write back train, validation and test datasets to S3
    train_df.write\
    .option("header","false")\
    .csv('s3://{}/web-marketing/processed/training/'.format(bucket))
    
    validation_df.write\
    .option("header","false")\
    .csv('s3://{}/web-marketing/processed/validation/'.format(bucket))
    
    test_df.write\
    .option("header","false")\
    .csv('s3://{}/web-marketing/processed/test/'.format(bucket))

Train and deploy an ML model

In the previous section, we completed feature engineering, which included converting string columns such as region, jobrole, and usedpromo into a format that is optimal for ML models. We also included columns such as pageviewspervisit and totalwebvisits, which will help us predict a customer’s propensity to buy a product.

We now train an ML model by reading the training and validation datasets using the SageMaker built-in XGBoost algorithm. Then we deploy the model and run an accuracy check. You can download the notebook from this location.

In the following cell, we read data from the second S3 bucket, which contains the output of our feature engineering operations. Then we use the built-in XGBoost algorithm to train the model.

  1. Open a new notebook. Choose Data Science for Image and Python 3 for Kernel (provide your AWS account ID in the following code):
    #set s3 bucket location for training data
    import sagemaker
    import boto3
    from sagemaker import get_execution_role
    
    container = sagemaker.image_uris.retrieve(region=boto3.Session().region_name,\
    framework='xgboost', version='latest')
    bucket='blog-studio-output-<your-aws-account-id>'
    prefix='web-marketing/processed'
    
    #read train and validation input datasets
    s3_input_train = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/training/'\
    .format(bucket, prefix), content_type='csv')
    s3_input_validation = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/validation/'\
    .format(bucket, prefix), content_type='csv')
    
    #train xgb model
    sess = sagemaker.Session()
    from sagemaker import get_execution_role
    
    xgb = sagemaker.estimator.Estimator(
        container,
        role=get_execution_role(),
        instance_count=1,
        instance_type='ml.m4.xlarge',
        output_path='s3://{}/{}/output'.format(bucket, prefix),
        sagemaker_session=sess
    )
    
    xgb.set_hyperparameters(
        max_depth=5,
        eta=0.2,
        gamma=4,
        min_child_weight=6,
        subsample=0.8,
        silent=0,
        objective='binary:logistic',
        num_round=100
    )
    
    xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
  2. When training is complete, we can deploy the model using SageMaker hosting services:
    #deploy ml model
    xgb_predictor = xgb.deploy(initial_instance_count=1,
    instance_type='ml.m4.xlarge')

Evaluate the ML model

We use the test dataset to evaluate the model and delete the inference endpoint when we’re done to avoid any ongoing charges.

  1. Evaluate the model with the following code:
    #create csv serialiser to run accuracy on test dataset
    xgb_predictor.serializer = sagemaker.serializers.CSVSerializer()
    
    #read test dataset
    import io
    import pandas as pd
    
    s3 = boto3.resource('s3')
    bucket_obj = s3.Bucket(bucket)
    
    test_line = []
    test_objs = bucket_obj.objects.filter(Prefix="web-marketing/processed/test")
    for obj in test_objs:
        try:
            key = obj.key
            body = obj.get()['Body'].read()
            temp = pd.read_csv(io.BytesIO(body), header=None, encoding='utf8', sep=',')
            test_line.append(temp)
        except:
            continue
    
    test_df = pd.concat(test_line)
    
    #predict results using deployed model
    import numpy as np
    def predict(data, predictor, rows=500):
        split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
        predictions = ''
        for array in split_array:
            predictions = ','.join([predictions, predictor.predict(array).decode('utf-8')])
        return np.fromstring(predictions[1:], sep=',')
    
    #drop the target variable in test_df and make prediction
    predictions = predict(test_df.drop(test_df.columns[0], axis=1).to_numpy(), xgb_predictor)
    
    #calculate accuracy using sklearn library
    from sklearn.metrics import accuracy_score, confusion_matrix
    y_pred=np.round(predictions)
    y_true=test_df.iloc[:,0].values.tolist()
    print('Accuracy score: ',accuracy_score(y_true, y_pred))
    print('Confusion matrix: \n',confusion_matrix(y_true, y_pred))

    The accuracy result for the sample run was 84.6%. This could be slightly different for your run due to the random split of the dataset.

  2. We can delete the inference endpoint with the following code:
    xgb_predictor.delete_endpoint(delete_endpoint_config=True)

Clean up

Now for the final step: cleaning up the resources. You can complete these steps on the console, or script the bucket and stack cleanup with the AWS CLI sketch that follows the list.

  1. Empty the two buckets created through the CloudFormation stack.
  2. Delete the apps associated with user profiles data-scientist and data-engineer within Studio.
  3. Delete the CloudFormation stack.
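
If you prefer to script the bucket cleanup and stack deletion (steps 1 and 3), a CLI sketch with placeholder names looks like the following:

    aws s3 rm s3://blog-studio-pii-dataset-<your-aws-account-id> --recursive
    aws s3 rm s3://blog-studio-output-<your-aws-account-id> --recursive
    aws cloudformation delete-stack --stack-name <stack-name>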

Conclusion

In this post, we demonstrated a solution that enables personas such as data engineers and data scientists to perform feature engineering at scale. With AWS Glue interactive sessions, you can easily achieve feature engineering at scale with automatic PII detection and fine-grained access control without needing to manage any underlying infrastructure. By using Studio as the single entry point, you can get a simplified and integrated experience to build an end-to-end ML workflow: from preparing and securing data to building, training, tuning, and deploying ML models. To learn more, visit Getting started with AWS Glue interactive sessions and Amazon SageMaker Studio.

We are very excited about this new capability and keen to see what you’re going to build with it!


Appendix: Set up resources via the console and the AWS CLI

Complete the instructions in this section to set up resources using the console and AWS CLI instead of the CloudFormation template.

Prerequisites

To complete this tutorial, you must have access to the AWS CLI (see Getting started with the AWS CLI) or use command line access from AWS CloudShell.

Configure IAM group, users, roles, and policies

In this section, we create two IAM users: data-engineer and data-scientist, which belong to the IAM group data-platform-group. Then we add a single IAM policy to the IAM group.
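
If you prefer to script this setup, an equivalent AWS CLI sketch for the group, users, and group policy (steps 1-4) follows; it assumes the policy JSON from step 1 is saved locally as policy.json:

    aws iam create-policy --policy-name DataPlatformGroupPolicy --policy-document file://policy.json
    aws iam create-group --group-name data-platform-group
    aws iam attach-group-policy --group-name data-platform-group --policy-arn arn:aws:iam::<account id>:policy/DataPlatformGroupPolicy
    aws iam create-user --user-name data-engineer
    aws iam create-user --user-name data-scientist
    aws iam add-user-to-group --group-name data-platform-group --user-name data-engineer
    aws iam add-user-to-group --group-name data-platform-group --user-name data-scientist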

  1. On the IAM console, use the JSON tab to create a new customer managed policy named DataPlatformGroupPolicy. The policy allows users in the group to access Studio, but only through a SageMaker user profile with a tag that matches their IAM user name. Use the following JSON policy document to provide permissions:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Action":[
                "sagemaker:DescribeDomain",
                "sagemaker:ListDomains",
                "sagemaker:ListUserProfiles",
                "sagemaker:ListApps"
             ],
             "Resource":"*",
             "Effect":"Allow",
             "Sid":"AmazonSageMakerStudioReadOnly"
          },
          {
             "Action":"sagemaker:AddTags",
             "Resource":"*",
             "Effect":"Allow",
             "Sid":"AmazonSageMakerAddTags"
          },
          {
             "Condition":{
                "StringEquals":{
                   "sagemaker:ResourceTag/studiouserid":"${aws:username}"
                }
             },
             "Action":[
                "sagemaker:CreatePresignedDomainUrl",
                "sagemaker:DescribeUserProfile"
             ],
             "Resource":"*",
             "Effect":"Allow",
             "Sid":"AmazonSageMakerAllowedUserProfile"
          },
          {
             "Condition":{
                "StringNotEquals":{
                   "sagemaker:ResourceTag/studiouserid":"${aws:username}"
                }
             },
             "Action":[
                "sagemaker:CreatePresignedDomainUrl",
                "sagemaker:DescribeUserProfile"
             ],
             "Resource":"*",
             "Effect":"Deny",
             "Sid":"AmazonSageMakerDeniedUserProfiles"
          }
       ]
    }
  2. Create an IAM group called data-platform-group.
  3. Search for and attach the customer managed policy DataPlatformGroupPolicy that you created in step 1 to the group.
  4. Create IAM users called data-engineer and data-scientist under the IAM group data-platform-group.
  5. Create a new managed policy named SageMakerExecutionPolicy (provide your Region and account ID in the following code):
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Action":[
                "sagemaker:DescribeDomain",
                "sagemaker:ListDomains",
                "sagemaker:ListUserProfiles",
                "sagemaker:ListApps"
             ],
             "Resource":"*",
             "Effect":"Allow",
             "Sid":"AmazonSageMakerStudioReadOnly"
          },
          {
             "Action":"sagemaker:AddTags",
             "Resource":"*",
             "Effect":"Allow",
             "Sid":"AmazonSageMakerAddTags"
          },
          {
             "Action":[
                "sagemaker:CreateTrainingJob",
                "sagemaker:DescribeTrainingJob",
                "logs:DescribeLogStreams",
                "sagemaker:CreateModel",
                "sagemaker:CreateEndpointConfig",
                "sagemaker:CreateEndpoint",
                "sagemaker:DescribeEndpoint",
                "sagemaker:InvokeEndpoint",
                "sagemaker:DeleteEndpointConfig",
                "sagemaker:DeleteEndpoint"
             ],
             "Resource":"*",
             "Effect":"Allow",
             "Sid":"AmazonSageMakerTrainingAndDeploy"
          },
          {
             "Action":"sagemaker:*App",
             "Resource":"arn:aws:sagemaker:<aws region>:<account id>:app/*/${aws:PrincipalTag/userprofilename}/*",
             "Effect":"Allow",
             "Sid":"AmazonSageMakerAllowedApp"
          },
          {
             "Action":"sagemaker:*App",
             "Effect":"Deny",
             "NotResource":"arn:aws:sagemaker:<aws region>:<account id>:app/*/${aws:PrincipalTag/userprofilename}/*",
             "Sid":"AmazonSageMakerDeniedApps"
          },
          {
             "Action":[
                "glue:GetTable",
                "glue:GetTables",
                "glue:SearchTables",
                "glue:GetDatabase",
                "glue:GetDatabases",
                "glue:GetPartition",
                "glue:GetPartitions"
             ],
             "Resource":[
                "arn:aws:glue:<aws region>:<account id>:table/demo/*",
                "arn:aws:glue:<aws region>:<account id>:database/demo",
                "arn:aws:glue:<aws region>:<account id>:catalog"
             ],
             "Effect":"Allow",
             "Sid":"GlueCatalogPermissions"
          },
          {
             "Action":[
                "lakeformation:GetDataAccess",
                "lakeformation:StartQueryPlanning",
                "lakeformation:GetQueryState",
                "lakeformation:GetWorkUnits",
                "lakeformation:GetWorkUnitResults"
             ],
             "Resource":"*",
             "Effect":"Allow",
             "Sid":"LakeFormationPermissions"
          },
          {
             "Effect":"Allow",
             "Action":[
                "s3:CreateBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:DeleteObject"
             ],
             "Resource":[
                "arn:aws:s3:::blog-studio-output-<account id>",
                "arn:aws:s3:::blog-studio-output-<account id>/*"
             ]
          },
          {
             "Action":[
                "iam:PassRole",
                "iam:GetRole",
                "sts:GetCallerIdentity"
             ],
             "Resource":"*",
             "Effect":"Allow",
             "Sid":"AmazonSageMakerStudioIAMPassRole"
          },
          {
             "Action":"sts:AssumeRole",
             "Resource":"*",
             "Effect":"Deny",
             "Sid":"DenyAssummingOtherIAMRoles"
          }
       ]
    }
  6. Create a new managed policy named SageMakerAdminPolicy:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Action":[
                "lakeformation:GrantPermissions",
                "lakeformation:RevokePermissions",
                "lakeformation:ListPermissions",
                "lakeformation:BatchGrantPermissions",
                "lakeformation:BatchRevokePermissions",
                "lakeformation:CreateDataCellsFilter",
                "lakeformation:DeleteDataCellsFilter",
                "lakeformation:ListDataCellsFilter",
                "glue:GetUserDefinedFunctions",
                "glue:BatchGetCustomEntityTypes"
             ],
             "Resource":"*",
             "Effect":"Allow",
             "Sid":"GlueLakeFormationPermissions"
          }
       ]
    }
  7. Create an IAM role for SageMaker for the data engineer (data-engineer), which is used as the corresponding user profile’s execution role. On the Attach permissions policy page, AmazonSageMakerFullAccess (AWS managed policy) is attached by default. You remove this policy later to maintain least privilege.
    1. For Role name, enter SageMakerStudioExecutionRole_data-engineer.
    2. For Tags, add the key userprofilename and the value data-engineer.
    3. Choose Create role.
    4. To add the remaining policies, on the Roles page, choose the role name you just created.
    5. Under Permissions, remove the policy AmazonSageMakerFullAccess.
    6. On the Attach permissions policy page, select the AWS managed policy AwsGlueSessionUserRestrictedServiceRole, and the customer managed policies SageMakerExecutionPolicy and SageMakerAdminPolicy that you created.
    7. Choose Attach policies.
    8. Modify your role’s trust relationship:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "glue.amazonaws.com",
                   "sagemaker.amazonaws.com"
                ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }
  8. Create an IAM role for SageMaker for the data scientist (data-scientist), which is used as the corresponding user profile’s execution role.
    1. For Role name, name the role SageMakerStudioExecutionRole_data-scientist.
    2. For Tags, add the key userprofilename and the value data-scientist.
    3. Choose Create role.
    4. To add the remaining policies, on the Roles page, choose the role name you just created.
    5. Under Permissions, remove the policy AmazonSageMakerFullAccess.
    6. On the Attach permissions policy page, select the AWS managed policy AwsGlueSessionUserRestrictedServiceRole, and the customer managed policy SageMakerExecutionPolicy that you created.
    7. Choose Attach policies.
    8. Modify your role’s trust relationship:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "glue.amazonaws.com",
                   "sagemaker.amazonaws.com"
                ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }

Configure SageMaker user profiles

To create your SageMaker user profiles with the studiouserid tag, complete the following steps:

  1. Use the AWS CLI or CloudShell to create the Studio user profile for the data engineer (provide your account ID and Studio domain ID in the following code):
    aws sagemaker create-user-profile --domain-id <domain id> --user-profile-name data-engineer --tags Key=studiouserid,Value=data-engineer --user-settings ExecutionRole=arn:aws:iam::<account id>:role/SageMakerStudioExecutionRole_data-engineer
  2. Repeat the step to create a user profile for the data scientist, replacing the account ID and Studio domain ID:
    aws sagemaker create-user-profile --domain-id <domain id> --user-profile-name data-scientist --tags Key=studiouserid,Value=data-scientist --user-settings ExecutionRole=arn:aws:iam::<account id>:role/SageMakerStudioExecutionRole_data-scientist

Create S3 buckets and upload the sample dataset

In this section, you create two S3 buckets. The first bucket has a sample dataset related to web marketing. The second bucket is used by the data scientist to store output from feature engineering tasks, and this output dataset is used to train the ML model.

First, create the S3 bucket for the input data:

  1. Download the dataset.
  2. On the Amazon S3 console, choose Buckets in the navigation pane.
  3. Choose Create bucket.
  4. For Region, choose the Region with the SageMaker domain that includes the user profiles you created.
  5. For Bucket name, enter blog-studio-pii-dataset-<your-aws-account-id>.
  6. Choose Create bucket.
  7. Select the bucket you created and choose Upload.
  8. In the Select files section, choose Add files and upload the dataset you downloaded.
    Now you create the bucket for the output data:
  9. On the Buckets page, choose Create bucket.
  10. For Region, choose the Region with the SageMaker domain that includes the user profiles you created.
  11. For Bucket name, enter blog-studio-output-<your-aws-account-id>.
  12. Choose Create bucket.
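
If you prefer the AWS CLI, you can create both buckets and upload the dataset with commands like the following (a sketch; the local file name is a placeholder for the dataset you downloaded, and the Region should match your SageMaker domain):

    aws s3 mb s3://blog-studio-pii-dataset-<your-aws-account-id> --region <your-region>
    aws s3 mb s3://blog-studio-output-<your-aws-account-id> --region <your-region>
    aws s3 cp <path-to-downloaded-dataset.csv> s3://blog-studio-pii-dataset-<your-aws-account-id>/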

Create an AWS Glue database and table

In this section, you create an AWS Glue database and table for the dataset.

  1. On the Lake Formation console, under Data catalog in the navigation pane, choose Databases.
  2. Choose Add database.
  3. For Name, enter demo.
  4. Choose Create database.
  5. Under Data catalog, choose Tables, and then choose Create table.
  6. For Name, enter web_marketing.
  7. For Database, select demo.
  8. For Include path, enter the path of your S3 bucket for input data.
  9. For Classification, choose CSV.
  10. Under Schema, choose Upload Schema.
  11. Enter the following JSON array into the text box:
    [
       {
          "Name":"lastcampaignactivity",
          "Type":"string"
       },
       {
          "Name":"pageviewspervisit",
          "Type":"double"
       },
       {
          "Name":"totaltimeonwebsite",
          "Type":"bigint"
       },
       {
          "Name":"totalwebvisits",
          "Type":"bigint"
       },
       {
          "Name":"attendedmarketingevent",
          "Type":"string"
       },
       {
          "Name":"organicsearch",
          "Type":"string"
       },
       {
          "Name":"viewedadvertisement",
          "Type":"string"
       },
       {
          "Name":"leadsource",
          "Type":"string"
       },
       {
          "Name":"jobrole",
          "Type":"string"
       },
       {
          "Name":"contactnotes",
          "Type":"string"
       },
       {
          "Name":"leadprofile",
          "Type":"string"
       },
       {
          "Name":"usedpromo",
          "Type":"string"
       },
       {
          "Name":"donotreachout",
          "Type":"boolean"
       },
       {
          "Name":"city",
          "Type":"string"
       },
       {
          "Name":"converted",
          "Type":"bigint"
       },
       {
          "Name":"region",
          "Type":"string"
       },
       {
          "Name":"phone_number",
          "Type":"string"
       }
    ]
  12. Choose Upload.
  13. Choose Submit.
  14. Under Table details, choose Edit table.
  15. Under Table properties, choose Add.
  16. For Key, enter skip.header.line.count, and for Value, enter 1.
  17. Choose Save.
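
Alternatively, you can create the database and table from the AWS CLI. The following sketch assumes table-input.json contains a TableInput document with the columns listed above, the csv classification and skip.header.line.count properties, and a StorageDescriptor that points to your input S3 path:

    aws glue create-database --database-input '{"Name":"demo"}'
    aws glue create-table --database-name demo --table-input file://table-input.json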

Configure Lake Formation permissions

In this section, you register the dataset’s S3 location with Lake Formation and grant Lake Formation permissions on the demo database and web_marketing table to the IAM roles SageMakerStudioExecutionRole_data-engineer and SageMakerStudioExecutionRole_data-scientist.

First, register the data lake location so that Lake Formation can manage tables under that location:

  1. Choose Data lake locations.
  2. Choose Register location.
  3. For Amazon S3 path, enter s3://blog-studio-pii-dataset-<your-aws-account-id>/ (the bucket that contains the dataset).
  4. Choose Register location.
    Now you grant Lake Formation database and table permissions to the IAM roles SageMakerStudioExecutionRole_data-engineer and SageMakerStudioExecutionRole_data-scientist. First, grant database permission for SageMakerStudioExecutionRole_data-engineer:
  5. Under Permissions, choose Data lake permissions.
  6. Under Data permission, choose Grant.
  7. For Principals, choose IAM users and roles, and select the role SageMakerStudioExecutionRole_data-engineer.
  8. For Policy tags or catalog resources, choose Named data catalog resources.
  9. For Databases, choose demo.
  10. For Database permissions, select Super.
  11. Choose Grant.
    Next, grant table permission for SageMakerStudioExecutionRole_data-engineer:
  12. Under Data permission, choose Grant.
  13. For Principals, choose IAM users and roles, and select the role SageMakerStudioExecutionRole_data-engineer.
  14. For Policy tags or catalog resources, choose Named data catalog resources.
  15. For Databases, choose demo.
  16. For Tables, choose web_marketing.
  17. For Table permissions, select Super.
  18. For Grantable permissions, select Super.
  19. Choose Grant.
    Finally, grant database permission for SageMakerStudioExecutionRole_data-scientist:
  20. Under Data permission, choose Grant.
  21. For Principals, choose IAM users and roles, and select the role SageMakerStudioExecutionRole_data-scientist.
  22. For Policy tags or catalog resources, choose Named data catalog resources.
  23. For Databases, choose demo.
  24. For Database permissions, select Describe.
  25. Choose Grant.
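
If you prefer the AWS CLI, the registration and grants above can be scripted roughly as follows (a sketch; Super in the console corresponds to the ALL permission in the API):

    aws lakeformation register-resource \
        --resource-arn arn:aws:s3:::blog-studio-pii-dataset-<your-aws-account-id> \
        --use-service-linked-role
    
    aws lakeformation grant-permissions \
        --principal DataLakePrincipalIdentifier=arn:aws:iam::<account id>:role/SageMakerStudioExecutionRole_data-engineer \
        --resource '{"Database": {"Name": "demo"}}' \
        --permissions ALL
    
    aws lakeformation grant-permissions \
        --principal DataLakePrincipalIdentifier=arn:aws:iam::<account id>:role/SageMakerStudioExecutionRole_data-engineer \
        --resource '{"Table": {"DatabaseName": "demo", "Name": "web_marketing"}}' \
        --permissions ALL --permissions-with-grant-option ALL
    
    aws lakeformation grant-permissions \
        --principal DataLakePrincipalIdentifier=arn:aws:iam::<account id>:role/SageMakerStudioExecutionRole_data-scientist \
        --resource '{"Database": {"Name": "demo"}}' \
        --permissions DESCRIBE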

About the Authors

Praveen Kumar is an Analytics Solution Architect at AWS with expertise in designing, building, and implementing modern data and analytics platforms using cloud-native services. His areas of interests are serverless technology, modern cloud data warehouses, streaming, and ML applications.

Noritaka Sekiyama is a Principal Big Data Architect on the AWS Glue team. He enjoys collaborating with different teams to deliver results like this post. In his spare time, he enjoys playing video games with his family.