AWS Big Data Blog

Securely Analyze Data from Another AWS Account with EMRFS

Sometimes, data to be analyzed is spread across buckets owned by different accounts. In order to ensure data security, appropriate credentials management needs to be in place. This is especially true for large enterprises storing data in different Amazon S3 buckets for different departments. For example, a customer service department may need access to data owned by the research department, but the research department needs to provide that access in a secure manner.

This aspect of securing the data can become quite complicated. Amazon EMR uses an integrated mechanism to supply user credentials for access to data stored in S3. When you use an application (Hive, Spark, etc.) on EMR to read or write a file to or from an S3 bucket, the S3 API call needs to be signed by proper credentials to be authenticated.

Usually, these credentials are provided by the EC2 instance profile that you specify during cluster launch. What if the EC2 instance profile credentials are not enough to access an S3 object, because that object requires a different set of credentials?

This post shows how you can use a custom credentials provider to access S3 objects that cannot be accessed by the default credentials provider of EMRFS.

EMRFS and EC2 instance profiles

When you launch an EMR cluster, you must specify an IAM role to serve as the Amazon EC2 instance profile. An instance profile is a container for an IAM role that passes the role's permissions to the EC2 instances as they start up. The IAM role essentially defines the permissions for anyone who assumes the role.

In the case of EMR, the IAM role contained in the instance profile has permissions to access other AWS services such as Amazon S3, Amazon CloudWatch, and Amazon Kinesis. Temporary credentials for this role are obtained through the EC2 instance metadata service and provided to the applications that need to access other AWS services.

For example, when a Hive application on EMR needs to read input data from an S3 bucket (where the S3 bucket path is specified by the s3:// URI), it invokes a default credentials provider function of EMRFS. The provider in turn obtains the temporary credentials from the EC2 instance profile and uses those credentials to sign the S3 GET request.
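For example, you can inspect these temporary credentials yourself from any node of a running cluster by querying the EC2 instance metadata service. The following is a minimal sketch that assumes the default EMR_EC2_DefaultRole instance profile name and IMDSv1-style metadata access:

    $ # List the role name attached to the instance profile
    $ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
    $ # Retrieve the temporary AccessKeyId, SecretAccessKey, Token, and Expiration for that role
    $ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/EMR_EC2_DefaultRole

These are the credentials that the default EMRFS credentials provider uses to sign S3 requests.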

Custom credentials providers

In certain cases, the credentials obtained by the default credentials provider are not enough to sign requests to the S3 bucket you need to read. Perhaps the bucket has a different owner, or a restrictive bucket policy that allows access only to a specific IAM user or role.

In situations like this, you have other options that allow your IAM user to access the data. You could modify the S3 bucket policy to allow access to your IAM user, but this might be a security risk. A better option is to implement a custom credentials provider for EMRFS to ensure that your S3 requests are signed by the correct credentials. A custom credentials provider ensures that only a properly configured EMR cluster has access to the data in S3, which gives you much better control over who can access the data.

Configuring a custom credentials provider for EMRFS

Create a credentials provider by implementing both the AWSCredentialsProvider interface (from the AWS SDK for Java) and the Hadoop Configurable interface for use with EMRFS when it makes calls to Amazon S3.

Each implementation of AWSCredentialsProvider can choose its own strategy for loading credentials depending on the use case. You can load credentials either by calling the AWS STS AssumeRole API action or by reading them from a Java properties file if you want to make API calls with the credentials of a specific IAM user. Then, package your custom credentials provider in a JAR file, upload the JAR file to your EMR cluster, and specify the class name by setting fs.s3.customAWSCredentialsProvider in the emrfs-site configuration classification.
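For example, the emrfs-site classification can be kept in a local JSON file and passed to the AWS CLI at cluster launch. The following sketch uses the (unpackaged) class name from the walkthrough later in this post; if your provider lives in a Java package, specify the fully qualified class name instead:

    $ cat > emrfs-config.json <<'EOF'
    [
      {
        "Classification": "emrfs-site",
        "Properties": {
          "fs.s3.customAWSCredentialsProvider": "MyAWSCredentialsProviderWithUri"
        }
      }
    ]
    EOF
    $ aws emr create-cluster --configurations file://emrfs-config.json ... (other cluster options omitted)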

Walkthrough

Update: It is now possible to use the EMR EC2 instance profile to assume a role in order to access an S3 bucket in a different account. As an alternative in the following walkthrough, you can replace the IAM user “data_analyst” with the EMR EC2 instance profile: use the InstanceProfileCredentialsProvider class of the AWS SDK for Java to obtain temporary credentials from the EC2 instance profile, and then use those credentials to make the STS AssumeRole call that grants access to the S3 bucket in the other account.
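For example, one quick way to sanity-check this alternative is to call AssumeRole from the master node with the AWS CLI, which also signs the request with the instance profile credentials. This sketch assumes that the instance profile role is allowed to call sts:AssumeRole and that it (rather than the IAM user) is listed as the trusted principal in the demo-role trust policy shown later in this post:

    $ aws sts assume-role \
        --role-arn arn:aws:iam::123456789012:role/demo-role \
        --role-session-name demo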

Suppose you would like to analyze data stored in an S3 bucket owned by the research department of your company, which has its own AWS account. You can launch an EMR cluster in your account and use EMRFS to access data stored in the bucket owned by the research department.

For this example, the two accounts of your company are:

  • Research: research@yourcompany.com (Account ID: 123456789012)
  • Your department: aws@yourcompany.com (Account ID: 111222333444)

Also, you have an IAM user called “data_analyst” in aws@yourcompany.com. This user should be granted cross-account access to read data from the bucket called “research-data” in research@yourcompany.com. To enable cross-account access, follow these procedures:

  • Configure IAM inside account research@yourcompany.com
  • Configure IAM inside account aws@yourcompany.com

Configure IAM inside account research@yourcompany.com

  1. Sign in to the IAM console.
  2. Choose Roles, Create New Role
  3. Enter a name for the role, such as “demo-role”
  4. Expand the Role for Cross-Account Access section and select role type Provide access between AWS accounts you own.
  5. Add aws@yourcompany.com as the account from which IAM users can access research@yourcompany.com. This can be done by specifying the AWS account ID for aws@yourcompany.com. The account ID can be obtained from the My Account page in the AWS Management Console.
  6. On the Attach Policy page, choose Next Step
  7. Review the details and choose Create Role
  8. In the left navigation, choose Policies, Create Policy
  9. Choose Create Your Own Policy and name the policy demo-role-policy, where the policy document is:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::research-data",
                    "arn:aws:s3:::research-data/*"
                ]
            }
        ]
    }

  10. In the left navigation pane, choose Roles, open the demo-role role, and choose Attach Policy. From the list of displayed policies, select demo-role-policy (which you created in step 9) and attach it.
  11. To configure the trust relationship for the role, choose Trust Relationships, Edit Trust Relationship.
  12. On the Edit Trust Relationship page, the policy document should specify the IAM user data_analyst as the principal that can assume this role:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam:: 111222333444:data_analyst"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
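If you prefer to script the setup instead of using the console, a rough AWS CLI equivalent of the preceding steps is shown below. It assumes that the two policy documents above are saved locally as trust-policy.json and demo-role-policy.json, and that the commands run with administrator credentials in the research account:

    $ aws iam create-role --role-name demo-role \
        --assume-role-policy-document file://trust-policy.json
    $ aws iam create-policy --policy-name demo-role-policy \
        --policy-document file://demo-role-policy.json
    $ aws iam attach-role-policy --role-name demo-role \
        --policy-arn arn:aws:iam::123456789012:policy/demo-role-policy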

Configure IAM inside account aws@yourcompany.com

Attach an IAM policy to the user “data_analyst” that allows the user to assume the IAM role “demo-role” created in the account research@yourcompany.com.

  1. Sign in to the IAM console
  2. Choose Policies, Create Policy
  3. Choose Create Your Own Policy and name the policy “data-analyst-policy”, where the policy document is:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "sts:AssumeRole",
                "Resource": "arn:aws:iam:: 123456789012:role/demo-role"
            }
        ]
    }
  4. Choose Create Policy
  5. Select the policy just created, choose Policy actions, Attach
  6. For Principal entity, select the user “data_analyst” and choose Attach policy
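Before involving EMR at all, you can verify the cross-account setup by assuming the role as data_analyst and listing the research bucket with the returned temporary credentials. The following sketch assumes that the data_analyst access keys are configured as a local AWS CLI profile named data_analyst:

    $ aws sts assume-role \
        --role-arn arn:aws:iam::123456789012:role/demo-role \
        --role-session-name verify-access \
        --profile data_analyst
    $ # Copy the AccessKeyId, SecretAccessKey, and SessionToken from the response, then:
    $ AWS_ACCESS_KEY_ID=<AccessKeyId> AWS_SECRET_ACCESS_KEY=<SecretAccessKey> \
      AWS_SESSION_TOKEN=<SessionToken> aws s3 ls s3://research-data/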

Implement the custom credentials provider

After the IAM configurations are finished, you implement a custom credentials provider to enable the EMR cluster to access objects stored in the S3 bucket “research-data”.

The following is sample code for the custom credentials provider that reads the IAM user data_analyst credentials from a Java properties file. If the bucket URI points to a bucket in the research@yourcompany.com account, the provider assumes the role to obtain temporary credentials. On the other hand, if the bucket URI points to a bucket in your own account, the credentials are read from the EC2 instance profile for the EMR cluster.

import java.io.IOException;
import java.net.URI;
import java.util.Date;
import java.util.logging.Logger;
import java.lang.String;
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.auth.PropertiesCredentials;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
import com.amazonaws.services.securitytoken.model.AssumeRoleResult;


final class MyAWSCredentialsProviderWithUri implements AWSCredentialsProvider, Configurable {

    private Configuration configuration;
    private AWSCredentials credentials, iamUserCredentials;
    private static final String role_arn =
            "arn:aws:iam::123456789012:role/demo-role";

    //Specifying the S3 bucket URI for the other account
    private static final String bucket_URI = "s3://research-data/";

    private URI uri;
    private static InstanceProfileCredentialsProvider creds;
    private static Credentials stsCredentials;

    public MyAWSCredentialsProviderWithUri(URI uri, Configuration conf) {
        this.configuration = conf;
        this.uri = uri;
    }

    @Override
    public AWSCredentials getCredentials() {
        //Returning the credentials to EMRFS to make S3 API calls
        if (uri.toString().startsWith(bucket_URI)) {
            if (stsCredentials == null ||
                    (stsCredentials.getExpiration().getTime() - System.currentTimeMillis() < 60000)) {
                try {
                    //Reading the credentials of the IAM users from Java properties file
                    iamUserCredentials = new PropertiesCredentials(
                            MyAWSCredentialsProviderWithUri.class.getResourceAsStream("Credentials.properties"));
                } catch (IOException e) {
                    e.printStackTrace();
                }
                AWSSecurityTokenServiceClient stsClient =
                        new AWSSecurityTokenServiceClient(iamUserCredentials);
                //Assuming the role in the other account to obtain temporary credentials
                AssumeRoleRequest assumeRequest = new AssumeRoleRequest()
                .withRoleArn(role_arn)
                .withDurationSeconds(3600)
                .withRoleSessionName("demo");
                AssumeRoleResult assumeResult = stsClient.assumeRole(assumeRequest);
                stsCredentials = assumeResult.getCredentials();
            }
            BasicSessionCredentials temporaryCredentials =
                    new BasicSessionCredentials(
                            stsCredentials.getAccessKeyId(),
                            stsCredentials.getSecretAccessKey(),
                            stsCredentials.getSessionToken());
            credentials = temporaryCredentials;

        } else {
            //Extracting the credentials from EC2 metadata service
            Boolean refreshCredentialsAsync = true;
            if (creds == null) {
                creds = new InstanceProfileCredentialsProvider
                    (refreshCredentialsAsync);
            }
            credentials = creds.getCredentials();
        }
        return credentials;
    }

    @Override
    public void refresh() {}

    @Override
    public void setConf(Configuration conf) {
        this.configuration = conf;
    }

    @Override
    public Configuration getConf() {
        return configuration;
    }
}

As you might have noticed, the preceding code reads the IAM user data_analyst credentials from the file “Credentials.properties”, whose contents are:

accessKey=AKIAxxxxxxxxxxxxxxxx
secretKey=iXbXxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Now, compile and package the code into a JAR file to be used by your EMR cluster for cross-account S3 access. Use the following steps:

  1. SSH into the master node of a running EMR cluster, navigate to the home directory of the Hadoop user, create a folder called MyAWSCredentialsProvider, and add the following files to the folder:
  • MyAWSCredentialsProviderWithUri.java
  • Credentials.properties
  2. In the folder, execute the following commands to compile and package the Java code into a JAR file:
    $ javac -cp $(hadoop classpath) MyAWSCredentialsProviderWithUri.java
    $ jar -cvf MyAWSCredentialsProviderWithUri.jar MyAWSCredentialsProviderWithUri.class Credentials.properties
  3. Upload the generated JAR file to your S3 bucket:
    $ aws s3 cp MyAWSCredentialsProviderWithUri.jar s3://<YOUR-BUCKET>/ 

After you have executed the steps above, create a bootstrap action script (configure_emrfs_lib.sh) whose contents are as follows:

#!/bin/bash
sudo aws s3 cp s3://<YOUR-BUCKET>/MyAWSCredentialsProviderWithUri.jar /usr/share/aws/emr/emrfs/auxlib/

Now, launch a new EMR cluster. Specify the full class name of the credentials provider by setting fs.s3.customAWSCredentialsProvider in the emrfs-site configuration classification. Add a custom bootstrap action (configure_emrfs_lib.sh), which copies the JAR file to the auxiliary library folder of EMRFS on all nodes of the cluster.

Using AWS CLI, run the following command to launch an EMR cluster configured with a custom credentials provider. The configuration also includes a bootstrap action to place your custom JAR file in the relevant directory.

aws emr create-cluster --applications Name=Hive --bootstrap-actions '[{"Path":"s3://<YOUR-BUCKET>/configure_emrfs_lib.sh","Name":"Custom action"}]' --ec2-attributes '{"KeyName":"<YOUR-KEY-PAIR>","InstanceProfile":"EMR_EC2_DefaultRole","SubnetId":"subnet-xxxxxxxx","EmrManagedSlaveSecurityGroup":"sg-xxxxxxxx","EmrManagedMasterSecurityGroup":"sg-xxxxxxxx"}' --service-role EMR_DefaultRole --enable-debugging --release-label emr-5.4.0 --log-uri 's3n://<YOUR-BUCKET>/' --name 'test-awscredentialsprovider-emrfs' --instance-type=m3.xlarge --instance-count 3 --configurations '[{"Classification":"emrfs-site","Properties":{"fs.s3.customAWSCredentialsProvider":"MyAWSCredentialsProviderWithUri"},"Configurations":[]}]'
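Once the cluster is running, a quick way to confirm that EMRFS is picking up the custom credentials provider is to list the cross-account bucket directly from the master node:

    $ hadoop fs -ls s3://research-data/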

Now that you have a cluster running with the capability to access data in the research account, you can use the various data processing applications available on EMR, such as Hive, Pig, and Spark, or frameworks like MapReduce on YARN, to analyze the data.
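For example, with Hive you could define an external table over the research bucket and query it directly. The table definition and S3 prefix below are hypothetical; adjust them to match the actual layout of the data:

    $ hive -e "
      CREATE EXTERNAL TABLE IF NOT EXISTS research_events (event_line STRING)
      LOCATION 's3://research-data/events/';
      SELECT COUNT(*) FROM research_events;"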

Some applications on EMR, such as Presto and Oozie, do not use EMRFS to interact with S3, so you will not be able to use them in this scenario.

Summary

In this post, I explained how EMRFS obtains credentials to sign API calls to S3. I covered how you can implement a custom credentials provider for EMRFS to access objects in an S3 bucket that otherwise could not be accessed using the default credentials provider. I also demonstrated how to configure cross-account S3 API access and use EMRFS to provide custom credentials. This enables your big data applications running on EMR to access data stored in an S3 bucket belonging to a different account.

For more information about using the EMRFS custom credentials provider, see Create an AWSCredentialsProvider for EMRFS.

If you have questions or suggestions, please leave a comment below.

About the Author

Jigar Mistry is a Hadoop Systems Engineer with Amazon Web Services. He works with customers to provide them architectural guidance and technical support for processing large datasets in the cloud using open-source applications. In his spare time, he enjoys camping and exploring different restaurants in the Seattle area.

 

 
