AWS Storage Blog

Implementing least privilege access in an AWS Transfer Family workflow

Architecting secure data transfer workloads is critical for today’s businesses. Customers need to be sure that each end user can only access the minimally appropriate set of files and folders once authenticated to AWS Transfer Family. Multiple AWS Identity and Access Management (IAM) roles are necessary when designing these authentication and access controls, and customers often find it difficult to design and manage least privilege access policies at scale for their Transfer Family implementations. A misconfigured or excessively permissive IAM policy increases the risk of allowing unwanted user access to sensitive data.

Implementing least privilege access at every layer of your workload helps mitigate this risk of overly permissive policies and roles. Implementing least privilege means granting only the access needed to perform a specific action on a specific resource under a specific condition. IAM policies let you specify the service actions, resources, and conditions that must be true for AWS to allow or deny access on each request.
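For example, a single policy statement that grants one action, on one resource, under one condition, looks like the following. The bucket name and prefix here are hypothetical, purely for illustration:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGetFromSinglePrefix",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/incoming/*",
            "Condition": {
                "Bool": { "aws:SecureTransport": "true" }
            }
        }
    ]
}
```

Each element narrows the grant: the action (read objects only), the resource (one prefix in one bucket), and the condition (only over encrypted transport).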

In this post, we explain how Transfer Family uses IAM roles and policies to securely transfer information. The concept of least privilege is applied to each IAM policy. This post illustrates this with a step-by-step example using a sample architecture that we deploy with an AWS CloudFormation template.

Let’s get started!

Solution overview and sample architecture

In this architecture, a Transfer Family user authenticates through Secure File Transfer Protocol (SFTP) and uploads a file. The file is put into an Amazon Simple Storage Service (Amazon S3) bucket called S3BucketNewRecords. Upon successful upload, a Transfer Family managed workflow is initiated. If the workflow fails at any point, then the exception handler is invoked and an AWS Lambda function notifies administrators that an upload has failed by publishing a notification to Amazon Simple Notification Service (Amazon SNS).

Figure 1: The AWS Transfer Family workflow architecture. First, a user authenticates and uploads a file. Next, the file is stored in an Amazon S3 bucket called "New Records". Post-upload processing then begins in a series of workflow steps. The first workflow step copies the file to a bucket called "Archived Records". Next, the file is tagged to confirm that it should be archived. Third, a custom step runs an AWS Lambda function that loads the comma separated value (CSV) file into an Amazon DynamoDB table. Lastly, the original file is deleted. An exception handler step, also a custom step using AWS Lambda, notifies administrators of a failed workflow via Amazon Simple Notification Service.

Workflow steps:

  1. The file is copied to an S3 bucket called S3BucketArchivedRecords.
  2. The archived object is tagged to match the S3 bucket’s archive condition.
  3. The custom workflow step invokes a Lambda function to put records from the CSV into an Amazon DynamoDB table.
  4. The workflow deletes the originally uploaded file and it is removed from the Transfer Family user’s folder.
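The steps above map directly onto a workflow definition in a CloudFormation/SAM template. The following is a hedged sketch of what such a definition can look like; the logical IDs, step names, and timeouts are illustrative, not the exact template deployed later in this post:

```yaml
TransferWorkflow:
  Type: AWS::Transfer::Workflow
  Properties:
    Steps:
      - Type: COPY
        CopyStepDetails:
          Name: copyToArchive
          DestinationFileLocation:
            S3FileLocation:
              Bucket: !Ref S3BucketArchivedRecords
      - Type: TAG
        TagStepDetails:
          Name: tagArchived
          Tags:
            - Key: Archive
              Value: "true"
          SourceFileLocation: "${previous.file}"
      - Type: CUSTOM
        CustomStepDetails:
          Name: transferExtraction
          Target: !GetAtt LambdaExtraction.Arn
          TimeoutSeconds: 60
      - Type: DELETE
        DeleteStepDetails:
          Name: deleteOriginal
          SourceFileLocation: "${original.file}"
    OnExceptionSteps:
      - Type: CUSTOM
        CustomStepDetails:
          Name: exceptionHandler
          Target: !GetAtt LambdaException.Arn
          TimeoutSeconds: 60
```

Note the `${previous.file}` and `${original.file}` variables: the tag step operates on the copied file from the previous step, while the delete step removes the originally uploaded object.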

Prerequisites

To follow along with this post, you must have an AWS account.

Solution walkthrough

For this solution, we demonstrate how to:

  1. Deploy the AWS Serverless Application Model (SAM) template and its associated CloudFormation stack.
  2. Test the workload by uploading and processing a CSV file.
  3. Test how the solution handles exceptions by altering the permissions.

1. Deploy the AWS Serverless Application Model (SAM) template and its associated CloudFormation stack

In this section, we create an AWS Cloud9 environment and use it to deploy a SAM application that creates the following resources:

  • Transfer Family server
  • Transfer Family user
  • Transfer Family managed workflow
  • Amazon SNS topic
  • DynamoDB table
  • Two Amazon S3 buckets
  • Two Lambda functions (for a custom workflow step and exception handling)
  • Five IAM roles (user access control, logging, managed workflows, and two custom workflow steps)

Create a Cloud9 environment

To deploy the SAM template, we use the AWS Cloud9 integrated development environment (IDE).

  1. Navigate to the Cloud9 console. Choose Create environment. Select the following settings:
    • Name:  aws-transfer-family-least-privilege-blog
    • Environment type: New EC2 instance
    • Instance type: t2.micro
    • Platform: Amazon Linux 2
    • Timeout: 30 minutes
    • Connection: AWS Systems Manager
  2. Choose Create. This brings you back to the Cloud9 environments page.
  3. Choose Open from the Cloud9 IDE column. It may take a few moments for environment creation to complete.

Create a Secure Shell key

Run the following command in the bash terminal to create a Secure Shell (SSH) key. When prompted, accept the default location, and choose a passphrase.

ssh-keygen -t rsa -b 4096 

Run the following command to set read and write permissions for your user on the private key:

chmod 600 ~/.ssh/id_rsa

Run the following command to print your public key to the terminal. Copy the public key and paste it into a notepad application for later.

cat ~/.ssh/id_rsa.pub

Upload and launch the SAM template

To deploy the SAM template, we use the Cloud9 IDE.

1. Run the following command to download the provided SAM template and unzip the file:

wget https://awsstorageblogresources.s3.us-west-2.amazonaws.com/blog802/TF+%26+IAM+Blog.zip && unzip ./'TF+&+IAM+Blog.zip' && mv ./TF\ \&\ IAM\ Blog/environment/transfer-workflow/ ./ && rm TF+\&+IAM+Blog.zip && rm -rf 'TF & IAM Blog' && cd ./transfer-workflow/

2. Confirm the SAM CLI is loaded by running the following command in the terminal:

sam --version

3. Run the following command to deploy the SAM template:

sam deploy --guided --capabilities CAPABILITY_NAMED_IAM

4. When prompted, choose the following values:

  • Stack Name: transfer-workflow
  • AWS Region: Enter the AWS Region into which you would like to deploy the template (us-east-1)
  • MyEmailAddress: Enter your email address to receive SNS notifications
  • MySshKey: Your SSH public key
  • MyTranserFamilyUserName: Leave as the default “myTransferFamilyUser” by pressing “Enter”
  • Confirm changes before deploy: Enter “Y”
  • Allow SAM CLI role creation: Enter “Y”
  • Disable rollback: Enter “N”
  • Save arguments to configuration file: Enter “Y”
  • SAM configuration file: Leave as the default “toml” by pressing “Enter”
  • SAM configuration environment: Leave as default by pressing “Enter”
  • Deploy this change set: Enter “y”

For reference, when tested in the AWS Region us-east-1, the SAM template took approximately eight minutes to deploy.

  5. You receive an email from AWS Notifications when the stack creates the SNS topic and subscription. Choose Confirm subscription in the email.

Least privilege IAM roles and policies

In this section, we explore the intersection between IAM and Transfer Family through user access control, logging, managed workflows, and custom workflow steps. Least privilege is implemented in each IAM role and policy.

The following architecture shows IAM policies for the Transfer Family managed file transfer solution, with least privilege implemented. First, the service managed users have their own IAM role and policy with access to Amazon S3. Next, the AWS Transfer Family server has an IAM role and policy with access to Amazon CloudWatch logs. Third, the Transfer Family managed workflow has an IAM role with policies to read and write from Amazon S3, and invoke the two custom step AWS Lambda functions. Lastly, the AWS Lambda functions have access to the resources they need, like Amazon DynamoDB and SNS, and the ability to report its step’s completion to the Transfer Family API.

Figure 2: IAM policies for the Transfer Family managed file transfer solution, with least privilege implemented.

User access control

Transfer Family uses IAM for Transfer Family user permissions. In our example, the user only needs permissions to upload files to the Transfer Family server, which puts S3 objects into the S3 bucket S3BucketNewRecords. To implement the principle of least privilege, the user is allowed to read, write, and delete objects in the S3 bucket S3BucketNewRecords, but nothing else. To view the policy, navigate to the IAM roles console, and choose the role whose name begins with transfer-workflow-TransferFamilyIAMRoleUser. Then, you can expand the policy named TransferFamilyIAMRoleUserIAMPolicy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::${NewRecordsBucketName}",
            "Effect": "Allow",
            "Sid": "AllowListingOfUserFolder"
        },
        {
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::${NewRecordsBucketName}/*",
            "Effect": "Allow",
            "Sid": "HomeDirObjectAccess"
        }
    ]
}

The user can read, write, and delete objects in the New Records S3 bucket. Although listing folders is allowed, accessing the archive bucket, or any other bucket, is not.
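If you later host many users in a shared bucket, the same pattern can be tightened per user with Transfer Family's session policy variables. The following is a hedged sketch, not part of the deployed template, showing how `${transfer:HomeBucket}`, `${transfer:HomeFolder}`, and `${transfer:HomeDirectory}` scope each user down to their own folder at session time:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOfUserFolder",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::${transfer:HomeBucket}",
            "Condition": {
                "StringLike": {
                    "s3:prefix": ["${transfer:HomeFolder}/*", "${transfer:HomeFolder}"]
                }
            }
        },
        {
            "Sid": "HomeDirObjectAccess",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
        }
    ]
}
```

Because the variables resolve per user at authentication time, one session policy can enforce least privilege for an entire fleet of service managed users.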

Logging

Transfer Family also uses IAM for Transfer Family server logging. We use the AWS managed policy named AWSTransferLoggingAccess, which is the managed policy recommended for Transfer Family server logging. To view it, navigate to the IAM roles console, and choose the role named transfer-workflow-TransferFamilyIAMRoleLogging.

This policy grants Transfer Family the permissions to create CloudWatch log streams and log groups, list the log streams for the log group, and upload a batch of log events to a log stream. These permissions provide enough access for the necessary visibility into the server while still maintaining least privilege by allowing no more access than is needed.
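For reference, at the time of writing the AWSTransferLoggingAccess managed policy grants a statement similar to the following; consult the IAM console for the current definition:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
```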

Note that in June 2023, Transfer Family announced a structured JSON log format across all resources, such as servers, connectors, and workflows, and all protocols. With this new log format, selecting a logging role is no longer mandatory unless you are using workflows.

Managed workflow

The next way Transfer Family uses IAM is to run Transfer Family managed workflows. Transfer Family managed workflows orchestrate file-processing tasks and help you preprocess data. The orchestrator needs access to AWS services for each workflow step type. To implement least privilege authorization, permissions must be granted per workflow step. Navigate to the IAM roles console, and choose the role named TransferFamilyWorkflowExecution. There are policies for each step type.

The workflow has permissions to list the S3 buckets New Records and Archived Records. Other buckets may exist, but because the managed workflow does not need access to them to complete its task, access is restricted.

Within the New Records bucket, the workflow only has permission to read, tag, and delete objects. In the Archived Records bucket, the managed workflow is allowed to write and tag objects.

The workflow also has permissions for two Lambda functions, LambdaExtraction and LambdaException. The only permission granted to the managed workflow is the ability to invoke these Lambda functions.
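Taken together, the workflow execution role's policies can be summarized in a consolidated sketch like the following. This is a hedged illustration: bucket variables and function names are placeholders, and the deployed template splits these grants into per-step policies:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadTagAndDeleteNewRecords",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:PutObjectTagging",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::${NewRecordsBucketName}/*"
        },
        {
            "Sid": "WriteAndTagArchivedRecords",
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:PutObjectTagging"],
            "Resource": "arn:aws:s3:::${ArchivedRecordsBucketName}/*"
        },
        {
            "Sid": "InvokeCustomStepFunctions",
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": [
                "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:LambdaExtraction",
                "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:LambdaException"
            ]
        }
    ]
}
```

Each statement corresponds to a workflow step type: read/tag/delete for the source bucket, write/tag for the archive bucket, and invoke-only access to the two custom step functions.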

Custom workflow steps

Lastly, Transfer Family managed workflow custom steps use Lambda functions that need an IAM role to write the step’s status back to the Transfer Family service. As previously mentioned, our architecture features two Lambda functions: one to load CSVs to DynamoDB and one for the exception handler in the case of a failed workflow.

The IAM policies associated with the two AWS Lambda functions scope down the authorization to only what is needed to perform the actions and report status to the workflow. To view these permissions in the console, navigate to the IAM roles console, and choose the roles named LambdaException and LambdaExtraction.

Both Lambda functions need the TransferFamilyWorkflowState policy to send a success response back to the workflow orchestrator.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "transfer:SendWorkflowStepState",
            "Resource": "arn:aws:transfer:${AWS::Region}:${AWS::AccountId}:workflow/*",
            "Effect": "Allow"
        }
    ]
}

The roles’ other policies grant only the access each Lambda function needs to other AWS resources. The LambdaException function is assigned an IAM role that includes an AWS managed policy called AWSLambdaBasicExecutionRole. This grants Lambda the ability to create CloudWatch log groups, create log streams, and add log events. The LambdaException role also includes a policy to publish to the SNS topic. The second Lambda function’s role, LambdaExtraction, includes the same IAM policies as LambdaException, but adds a policy for writing to the DynamoDB table.
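To make the custom step concrete, the following is a hedged sketch of what a LambdaExtraction-style function could look like. This is not the blog's actual code: the event field names follow the general shape of the Transfer Family custom-step event, and the table name is assumed from the walkthrough. The `transfer:SendWorkflowStepState` permission shown above is what authorizes the final API call:

```python
import csv
import io


def parse_records(csv_text):
    """Parse CSV text into a list of row dictionaries (header row becomes the keys)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [dict(row) for row in reader]


def lambda_handler(event, context):
    # Hypothetical handler: field names and the table name are assumptions.
    import boto3  # imported lazily so parse_records stays testable offline

    s3 = boto3.client("s3")
    transfer = boto3.client("transfer")
    dynamodb = boto3.resource("dynamodb")

    # The managed workflow passes the file's location in the invocation event.
    file_location = event["fileLocation"]
    obj = s3.get_object(Bucket=file_location["bucket"], Key=file_location["key"])
    rows = parse_records(obj["Body"].read().decode("utf-8"))

    # Write each CSV row as an item; requires the dynamodb:PutItem grant.
    table = dynamodb.Table("RecordsFromTransferredFiles")
    for row in rows:
        table.put_item(Item=row)

    # Report the custom step's result back to the workflow orchestrator;
    # requires the transfer:SendWorkflowStepState grant shown above.
    execution = event["serviceMetadata"]["executionDetails"]
    transfer.send_workflow_step_state(
        WorkflowId=execution["workflowId"],
        ExecutionId=execution["executionId"],
        Token=event["token"],
        Status="SUCCESS",
    )
```

If the function fails to call `send_workflow_step_state` before its timeout, the step is treated as errored, which is exactly the behavior exercised in the exception test later in this post.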

2. Test the workload by uploading and processing a CSV file

Next, let’s test our workflow by uploading the provided CSV.

  1. From the transfer-workflow directory in your Cloud9 instance, copy the value for the TransferFamilyServer from the Outputs section of your SAM deployment. The value is in this format: s-${Server-Id}.server.transfer.${AWS::Region}.amazonaws.com. Log in to the server by running the following command. When prompted with whether you want to connect, confirm yes.
sftp myTransferFamilyUser@<insert your Transfer Family endpoint here>

2. Run the following command to upload a file to your Transfer Family server. Upon upload, the file is processed by the Transfer Family workflow.

put demo.csv

  3. To confirm the Transfer Family workflow completed successfully, navigate to CloudWatch logs. Choose /aws/transfer/s-{Server-Id}, then Search all log streams, and find the sequential steps of the Transfer Family workflow execution.
  4. To confirm your file was tagged in the archived records bucket, navigate to S3. Choose the bucket whose name begins with s3bucketarchivedrecords, then the myTransferFamilyUser folder, then the CSV file. Under Tags, there is a tag with the Key of Archive and Value of true.
  5. Navigate to the DynamoDB tables console. To confirm your records have been extracted, select RecordsFromTransferredFiles, then Explore table items. Three items are returned.

3. Test how the solution handles exceptions by altering the permissions

In this section, we invoke the exception handler. To force our workflow to fail, we remove a required IAM policy that allows the Lambda function to write to DynamoDB. This is strictly for the purpose of demonstrating the exception handler.

  1. Navigate to the IAM role LambdaExtraction. Under Permissions, select WriteToDynamoDbTable, choose Remove, and confirm.
  2. Run the following command in your Cloud9 console. This workflow execution fails and invokes the exception handler.
 put demo.csv
  3. Navigate back to CloudWatch logs. There is a log entry for the stepName transferExtraction with a type of StepErrored. It may take up to five minutes for the logs to appear in CloudWatch.
{
    "type": "StepErrored",
    "details": {
        "errorType": "TIMEOUT",
        "stepType": "CUSTOM",
        "stepName": "transferExtraction"
    },
    "workflowId": "w-WorkflowId",
    "executionId": "ExecutionId",
    "transferDetails": {
        "serverId": "s-ServerId",
        "username": "myTransferFamilyUser",
        "sessionId": "SessionId"
    }
}

The workflow execution failed because the LambdaExtraction function no longer has permission to write to the DynamoDB table. The failure invokes the exception handler, and you receive an email notification alerting you of the failed upload.

Cleaning up

We created several components that may incur costs. To avoid future charges, remove the resources with the following steps:

  1. Empty the contents of the S3 buckets that received files during the walkthrough, and then delete the buckets. Use caution in this step: unless you are using versioning on your S3 bucket, deleting S3 objects cannot be undone.
  2. To delete the resources created by the SAM template, run the following command and follow the prompts:
sam delete --stack-name <stack-name>

This deletes the artifacts that were packaged.

  3. Navigate to AWS CloudFormation stacks, select the stack aws-sam-cli-managed-default, choose Delete, and confirm.
  4. Clean up the AWS Cloud9 environment.

Conclusion

This post demonstrates a sample architecture for managed file transfer workloads on Transfer Family using the principle of least privilege authorization. By designing with this principle in mind, you can curb inadvertent access and unnecessary risks. Least privilege was applied for user access control, logging, managed workflows, and custom workflow steps. SAM automates the implementation so you have a repeatable pattern to authorize each workflow step type. This can serve as a reference to accelerate your design of file transfer workloads with high security standards.

To learn more about writing least privilege IAM policies, visit Techniques for writing least privilege IAM policies. To learn more about Transfer Family, visit our documentation and product page.

We hope you have enjoyed our post. Happy building!

Blayze Stefaniak


Blayze Stefaniak is a Senior Solutions Architect in the Federal Civilian space. He has experience working across industries such as healthcare, financial, and public sector. He is passionate about breaking down complex situations into something practical and actionable. In his spare time, you can find Blayze singing about dinosaurs with his daughter.

Stacy Conant


Stacy is a Solutions Architect working with DoD and US Navy customers. She enjoys helping customers understand how to harness big data and working on data analytics solutions. On the weekends, you can find Stacy crocheting or knitting, reading Harry Potter (again), playing with her dogs and cooking with her husband.

Emma Ng


Emma is a Solution Architect focused on helping customers in the Federal Civilian space. She is passionate about storage, security, and helping customers become well-architected. When not working, Emma loves weightlifting, running, and spending time with her family and dogs.