AWS Storage Blog

Architecting secure and compliant managed file transfers with AWS Transfer Family SFTP connectors and PGP encryption

Users in industries such as financial services and healthcare regularly exchange files containing sensitive and regulated datasets, such as Personally Identifiable Information (PII) and financial records, with their external business partners. These file transfers often happen over the SSH File Transfer Protocol (SFTP), and encrypting files using Pretty Good Privacy (PGP) before transfer is often a key requirement. PGP is a popular encryption system that provides an additional layer of protection for files in transit and ensures that only the intended parties have access to your data. In addition to encryption, users often need to apply additional processing to these files before and after transfers.

AWS Transfer Family provides fully managed SFTP connectors to transfer files between partner-hosted SFTP servers and Amazon Simple Storage Service (Amazon S3). Customers using SFTP connectors to transfer files containing sensitive datasets need to automate their workflows to encrypt files using PGP, along with other pre-processing steps, before the files are sent to remote partners.

In a previous post, we provided a reference architecture to implement PGP decryption of files that are uploaded over Transfer Family server endpoints using Transfer Family Managed Workflows. In this post, we provide a reference architecture and sample solution for implementing PGP-based encryption, along with additional custom processing, for files sent to external partner-hosted SFTP servers using SFTP connectors. Using this reference architecture, customers can automate their file processing and file transfer workflows to share sensitive datasets with external business partners.

Solution overview

This post illustrates an event-driven architecture for pre-processing, encrypting, and sending files to external partners over the SFTP protocol using Transfer Family and additional supporting services, such as Amazon S3, AWS Step Functions, Amazon DynamoDB, AWS Lambda, AWS Secrets Manager, Amazon EventBridge, Amazon SNS, and Amazon SQS.

At the core of this architecture, we use a Step Functions state machine to execute the steps for processing and encrypting the files before sending them to an external SFTP endpoint. The process starts when a file is uploaded to the landing S3 bucket, which has EventBridge notifications enabled, triggering the state machine.

The state machine executes the following steps:

  1. Retrieve partner’s parameters from DynamoDB.
  2. Execute a custom processing step (transform a CSV file into a JSON file).
  3. Encrypt the processed file using the PGP public key stored in Secrets Manager.
  4. Send the encrypted file to the partner by using a Transfer Family SFTP connector.
  5. Verify the status of the file transfer.
  6. Delete the original and processed files if the file transfer is successful.

If any step fails, the state machine publishes a notification to an SNS topic to report the failure.
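
For reference, the EventBridge rule that starts the state machine matches S3 Object Created events on the landing bucket. The rule is created for you by the CloudFormation template; you would expect a pattern of roughly this shape (the bucket name is a placeholder):

{
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {
            "name": ["<landing-bucket-name>"]
        }
    }
}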

The following diagram shows the entire architecture for pre-processing, encrypting, and sending files to external partners over the SFTP protocol using Transfer Family and additional supporting services.

Figure 1: Architecture for pre-processing, encrypting, and sending files to external partners over the SFTP protocol using Transfer Family and additional supporting services

We emulate the partner’s SFTP server by using a Transfer Family server and an S3 bucket created by the AWS CloudFormation template. To learn about setting up a Transfer Family server, refer to the AWS Transfer Family documentation.

Prerequisites

For this post, we provide a CloudFormation template that deploys the required resources. Download the template from this GitHub link by selecting the Download raw file button, as shown in the following screenshot.

Figure 2: How to download the CloudFormation template

In the CloudFormation console, when you create the stack, provide “connectors-pgp-blog” as the Stack name, as we use this name to identify resources throughout the post.

Keep the default options, check the box that says I acknowledge that AWS CloudFormation might create IAM resources, and then choose Submit.

Figure 3: Check box that says I acknowledge that AWS CloudFormation might create IAM resources

It should take approximately 5 to 10 minutes to create the stack.

Once the stack is deployed, keep the CloudFormation Outputs tab open. This tab contains information that you need later.
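
You can also retrieve the same Outputs at any time from the CLI by running the following command in CloudShell:

aws cloudformation describe-stacks --stack-name connectors-pgp-blog --query "Stacks[0].Outputs"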

Walkthrough

For this walkthrough, we go through the following steps:

1. Configure the Transfer Family server.
2. Configure the Transfer Family SFTP connector.
3. Configure the partner’s parameters in DynamoDB.
4. Test the solution end-to-end.

We use AWS CloudShell to execute commands throughout this post, so make sure to leave the browser tab with CloudShell open. Note that CloudShell is not supported in all AWS Regions; see the AWS documentation for details.

Step 1: Configure the Transfer Family server

This is a two-step process.

1.1. Generate an SSH key pair.
1.2. Import the public key for the user named “testuser” into the Transfer Family server created by the CloudFormation template.

Step 1.1. Generate the SSH key pair

Transfer Family accepts RSA-, ECDSA-, and ED25519-formatted keys. We use an RSA 4096-bit key pair.

1.1.1. In CloudShell, generate an SSH key pair by running the following command:

ssh-keygen -t rsa -b 4096 -m PEM -f partner_01

1.1.2. When prompted to enter a passphrase, do not type anything and press Enter twice to leave it empty.

Figure 4: Output of the ssh-keygen command, showing the key information, such as where the public key is saved, the key fingerprint, and the key’s randomart image

Step 1.2. Import the SSH public key in Transfer Family

1.2.1. Output the SSH public key by running the following command:

cat partner_01.pub

1.2.2. Copy the output of the command; you will paste it into the SSH public key section of your Transfer Family test user.

1.2.3. Open the Transfer Family console, then select Servers from the navigation pane.

1.2.4. On the Servers page, select the Server ID for the server that was created by the CloudFormation template.

1.2.5. Select user “testuser” and in the SSH public keys pane, choose Add SSH public key.

Figure 5: Add SSH public key button to add an SSH public key for your Transfer Family user

1.2.6. Paste the text of the public key you copied before into the SSH public key text box, and then choose Add key.

Figure 6: The SSH key in the text box

Step 2: Configure the SFTP connector

When you create a Transfer Family SFTP connector, provide the following configuration parameters:

  • The credentials to authenticate to the remote SFTP server.
  • The URL of the SFTP server you want to connect to (it can be found in the CloudFormation Outputs tab as ServerURL).
  • The ARN of the AWS Identity and Access Management (IAM) role that Transfer Family assumes.
  • The public portion of the host key that identifies the external server.

Additionally, we need to store the PGP public key to encrypt files before sending them using the SFTP connector. In the next steps, we generate the PGP key pair and store the PGP public key in Secrets Manager along with the credentials to authenticate to the remote SFTP server.

2.1. Generate the PGP key pair
2.2. Export the PGP key pair
2.3. Export the SSH private key to authenticate into the remote SFTP server
2.4. Update the secret in Secrets Manager
2.5. Identify the trusted host key of the remote SFTP server
2.6. Create the SFTP connector

Step 2.1. Generate the PGP key pair

In a real-world scenario, your business partner would provide you with their PGP public key, which you would use to encrypt the files before sending them over. For this post, you generate a PGP key pair, consisting of a public key and a private key, in CloudShell. You use the public key to encrypt the files before sending them to the SFTP server.

2.1.1. Install GPG by running the following command in CloudShell:

sudo dnf install --allowerasing gnupg2-full -y

2.1.2. Generate a PGP key pair by executing the following command:

gpg --full-generate-key

You are prompted to enter certain specifications for your key pair:

2.1.3. When prompted with “Please select what kind of key you want”, type “1” and press Enter to choose option 1, RSA.

2.1.4. When prompted with “What keysize do you want?”, press Enter to accept the default key size of 3072 bits.

2.1.5. When asked “Key is valid for?”, press Enter to accept the default of 0, which means the key never expires.

2.1.6. Review the configuration and then enter y to confirm.

Figure 7: The correct example key inputs, selecting ‘1’ for which kind of key to use, ‘0’ for ensuring the key does not expire, and ‘y’ for confirming this information is correct

2.1.7. Now you are prompted to construct a user ID to identify your key pair. You must provide:

      • Real name: we use “testuser” for this example.
      • Email: enter the email you would like associated with this key pair. We use this email later when we encrypt a file. For this example, we use “testuser@example.com”.
      • Comment: leave this blank by pressing Enter.
      • Verify the information you entered and accept by entering O for Okay.
      • Passphrase: make sure you store your passphrase in a secure manner, so you don’t forget it for future use.

Figure 8: The correct testuser input information for the gpg --full-generate-key command
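
If you would rather script this step than answer the interactive prompts, recent GnuPG versions also support non-interactive generation. The following is a minimal sketch using the same parameters as above (the passphrase shown is illustrative; store yours securely):

# Non-interactive equivalent of the prompts above: RSA 3072-bit, never expires
gpg --batch --pinentry-mode loopback --passphrase 'your-passphrase-here' \
    --quick-generate-key "testuser <testuser@example.com>" rsa3072 default never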

Step 2.2. Export the PGP key pair

2.2.1. Export the PGP public key to a file by running the following command:

gpg --output testuser-public-gpg --armor --export testuser@example.com
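
Optionally, you can confirm that the exported file contains the expected user ID by displaying it without importing it:

gpg --show-keys ./testuser-public-gpg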

2.2.2. PGP keys need to be formatted with embedded newline characters (“\n”) in JSON format. Format the key by running the following command:

jq -sR . ./testuser-public-gpg

2.2.3. Copy the output of the command and paste it into your text editor of choice for later use.

Figure 9: The PGP public key in the correct format

Step 2.3. Export the SSH private key to authenticate into the remote SFTP server

Transfer Family SFTP connectors can use an SSH key, a password, or both to authenticate to a remote SFTP server. In our case, we authenticate to the external SFTP server using the private SSH key, which also needs to be formatted with embedded newline characters (“\n”) in JSON format.

2.3.1. Format the private SSH key you generated in Step 1.1 by running the following command:

jq -sR . ./partner_01

2.3.2. Copy the output of the command; you need it in the next step.

Figure 10: The SSH private key in the correct format

Step 2.4. Update the secret in Secrets Manager

2.4.1. Open the Secrets Manager console and, in the left navigation pane, choose Secrets. Then select the secret named “aws/transfer/connector-partner_01”.

Figure 11: Selecting a secret in AWS Secrets Manager

2.4.2. In the Overview tab, choose Retrieve secret value.

Figure 12: Selecting the Retrieve secret value button in AWS Secrets Manager

2.4.3. Select Edit and then Plaintext.

      • Note that you must edit the secret through the Plaintext method, as the Key/value method does not format the keys correctly.

Figure 13: How the example secret looks

2.4.4. Replace the part that says “PASTE-SSH-PRIVATE-KEY-HERE” with the formatted SSH private key you copied in Step 2.3.

2.4.5. Replace the part that says “PASTE-PGP-PUBLIC-KEY-HERE” with the formatted PGP public key that you pasted into your text editor in Step 2.2.

2.4.6. Finally, choose Save to update the secret.

Figure 14: Selecting the Save button to save your updated secret values

2.4.7. Select the Key/value tab, and your secret should contain the following key/value pairs:

      • Username: testuser
      • PrivateKey: <SSH private key>
      • PGPPublicKey: <PGP public key>

Figure 15: How the example secret should look after pasting in your values
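
Alternatively, you can update the secret from the CLI instead of the console. A sketch, assuming the key files generated in the earlier steps are still in your CloudShell home directory (jq takes care of escaping the newlines):

# Assemble the secret JSON from the raw key files
jq -n --arg u "testuser" \
      --arg k "$(cat partner_01)" \
      --arg p "$(cat testuser-public-gpg)" \
      '{Username: $u, PrivateKey: $k, PGPPublicKey: $p}' > secret.json

# Write the new secret value
aws secretsmanager put-secret-value \
    --secret-id "aws/transfer/connector-partner_01" \
    --secret-string file://secret.json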

Step 2.5. Identify the trusted host key of the remote SFTP server

2.5.1. Note the Endpoint value of the Transfer Family server created by the CloudFormation template.

Figure 16: Selecting the AWS Transfer Family endpoint value

2.5.2. Retrieve the host key of the SFTP server created by the CloudFormation template by running the following command:

ssh-keyscan <server-endpoint>
      • The command output will include the SFTP server’s hostname followed by the server’s public host key:

Figure 17: The parts of the output from the ssh-keyscan command to select

2.5.3. Copy the SSH host key and paste it in your text editor. You need the key in the next step.
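
If you prefer to extract only the portions you need (the key type and the base64-encoded key body, without the hostname) in a single step, a sketch:

# Print only the key type and base64 body of the RSA host key
ssh-keyscan <server-endpoint> 2>/dev/null | awk '/ssh-rsa/ {print $2, $3}'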

Step 2.6. Create the SFTP connector

Now that we have the necessary prerequisites, we can create the SFTP connector.
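
The console steps below walk through this process; if you would rather create the connector programmatically, the CLI accepts the same parameters. A sketch with placeholder values for the URL, role ARN, secret ARN, and host key:

aws transfer create-connector \
    --url "sftp://s-xxxxxxxx.server.transfer.<aws_region>.amazonaws.com" \
    --access-role "<SFTPConnectorRole-ARN>" \
    --logging-role "<SFTPConnectorRole-ARN>" \
    --sftp-config '{"UserSecretId": "<secret-ARN>", "TrustedHostKeys": ["<host-key>"]}'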

2.6.1. Open the Transfer Family console and, in the left navigation pane, choose Connectors, then choose Create connector.

Figure 18: Creating an AWS Transfer Family connector

2.6.2. On the Create connector page, choose SFTP for the connector type, and then choose Next.

2.6.3. In the Connector configuration section, provide the following information:

      • For the URL, enter the server URL found in the CloudFormation Outputs tab. It should look like the following: “sftp://s-xxxxxxxx.server.transfer.<aws_region>.amazonaws.com”
      • For both Access role and Logging role, choose the IAM role named “connectors-pgp-blog-SFTPConnectorRole-xxx”.

Figure 19: Configuring the AWS Transfer Family SFTP connector

2.6.4. In the SFTP Configuration section, provide the following information:

      • For Connector credentials, from the dropdown list, choose the secret named “aws/transfer/connector-partner_01”.
      • For Trusted host keys, paste in the public portion of the host key that you retrieved earlier using the ssh-keyscan command. You should have the host key in your text editor.

Figure 20: Remaining configuration for your AWS Transfer Family SFTP connector

2.6.5. Finally, choose Create connector to create the SFTP connector. Note the Connector ID, as you need it for the next step.

2.6.6. If the connector is created successfully, then a screen appears with a list of the assigned static IP addresses and a Test connection button. Use the button to test the configuration for your new connector.

Figure 21: Selecting the Test connection button

      • The output should be as follows, showing Connection succeeded:

Figure 22: How the successful test connection should look, with Connection succeeded
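
The same test is also available from the CLI; the connector ID below is a placeholder:

aws transfer test-connection --connector-id c-xxxxxxxxxxxx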

Step 3: Configure the partner’s parameters in DynamoDB

In this step, you create an item in the DynamoDB table “connectors-pgp-blog-CustomFileProcessingTable-xxx” containing the necessary parameters to define an outbound file transfer:

  • partnerId: The ID of the partner you are sending the file to.
  • lambdaARN: The ARN of the Lambda function that transforms CSV files into JSON files.
  • pgpSecret: The ARN of the secret where we stored the PGP public key.
  • outputBucket: The name of the S3 bucket where we stage the file.
  • connectorId: The ID of the SFTP connector we want to use to send the file through SFTP.

3.1. Open the DynamoDB console.

3.2. In the left navigation pane, choose Tables, then select table “connectors-pgp-blog-CustomFileProcessingTable-xxx”.

3.3. Choose Explore table items, then choose Create item.

3.4. In the Attributes section, for partnerId enter “partner_01”, choose Add new attribute and then select String.

Figure 23: Selecting Add new attribute to add new attributes to the Amazon DynamoDB table

3.5. For Attribute name enter “lambdaARN”, and for Value enter the ARN of the Lambda function named “connectors-pgp-blog-CSVtoJSONLambdaFunction-xxx”. In the CloudFormation Outputs tab, the Key is “CSVtoJSONLambdaFunctionARN”.

3.6. Add a new string attribute with Attribute name “pgpSecret” and Value the ARN of the secret named “aws/transfer/connector-partner_01”. In the CloudFormation Outputs tab, the Key is “PartnerExampleSecret”.

3.7. Add a new string attribute with Attribute name “outputBucket” and Value “connectors-pgp-blog-outboundtransfers3bucket-xxx”. In the CloudFormation Outputs tab, the Key is “OutboundTransferS3Bucket”.

3.8. Finally, add a new string attribute with Attribute name “connectorId” and Value the ID of the SFTP connector you created in Step 2.6, and then choose Create item to store the parameters for “partner_01”.

Figure 24: Selecting the Create item button to update your Amazon DynamoDB table item with the correct attributes
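
Alternatively, you can create the same item from the CLI. A sketch with placeholder values for the table name, ARNs, bucket name, and connector ID:

aws dynamodb put-item \
    --table-name "connectors-pgp-blog-CustomFileProcessingTable-xxx" \
    --item '{
        "partnerId": {"S": "partner_01"},
        "lambdaARN": {"S": "<CSVtoJSONLambdaFunction-ARN>"},
        "pgpSecret": {"S": "<PartnerExampleSecret-ARN>"},
        "outputBucket": {"S": "<OutboundTransferS3Bucket-name>"},
        "connectorId": {"S": "<connector-ID>"}
    }'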

Step 4. Test end-to-end

To test the entire workflow, you create a simple dataset in CSV format and upload it to the S3 landing bucket, specifying the Amazon S3 prefix matching the partnerId in the DynamoDB table. For this example, the partnerId is “partner_01” and thus the Amazon S3 prefix is “partner_01/”. EventBridge notifications are enabled on the landing bucket and trigger the state machine when the new object is created.

4.1. Set environment variables in CloudShell
4.2. Create an example CSV file
4.3. Upload the CSV file to the landing S3 bucket
4.4. Review the state machine execution
4.5. Review the Amazon S3 content at destination

Step 4.1. Set environment variables in CloudShell

4.1.1. To test the workflow, we use the AWS Command Line Interface (AWS CLI). To simplify the execution of the CLI commands, we set environment variables by running the following commands in CloudShell:

export STACK_NAME=connectors-pgp-blog

export LANDING_BUCKET=`aws cloudformation describe-stacks | jq -r --arg STACK_NAME "$STACK_NAME" '.Stacks[] | select(.StackName==$STACK_NAME) | .Outputs[] | select(.OutputKey=="LandingS3Bucket") | .OutputValue'`

export OUTPUT_BUCKET=`aws cloudformation describe-stacks | jq -r --arg STACK_NAME "$STACK_NAME" '.Stacks[] | select(.StackName==$STACK_NAME) | .Outputs[] | select(.OutputKey=="OutboundTransferS3Bucket") | .OutputValue'`

export SFTP_BUCKET=`aws cloudformation describe-stacks | jq -r --arg STACK_NAME "$STACK_NAME" '.Stacks[] | select(.StackName==$STACK_NAME) | .Outputs[] | select(.OutputKey=="SFTPServerS3Bucket") | .OutputValue'`

Step 4.2. Create an example CSV file

4.2.1. Create a sample CSV file by running the following command:

echo -e "City,State,Population\nSalt Lake City,Utah,1000000" > dataset.csv

Step 4.3. Upload the CSV file to the landing S3 bucket

4.3.1. Upload the sample CSV file to the landing S3 bucket by running the following command:

aws s3api put-object --body dataset.csv --bucket $LANDING_BUCKET --key partner_01/dataset.csv

Step 4.4. Review the state machine execution

4.4.1. Open the Step Functions console and select the state machine named “CustomFileProcessingStateMachine-xxx”.

4.4.2. Select the latest execution and take some time to review the steps executed.

The last step executed by the state machine (Delete Originally Uploaded File) invokes the Lambda function “connectors-pgp-blog-DeleteFileLambdaFunction-xxx”, which is responsible for deleting the original file partner_01/dataset.csv in the landing S3 bucket, and files partner_01/dataset.json and partner_01/dataset.json.gpg in the output S3 bucket.
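
If you prefer to check the execution from the CLI, you can list the most recent execution and its status; the state machine ARN below is a placeholder (copy the real one from the Step Functions console):

aws stepfunctions list-executions \
    --state-machine-arn "arn:aws:states:<aws_region>:<account-id>:stateMachine:CustomFileProcessingStateMachine-xxx" \
    --max-results 1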

4.4.3. Confirm the file partner_01/dataset.csv was successfully deleted by running the following command:

aws s3api list-objects-v2 --bucket $LANDING_BUCKET
      • The output should not contain any objects.

4.4.4. Now confirm that the files partner_01/dataset.json and partner_01/dataset.json.gpg were successfully deleted from the output bucket:

aws s3api list-objects-v2 --bucket $OUTPUT_BUCKET
      • The output should not contain any objects.

Step 4.5. Review the Amazon S3 content at destination

4.5.1. Confirm the file “dataset.json.gpg” was received by the SFTP server and stored in the S3 bucket used as backend storage by running the following command:

aws s3api list-objects-v2 --bucket $SFTP_BUCKET
      • The output should show the key as “dataset.json.gpg”:
{
    "Contents": [
        {
            "Key": "dataset.json.gpg",
            "LastModified": "2024-03-01T01:41:15+00:00",
            "ETag": "\"5b860964174bca41703e8885bcb35caa\"",
            "Size": 20508,
            "StorageClass": "STANDARD"
        }
    ],
    "RequestCharged": null
}

4.5.2. Download the encrypted file to CloudShell by running the following command:

aws s3 cp s3://$SFTP_BUCKET/dataset.json.gpg .

4.5.3. Decrypt the file by running the following command:

gpg --output dataset.json --decrypt dataset.json.gpg

4.5.4. When prompted, enter the Passphrase you configured in Step 2.1. The output should show the key type, fingerprint, creation date, and name/email used for the key:

Figure 25: Output from the gpg decrypt command

4.5.5. Finally, verify the content of the decrypted file by running the following command:

cat dataset.json | jq '.'
      • The output should show the sample CSV file we created at step 4.2.1 converted into JSON format:

Figure 26: How your example dataset.json file should look
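
The exact shape of the JSON depends on the CSV-to-JSON Lambda function deployed by the template; for the sample dataset, you would expect output along these lines:

[
    {
        "City": "Salt Lake City",
        "State": "Utah",
        "Population": "1000000"
    }
]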

Cleaning up

You created several components that may incur costs. To avoid future charges, remove the resources with the following steps:

  • Delete the S3 bucket content by running the following commands in CloudShell:
aws s3 rm --recursive s3://$LANDING_BUCKET
aws s3 rm --recursive s3://$OUTPUT_BUCKET
aws s3 rm --recursive s3://$SFTP_BUCKET
  • Delete the SFTP Connector you created in Step 2.6.
  • Delete the CloudFormation stack “connectors-pgp-blog”.

Conclusion

In this post, we walked through creating an event-driven architecture for pre-processing, encrypting, and sending files to external partners over the SFTP protocol. This included configuring an AWS Transfer Family SFTP connector with authentication credentials and a trusted host key, generating and storing a PGP public key in AWS Secrets Manager, defining partner parameters in Amazon DynamoDB, and orchestrating the workflow via an AWS Step Functions state machine. The state machine executed steps to retrieve partner configurations, process the file, encrypt it with PGP, transfer it over SFTP using the configured connector, verify status, and clean up temporary files.

Using this solution, customers can define automated workflows to share sensitive files with their external business partners while maintaining compliance with their data security requirements.

To learn more about Transfer Family, visit our documentation and product page.

Fabio Lattanzi

Fabio is a Sr. Solutions Architect focused on AWS Transfer Family and Amazon S3. He enjoys helping customers build the most durable, scalable, performant, and cost-effective storage solutions for their use cases. He is based in Utah and loves traveling with his wife, cuddling with his two dogs, and playing drums.

Lawton Pittenger

Lawton is an Associate Solutions Architect focused on AWS security services. Professionally, Lawton has worked in IT security roles, leading compliance initiatives and managing infrastructure environments. Today, he enjoys helping customers build the most secure and durable solutions for their use cases.