AWS Storage Blog

Enabling user self-service key management with AWS Transfer Family and AWS Lambda

Customers who use AWS Transfer Family typically exchange files with business partners who provide them with SSH public keys. In a large-scale deployment, public key management becomes a time-consuming task: expired keys must be refreshed and keys must be rotated for security. When using a custom identity provider (custom IdP), many customers ask for a way to let end users manage their SSH public keys on their own. This reduces administrative overhead and saves time while maintaining a high standard of security.

In this blog post, I cover how you can use AWS Lambda as a custom IdP, and use this AWS CloudFormation template to deploy a working solution so that your end users can authenticate with either password-based or public key-based authentication. To do this, I use a capability of Transfer Family that triggers a Lambda function directly for end-user authentication, which simplifies the overall architecture. The resulting deployment uses Amazon Cognito for password-based authentication and allows end users to access public keys stored in an encrypted S3 bucket.

Walkthrough

First, storing your public keys in an Amazon Simple Storage Service (Amazon S3) bucket gives your end users the ability to manage their own keys. Note that end users are not allowed to delete any keys, ensuring that access is not accidentally lost. Second, Amazon Cognito provides the authentication and end-user management functionality required for password-based authentication; Amazon Cognito user pools are user directories that provide sign-in options for your users. By combining Amazon S3, Amazon Cognito, and AWS Lambda, you can build a flexible, scalable solution that manages end-user authentication without the administrative overhead, especially as your end users integrate the Transfer Family service into their workflows.


Figure 1: Authentication workflow

Let’s start with the authentication workflow shown in the architecture diagram. For this deployment, I am using Lambda as the custom IdP. The workflow for user authentication and authorization is as follows:

  1. An end user or an application initiates a password-based authentication or public key authentication.
  2. The AWS Transfer Family service passes the credentials to a Lambda function provided during the CloudFormation deployment.
  3. If the password field is not empty, the Lambda function initiates an Amazon Cognito authentication request for password-based authentication.
  4. If the password field is empty and the SFTP protocol is used, the Lambda function returns all of the public keys associated with the user from the public keys S3 bucket.
  5. Once the Lambda function validates the login, additional user configuration is returned to the Transfer Family server. This configuration includes logical directories along with the AWS Identity and Access Management (IAM) role and policy required to access the user's folder in the S3 bucket (a sample response is sketched after this list).
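
When the Lambda function accepts a login, its response tells Transfer Family how to configure the session. A minimal sketch of such a response follows; the role ARN, bucket, and key values are illustrative, PublicKeys is returned only for public key-based authentication, and the deployed function also adds a /publickeys mapping (covered in the Logical directories section):

{
  "Role": "arn:aws:iam::111122223333:role/TransferS3AccessRole",
  "Policy": "<session policy serialized as a JSON string>",
  "HomeDirectoryType": "LOGICAL",
  "HomeDirectoryDetails": "[{\"Entry\": \"/testuser\", \"Target\": \"/customer-data-bucket/testuser\"}]",
  "PublicKeys": ["ssh-rsa AAAA..."]
}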

AWS CloudFormation template parameter details

To get started, use the AWS CloudFormation template available here. This template launches an AWS Transfer Family endpoint, an Amazon Cognito user pool, the associated authentication Lambda functions, an S3 bucket for storing the public keys, and another S3 bucket for end-user data. The following screenshot displays the parameter details for the template:

Figure 2: CloudFormation template parameter details

If you are looking for the CloudFormation template that deploys the solution discussed in this blog post with an API Gateway configuration, it is available here. Detailed information about that type of deployment is available in the Transfer Family documentation at this link.

Details of the resources deployed by the CloudFormation template

Now that the AWS CloudFormation template has been deployed, let's go over the resources it creates and how each one is used in the AWS Transfer Family architecture shown in Figure 1.

AWS Transfer Family endpoint

AWS Transfer Family assumes an IAM role to access Amazon S3 on behalf of your connecting user. When the AWS Transfer Family endpoint is created, the Lambda function is provided as a custom IdP. The IAM role and policy that provide access to the S3 bucket are part of the Lambda function response.

AWS Lambda function

The Lambda function consists of Python 3.9 code that is triggered directly by the Transfer Family server. It takes the input parameters from the Transfer Family server's event request and checks whether the request is password-based or public key-based authentication.

The event request template that the Lambda function receives is as follows:

{
  "username": "testuser",
  "sourceIp": "000.000.000.000",
  "protocol": "SFTP",
  "serverId": "s-1234aaaaaaaaaa567",
  "password": "testuser"
}

The Lambda function implements the following logic:

  • If the password is not empty, authenticate the user against the Amazon Cognito user pool.
    An additional trigger can be added to migrate users from an existing user directory using the migration Lambda found at this link.
  • If the password is empty and the SFTP protocol is used, find and return the user's public keys from the S3 bucket.

The following Python skeleton for the Lambda function implements the logic mentioned above:

if event.get("password", "") != "":
    # Password supplied: authenticate the user against the Amazon Cognito user pool
    ...
elif event["protocol"] == "SFTP":
    # Empty password over SFTP: return the public keys associated with the user, fetched from S3
    ...
else:
    # Empty password over a non-SFTP protocol: return an error, because a password is required
    ...
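
For reference, here is a minimal boto3 sketch of how the two branches could look. This is not the exact code deployed by the CloudFormation template: the environment variable names and helper function names are assumptions, and it assumes the Cognito app client permits the USER_PASSWORD_AUTH flow.

import os
import boto3

# Assumed environment variable names; the template's actual names may differ.
COGNITO_APP_CLIENT_ID = os.environ["COGNITO_APP_CLIENT_ID"]
PUBLICKEYSBUCKETNAME = os.environ["PUBLIC_KEYS_BUCKET"]

cognito = boto3.client("cognito-idp")
s3 = boto3.client("s3")

def authenticate_with_cognito(username, password):
    # Raises an exception (for example NotAuthorizedException) if the credentials are invalid
    cognito.initiate_auth(
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": username, "PASSWORD": password},
        ClientId=COGNITO_APP_CLIENT_ID,
    )

def get_public_keys(username):
    # Collects the contents of every *.pub object in the user's folder of the public keys bucket
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=PUBLICKEYSBUCKETNAME, Prefix=username + "/"):
        for obj in page.get("Contents", []):
            if obj["Key"].endswith(".pub"):
                body = s3.get_object(Bucket=PUBLICKEYSBUCKETNAME, Key=obj["Key"])["Body"]
                keys.append(body.read().decode("utf-8").strip())
    return keys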

AWS IAM roles

The CloudFormation stack creates four IAM roles:

  • TransferS3AccessRole – Grants authenticated users access to their folders in the customer data and public keys S3 buckets
  • UserAuthenticationLambdaExecutionRole – Grants the authentication Lambda function access to Amazon Cognito and the Amazon S3 buckets for authentication purposes
  • TransferCloudWatchLoggingRole – Uses the AWS managed policy AWSTransferLoggingAccess to allow AWS Transfer Family to create and write to CloudWatch Logs streams
  • TransferIdentityProviderRole – Allows AWS Transfer Family to invoke the Lambda function for authentication

Amazon S3 buckets

The customer data bucket stores data for all users. Each user has their own folder, and permissions are configured to allow access only to the folder with the same name as the username.

The public keys bucket stores the public keys for all users. Similar to the customer data bucket, the public keys bucket also has a folder per user named after the username, and users are able to manage their own public keys. The IAM policy disallows users from deleting public keys, to safeguard against accidental deletion of all keys. Additionally, only *.pub file types are allowed in the folder per the IAM policy; this can be changed by editing the IAM policy that is returned by the Lambda function. Additional details are provided in the Managing public keys section.
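
Conceptually, the two buckets are organized per user. For example (bucket, user, and key names are illustrative):

customer-data-bucket/
    testuser/                 user's home folder, exposed through the logical entry /testuser

public-keys-bucket/
    testuser/
        mykey.pub             only *.pub objects are permitted by the IAM policy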

Amazon Cognito

A user pool is a user directory in Amazon Cognito. The stack creates a user pool that allows users to sign in to AWS Transfer Family.
An app client for the user pool is also created with permission to call unauthenticated API operations (operations that do not have an authenticated user).

Steps to test the deployment

In this section, I cover how to test the AWS Transfer Family server that was deployed using the AWS CloudFormation template. The Lambda custom IdP supports both password-based and public key-based authentication, so I discuss how to test each type of authentication separately.

Password-based authentication

I start by creating a test user in the Amazon Cognito user pool with the following AWS CLI command. The UserPoolId parameter required for this command is available in the outputs of the CloudFormation stack.

aws cognito-idp admin-create-user --user-pool-id <UserPoolId> --username <username>

The user created with this command has a UserStatus of ‘FORCE_CHANGE_PASSWORD’.

To set a permanent password for the newly created user and move it to CONFIRMED status, I run the following AWS CLI command:

aws cognito-idp admin-set-user-password --user-pool-id <UserPoolId> --username <username> --permanent --password <password>

Next, I create a folder with the user’s name in the customer data S3 bucket. The bucket name is found in the output of the CloudFormation stack.

aws s3api put-object --bucket <BucketName> --key <username>/

The AWS CLI provides a command to test whether the external authentication for AWS Transfer Family is working as expected. I run the test-identity-provider command to test the user's credentials. This AWS CLI command is as follows:

aws transfer test-identity-provider --server-id <server-id> --user-name <username> --user-password <password>

Public key-based authentication

To perform public key-based authentication with the Transfer Family server:

I start by creating an SSH private and public key pair, as described at this link. The command is shown below and can be run on any Linux server.

ssh-keygen -m PEM

Next, I create a folder named after the user in each of the two S3 buckets with the following AWS CLI commands:

aws s3api put-object --bucket <CustomerDataBucketName> --key <username>/
aws s3api put-object --bucket <PublicKeyBucketName> --key <username>/

Then I upload the public key generated in the first step to the user's folder in the public keys S3 bucket, either through the console or with the following command:

aws s3api put-object --bucket <PublicKeyBucketName> --key <username>/<publickeyname>.pub --body <publickeyname>.pub

The Transfer Family server is an internal endpoint (not publicly accessible). Hence, to test public key-based authentication, you need an EC2 bastion host with the relevant routes and security groups to reach the internal endpoint. Log in with the public key using the following command:

sftp -i privatekey.pem username@s-xxxxxxxxxxxx.server.transfer.region.amazonaws.com

Logical directories

Logical directories provide you with the ability to construct a virtual directory structure for your users to navigate. You can give users a level of abstraction where the S3 bucket names are not disclosed, providing a better user experience. This is achieved by providing a list of Entry and Target pairings.

In the authentication Lambda function, two logical directories are mapped: one entry targets the user's folder in the customer data bucket, and the second, named publickeys, targets the user's folder in the public keys bucket. The Python code at lines 62 and 63 of the Lambda function provides the Entry and Target information as shown below:

directorymapping = [
    {"Entry": "/" + event["username"], "Target": "/" + BUCKETNAME + "/" + event["username"]},
    {"Entry": "/publickeys", "Target": "/" + PUBLICKEYSBUCKETNAME + "/" + event["username"]}]

The Transfer Family server does not allow a root mapping (‘/’) to be combined with a second logical directory mapping under a different name. If you need to map the root “/”, ensure that it is the only mapping. Be mindful that this change takes away the ability for users to manage their own keys. Detailed information about logical directories can be found in this blog.

Managing public keys

One of the biggest benefits of using Amazon S3 to store public keys is giving users the ability to manage their own public keys. The permissions granted by the IAM policy included in the AWS Lambda response allow users to add more public keys to their folder.

This is achieved by adding IAM policy statements in the Lambda function that grant access to get and put objects in the public keys bucket:

{"Sid": "GetAccessForPublicKeys",
      "Effect": "Allow",
      "Action": ["s3:GetObject","s3:GetObjectAcl","s3:GetObjectVersion","s3:GetBucketLocation"],
      "Resource": ["arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"]
          ,"arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"] + "/*"]},
{"Sid": "PublicKeysPubFilePutAccess",
     "Effect": "Allow",
      "Action": ["s3:PutObject","s3:PutObjectAcl"],
      "Resource": ["arn:aws:s3:::" + PUBLICKEYSBUCKETNAME + "/" + event["username"] + "/*.pub"]},

Additionally, the *.pub suffix on the resource ARN in the last statement above ensures that only *.pub files can be written to the public keys folder. Finally, users are not granted delete permissions, because a user could lose access to their folder completely if all keys were accidentally deleted.
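
For example, once connected over SFTP, a user can rotate a key on their own by uploading a new public key into the logical /publickeys directory (the key file name is illustrative):

sftp> cd /publickeys
sftp> put new-key.pub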

Cleaning up

To avoid ongoing charges for the resources you created, start by emptying the S3 buckets that were created, and then delete the CloudFormation stack.

For cost details, refer to the AWS Transfer Family, Amazon Cognito, and Amazon S3 pricing pages.

Summary

In this blog post, I showed you how to deploy a fully managed, highly available AWS Transfer Family solution that gives your end users the ability to manage their own public keys. This reduces administrative overhead for you and gives your end users a way to refresh expired keys and rotate keys to keep security a priority. The custom identity provider discussed in this blog enables users to authenticate with passwords as well as public keys. You can use the Python code in AWS Lambda as a starting point to build more complex variations for authentication and authorization, so that future changes to authentication for your AWS Transfer Family solution can be rolled out easily. It is also common practice to use a separate authentication and authorization solution for FTP users, because FTP communication is unencrypted.

To learn more about AWS Transfer Family and other resources mentioned in this blog, check out the following resources:

Thanks for reading this blog post. If you have any comments or questions, please do not hesitate to leave a comment.