AWS Storage Blog

Accelerate SAP workload migrations with AWS Transfer Family

Companies operate applications in their corporate IT landscape that rely on transfer protocols, such as Secure Shell (SSH) File Transfer Protocol (SFTP), File Transfer Protocol Secure (FTPS), and File Transfer Protocol (FTP). Historically, FTP has been used to share files such as invoices, inventory lists, and delivery notes that are created by SAP systems and then transferred to third parties or customers.

With AWS Transfer Family, you can continue running the same secure file transfer process without any changes to the client. Customers can access files in Amazon Elastic File System (EFS) or Amazon Simple Storage Service (S3) via fully managed FTP/FTPS or SFTP and easily migrate data to the cloud, enabling further processing with other AWS services.

In this blog post, we show you how we at T-Systems, an AWS Partner, helped one of our customers accelerate their SAP workload migration with AWS Transfer Family. Our goal was to migrate to the AWS environment in the shortest possible time, without additional development effort. Many of the SAP interfaces in the customer environment originate from legacy applications and systems that use FTP scripts. Our task was to take over the established FTP-based data transfers and, at the same time, provide a secure and highly available solution.

Challenges

Our automotive enterprise customer is currently migrating SAP workloads (including ERP Central Component, Business Warehouse, Supply Chain Management, and more) to AWS. The goal is to migrate to AWS as quickly as possible without additional development effort. Many existing SAP interfaces in the customer environment use FTP scripts. Due to security concerns, the FTP protocol is frequently being replaced by SFTP or FTPS, but many customers still continue to use FTP internally between their offices and sites.

Our task was to migrate the existing workloads that use FTP and to provide a secure and highly available solution for transferring data to AWS. Because FTP is an insecure transfer protocol, we had to secure the workload with AWS native features while creating a solution that would allow a switch to the secure SFTP protocol at a later stage. The customer has many existing FTP client scripts that would require significant effort and time to refactor. Using a custom identity provider with AWS Transfer Family, we can define the existing users and passwords. This enables the customer to migrate to Transfer Family without any changes to their existing FTP client scripts.

Solution architecture

In this setup, our customer uses a site-to-site VPN connection to reach the AWS environment from their on-premises remote site via the FTP protocol. An AWS Transit Gateway routes the traffic to the corresponding VPC. The AWS Transfer Family FTP service is deployed as a VPC hosted endpoint for internal access; because the customer only needs to connect from their on-premises remote site, this protects the unencrypted FTP protocol from public access. For high availability, the AWS Transfer Family FTP service is spread across multiple AWS Availability Zones. AWS Secrets Manager stores the individual FTP user credentials, which are processed by an AWS Lambda function acting as a custom identity provider. The target is an Amazon EFS volume spread across two Availability Zones for high availability, whose contents are later processed by SAP EC2 instances. For the connection from the AWS Transfer Family FTP service to the Amazon EFS volume, an IAM role with appropriate access rights is used. An additional IAM role is used to write the logs of the AWS Transfer Family FTP service to an Amazon CloudWatch log group.
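To make the moving parts concrete, the server configuration described above could be expressed as an AWS CloudFormation fragment along these lines. All resource names, subnet/VPC/security group IDs, and the referenced `CustomIdpFunction` and `LoggingRole` resources are illustrative placeholders, not the template used in the project:

```yaml
# Sketch of an internal FTP server with a Lambda-backed identity
# provider and an EFS storage backend (illustrative values only).
FtpServer:
  Type: AWS::Transfer::Server
  Properties:
    Protocols:
      - FTP
    Domain: EFS                      # store files on Amazon EFS
    EndpointType: VPC                # no public access
    EndpointDetails:
      VpcId: vpc-0123456789abcdef0
      SubnetIds:                     # one subnet per Availability Zone
        - subnet-0aaaa1111bbbb2222c  # eu-central-1a
        - subnet-0cccc3333dddd4444e  # eu-central-1b
      SecurityGroupIds:
        # must allow port 21 and the passive data port range
        # (8192-8200) from the on-premises network
        - sg-0123456789abcdef0
    IdentityProviderType: AWS_LAMBDA
    IdentityProviderDetails:
      Function: !GetAtt CustomIdpFunction.Arn
    LoggingRole: !GetAtt LoggingRole.Arn
```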

AWS Transfer Family FTP solution architecture

Solution Walkthrough

For this walkthrough, we:

  1. Configure the custom identity provider using CloudFormation templates.
  2. Create the AWS Transfer Family FTP server using the AWS Management Console and configure it for later authentication using the custom identity provider.
  3. Create an IAM role with least privilege for Amazon EFS storage access.
  4. Create a highly available Amazon EFS volume distributed across two Availability Zones.
  5. Create the AWS Secrets Manager secret with a sample user and the required parameters.
  6. Test the connection and upload a sample file.

Create the custom identity provider

We started by deploying the custom identity provider. Following the Transfer Family user guide, we used the official CloudFormation stack template to create an AWS Lambda function backed by AWS Secrets Manager as our custom identity provider.

*Hint* When using Amazon EFS storage, the AWS Lambda function must return a POSIX profile. To do so, add the following lines to your Lambda code (see also the troubleshooting guide):

    # Pass the POSIX profile (Uid/Gid) from the secret through to Transfer Family
    if 'PosixProfile' in resp_dict:
        resp_data["PosixProfile"] = json.loads(resp_dict['PosixProfile'])
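For context, the Lambda function maps the key/value pairs of a user's secret onto the response format Transfer Family expects from a custom identity provider. The sketch below is a simplified stand-in for the official template's handler (it omits the Secrets Manager lookup, password verification, and SSH key handling) and only shows where the POSIX profile fits in:

```python
import json

def build_transfer_response(resp_dict):
    """Map a Secrets Manager key/value secret onto the response
    format AWS Transfer Family expects from a custom identity
    provider. Simplified sketch of the official template's logic."""
    resp_data = {}
    if 'Role' in resp_dict:
        resp_data['Role'] = resp_dict['Role']
    if 'HomeDirectoryDetails' in resp_dict:
        # Logical home directory mapping, stored as a JSON string
        resp_data['HomeDirectoryType'] = 'LOGICAL'
        resp_data['HomeDirectoryDetails'] = resp_dict['HomeDirectoryDetails']
    if 'PosixProfile' in resp_dict:
        # Required for EFS targets: Uid/Gid used for file ownership
        resp_data['PosixProfile'] = json.loads(resp_dict['PosixProfile'])
    return resp_data
```

Without the `PosixProfile` entry in the response, Transfer Family cannot determine the file ownership to apply on the EFS file system, which is why the extra lines above are needed.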

Creating AWS Transfer Family Server

Once the Custom Identity Provider is configured, the next step is to set up the AWS Transfer Family FTP server. Here we select FTP as the protocol:

AWS Transfer Family FTP configuration part 1

Next, we select Custom Identity Provider for the identity provider. Then we select the Lambda function created in the previous step, which will later be used to authenticate FTP users against AWS Secrets Manager.

AWS Transfer Family FTP configuration part 2

To ensure access takes place exclusively from the on-premises network, we select the VPC hosted endpoint configuration. With this configuration, the FTP server receives IP addresses in the respective subnets of the VPC, which is reachable via VPN from the on-premises network. In our case, we distributed the endpoints between the Availability Zones eu-central-1a and eu-central-1b. It is important to note that the security group of the FTP server has to be configured to allow access from the on-premises network.

AWS Transfer Family FTP configuration part 3

For our use case, we used Amazon EFS for storage because the SAP application servers on the AWS side must be connected via Network File System (NFS).

AWS Transfer Family FTP configuration part 4

If you choose Create a new role, an AWSTransferLoggingAccess role is created automatically. This role writes the CloudWatch Logs for the AWS Transfer Family server, which we recommend enabling to help troubleshoot in the event of any issues.

AWS Transfer Family FTP configuration part 5

Once you have successfully created a new role, you will see Successfully created.

AWS Transfer Family dashboard server view

After the server has been created, you can see which IP addresses were allocated to it from the respective subnets. You can connect to the FTP server via these IPs. However, it is also possible to connect via the endpoint DNS names or to configure your own custom hostnames via Amazon Route 53.

AWS Transfer Family view on endpoint details - endpoint configuration

Create IAM role for Amazon EFS storage access

For users to connect to Amazon EFS, we need to create IAM roles.

The following is an example policy for the role granting user access to your EFS volume.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UserFileSystemAccess",
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
      ],
      "Resource": "arn:aws:elasticfilesystem:region:account-id:file-system/filesystem-id"
    }
  ]
}

The following is an example policy for the role granting root access to your EFS volume.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RootFileSystemAccess",
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:ClientRootAccess",
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
      ],
      "Resource": "arn:aws:elasticfilesystem:region:account-id:file-system/filesystem-id"
    }
  ]
}

You need to give the role a trust relationship that allows the AWS Transfer Family service to assume it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "transfer.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "account_id"
        },
        "ArnLike": {
          "aws:SourceArn": "arn:aws:transfer:region:account_id:user/*"
        }
      }
    }
  ]
}

Create Amazon EFS volume

To make our Amazon EFS volume highly available, we chose the standard storage class. This distributes the mount points across the assigned VPC subnets in the different Availability Zones.

Amazon EFS file system creation

You then create Amazon EFS access points using the AWS Management Console. Keep in mind you cannot modify an access point once it is created.

Amazon EFS access point creation and posix user configuration

Create AWS Secrets Manager store

To create FTP users, you need to create one AWS Secrets Manager secret per user. Here you can define the FTP user password, the access role for the Amazon EFS storage, the user's home directory details, and the POSIX profile for the Amazon EFS access point connection (user or root).

The following parameters need to be provided as key/value pairs in order to be able to connect to the FTP server:

  • Password
  • Role
  • HomeDirectoryDetails
  • PosixProfile
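As an illustration, the key/value pairs of such a secret might look like the following. The password, role ARN, file system ID, and username are placeholders; note that `HomeDirectoryDetails` and `PosixProfile` are stored as JSON strings:

```json
{
  "Password": "example-password",
  "Role": "arn:aws:iam::111122223333:role/ftp-efs-user-role",
  "HomeDirectoryDetails": "[{\"Entry\": \"/\", \"Target\": \"/fs-0123456789abcdef0/ftpuser\"}]",
  "PosixProfile": "{\"Uid\": 1000, \"Gid\": 1000}"
}
```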

AWS Secrets Manager creation and configuration part1

The secret name should start with SFTP/ followed by the username. This naming convention is predefined in the Lambda function, assuming you are using the official CloudFormation template, and can be adjusted as required.

AWS Secrets Manager creation and configuration part2

Upload/get file showcase

After all configurations are done, we can connect to the FTP server and execute all common FTP commands. In our example, we upload a file to our EFS file system and see that its ownership matches the defined user ID and group ID.

Note: For File Transfer Protocol (FTP) and FTPS, only Passive mode is supported.
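To illustrate that no special client configuration is needed, here is a minimal upload using Python's standard ftplib, which uses passive mode by default, matching the requirement above. The hostname, credentials, and filenames are placeholders:

```python
from ftplib import FTP

def upload_file(host, user, password, local_path, remote_name):
    """Upload a local file to the FTP endpoint and return the
    resulting directory listing. ftplib defaults to passive mode,
    the only mode AWS Transfer Family supports for FTP/FTPS."""
    with FTP(host) as ftp:  # connects on port 21
        ftp.login(user=user, passwd=password)
        with open(local_path, 'rb') as f:
            ftp.storbinary(f'STOR {remote_name}', f)
        return ftp.nlst()   # list the directory to verify the upload

# Example call (placeholder endpoint and credentials):
# upload_file('ftp.example.internal', 'ftpuser', 'example-password',
#             'invoice.csv', 'invoice.csv')
```

Existing FTP scripts and command-line clients keep working unchanged against the Transfer Family endpoint; this sketch simply shows the equivalent of the scripted `STOR` upload.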

When troubleshooting the FTP solution, we recommend reviewing the CloudWatch logs of both the Lambda function and the AWS Transfer Family server, which are very helpful in identifying the cause of issues.

Conclusion

In this blog we covered how, with AWS Transfer Family, we were able to provide a simple and extensible solution to enable FTP for our customer’s SAP workloads running on AWS. Our customer was able to continue the migration using their FTP scripts and users (via a custom identity provider) without any additional development effort. By restricting access to the FTP server to the on-premises network via VPN, we provided a secure solution that is not accessible from the public network. The distribution of the FTP servers and the EFS storage across multiple Availability Zones delivers a highly available FTP service for our customer. Nevertheless, using FTP in a public-facing environment is not recommended, and we suggested our customer switch to a more secure protocol like SFTP.

With a footprint in more than 20 countries, T-Systems is one of the world’s leading vendor-independent providers of digital services headquartered in Europe. Contact us to discover how innovative technologies enable new business models that improve the lives of countless people.

Edmund Cheung

Edmund is based in London and has over 30 years of experience with mission-critical business systems running on diverse platforms, from the centralised mainframe to today's private and public clouds. He worked as a Unix system administrator and SAP Basis specialist for over 16 years in the oil and gas sector, then as a storage SME for 6 years in the telecommunications sector. Since 2015, Edmund has helped many customers migrate their workloads to private and public clouds.

Artur Schneider

Artur Schneider was born in 1989 and lives in the south of Germany near Ulm. In addition to his training as an IT specialist for system integration, he is also a trained bank clerk. He started his IT career as a system engineer in Microsoft environments, specializing in, among other things, virtualization, backup, and monitoring of infrastructures. Since 2016 he has worked on cloud topics, including migrations of complete infrastructures to cloud platforms and automation of new services for cloud migration. He also had a leading role in building a cloud automation team focused on the AWS cloud platform. Since then, he has worked on numerous AWS projects as a Senior Cloud Consultant for enterprise customers.