AWS Storage Blog

Integrate an SAP ASE database to Amazon S3 using AWS Storage Gateway

Customers running workloads on an SAP Adaptive Server Enterprise (ASE) database (previously known as Sybase ASE) regularly take database backups to meet their disaster recovery (DR) and data protection compliance requirements. Critical workloads often require a longer backup retention period due to regulatory requirements. Storing these database backup files on-premises demands more capacity, increases cost, and offers limited scalability and durability.

You can avoid these challenges by storing your on-premises SAP ASE database backup files directly on AWS. You can integrate your SAP ASE database with Amazon Simple Storage Service (Amazon S3) using AWS Storage Gateway, which provides cost-effective, scalable, and secure storage. Storage Gateway is a hybrid cloud storage service that gives on-premises environments cost-effective access to virtually unlimited cloud storage on Amazon S3.

If you’re planning to migrate your SAP ASE database to the AWS Cloud or are in the middle of your cloud journey, you can use Amazon S3 for your SAP ASE database migration. Additionally, storing SAP ASE database backup files on Amazon S3 helps you easily migrate your SAP ASE database to Amazon Elastic Compute Cloud (Amazon EC2). You can also load your feed files stored on Amazon S3 to your on-premises SAP ASE database.

In this post, we explain how to integrate your SAP ASE database with Amazon S3 using Storage Gateway for the following use cases:

  • Backing up your SAP ASE database directly to Amazon S3
  • Restoring your SAP ASE on-premises or Amazon EC2-hosted database from Amazon S3
  • Loading delimited flat files from Amazon S3 to your on-premises or Amazon EC2-hosted SAP ASE database using the native bcp utility

Solution overview

The following diagram illustrates the solution architecture. The source database is the on-premises SAP ASE database, and the target database is the SAP ASE database on Amazon EC2. We use Storage Gateway hosted on Amazon EC2 to integrate the SAP ASE database to Amazon S3. To connect the on-premises data center to AWS, you can use AWS Direct Connect, the public internet, or AWS Site-to-Site VPN.


For this post, we use an SAP ASE database running on Amazon EC2, but the same setup works for an on-premises SAP ASE database.

Prerequisites

To demonstrate this solution, the following prerequisites are required:

  • An AWS account
  • An Amazon S3 bucket to store SAP ASE backup or application feed files
  • An SAP ASE database and backup server instance with a user database
  • An SAP ASE database login with a sysadmin role to dump and load backup files

Walkthrough

To integrate your SAP ASE database with Amazon S3 using Storage Gateway, complete the following high-level steps:

  1. Create an Amazon S3 File Gateway using AWS Storage Gateway.
  2. Create an NFS file share and connect it to Amazon S3.
  3. Mount the NFS file share to an Amazon EC2-hosted SAP ASE database.

1. Create an Amazon S3 File Gateway using AWS Storage Gateway

We use Amazon S3 File Gateway to store and retrieve objects in Amazon S3 with industry-standard file protocols such as NFS and Server Message Block (SMB). For instructions on how to create an Amazon S3 File Gateway, refer to the Amazon S3 File Gateway User Guide.

Storage Gateway maintains a local cache of up to 64 TB, which provides low-latency access to recently accessed data, and transfers data to your Amazon S3 bucket asynchronously in the background.

To create a gateway on-premises, you can deploy a gateway virtual machine (VM) in your local VMware, Hyper-V, or Linux KVM virtual environment, or you can purchase a dedicated physical hardware appliance. If you host a gateway on an Amazon EC2 instance, you launch an Amazon Machine Image (AMI) that contains the gateway VM image and then activate the gateway.

For our example, Amazon S3 File Gateway is hosted on Amazon EC2.

2. Create an NFS file share and connect it to Amazon S3

Storage Gateway uses the Amazon S3 bucket that you created as a prerequisite to store and retrieve files.

An NFS file share connects to your Amazon S3 bucket directly using the bucket name or an Amazon S3 access point name.

For instructions on creating an NFS file share connected to your Amazon S3 bucket, refer to the documentation on creating an NFS file share.

You should restrict access to the NFS file share to allowed clients in the file share access settings, and use Amazon S3 bucket policies to restrict access to the S3 bucket. You can also set Amazon S3 Lifecycle policies to move rarely accessed files to a lower-cost storage class.
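
As an illustrative sketch, the following lifecycle rule transitions objects to S3 Standard-IA after 30 days; the bucket name is from this post, but the rule name and the 30-day threshold are assumptions you should adapt. Save the rule as lifecycle.json, then apply it with the AWS CLI:

{
  "Rules": [
    {
      "ID": "MoveBackupsToStandardIA",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ]
    }
  ]
}

aws s3api put-bucket-lifecycle-configuration \
    --bucket sap-ase-share \
    --lifecycle-configuration file://lifecycle.json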

In this case, we allow access to the NFS file share from our Amazon EC2-hosted SAP ASE database server, because that server is the NFS client connecting to the S3 File Gateway.

If you need files uploaded directly to Amazon S3 to be available on your NFS file share, set the file share cache refresh interval so the gateway updates periodically. Without this setting, you need to refresh the cache manually using the AWS Management Console or the AWS CLI.
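
For example, a manual cache refresh from the AWS CLI looks like the following (the file share ARN is a placeholder; substitute your own):

aws storagegateway refresh-cache \
    --file-share-arn arn:aws:storagegateway:us-east-1:111122223333:share/share-EXAMPLE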

The following screenshot shows our example NFS file share in the Amazon S3 bucket sap-ase-share.


3. Mount the NFS file share to the Amazon EC2-hosted SAP ASE database

For this post, our SAP ASE database is hosted on Amazon EC2. Create a mount point on a database host and mount the NFS file share. For instructions, refer to the documentation on mounting your NFS file share on your client.
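
On a Linux database host, the mount command looks like the following sketch, using the NFS options recommended in the Storage Gateway documentation (the gateway IP address 10.0.0.5 is a placeholder):

sudo mount -t nfs -o nolock,hard 10.0.0.5:/sap-ase-share /root/test_ase_feed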

In the following screenshot, we mount the NFS file share sap-ase-share on the database host at the mount point /root/test_ase_feed.


Now we have an NFS file share mounted on the Amazon EC2 instance where the SAP ASE database is hosted. This integrates SAP ASE on Amazon EC2 with Amazon S3.

Use cases

In this section, we show how to use this NFS file share in various use cases related to an SAP ASE database.

Use case 1: Back up the SAP ASE database directly to Amazon S3

You can use the NFS file share that is mounted on the SAP ASE database server to take backups using the dump database command.

Log in to the SAP ASE database server using isql or any supported SQL editor tool. The following command uses isql:

isql -U<login_name> -S<server_name> -P<password> -w 1024 -s "|"

After you successfully log in, create an SAP ASE database dump configuration with the backup file directory (@stripe_dir) as the NFS share path. Back up the SAP ASE database using the dump configuration.

The following command takes a backup of a user database (pubs2) to Amazon S3. You can use the compression and verify parameters as required; both are disabled by default.

--add database dump configuration
sp_config_dump @config_name='db_bkp',
               @stripe_dir='/root/test_ase_feed',
               @compression='101', 
               @verify='header'
go
--dump database pubs2 using configuration db_bkp 
dump database pubs2 using config = db_bkp
go
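
To confirm the stored settings, you can display the dump configuration by name; as we understand sp_config_dump, calling it with only @config_name lists the stored values:

--display the dump configuration created above
sp_config_dump @config_name='db_bkp'
go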

The following screenshot shows that the SAP ASE database backup is created and stored at the NFS file share /root/test_ase_feed/.


The following screenshot shows that the backup file created on the NFS file share (/root/test_ase_feed/) is available in the Amazon S3 bucket (sap-ase-share).


The backup files take time to transfer from the local cache of the Amazon S3 File Gateway to the Amazon S3 bucket, depending on the size of the backup file and the network bandwidth between the gateway and AWS. Consider keeping the local cache size greater than your largest full backup size to provide a sufficient buffer. The File Gateway transfers backup files from the database server to the Amazon S3 bucket asynchronously in the background, and data transfer is further optimized with multipart parallel uploads.
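
One way to confirm that the gateway has finished uploading is to watch the CachePercentDirty metric in Amazon CloudWatch, which reports the percentage of cached data not yet persisted to Amazon S3 (0 means everything has been uploaded). The following AWS CLI sketch uses placeholder gateway dimensions and time window:

aws cloudwatch get-metric-statistics \
    --namespace AWS/StorageGateway \
    --metric-name CachePercentDirty \
    --dimensions Name=GatewayId,Value=sgw-EXAMPLE Name=GatewayName,Value=my-file-gateway \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-01T01:00:00Z \
    --period 300 \
    --statistics Average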

The following best practices can boost Storage Gateway performance:

  • Consider using multiple cache disks rather than one large disk. For example, if you need 4 TB of cache, use four 1-TB disks instead of a single 4-TB disk.
  • Right-size the gateway VM for optimal performance. The default configuration is 4 vCPUs and 16 GB of RAM. If you back up more than a couple of terabytes per day, consider doubling the VM resources to 8 vCPUs and 32 GB of RAM.

Use case 2: Restore your SAP ASE on-premises or Amazon EC2-hosted database from Amazon S3

You can now restore the SAP ASE database from the backup file stored on Amazon S3. You can restore the database on the same database instance or on a separate database instance.

In this example, we create the empty database pubs2_restored to load the dump file from the backup file on Amazon S3.

The following SQL command creates the database on SAP ASE:

create database <database_name>
on <database_device> = <size>
go

To restore the backup file, use the load database command:

load database <database_name> 
from "<backup file>"
go
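
Putting these together for our example, the following sketch creates pubs2_restored and loads it from a dump file on the NFS share. The device name datadev1 and the dump file name are placeholders; your dump configuration determines the actual stripe file name.

--create a 100-MB target database on a placeholder device
create database pubs2_restored
on datadev1 = 100
go
--load it from the backup file on the NFS share backed by Amazon S3
load database pubs2_restored
from "/root/test_ase_feed/pubs2.dmp"
go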

The following screenshot shows that the database pubs2_restored is restored from the backup file stored in Amazon S3.


After you restore the database, bring it online and check whether it contains the user tables from the backup file.
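
For example, assuming the restored database from this post:

online database pubs2_restored
go
use pubs2_restored
go
--list user tables restored from the backup
select name from sysobjects where type = 'U'
go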

The following screenshot shows that the database is restored with data from the backup file on Amazon S3.


Use case 3: Load delimited flat files from Amazon S3 to your SAP ASE database

Applications that run in a hybrid cloud environment sometimes need to load feed files from Amazon S3 into an on-premises SAP ASE database to support application functionality.

You can use the SAP ASE bcp utility to load delimited flat files stored on Amazon S3 to your SAP ASE database.

The SAP ASE bcp utility is a native command that copies a database table to or from files in a user-specified format.

We use SAP ASE on Amazon EC2 to demonstrate this use case.

To load the flat file, complete the following steps:

  1. Log in to the SAP ASE database and create the table to load flat files using the following SQL command:
create table MemberTb (id int,
                       name varchar(10),
                       status int )

The table is created in the database with zero rows, as seen in the following screenshot.


  2. Create a sample delimited flat file (TestFeed.dat) with the following content and upload it to the Amazon S3 bucket:
1|John|67
2|Brian|23
3|Victor|10
7|Michael|13
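
One way to upload the file is with the AWS CLI (assuming the file is in your current directory):

aws s3 cp TestFeed.dat s3://sap-ase-share/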

This file becomes available in the NFS share after the cache refresh interval. If you haven’t enabled automatic cache refresh, you must refresh the NFS share cache explicitly using the AWS Management Console or the AWS CLI, as shown earlier.

The following screenshot shows the file available in our example NFS file share in the Amazon S3 bucket sap-ase-share.

  3. After the flat file is available in the NFS file share, use the SAP ASE bcp command to load the file into the SAP ASE database table.

You can run bcp to load files to the database from any host where the NFS share is set up and the bcp utility is installed.

Use the following code to run bcp:

bcp <table_name> in <delimited_file> -U <database_login> 
-S <sap_ase_server> -P <password> -c -t "<delimiter>" -Y
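
For our example table and feed file, the command looks like the following sketch; the login, server name, and password are placeholders, and we assume MemberTb was created in the pubs2 database:

bcp pubs2..MemberTb in /root/test_ase_feed/TestFeed.dat -U sa -S MYASESERVER -P mypassword -c -t "|" -Y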

The following screenshot shows the output of the bcp command, where the TestFeed.dat file from the Amazon S3 bucket sap-ase-share is loaded into the database table successfully.


  4. Log in to the database and check the data in the table where the file was loaded.
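
A quick check with isql (again assuming the table lives in pubs2) should return the four rows from TestFeed.dat:

use pubs2
go
select * from MemberTb
go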

The following screenshot shows that the table has the same data as the flat file stored in Amazon S3.


Cleaning up

To remove all services and components created as part of this solution and avoid incurring charges, complete the following steps:

  1. Sign in to the AWS Management Console.
  2. Choose the Region where your gateway is hosted.
  3. On the Storage Gateway console, choose Gateways.
  4. Select the gateway to be deleted, and on the Actions menu, choose Delete gateway.
  5. On the Amazon EC2 console, select the instance that hosts Storage Gateway and the SAP ASE database.
  6. On the Instance state menu, choose Terminate instance.
  7. On the Amazon S3 console, select the S3 bucket, delete its objects, and then delete the bucket.

Conclusion

In this post, we showed how to integrate an SAP ASE database with Amazon S3. We also demonstrated how to store a database backup to Amazon S3 from an on-premises or Amazon EC2-hosted SAP ASE database, load the SAP ASE database from the backup file in Amazon S3, and load a delimited flat file from Amazon S3 to the SAP ASE database.

With the use cases described in this post, you can achieve cost-effective, durable, and virtually unlimited storage for your SAP ASE database backup and feed files.

This helps you increase your retention period and provides the flexibility to store backups on Amazon S3 across Regions as a disaster recovery solution.

Thank you for reading this post. If you have any comments or feedback, leave them in the comments section.