AWS Database Blog

How to archive data from relational databases to Amazon Glacier using AWS DMS

March 2023: This post was reviewed and updated with architecture diagrams and console screenshots.

You can use AWS Database Migration Service (AWS DMS) to migrate data from various sources to the most widely used commercial and open-source databases. AWS DMS also supports Amazon S3 as a source and as a target for migrations. When you use Amazon S3 as a target, you can use AWS DMS to extract information from any database that AWS DMS supports and write it to Amazon S3 in CSV format, which almost any application can read.

You can use this feature in the following two scenarios:

  1. As part of the migration from commercial engines such as Oracle to Amazon RDS for PostgreSQL, you can migrate infrequently accessed data or historical audit and archive data into AWS storage services like Amazon S3 or Amazon Glacier. Doing this can help you meet governance and compliance needs for long-term retention of data.
  2. As the data size grows in relational databases, database administrators (DBAs) and developers spend more time monitoring and reorganizing indexes and partitions to maintain optimal performance. This additional maintenance leads to increased operational expenses for storing the data due to redundancy and backup. It’s a best practice to archive infrequently accessed data in Amazon S3 or Amazon Glacier for better cost savings and manageability.

In this post, I talk about the second use case, where infrequently accessed data (often referred to as cold data), such as sales information that is older than a few months or years, is archived in a secure vault for a long duration. The finance or information security teams should be able to access this data when required, although such requests might be sporadic.

If you compare the per-GB monthly storage cost of Amazon RDS for PostgreSQL Multi-AZ GP2 with that of Amazon Glacier in the us-east-1 Region, Amazon Glacier storage is about 57 times cheaper than Amazon RDS storage. For example, 100 GB of storage per month in Amazon Glacier costs well under a dollar (about $0.40), compared to $23.00 for Amazon RDS Multi-AZ storage (at the time of this writing).
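
As a rough check on that comparison, the following sketch works through the arithmetic. The per-GB prices are assumptions based on approximate us-east-1 pricing at the time of writing (about $0.115 per GB-month for RDS gp2 storage, doubled for Multi-AZ, and about $0.004 per GB-month for Amazon Glacier); check the current pricing pages for your own numbers.

    # Approximate us-east-1 prices at the time of writing (assumptions; verify
    # against the current AWS pricing pages).
    rds_multi_az_gp2_per_gb = 0.115 * 2  # USD per GB-month; storage is provisioned in two AZs
    glacier_per_gb = 0.004               # USD per GB-month

    gb = 100
    print(f"Amazon RDS Multi-AZ: ${rds_multi_az_gp2_per_gb * gb:.2f} per month")  # ~$23.00
    print(f"Amazon Glacier:      ${glacier_per_gb * gb:.2f} per month")           # ~$0.40
    print(f"Ratio: ~{rds_multi_az_gp2_per_gb / glacier_per_gb:.0f}x cheaper")      # ~57x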

In this post, I show you how to use AWS DMS to store the data from Amazon RDS PostgreSQL in Amazon S3 as a CSV file, and then archive the CSV file in Amazon Glacier. Amazon Glacier provides multiple retrieval options that allow you to retrieve the data in a few minutes to several hours. I then use the Amazon Glacier Select feature to query a subset of the data from the archive. With Amazon Glacier Select, you can perform analysis on your data without staging it in a hotter storage tier like Amazon S3. This makes it cheaper, faster, and easier to gather insights from your cold data in Amazon Glacier.

High-level steps

To help you manage all the infrastructure for this post, we created an AWS CloudFormation template that provisions all the required resources and the AWS Lambda code used in the post. You can find the code in this GitHub repository. The solution that is demonstrated in this post includes the following tasks:

  1. Download the public data set containing UK property registration information from https://data.gov.uk.
  2. Create an Amazon Glacier vault for storing the archive.
  3. Launch an AWS CloudFormation template that creates the necessary resources. These include the virtual private cloud (VPC), Amazon RDS PostgreSQL instance, AWS DMS replication instance, S3 bucket, IAM policies, and other resources that are required for this example.
  4. Create the folders in Amazon S3.
  5. Connect to the RDS instance, and load the dataset using the PostgreSQL client.
  6. Create an AWS Lambda function, and set up an S3 event notification to trigger the Lambda function. This function downloads the file from S3 and uploads it to Amazon Glacier as soon as the CSV file is created by AWS DMS.
  7. Configure the source and target endpoints for AWS DMS.
  8. Set up an AWS DMS task to extract the data from the RDS instance to Amazon S3.
  9. Create a Lambda function to retrieve the subset of data from the archive using Amazon Glacier Select.
  10. View the archive results in the S3 bucket.
  11. Clean up the AWS resources.

Note: To illustrate this architecture, the example uses serverless computing. I use an AWS Lambda function to download the CSV files that AWS DMS creates and upload them to Amazon Glacier. AWS Lambda provides up to 10,240 MB of local disk capacity, so this solution assumes that each archive is under that limit. For archives larger than 10,240 MB, consider attaching an Amazon Elastic File System (Amazon EFS) file system to your Lambda function.

Prerequisites and assumptions

To complete this solution, you should have the following:

  1. An AWS account that provides access to the services shown in the preceding diagram. The steps are performed in the us-east-1 (N. Virginia) Region. Before you start, ensure that the services that are used in this post are available in your AWS Region.
  2. A tool that connects to a PostgreSQL database, such as psql or pgAdmin.
  3. Working knowledge of Amazon RDS, AWS DMS, Amazon VPC, Amazon S3, and AWS Lambda.

Detailed steps

The following sections walk you through the steps that are required to create and configure the solution.

1. Download the sample dataset

Download this UK land registry file, which contains a decade of property registration information.

Note: This file contains HM Land Registry data Crown copyright and database copyright 2018. This data is licensed under the Open Government Licence v3.0. The data provided in this post includes address data that has usage restrictions, including personal and non-commercial use. For more information about the permitted use of this dataset, see Price Paid Data on the Gov.uk site.

You can save the file to any Amazon EC2 instance (preferred) or to your local computer that has the PostgreSQL client installed.

At the time of this writing, this dataset contains UK property registration information from 1995 to 2017, with approximately 22 million records. We assume that you need to extract old data on a yearly basis (that is, partitions by year) and archive it in Amazon Glacier. After successfully storing it in Amazon Glacier, you can delete those records from Amazon RDS.

2. Create the Amazon Glacier vault

  1. Sign in to the AWS Management Console, and open the Amazon Glacier console (search for S3 Glacier). Create a vault named rdsarchive in the us-east-1 Region. In the Event notifications section, choose Turn off notifications, and then choose Create vault. (If you prefer to script this step, see the sketch after this list.)
  2. Note the name and Amazon Resource Name (ARN) of the vault.
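
If you prefer to script the vault creation, the following is a minimal boto3 sketch, assuming the same vault name (rdsarchive) and Region (us-east-1) as above.

    import boto3

    glacier = boto3.client('glacier', region_name='us-east-1')

    # accountId='-' means the vault is created in the account that owns the credentials
    response = glacier.create_vault(accountId='-', vaultName='rdsarchive')
    print(response['location'])  # /<account-id>/vaults/rdsarchive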

3. Launch the AWS CloudFormation template

  1. Copy or download the AWS CloudFormation JSON file from this GitHub repository.
  2. Go to the AWS CloudFormation console, and choose Create Stack.
  3. Launch a template by uploading the JSON file from Step 1.
  4. Specify the mandatory parameter values:
    1. Type a stack name, for example, dms-archive.
    2. Provide a unique S3 bucket name. In my case, I have used the name rdspgtoglacier.
    3. For the VaultName parameter, enter the same vault name that you created in Step 2 (rdsarchive).
    4. For ExistsDMSRole, specify Y if dms-vpc-role is already created in your AWS account. Otherwise, specify N so that it can be created with this CloudFormation stack. For more information about setting up this role, see Creating the IAM Roles to Use with the AWS CLI and AWS DMS API.
  5. The rest of the parameter values are optional and can be left as the default settings.
  6. Choose Next and navigate to the last page.
  7. On the last page, under Capabilities, select the option: I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  8. Choose Submit.

Some resources, such as the S3 bucket name, must be unique across all existing bucket names in Amazon S3. (In this post, I use the bucket name rdspgtoglacier.) We recommend that you restrict access to the RDS instance to your specific IP range (by default, it allows access from 0.0.0.0/0). The AWS CloudFormation stack creation takes about 10–15 minutes and creates the following resources:

  1. An Amazon S3 bucket with the name as provided.
  2. An AWS DMS replication instance named rds-to-s3.
  3. An IAM role for Lambda execution named rdsarchive-lambda-execution, with the managed policy AWSLambdaBasicExecutionRole attached so that the Lambda function can create and access Amazon CloudWatch Logs.
  4. An IAM policy named lambda-to-archive-policy that provides access to the S3 bucket and to the Amazon Glacier vault created in Step 2; this policy is attached to the rdsarchive-lambda-execution role.
  5. An IAM role named dms-vpc-role for managing VPC settings for AWS managed customer configurations, with the managed policy AmazonDMSVPCManagementRole attached if the role doesn't already exist.
  6. An IAM policy named dmsvpcs3policy (to PUT and DELETE objects from the S3 bucket created by this stack), which is attached to the dms-vpc-role role.
  7. An RDS instance with identifier archivedb, database name ukdb, user name dbadmin, and default password postgres123.
  8. A VPC for the Amazon RDS PostgreSQL and AWS DMS instances, and a security group to allow access to the RDS PostgreSQL instance from your IP range.

The provisioned AWS resources are assigned default names in the template. If you provide different names while creating the stack, use those names in the rest of the steps. You can note the resource names, RDS endpoint name, S3 bucket name, and so on from the Outputs section of your stack. If you prefer to launch the stack from code instead of the console, see the sketch that follows.
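
If you would rather launch the stack from code, the following boto3 sketch shows an equivalent call. The template file name and the S3 bucket parameter key are hypothetical placeholders; match them to the JSON file and the parameter keys defined in the GitHub repository.

    import boto3

    cfn = boto3.client('cloudformation', region_name='us-east-1')

    # 'dms-archive-template.json' is a placeholder name for the template file
    # downloaded from the GitHub repository.
    with open('dms-archive-template.json') as f:
        template_body = f.read()

    cfn.create_stack(
        StackName='dms-archive',
        TemplateBody=template_body,
        # Required because the template creates IAM resources with custom names
        Capabilities=['CAPABILITY_NAMED_IAM'],
        Parameters=[
            # 'S3BucketName' is a hypothetical parameter key; use the key from the template
            {'ParameterKey': 'S3BucketName', 'ParameterValue': 'rdspgtoglacier'},
            {'ParameterKey': 'VaultName', 'ParameterValue': 'rdsarchive'},
            # Y if dms-vpc-role already exists in the account, N otherwise
            {'ParameterKey': 'ExistsDMSRole', 'ParameterValue': 'N'},
        ],
    )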

4. Create the Amazon S3 folders

After creating the stack, go to the Amazon S3 console and locate the bucket rdspgtoglacier. Create the following two folders under the bucket (you can also create them programmatically, as sketched after this list):

  1. The ukregistry folder, which acts as a target for AWS DMS.
  2. The output folder, which acts as a target for Amazon Glacier Select to retrieve the data from the archive.
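
If you prefer to create the folders programmatically, the following is a minimal boto3 sketch, assuming the bucket name used in this post.

    import boto3

    s3 = boto3.client('s3')
    bucket = 'rdspgtoglacier'  # use the bucket name from your stack

    # Zero-byte objects with a trailing "/" show up as folders in the S3 console
    for prefix in ('ukregistry/', 'output/'):
        s3.put_object(Bucket=bucket, Key=prefix)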

5. Load the data into the Amazon RDS DB instance

  1. Using psql or pgAdmin, connect to the RDS DB instance as follows (assuming that you are using the default database name ukdb and the user name dbadmin):
    psql -h <RDS DNS Endpoint> -d ukdb -U dbadmin -p 5432
  2. Create the database table, and load the UK dataset, as shown following:
    Note: Another option is to download the dataset to an EC2 instance and load it from there via psql installed on that EC2 instance.

      CREATE TABLE uk_property_registry(
      transaction uuid,
      price numeric,
      transfer_date date,
      postcode text,
      property_type char(1),
      newly_built boolean,
      duration char(1),
      paon text,
      saon text,
      street text,
      locality text,
      city text,
      district text,
      county text,
      ppd_category_type char(1),
      record_status char(1));
      
    -- Load the data from the downloaded CSV file into the table. Provide the complete path of the CSV file.
    
    \copy uk_property_registry FROM '<dataset_file_complete_path>' with (format csv, header false, null '', quote '"', force_null (postcode, saon, paon, street, locality, city, district));

After a successful load, you see output like COPY 27910954. You can also run the query select count(*) from uk_property_registry; to verify that the load succeeded.

6. Create a Lambda function to download the file from S3 and upload it to the Amazon Glacier vault

  1. On the AWS Lambda console, create the archivetoglacier Lambda function as follows:
    1. Click on Use a blueprint and select the option Get S3 object with the Python 3.7 runtime.
    2. For the function name, type archivetoglacier.
    3. Click on Use an existing role and select the rdsarchive-lambda-execution role for this function.
    4. Under the S3 trigger, choose the bucket rdspgtoglacier.
    5. For the event type, choose All object create events.
    6. Specify ukregistry/ as the prefix.
    7. Select the acknowledgment check box that appears at the bottom of the screen.
    8. Create the function.
    9. After it is created, edit the function.
    10. Replace all the code in the window with the Python code from the GitHub repository.
    11. Choose Deploy.
  2. Choose the Configuration tab that appears above the function code.
  3. In the General configuration section, choose Edit, increase the Lambda timeout to 5 minutes, and then choose Save.
  4. In the Environment variables section, specify a new variable.
    1. Key: vaultname
    2. Value: rdsarchive
  5. Choose Save to save the changes.

The preceding code fetches the S3 object, downloads it to the Lambda instance's local storage, and uploads it to the Amazon Glacier vault. The function takes the Amazon Glacier vault name from the vaultname environment variable.
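
The authoritative code is in the GitHub repository; the following is a minimal sketch of what such a handler might look like, assuming the S3 trigger configured above and the vaultname environment variable.

    import os
    import urllib.parse

    import boto3

    s3 = boto3.client('s3')
    glacier = boto3.client('glacier')

    def lambda_handler(event, context):
        # Bucket and key of the CSV file that AWS DMS just wrote
        record = event['Records'][0]
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])

        # Download the object to the function's local storage (assumed to fit under the limit)
        local_path = '/tmp/' + os.path.basename(key)
        s3.download_file(bucket, key, local_path)

        # Upload the file to the vault named in the vaultname environment variable
        with open(local_path, 'rb') as f:
            response = glacier.upload_archive(
                vaultName=os.environ['vaultname'],
                archiveDescription=key,
                body=f)

        # The archiveId is what you later pass to Amazon Glacier Select
        print(response['archiveId'])
        return response['archiveId']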

7. Set up the source and target endpoints for AWS DMS

  1. Open the AWS DMS console, and choose Endpoints in the navigation pane. Create a source endpoint for the PostgreSQL database using the RDS instance that you created in Step 3, as shown following:
    Make sure that the database name, port, and user information are correct.
  2. Choose Run test to test the endpoint. If the test succeeds, be sure to save the endpoint. If the test fails, verify the endpoint and security group configurations.
  3. Choose Refresh schemas after successful testing.
  4. Create a target endpoint for the S3 connection, as shown following:

    1. Specify the role ARN of dms-vpc-role as arn:aws:iam::<AWS Account>:role/dms-vpc-role.
    2. Specify the Bucket name and Bucket folder from Step 4.
    3. Choose Run test to test the endpoint. If the test succeeds, be sure to save the endpoint. If the test fails, verify the role configuration and the S3 bucket name. (A boto3 sketch of both endpoint definitions follows this list.)
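
The following boto3 sketch shows roughly equivalent endpoint definitions, assuming the database values and S3 bucket used in this post; the endpoint identifiers are illustrative names, not values from the walkthrough.

    import boto3

    dms = boto3.client('dms', region_name='us-east-1')

    # Source: the Amazon RDS for PostgreSQL instance created by the CloudFormation stack
    source = dms.create_endpoint(
        EndpointIdentifier='archivedb-source',
        EndpointType='source',
        EngineName='postgres',
        ServerName='<RDS DNS endpoint>',
        Port=5432,
        DatabaseName='ukdb',
        Username='dbadmin',
        Password='postgres123',
    )

    # Target: the S3 bucket and folder created in Step 4
    target = dms.create_endpoint(
        EndpointIdentifier='archive-s3-target',
        EndpointType='target',
        EngineName='s3',
        S3Settings={
            'ServiceAccessRoleArn': 'arn:aws:iam::<AWS Account>:role/dms-vpc-role',
            'BucketName': 'rdspgtoglacier',
            'BucketFolder': 'ukregistry',
        },
    )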

8. Set up a DMS task to extract the data from the RDS instance to S3

  1. Open the AWS DMS console. Choose Database migration tasks in the navigation pane, and then choose Create task.
  2. For the task identifier, use archivetos3.
  3. Use the replication instance that was created via AWS CloudFormation and the endpoints that you created in Step 7.
  4. For Migration type, choose Migrate existing data.

  5. Accept the default values in the Task Settings.
  6. In the Table mappings section, choose Add new selection rule.
    1. For Schema, choose Enter a schema.
      1. Source name: public.
      2. Source table name: uk_property_registry.
  7. AWS DMS provides an option to add column filters, which enable you to extract only the data that meets your archive criteria. In this example, you can filter on the transfer_date column to extract the data for a particular year (for example, transfer_date between 1995-01-01 and 1996-01-01), as shown following. Choose Add selection rule. (The equivalent table-mapping JSON is sketched after this list.)
  8. Choose Create task. The migration task then runs because the Start task on create box is selected. The task status changes from Creating to Starting, then to Running, and finally to Load Complete.
  9. You can verify that AWS DMS extracts the subset of information from the table as a CSV file and stores it in the S3 path that you specified.
  10. Examine the contents of the S3 bucket:
    1. Go to the Amazon S3 console, and choose the rdspgtoglacier bucket.
    2. Choose the key (folder) named ukregistry. Choose the key public, which is the schema name. Choose uk_property_registry, which is the table name.
    3. You see the object LOAD00000001.csv, which contains the data with transfer_date up to 1996-01-01. You can download this file and examine it with a text editor if you want.
  11. Check whether the Lambda function was invoked for the upload process.
    1. Go to the AWS Lambda console, and choose the archivetoglacier function.
    2. Choose the Monitor tab, and check whether the function was invoked by the S3 trigger that is configured on the bucket.
    3. Choose the three dots, and then choose View logs.
    4. Choose the log group that is displayed below.
    5. Note the archiveid (value of u'archiveId') of the Amazon Glacier upload operation from the log. You need this for the next step.
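
For reference, the selection rule and column filter that you configured in the console correspond to a table-mapping document like the one in the following boto3 sketch. The ARNs are placeholders, and creating the task from code is optional; the JSON structure is the part worth noting.

    import json

    import boto3

    dms = boto3.client('dms')

    # Selection rule with a column filter equivalent to the console settings above
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "archive-1995",
            "object-locator": {"schema-name": "public", "table-name": "uk_property_registry"},
            "rule-action": "include",
            "filters": [{
                "filter-type": "source",
                "column-name": "transfer_date",
                "filter-conditions": [{
                    "filter-operator": "between",
                    "start-value": "1995-01-01",
                    "end-value": "1996-01-01"
                }]
            }]
        }]
    }

    dms.create_replication_task(
        ReplicationTaskIdentifier='archivetos3',
        SourceEndpointArn='<source endpoint ARN>',
        TargetEndpointArn='<target endpoint ARN>',
        ReplicationInstanceArn='<replication instance ARN>',
        MigrationType='full-load',  # "Migrate existing data"
        TableMappings=json.dumps(table_mappings),
    )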

9. Create a Lambda function to retrieve the subset of the data

Now, let’s assume that you need to identify, from the archive, the registrations from 1995 with a price greater than 500,000. With Amazon Glacier Select, you can extract a subset of the data from the archive using a SQL query. Doing this not only reduces the total time involved in querying, but it also simplifies the way that you access the archive data. It is important to understand the structure of the archived CSV file, and the SQL operations and data types that the Amazon Glacier Select feature supports.

For this example, you use the following SQL expression to retrieve the data. The second column (s._2) is the price attribute in the table and in the CSV file that was uploaded as the archive. You use the CAST function to convert it to decimal because Amazon Glacier treats these attributes as strings by default. Although Amazon Glacier supports several retrieval options, you use the Standard option for this post. Amazon Glacier Select stores the output in an S3 bucket that you specify in the retrieval job.

select s._1,s._2,s._3,s._4,s._5,s._12,s._13 from archive s where CAST(s._2 AS DECIMAL) >= 500000

You use a Lambda function to achieve this.

  1. Create a Lambda function as follows:
    1. In the AWS Lambda console, choose Create function.
    2. Choose the Author from scratch option.
    3. Type glacierselect for the function name.
    4. Choose the Python 3.9 runtime.
    5. Under Permissions, choose Change default execution role.
      1. Choose the existing role rdsarchive-lambda-execution for this function.
    6. Create the function.
    7. After it is created, edit the function.
    8. Replace all the code in the window with the Python code from the GitHub repository, and choose Deploy.
    9. Choose the Configuration tab that appears above the function code.
      1. In the General configuration section, choose Edit, increase the Lambda timeout to 5 minutes, and then choose Save.
    10. In the Environment variables section, add the following variables. It is important that you provide the correct values, such as the bucketname, the prefix from Step 4, and the archiveid from Step 8.
  2. Invoke the Lambda function glacierselect manually using a command line, or test it using a dummy “Hello World” test event, as shown following.
  3. Test the Lambda function glacierselect using the test event that you created in the previous step. You can see the output in the Lambda console, as shown following. Alternatively, you can also view the information in the Lambda logs. (A sketch of the core Amazon Glacier Select call that this function makes follows this list.)
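
The authoritative glacierselect code is in the GitHub repository; the following is a minimal sketch of the core Amazon Glacier Select call such a function might make. The environment variable names (bucketname, prefix, archiveid, vaultname) are assumptions chosen to match the values described above.

    import os

    import boto3

    glacier = boto3.client('glacier')

    def lambda_handler(event, context):
        # SQL expression from this section: price (second CSV column) >= 500000
        expression = ("select s._1,s._2,s._3,s._4,s._5,s._12,s._13 from archive s "
                      "where CAST(s._2 AS DECIMAL) >= 500000")

        response = glacier.initiate_job(
            accountId='-',
            vaultName=os.environ['vaultname'],
            jobParameters={
                'Type': 'select',
                'ArchiveId': os.environ['archiveid'],
                'Tier': 'Standard',  # or 'Expedited' / 'Bulk'
                'SelectParameters': {
                    'InputSerialization': {'csv': {'FileHeaderInfo': 'NONE'}},
                    'ExpressionType': 'SQL',
                    'Expression': expression,
                    'OutputSerialization': {'csv': {}},
                },
                'OutputLocation': {
                    'S3': {
                        'BucketName': os.environ['bucketname'],
                        'Prefix': os.environ['prefix'],  # for example, output
                        'CannedACL': 'bucket-owner-full-control',
                    }
                },
            })

        # The job runs asynchronously; results land under the S3 prefix when it completes
        return response['jobId']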

10. View the archive results in the S3 bucket

After successful invocation, it might take a few hours for the output to appear in the S3 folder (output/skaxxxx/results) because you used the Standard retrieval option for Amazon Glacier. If you need to view the results faster, consider the Expedited option. In my case, it took 4 hours for the results to be available.

Also, you might consider the Bulk retrieval option for downloading large volumes of data in a cost-effective manner. You can download the result file and analyze it using a text editor. Alternatively, you can reload the data into an Amazon RDS table by using AWS DMS with S3 as a source.

The following is sample data with a property registration value > 500000 retrieved from Amazon Glacier using Amazon Glacier Select:

11. Clean up all the resources

  1. Remove the UK public dataset from your EC2 instance or local desktop because it contains data with usage restrictions.
  2. Delete the folders that you created in the S3 bucket.
  3. Delete the AWS Lambda functions that you created.
  4. Delete the AWS CloudFormation stack that you created in Step 3. Doing this removes all the resources used in this post.
  5. Delete the Amazon Glacier vault.

Other suggestions:

  1. You can extend this architecture to include Amazon DynamoDB and build a complete archiving solution. You can store the Amazon Glacier archiveid in DynamoDB with a meaningful description for indexing (for example, a table rdsarchive with partition_key = archive_criteria_id (uk_registry_1996), Attribute1 (archiveid=xxxx), Attribute2 (date_created=xxxx), and so on). You can then retrieve it programmatically as needed (see the sketch after this list).
  2. If you need to query the archive data more frequently, you can use AWS DMS to archive it to Amazon S3 and query it by using Amazon S3 Select or Amazon Athena.
  3. You can automate the archival process by scheduling AWS DMS tasks to run periodically.
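
As a sketch of the first suggestion, assuming a DynamoDB table named rdsarchive with partition key archive_criteria_id as described above, the archive metadata could be recorded like this (the attribute values are placeholders):

    import boto3

    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('rdsarchive')  # partition key: archive_criteria_id

    # Attribute names follow the suggestion above; values are placeholders
    table.put_item(Item={
        'archive_criteria_id': 'uk_registry_1996',
        'archiveid': '<archiveId returned by the Glacier upload>',
        'date_created': '<upload date>',
        'description': 'UK property registrations with transfer_date up to 1996-01-01',
    })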

Conclusion

Managing ever-growing data is a major challenge for many enterprises. This post explained how you can use AWS DMS, Amazon Glacier, AWS Lambda, and Amazon S3 to handle infrequently accessed data that is stored in relational databases. The Amazon Glacier Select feature helps you save costs and improve performance by reducing the targeted dataset that you use while querying the data archives. This combination of services helps you manage your data growth while balancing cost and performance.


About the Authors

Gowri Balasubramanian is a senior solutions architect at Amazon Web Services. He works with AWS customers to provide guidance and technical assistance on both relational and NoSQL database services, helping them improve the value of their solutions when using AWS.

Ganesh Shankaran is a senior solutions architect at Amazon Web Services. He works with AWS customers to provide guidance and technical assistance to help them improve the value of their solutions when using AWS.

Marcos Freccia is a Sr. Consultant on Database Migrations with the AWS Professional Services Team. He has been supporting and enabling customers on their journey to migrate and modernize their database solutions from on-premises Data Centers to AWS.