AWS Database Blog

Export Amazon SimpleDB domain data to Amazon S3

As AWS continues to evolve its services to better align with customer needs and modern workloads, we’re excited to introduce a new export functionality for Amazon SimpleDB. By using this feature, you can export domain data to Amazon Simple Storage Service (Amazon S3) in JSON format, unlocking new opportunities for long-term storage and migration to purpose-built databases.

This launch supports our continued focus on giving customers reliable access to their data and supporting downstream use in other systems and services. Now, you can use this export capability to extract domain data into Amazon S3 for retention or further processing. The export generates a complete JSON representation of Amazon SimpleDB data. By making SimpleDB data available in S3, this feature provides a practical starting point for archiving or migration planning.

In this post, we walk you through how to use the new export functionality, highlight best practices, and share monitoring functionality to help you make the most of it. Let’s get started.

Overview of Amazon SimpleDB and Amazon S3

Amazon SimpleDB is a data service for running queries on a NoSQL data store in real time. It works alongside services such as Amazon S3 and Amazon Elastic Compute Cloud (Amazon EC2), helping developers build applications that store, process, and query data at cloud scale. These services are designed to make web-scale computing easier and more cost-effective for developers.

Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data. The following diagram illustrates the solution architecture.

Prerequisites

Before you begin exporting data from Amazon SimpleDB to Amazon S3, there are a few prerequisites to complete for a successful and secure export process. This section outlines the resources, permissions, and configurations required to get started.

  • SimpleDB domain: The export functionality operates at the domain level, so a SimpleDB domain must exist before you can initiate an export.
  • Amazon S3 bucket: Create or designate an existing S3 bucket as the destination for the exported data. While it’s generally recommended to use a bucket in the same AWS Region for performance and cost optimization, the export functionality does support cross-Region exports in the commercial partition. This gives you flexibility if your data needs to reside in a centralized or regulatory-compliant Region. You can also designate S3 buckets belonging to other AWS accounts if you have the appropriate permissions.
  • IAM permissions: The IAM role or user using the export functionality must have the following minimum permissions:
    • SimpleDB permissions: sdb:GetExport, sdb:StartDomainExport, sdb:ListExports
    • S3 permissions: s3:PutObject, s3:ListObjects, s3:HeadBucket, s3:ListBucket
  • AWS KMS permissions: If you want to encrypt exported data in S3 using AWS KMS:
    • AWS KMS permissions: kms:GenerateDataKey
  • New version of AWS CLI or SDK with SimpleDBv2: SimpleDB is not available in the AWS Management Console. The new SimpleDB operations to initiate and manage the exports are only available through the SimpleDBv2 service in newer versions of the AWS SDKs and CLI. The SimpleDBv2 service supports export-related operations exclusively. For all other SimpleDB operations (for example, Select, PutAttributes), continue using the existing SimpleDB service in the AWS CLI and SDKs.

Export data from Amazon SimpleDB to Amazon S3

The following steps walk you through exporting your SimpleDB domain data to Amazon S3 using the AWS CLI.

Step 1: Create or identify the destination S3 bucket

Amazon S3 provides durable, cost-effective storage, making it an ideal destination for SimpleDB domain data. Start by creating or identifying the S3 bucket where your exported data will be stored. If you already have a bucket that fits your needs, you can reuse it. Otherwise, create a new bucket using the AWS CLI:

aws s3 mb s3://<my-simpledb-export-bucket> --region <us-east-1>

You can select a bucket in the same AWS Region as your SimpleDB domain for performance efficiency. However, cross-Region buckets are supported, which gives you flexibility if you want to centralize data or meet compliance requirements. Apply optional security measures such as bucket policies, default server-side encryption (SSE-S3, SSE-KMS), or versioning.

Step 2: Configure IAM permissions

Create an IAM policy that grants least-privilege access to the required resources, similar to the following, and attach it to the IAM role or user that will create the export:

S3ExportAccessPolicy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSimpleDBStartDomainExportAction",
            "Effect": "Allow",
            "Action": [
                "sdb:StartDomainExport",
                "sdb:GetExport",
                "sdb:ListExports"
            ],
            "Resource": [
                "arn:aws:sdb:{REGION}:{ACCOUNT_ID}:domain/{DOMAIN-NAME}",
                "arn:aws:sdb:{REGION}:{ACCOUNT_ID}:domain/{DOMAIN-NAME}/*"
            ]
        },
        {
            "Sid": "AllowWritesToS3Bucket",
            "Effect": "Allow",
            "Action": [
                "s3:ListObjects",
                "s3:PutObject",
                "s3:HeadBucket",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::{BUCKET_NAME}",
                "arn:aws:s3:::{BUCKET_NAME}/*"
            ]
        }
    ]
}

An additional policy is required if you want to use KMS encryption for the exported data in S3:

KMSAccessPolicy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowKMSKeyUsageForSimpleDBExport",
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:{REGION}:{ACCOUNT_ID}:key/{KEY_ID}"
        }
    ]
}

Replace {REGION}, {ACCOUNT_ID}, {DOMAIN-NAME}, {BUCKET_NAME}, and {KEY_ID} with your specific values.

Step 3: Initiate the export

With the S3 bucket and IAM role in place, you are ready to start the export. Initiate the export by invoking the StartDomainExport API using the AWS SDKs or CLI.

aws simpledbv2 start-domain-export \
--domain-name '<name of domain>' \
--s3-bucket '<name of bucket>' \
--s3-bucket-owner '<account owning the bucket>'

A sample execution and response:

aws simpledbv2 start-domain-export \
--domain-name 'testDomain' \
--s3-bucket 'cellbucket' \
--s3-bucket-owner '111122223333'
 
{
    "clientToken": "ad9ac782-954a-45d1-8d47-8ef843c0ffe2",
    "exportArn": "arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01",
    "requestedAt": "2026-02-06T11:57:09.953000+00:00"
}

When the command returns a response similar to the example above, the export request has been successfully submitted. The export then proceeds asynchronously in the background.

Note: You can provide additional options to use a custom prefix for the S3 object keys of export artifacts or a different S3 SSE algorithm. For more information, see Export Considerations.

Step 4: Monitor the export status

The GetExport operation returns the details of a given export. Make a call with a command like the following:

aws simpledbv2 get-export \
--export-arn '<export-arn returned from StartDomainExport call>'

Initially, the export will be in PENDING status:

aws simpledbv2 get-export \
--export-arn 'arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01'  

#Example Output: 

{
  "exportArn": "arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01",
  "clientToken": "ad9ac782-954a-45d1-8d47-8ef843c0ffe2",
  "exportStatus": "PENDING",
  "domainName": "testDomain",
  "requestedAt": "2026-02-06T11:57:09.953000+00:00",
  "s3Bucket": "cellbucket",
  "s3BucketOwner": "111122223333",
  "exportDataCutoffTime": "2026-02-06T11:57:09.953000+00:00"
}

After some time, the export job will begin and exportStatus will transition to IN_PROGRESS:

aws simpledbv2 get-export \
--export-arn 'arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01'

#Example Output:

{
  "exportArn": "arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01",
  "clientToken": "ad9ac782-954a-45d1-8d47-8ef843c0ffe2",
  "exportStatus": "IN_PROGRESS",
  "domainName": "testDomain",
  "requestedAt": "2026-02-06T11:57:09.953000+00:00",
  "s3Bucket": "cellbucket",
  "s3BucketOwner": "111122223333",
  "exportDataCutoffTime": "2026-02-06T11:57:09.953000+00:00"
}

You can now see the exported data being written to the provided S3 bucket. The data is written to a path like AWSSimpleDB/<exportId>/<domainName>/. For the example shown here, data will be written at the following path:

AWSSimpleDB/3eb4eaed-872b-4e08-b4b6-ff6999a83e01/testDomain/

Initially, an empty file named _started is written. This file verifies that the destination bucket is writable and can safely be deleted. It is followed by the export data files (JSON), with names like dataFile<randomPartitionId>, in the data/ directory.
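If you need to locate export artifacts programmatically, the destination prefix can be derived from the exportArn returned by StartDomainExport. The following Python sketch assumes the ARN format shown in the sample responses above; export_s3_prefix is an illustrative helper, not part of any SDK:

```python
def export_s3_prefix(export_arn: str) -> str:
    """Derive the S3 key prefix for an export from its ARN.

    Assumes the ARN format arn:aws:sdb:<region>:<account>:domain/<domain>/export/<exportId>,
    as shown in the sample responses above.
    """
    # The resource part is everything after the fifth colon.
    resource = export_arn.split(":", 5)[5]  # "domain/<domain>/export/<exportId>"
    _, domain_name, _, export_id = resource.split("/")
    return f"AWSSimpleDB/{export_id}/{domain_name}/"


prefix = export_s3_prefix(
    "arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01"
)
print(prefix)  # AWSSimpleDB/3eb4eaed-872b-4e08-b4b6-ff6999a83e01/testDomain/
```

You can pass this prefix to `aws s3 ls` or an S3 list call to watch data files arrive.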

Once the job transitions to IN_PROGRESS, the export processing continues until completion, at which point exportStatus changes to SUCCEEDED, as shown in the following example:

aws simpledbv2 get-export \
--export-arn 'arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01' 

#Example Output

{
    "exportArn": "arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01",
    "clientToken": "ad9ac782-954a-45d1-8d47-8ef843c0ffe2",
    "exportStatus": "SUCCEEDED",
    "domainName": "testDomain",
    "requestedAt": "2026-02-06T11:57:09.953000+00:00",
    "s3Bucket": "cellbucket",
    "s3BucketOwner": "111122223333",
    "exportManifest": "/AmazonSimpleDB/testDomain/arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01/manifest-summary.json",
    "itemsCount": 100,
    "exportDataCutoffTime": "2026-02-06T11:57:09.953000+00:00"
}
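Because the export runs asynchronously, callers typically poll GetExport until a terminal status. The following Python sketch is a minimal polling loop; get_export stands in for whatever SDK or CLI wrapper you use to call GetExport (injected as a parameter so the loop stays testable), and the status values match the responses shown above:

```python
import time

TERMINAL_STATUSES = {"SUCCEEDED", "FAILED"}


def wait_for_export(get_export, export_arn, poll_seconds=30,
                    timeout_seconds=3600, sleep=time.sleep):
    """Poll an export until it reaches SUCCEEDED or FAILED, or the timeout elapses.

    get_export is any callable taking the export ARN and returning a dict with
    an "exportStatus" key, matching the GetExport responses shown above.
    """
    waited = 0
    while True:
        response = get_export(export_arn)
        if response["exportStatus"] in TERMINAL_STATUSES:
            return response
        if waited >= timeout_seconds:
            raise TimeoutError(
                f"Export {export_arn} still {response['exportStatus']} "
                f"after {timeout_seconds}s"
            )
        sleep(poll_seconds)
        waited += poll_seconds
```

In practice, get_export would wrap the SDK call or shell out to `aws simpledbv2 get-export`; keep the poll interval modest to stay well within API request limits.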

Step 5: Validate the export in Amazon S3

When the export status is SUCCEEDED, verify that the data was exported successfully. You can now view and analyze the exported data files in the Amazon S3 bucket.

  • Data: The data files containing the actual domain data are written to data/. The JSON objects in each data file correspond to your SimpleDB items. Each object stores the item’s name under the itemName key and its attributes under the attributes key. attributes is an array of objects, each storing an attribute’s name under the name key and its values under the values key.


Here’s a sample object:

[
  {
    "itemName": "employee1",
    "attributes": [
      {
        "name": "first_name",
        "values": [
          "John"
        ]
      },
      {
        "name": "age",
        "values": [
          "30"
        ]
      }
    ]
  },
  {
    "itemName": "employee2",
    "attributes": [
      {
        "name": "first_name",
        "values": [
          "Jane"
        ]
      },
      {
        "name": "reportees",
        "values": [
          "Jade", "Judith"
        ]
      }
    ]
  }
]
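If you plan to process exported files programmatically, the structure above maps naturally onto a dictionary keyed by item name. The following Python sketch parses a data file into {itemName: {attributeName: [values]}}; items_to_dict is an illustrative helper, not part of any SDK:

```python
import json


def items_to_dict(data_file_json: str) -> dict:
    """Convert an exported data file into {itemName: {attributeName: [values]}}."""
    result = {}
    for item in json.loads(data_file_json):
        attrs = {}
        for attribute in item["attributes"]:
            # An attribute can carry multiple values, as "reportees" does above.
            attrs.setdefault(attribute["name"], []).extend(attribute["values"])
        result[item["itemName"]] = attrs
    return result


sample = '''[{"itemName": "employee1",
              "attributes": [{"name": "first_name", "values": ["John"]},
                             {"name": "age", "values": ["30"]}]}]'''
print(items_to_dict(sample))  # {'employee1': {'first_name': ['John'], 'age': ['30']}}
```

This flattened shape is a convenient starting point for loading the data into a target database.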

You will also find two manifest files that allow you to verify the integrity and discover the location of the S3 objects in the data sub-folder. For more information, see Understanding Exported Data in Amazon S3.
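As an illustration of the integrity check, you can compare a downloaded data file’s MD5 digest against the checksum recorded in the manifest locally. The exact manifest schema is described in the linked documentation; the expected_md5_hex parameter below is an assumption standing in for whatever checksum field the manifest provides:

```python
import hashlib


def data_file_matches_manifest(file_bytes: bytes, expected_md5_hex: str) -> bool:
    """Compare a downloaded data file's MD5 digest with the manifest's checksum.

    expected_md5_hex is an assumed value read from manifest-summary.json;
    consult the actual manifest schema in the export documentation.
    """
    return hashlib.md5(file_bytes).hexdigest() == expected_md5_hex


# "abc" is the classic MD5 test vector, used here only to show the comparison.
print(data_file_matches_manifest(b"abc", "900150983cd24fb0d6963f7d28e17f72"))  # True
```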

If a failure occurs during export processing, exportStatus transitions to FAILED, and failureCode and failureMessage are returned in the GetExport API response.

Sample execution and response:

aws simpledbv2 get-export \
--export-arn 'arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01'
#Example Output 
{
  "exportArn": "arn:aws:sdb:us-east-1:111122223333:domain/testDomain/export/3eb4eaed-872b-4e08-b4b6-ff6999a83e01",
  "clientToken": "ad9ac782-954a-45d1-8d47-8ef843c0ffe2",
  "exportStatus": "",
  "domainName": "testDomainDoc",
  "requestedAt": "2026-02-03T13:38:00.347000+00:00",
  "s3Bucket": "test-export-doc-bucket-wrong",
  "s3BucketOwner": "822078811998",
  "failureCode": "",
  "failureMessage": "The specified bucket does not exist.",
  "exportDataCutoffTime": "2026-02-03T13:38:00.347000+00:00"
}

Listing export jobs

To list all exports that were created, you can use the ListExports operation. The API returns all exports created within the past three months. The results are paginated and can be filtered by domain name. The following is a sample CLI command and response:

aws simpledbv2 list-exports

Sample execution and response:

aws simpledbv2 list-exports 

#Example Output
{
  "exportSummaries": [
    {
      "exportArn": "arn:aws:sdb:ap-southeast-2:822078811998:domain/testDomainDoc/export/3677e7cd-ca7a-47e2-9d24-2b86115503a6",
      "exportStatus": "SUCCEEDED",
      "requestedAt": "2026-02-03T13:32:04.394000+00:00",
      "domainName": "testDomainDoc"
    },
    {
      "exportArn": "arn:aws:sdb:ap-southeast-2:822078811998:domain/testDomainDoc/export/2890f3b6-a683-4277-adb6-c76a9b434b75",
      "exportStatus": "FAILED",
      "requestedAt": "2026-02-03T13:38:00.347000+00:00",
      "domainName": "testDomainDoc"
    },
    {
      "exportArn": "arn:aws:sdb:ap-southeast-2:822078811998:domain/testDomainDoc/export/6f7ed325-e8cc-4ffe-aa56-b4f6fa8a0fc5",
      "exportStatus": "SUCCEEDED",
      "requestedAt": "2026-02-06T11:57:09.953000+00:00",
      "domainName": "testDomainDoc"
    }
  ]
}
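Because the response covers every export from the past three months, it is often useful to filter the summaries locally, for example to find failed runs worth retrying. The following Python sketch filters by status over the response shape shown above; exports_with_status is an illustrative helper:

```python
def exports_with_status(list_exports_response: dict, status: str) -> list:
    """Return the exportArns whose exportStatus matches, newest requestedAt first.

    list_exports_response has the shape returned by ListExports above; requestedAt
    is an ISO-8601 timestamp, so lexical ordering equals chronological ordering.
    """
    matching = [summary for summary in list_exports_response.get("exportSummaries", [])
                if summary["exportStatus"] == status]
    matching.sort(key=lambda summary: summary["requestedAt"], reverse=True)
    return [summary["exportArn"] for summary in matching]


response = {"exportSummaries": [
    {"exportArn": "arn:1", "exportStatus": "SUCCEEDED", "requestedAt": "2026-02-03T13:32:04+00:00"},
    {"exportArn": "arn:2", "exportStatus": "FAILED",    "requestedAt": "2026-02-03T13:38:00+00:00"},
    {"exportArn": "arn:3", "exportStatus": "SUCCEEDED", "requestedAt": "2026-02-06T11:57:09+00:00"},
]}
print(exports_with_status(response, "FAILED"))  # ['arn:2']
```

With the CLI alone, a JMESPath filter via `--query` can achieve a similar result.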

Important considerations

Before running exports at scale, consider the following operational aspects that can influence export behavior, storage usage, and request limits.

  1. Performance and scalability: The export process is serverless and scales automatically, operating independently of your existing SimpleDB read workloads so that export operations won’t impact your application’s performance or availability.
  2. Asynchronous processing model: The functionality is designed as an asynchronous process primarily targeted for migration and archival use cases. Exports may take some time to complete, so plan accordingly and check completion status regularly using the GetExport API.
  3. Domain management during exports: Domain deletion will be blocked while any export request is pending or in progress for that domain. Plan your domain lifecycle management accordingly, verifying all necessary exports are completed before attempting domain deletion.
  4. Export frequency planning and cost considerations:
    • Complete and non-incremental exports: All exports are full, non-incremental snapshots of your domain data. This means every export contains the complete dataset rather than just changes since the last export.
    • Rate limits: For service stability, the following limits are in place:
      • 5 exports per domain within a rolling 24-hour window
      • 25 exports per AWS account within a rolling 24-hour window
    • Cost impact: While Amazon SimpleDB doesn’t charge for the export operations themselves, you will incur Amazon S3 costs for storage, API calls, and data transfer according to standard S3 pricing. Additionally, since data is exported in JSON format with additional metadata keywords, the exported data size will be larger than the raw data stored in SimpleDB.
      For example, a domain approaching the maximum allowed size of 10 GB may result in exported data well exceeding 10 GB due to the JSON formatting overhead.

    Consider the operational impact of repeated full exports, as they can increase storage usage and consume export rate limits over time.

  5. Data consistency and timing: Exports are not point-in-time snapshots. Instead, they work with an exportDataCutoffTime—all data inserted before this timestamp will be included in the export. If your application continues writing to SimpleDB during the export process, newer data may not be captured in the current export.
  6. Understanding exportDataCutoffTime:
    • This timestamp is returned in the GetExport API response and represents a time closer to when actual domain processing begins, rather than when the export was initially requested.
    • Any items inserted at timestamps before this time are guaranteed to be included in the export.
    • Any items inserted at timestamps after this time are guaranteed to not be included in the export.
    • However, for items that do exist in the export, any updates (including deletes) made to the item itself or the item’s attributes at timestamps after this cutoff time may or may not be included in the export.

    Since exports capture data up to the cutoff time and you may run multiple exports over time, deduplication will need to be handled on your end when processing or migrating the exported data.

  7. Security and data integrity:
    • Follow Amazon S3 security best practices for storing exported data.
    • Use the provided manifest-summary.json file to verify item counts and MD5 checksums for each data file to ensure export completeness and integrity of data.
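The deduplication noted above can be as simple as letting the newest snapshot win per item. The following Python sketch merges parsed exports ordered oldest to newest, assuming each export has already been reduced to a {itemName: attributes} dictionary (for example, by parsing its data files):

```python
def merge_exports(snapshots):
    """Merge full-export snapshots ordered oldest to newest; the newest wins per item.

    Each snapshot is a dict of {itemName: attributes}, for example built by
    parsing the data files of one export run.
    """
    merged = {}
    for snapshot in snapshots:   # iterate oldest -> newest
        merged.update(snapshot)  # a later full snapshot overwrites earlier items
    return merged


older = {"employee1": {"age": ["30"]}, "employee2": {"age": ["41"]}}
newer = {"employee1": {"age": ["31"]}}  # employee1 changed after the first export
print(merge_exports([older, newer]))
```

Note that merging like this can resurrect items deleted between runs; because every export is a complete snapshot, treating only the newest export as authoritative is the simplest approach when deletes matter.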

Clean up

After you validate that the export completed successfully and the data is safely stored in Amazon S3, you may want to clean up temporary resources created during the export process.

If you created a dedicated S3 bucket for testing or one-time exports, you can remove exported objects or delete the bucket when it is no longer needed:

aws s3 rm s3://<my-simpledb-export-bucket> --recursive

If the bucket itself is no longer required, delete it after removing all objects:

aws s3 rb s3://<my-simpledb-export-bucket>

You may also remove temporary IAM roles or policies that were created specifically for export operations if they are not needed for future exports.

If your objective was to extract and retain domain data before decommissioning workloads, you can review and delete unused domains from Amazon SimpleDB once all required exports are complete. Performing these cleanup steps helps reduce unnecessary storage costs and simplifies resource management.

Conclusion

By exporting your SimpleDB domain data to Amazon S3, you can retain a complete JSON snapshot of your items and attributes in durable, cost-effective storage. This capability focuses on dependable data extraction and preservation, giving you a verified source dataset that you can review and manage as part of your next steps.

We encourage you to review your current Amazon SimpleDB domains, define your data retention or migration strategy, and begin your export today using the AWS CLI or SDK. By taking these steps now, you position your organization to maintain full control of your data and establish a reliable source dataset in Amazon S3 for archival, transformation, or future modernization initiatives. This approach gives you a practical foundation for scalability and cloud-native evolution based on your target architecture and data requirements.


About the authors

Deepthi Cyril George

Deepthi is a Senior Software Engineer with Amazon SimpleDB at AWS. She has 14 years of experience building large-scale distributed cloud and data systems. When not building for the cloud, she enjoys exploring new restaurants and traveling.

Vijay Karumajji

Vijay is a Principal Database Specialist Solutions Architect at AWS, where he partners with customers to design scalable, secure, and cloud-native database architectures. With over two decades of experience in both commercial and open-source databases, Vijay brings deep technical expertise to help organizations modernize their data platforms and maximize the value of AWS-managed database services.

Ankur Saini

Ankur is a Senior Software Development Engineer at AWS. He has over five years of experience working on services including Amazon SimpleDB and Amazon Aurora, where he focuses on building and operating distributed systems that power large-scale cloud database services.