Data Transfer from Amazon S3 Glacier Vaults to Amazon S3 restores, copies, and transfers archives stored in an Amazon Simple Storage Service Glacier (Amazon S3 Glacier) vault to an S3 bucket and storage class of your choice, including the S3 Glacier storage classes. This AWS Solution simplifies the use of your data by automating the transfer process, making archived data more accessible and cost-effective.
Note:
The Amazon S3 Glacier storage classes (S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive) are Amazon S3 storage classes and are distinct from vault-based Amazon S3 Glacier storage. Visit the Amazon S3 storage classes webpage to learn more about these storage classes.
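For context, the sketch below shows the kind of Amazon S3 Glacier API call that this solution automates at scale: initiating an archive-retrieval job against a vault with boto3. The vault name, archive ID, and SNS topic here are placeholders, and the deployed solution layers orchestration, chunking, and validation on top of this call rather than exposing it directly.

```python
import boto3

# Hypothetical identifiers for illustration only.
VAULT_NAME = "my-legacy-vault"
ARCHIVE_ID = "example-archive-id"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-2:111122223333:glacier-job-complete"

glacier = boto3.client("glacier")

# Start an asynchronous archive-retrieval job; S3 Glacier notifies the
# SNS topic when the archive is staged and ready to download.
response = glacier.initiate_job(
    accountId="-",  # "-" means the account that owns the credentials
    vaultName=VAULT_NAME,
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": ARCHIVE_ID,
        "SNSTopic": SNS_TOPIC_ARN,
        "Tier": "Bulk",  # lowest-cost retrieval tier
    },
)
print("Started retrieval job:", response["jobId"])
```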
Benefits
Transfer automation
Automation saves time and minimizes the likelihood of human error during the data transfer process, helping ensure a more reliable and consistent operation.
Enhanced data utilization and analysis
Transferring data from Amazon S3 Glacier vaults to S3 buckets facilitates easier data analysis and utilization. Data is more readily accessible for applications and analytics tools, without extended restore times.
Data accessibility, simplified
Objects in Amazon S3 storage classes support tagging and quicker access than archives in a vault. Tagging benefits include data classification, fine-grained access control, lifecycle management, and cost allocation (see the tagging sketch after these benefits).
Cost optimization
For data that is rarely accessed, the Amazon S3 Glacier Deep Archive storage class can save almost 75% on storage costs in the AWS US East (Ohio) Region compared to an S3 Glacier vault.
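As a small illustration of the tagging benefit above, this sketch applies tags to an object after it lands in Amazon S3. The bucket name, key, and tag values are placeholders for illustration, not part of the solution itself.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key for an archive transferred out of a vault.
s3.put_object_tagging(
    Bucket="my-restored-archives",
    Key="vault-exports/archive-0001.dat",
    Tagging={
        "TagSet": [
            {"Key": "data-classification", "Value": "internal"},
            {"Key": "cost-center", "Value": "analytics"},
            {"Key": "source", "Value": "s3-glacier-vault"},
        ]
    },
)
```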
Step 6 The solution stores all job completion notifications in the Amazon Simple Queue Service (Amazon SQS) Notifications queue.
Step 7 When an archive job is ready, the Amazon SQS Notifications queue invokes the AWS Lambda Notifications Processor function. This Lambda function prepares the initial steps for archive retrieval.
Step 8 The Lambda Notifications Processor function places chunk retrieval messages in the Amazon SQS Chunks Retrieval queue for chunk processing.
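A minimal sketch of what Steps 7 and 8 could look like inside an SQS-triggered Lambda function: parse a job-completion notification and enqueue one message per chunk. The queue URL, chunk size, and message fields are illustrative assumptions, not the solution's actual schema.

```python
import json
import os
import boto3

sqs = boto3.client("sqs")

# Assumed environment configuration; the deployed solution uses its own names.
CHUNKS_QUEUE_URL = os.environ.get("CHUNKS_QUEUE_URL", "")
CHUNK_SIZE = 256 * 1024 * 1024  # assume 256 MiB chunks


def handler(event, context):
    """Triggered by the SQS Notifications queue (Step 7)."""
    for record in event["Records"]:
        job = json.loads(record["body"])  # assumed: job-completion JSON
        archive_size = job["ArchiveSizeInBytes"]

        # Step 8: one chunk-retrieval message per byte range.
        for start in range(0, archive_size, CHUNK_SIZE):
            end = min(start + CHUNK_SIZE, archive_size) - 1
            sqs.send_message(
                QueueUrl=CHUNKS_QUEUE_URL,
                MessageBody=json.dumps({
                    "JobId": job["JobId"],
                    "ArchiveId": job["ArchiveId"],
                    "ByteRange": f"{start}-{end}",
                }),
            )
```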
Step 9 The Amazon SQS Chunks Retrieval queue invokes the Lambda Chunk Retrieval function to process each chunk.
Step 10 The Lambda Chunk Retrieval function downloads the chunk from the Amazon S3 Glacier vault.
Step 12 After a new chunk is downloaded, the solution stores chunk metadata (etag, checksum_sha_256, tree_checksum) in Amazon DynamoDB.
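The sketch below illustrates Steps 10 and 12 under simplified assumptions: download one byte range of the staged archive with get_job_output, upload it as a part of an in-progress S3 multipart upload, and record the chunk metadata in DynamoDB. The table, bucket, and attribute names are placeholders.

```python
import hashlib
import boto3

glacier = boto3.client("glacier")
s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("GlacierTransferMetadata")  # assumed name


def retrieve_chunk(vault_name, job_id, archive_id, byte_range, part_number,
                   bucket, key, upload_id):
    # Step 10: download one chunk of the staged archive from the vault.
    start, end = byte_range
    out = glacier.get_job_output(
        accountId="-",
        vaultName=vault_name,
        jobId=job_id,
        range=f"bytes={start}-{end}",
    )
    body = out["body"].read()

    # Upload the chunk as one part of the multipart upload in the target bucket.
    part = s3.upload_part(
        Bucket=bucket, Key=key, PartNumber=part_number,
        UploadId=upload_id, Body=body,
    )

    # Step 12: persist chunk metadata for later validation.
    table.put_item(Item={
        "archive_id": archive_id,
        "part_number": part_number,
        "etag": part["ETag"],
        "checksum_sha_256": hashlib.sha256(body).hexdigest(),
        "tree_checksum": out.get("checksum", ""),  # SHA-256 tree hash from S3 Glacier
    })
```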
Step 13 The Lambda Chunk Retrieval function verifies whether all chunks for that archive have been processed. If yes, it inserts an event into the Amazon SQS Validation queue to invoke the Lambda Validate function.
Step 14 The Lambda Validate function performs an integrity check and then completes the Amazon S3 multipart upload.
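A sketch of the Step 14 completion call under the same assumptions: once every chunk's metadata has been recorded and the checksums agree, the multipart upload is completed by listing the recorded parts in order.

```python
import boto3

s3 = boto3.client("s3")


def complete_transfer(bucket, key, upload_id, chunk_items):
    """chunk_items: the metadata rows recorded in Step 12, one per part (assumed shape)."""
    parts = [
        {"PartNumber": item["part_number"], "ETag": item["etag"]}
        for item in sorted(chunk_items, key=lambda i: i["part_number"])
    ]
    # Step 14: close out the in-progress upload; S3 assembles the final
    # object from the uploaded parts.
    s3.complete_multipart_upload(
        Bucket=bucket,
        Key=key,
        UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )
```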
Step 15 A DynamoDB stream invokes the Lambda Metrics Processor function to update the transfer process metrics in DynamoDB.
Step 16 The Step Functions Orchestrator workflow enters an async wait, pausing until the archive retrieval workflow concludes before initiating the Step Functions Cleanup workflow.
Step 17 The DynamoDB stream invokes the Lambda Async Facilitator function, which unlocks asynchronous waits in Step Functions.
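Steps 16 and 17 rely on the Step Functions "wait for task token" pattern. The sketch below shows how a function invoked by the DynamoDB stream might release that wait; where the token is stored and the shape of the stream record are assumptions made for illustration.

```python
import boto3

sfn = boto3.client("stepfunctions")


def handler(event, context):
    """Invoked by the DynamoDB stream (Step 17)."""
    for record in event["Records"]:
        new_image = record.get("dynamodb", {}).get("NewImage", {})

        # Assumption: the workflow stored its task token in the table row,
        # and a status attribute flips when archive retrieval finishes.
        token = new_image.get("task_token", {}).get("S")
        status = new_image.get("status", {}).get("S")

        if token and status == "RETRIEVAL_COMPLETED":
            # Releases the Orchestrator's async wait (Step 16) so the
            # Cleanup workflow can start.
            sfn.send_task_success(taskToken=token, output="{}")
```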
Step 18 Amazon EventBridge rules periodically initiate the Step Functions Extend Download Window and Update CloudWatch Dashboard workflows.
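As an illustration of Step 18, this sketch wires a scheduled EventBridge rule to a Step Functions state machine. The rule name, schedule, ARNs, and IAM role are placeholders; the deployed solution creates its own resources.

```python
import boto3

events = boto3.client("events")

# Hypothetical ARNs for illustration only.
STATE_MACHINE_ARN = "arn:aws:states:us-east-2:111122223333:stateMachine:ExtendDownloadWindow"
EVENTS_ROLE_ARN = "arn:aws:iam::111122223333:role/eventbridge-start-sfn"

# A rule that fires on a fixed schedule (Step 18).
events.put_rule(
    Name="extend-download-window-schedule",
    ScheduleExpression="rate(12 hours)",
    State="ENABLED",
)

# Point the rule at the Step Functions workflow.
events.put_targets(
    Rule="extend-download-window-schedule",
    Targets=[{
        "Id": "extend-download-window",
        "Arn": STATE_MACHINE_ARN,
        "RoleArn": EVENTS_ROLE_ARN,
    }],
)
```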
Step 19 Monitor the transfer progress by using the CloudWatch dashboard.
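If you prefer to inspect the dashboard programmatically rather than in the console, a minimal sketch follows; the dashboard name is an assumption, since the deployed solution names its own dashboard.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Assumed dashboard name for illustration.
dashboard = cloudwatch.get_dashboard(DashboardName="GlacierTransferProgress")
print(dashboard["DashboardBody"])  # JSON definition of the dashboard widgets
```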
Related content
What is Amazon S3 Glacier?
Amazon S3 Glacier is a secure and durable service for low-cost data archiving and long-term backup that stores data in vaults.
Data Transfer from Amazon S3 Glacier Vaults to Amazon S3 Workshop
This self-paced workshop provides a step-by-step guide for launching the Data Transfer from Amazon S3 Glacier Vaults to Amazon S3 solution in your AWS account.