Storing data with partner backup solutions and Amazon S3 Glacier Deep Archive
AWS recently launched Amazon S3 Glacier Deep Archive, a new storage class that provides the most economical storage currently available in the cloud. This storage class offers you another option at a price point even lower than storing data on tapes in an offsite facility.
This post describes the benefits of using S3 Glacier Deep Archive, which is available through the S3 API, and how it works with partner backup solutions.
Cloud computing is the new normal. Among the first workloads that customers move to the cloud are backup and archive workloads.
Most major backup providers support backing up to S3, either directly through a cloud connector or through gateways such as AWS Storage Gateway. You can move your data out of the data center or use AWS as an offsite copy of backup data. By using S3, you can take advantage of the 11 nines of durability that S3 offers, while protecting your data across three Availability Zones.
AWS has different tiers of storage called storage classes, designed for different use cases and access patterns. Some customers wanted an even-colder storage class that would allow them to replace all their offsite tapes. They didn’t need immediate access to this data, and they wanted to be able to store it for a long time at a low price point.
In response, AWS launched Amazon S3 Glacier Deep Archive, a new storage class that helps you address this need. This new storage class gives you the same durability and protection across three Availability Zones as the other S3 storage classes. S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice in a year. S3 Glacier Deep Archive can be used for backup and disaster recovery use cases, and is a cost-effective and easy-to-manage alternative to magnetic tape systems, whether they are on-premises libraries or off-premises services. S3 Glacier Deep Archive complements Amazon S3 Glacier, which is ideal for archives where data is regularly retrieved and some of the data may be needed in minutes.
To balance speed and cost, S3 Glacier Deep Archive has two retrieval options:
- Standard retrieval: Allows you to retrieve archived data in just 12 hours.
- Bulk retrieval: Allows you to retrieve petabytes of data inexpensively within 48 hours.
Both options enable AWS Partner Network (APN) Technology Partner backup-and-restore solutions to provide a much faster response time than traditional tape management to an offsite facility.
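For backup tools that talk to S3 directly, a restore from S3 Glacier Deep Archive is requested per object through the S3 `RestoreObject` API, choosing between the two tiers above. The following is a minimal sketch, not any partner's implementation; the client is assumed to be a `boto3` S3 client, and the bucket and key names are illustrative:

```python
def request_restore(s3_client, bucket, key, tier="Bulk", days=7):
    """Start an asynchronous restore of an object archived in
    S3 Glacier Deep Archive.

    tier: "Standard" (data available within 12 hours) or
          "Bulk" (within 48 hours). Deep Archive does not offer
          the Expedited tier available to S3 Glacier.
    days: how long the restored copy remains readable in S3.
    """
    if tier not in ("Standard", "Bulk"):
        raise ValueError(
            "S3 Glacier Deep Archive supports only Standard and Bulk tiers"
        )
    return s3_client.restore_object(
        Bucket=bucket,
        Key=key,
        RestoreRequest={
            "Days": days,
            "GlacierJobParameters": {"Tier": tier},
        },
    )

# Example (assumes configured AWS credentials):
#   import boto3
#   request_restore(boto3.client("s3"), "my-backup-bucket",
#                   "backups/2019-04/archive.tar", tier="Standard")
```

Because the restore is asynchronous, backup software typically polls the object (for example, with `HeadObject`) until the temporary copy is available before running the actual restore job.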
Partner backup solutions
APN storage backup-and-restore partners are already supporting this new storage class. Because you may already have existing relationships with APN Technology Partners, support for the S3 Glacier Deep Archive storage class gives you an easy way to take advantage of its economics and other benefits.
For solutions that don’t yet support this storage class, you can still use it with your backup software by replacing your physical tape library with AWS Storage Gateway. Configured as a Tape Gateway, the service presents a virtual tape library (VTL) to your backup application and supports S3 Glacier Deep Archive.
If you have on-premises backup-and-restore deployments, you can move data to the S3 Glacier Deep Archive storage class using the same technology that you’ve been using for years. You don’t have to make major changes to your workflow, so you can make a seamless transition to modernize your backup infrastructure. You can quickly realize the benefit of this new storage class’ lower costs for archiving and long-term retention.
Native cloud connectors
Using native cloud connectors, APN Technology Partners let you choose an S3 target based on length of retention, cost, recovery time objective (RTO), and the likelihood of restore. Adding a cloud repository to your existing backup solutions is as simple as choosing an S3 storage class and adding the authentication necessary to access or create the bucket or archive. After you add the cloud repository, use cloud connectors to send backups directly to AWS.
There may be some scenarios where you want restores to be available immediately. In those cases, choose an S3 storage class that has synchronous access:
- S3 Standard
- S3 Intelligent-Tiering
- S3 Standard-Infrequent Access (S3 Standard-IA)
- S3 One Zone-Infrequent Access (S3 One Zone-IA)
For backups retained for long-term compliance that are rarely accessed, choose an asynchronous archive storage class such as S3 Glacier or S3 Glacier Deep Archive for a more cost-effective choice.
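At the API level, this choice comes down to the `StorageClass` value a connector passes when it writes a backup object. A hedged sketch of that decision, again assuming a `boto3` S3 client and illustrative names:

```python
# Storage classes with synchronous (immediate) reads.
SYNCHRONOUS = {"STANDARD", "INTELLIGENT_TIERING", "STANDARD_IA", "ONEZONE_IA"}
# Archive classes that require an asynchronous restore before reading.
ASYNCHRONOUS = {"GLACIER", "DEEP_ARCHIVE"}


def upload_backup(s3_client, bucket, key, body, storage_class="DEEP_ARCHIVE"):
    """Write a backup object directly into the chosen S3 storage class."""
    if storage_class not in SYNCHRONOUS | ASYNCHRONOUS:
        raise ValueError(f"unknown storage class: {storage_class}")
    return s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        StorageClass=storage_class,
    )

# Example (assumes configured AWS credentials):
#   import boto3
#   upload_backup(boto3.client("s3"), "my-backup-bucket",
#                 "compliance/2019/full-backup.tar", open("backup.tar", "rb"))
```

Objects written with `StorageClass="DEEP_ARCHIVE"` land in the archive tier immediately, with no lifecycle transition charges, which suits long-term compliance copies that are written once and rarely read.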
Commvault and Veritas, APN Advanced Technology and AWS Storage Competency Partners, were launch partners for S3 Glacier Deep Archive. CloudBerry Lab, also an APN Advanced Technology and AWS Storage Competency Partner, followed quickly, supporting S3 Glacier Deep Archive shortly after launch.
Commvault’s backup architecture includes a CommServe server and Commvault MediaAgents. The Commvault MediaAgent connects directly to Amazon S3 and supports S3 Standard, S3 Standard-IA, S3 One Zone-IA, S3 Intelligent-Tiering, S3 Glacier, and S3 Glacier Deep Archive.
After you select your S3 storage class and input your AWS key pair for authentication, you can create the target cloud library. For on-premises backups, a local MediaAgent compresses and deduplicates the data before sending it to the cloud library on S3.
By default, data is transferred to the cloud library over secured channels using HTTPS. Further protection is available from Commvault’s FIPS 140-2 certified data-encryption feature, which encrypts data both in flight and at rest. Commvault gives users control over when data is sent to S3 cloud libraries through retention policies managed on the CommServe server.
In addition to using the data in the cloud library as a backup repository for long-term retention, Commvault adds orchestration to recover on-premises virtual machines and servers to a virtual private cloud (VPC).
Commvault’s support for S3 Glacier Deep Archive includes support for directly sending data to the storage class or using lifecycle policies. The support also includes the ability to use deduplication to create additional cost savings. For more information, see How is Data Stored and Managed in the Various Amazon S3 Archive Storage Classes.
The Veritas NetBackup solution includes a master server, media servers, and a cloud connector. The cloud connector can be configured to let media servers store data directly on S3 storage classes, including S3 Glacier Deep Archive. Data stored through the cloud connector is compressed. Alternatively, an additional appliance, NetBackup CloudCatalyst, can deduplicate the data before sending it to S3.
With version 8.2 of NetBackup, Veritas greatly enhanced its support for AWS, adding support for new services and improving existing functionality. One enhancement is support for deduplication of data stored on all Glacier storage classes. In addition to the launch support with the cloud connector, when you upgrade to 8.2, you can send deduplicated data directly to S3 Glacier Deep Archive for additional cost savings.
When configuring the cloud connector, you can either go directly to Glacier or add a lifecycle policy for transitioning data to a different S3 storage class based on your desired retention. For more information about NetBackup support for S3 storage classes and configuration, see the NetBackup Cloud Administrator’s Guide.
CloudBerry Backup for Amazon S3 provides agents that install on workstations, servers, and virtual environments. The agents send data in compressed, encrypted, and deduplicated form to S3 storage classes, including S3 Glacier Deep Archive.
In addition to backup, S3 Glacier Deep Archive is also supported in CloudBerry Explorer. This cloud file manager provides a local interface that lets users access, move, and manage files across local storage and S3 storage classes, including S3 Glacier Deep Archive. For more information, see the CloudBerry website.
S3 lifecycle policies
As shown earlier, some partners use object lifecycle management in Amazon S3. Data can transition between storage classes over time via policies set on an S3 bucket, an object prefix, or an object tag value.
Depending on the APN Technology Partner, these policies can often be configured from the partner’s management console. For example, an object can tier from S3 Standard to S3 Standard-IA after 30 days and then move to S3 Glacier Deep Archive after 60 days. For more information about the cost of lifecycle transitions, see the Amazon S3 pricing page.
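The same transitions can be expressed directly against the S3 API with `PutBucketLifecycleConfiguration`. The sketch below builds the example rule described above (Standard to Standard-IA after 30 days, then to Deep Archive after 60); the prefix, rule ID, and day counts are illustrative, and the client is assumed to be a `boto3` S3 client:

```python
def lifecycle_rule(prefix, ia_after=30, deep_archive_after=60):
    """Build one S3 lifecycle rule:
    S3 Standard -> S3 Standard-IA -> S3 Glacier Deep Archive."""
    return {
        "ID": f"archive-{prefix or 'bucket'}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": ia_after, "StorageClass": "STANDARD_IA"},
            {"Days": deep_archive_after, "StorageClass": "DEEP_ARCHIVE"},
        ],
    }


def apply_lifecycle(s3_client, bucket, rules):
    """Attach the lifecycle rules to the bucket (replaces existing rules)."""
    return s3_client.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": rules},
    )

# Example (assumes configured AWS credentials):
#   import boto3
#   apply_lifecycle(boto3.client("s3"), "my-backup-bucket",
#                   [lifecycle_rule("backups/")])
```

Note that `PutBucketLifecycleConfiguration` replaces the bucket’s entire lifecycle configuration, so include all rules you want to keep in a single call.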
AWS Storage Gateway
If you have backup solutions without native connectors or lifecycle support, the Tape Gateway version of the Storage Gateway service supports S3 Glacier Deep Archive.
Tape Gateway works with existing backup software tools and acts as a drop-in replacement for physical tape infrastructure, so you don’t need to change existing tape management workflows. Storage Gateway also supports workloads requiring compliance with various regulations and standards, such as HIPAA, PCI DSS, SOC (1, 2, 3), and ISO (9001, 27001, 27017, 27018).
If you must store data for long-term retention, compliance, or offsite tape replacement, S3 Glacier Deep Archive provides a cost-effective solution. With support through lifecycle policies, AWS Storage Gateway, and APN Technology Partner solutions, you have many options to start getting the benefits of this new storage class.
S3 Glacier Deep Archive has a price point that is not only the lowest currently available for cloud storage, but also competitive with offsite tape storage. Using this storage class in combination with partner solutions that offer deduplication creates compelling TCO savings, and a strong argument for moving all of your backup and archive data from tapes to the cloud.
To learn more, read Jeff Barr’s launch post, New Amazon S3 Storage Class – S3 Glacier Deep Archive or see Storage Classes. If you’re ready to get started with APN Technology Partner solutions to use this new storage class, see AWS Storage Solutions.