AWS Cloud Financial Management

Putting AWS re:Invent 2021 cost optimization announcements into practice

Following re:Invent 2021 and all the amazing announcements, there are many new and exciting ways for customers to optimize cloud spend. This blog highlights some of the opportunities customers can benefit from most, and offers guidance on how to take advantage of them and capture the savings. If you want to hear about other re:Invent announcements, check out the Top Announcements for AWS re:Invent 2021 post or watch our re:Invent episode from ‘The Keys to AWS Optimization – Twitch Series’.

I’m going to highlight four different announcements and show you how you can use them to save money. We’ll start with Amazon DynamoDB, which now has a Standard-Infrequent Access table class, dive deep on Amazon S3 Glacier Instant Retrieval and the new Amazon S3 Intelligent-Tiering Archive Instant Access tier, talk about the new Amazon Elastic Block Store (Amazon EBS) Snapshots Archive, and then wrap up with Graviton2 for AWS Fargate. Let’s get started!

Amazon DynamoDB Standard-Infrequent Access (Standard-IA)

Opportunity: Reduce costs by up to 60% for infrequently accessed data

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB is popular in transactional use cases where applications need low-latency data access at any scale. Such use cases include bookmarks and watchlists in media streaming, user profile and transaction history, and player session history in games.

At re:Invent 2021, we announced the new Amazon DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, which helps you reduce your DynamoDB costs by up to 60% for tables that store infrequently accessed data. The DynamoDB Standard-IA table class is ideal for use cases that require long-term storage of data that is infrequently accessed, such as application logs, old social media posts, e-commerce order history, and past gaming achievements. For example, retail customers occasionally want to look up an older order to re-purchase an item or get product information. With DynamoDB Standard-IA, you can retain infrequently accessed historical customer orders at a lower cost. You can switch between the DynamoDB Standard and DynamoDB Standard-IA table classes with no impact on table performance, durability, or availability and without changing your application code.
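As a minimal sketch of what that switch looks like with the AWS SDK for Python (boto3), the table name below is a placeholder for one of your own tables:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Switch an existing table to the Standard-IA table class.
# "my-order-history" is a placeholder table name.
dynamodb.update_table(
    TableName="my-order-history",
    TableClass="STANDARD_INFREQUENT_ACCESS",
)

# Switching back later is the same call with TableClass="STANDARD".
```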

So, how can you determine if you have a good use case for this new option?

  1. If your storage cost exceeds 50% of your throughput (read and write) cost. This usage pattern is a good indicator that you have storage that you are not reading or writing on a regular basis.
  2. If you have ETL to archive data to services like Amazon S3. Rather than continuing to move data, it can be kept in the table at a lower cost.
  3. If you use rolling tables (creating new tables day/month). In this case, the older tables may be less frequently read; therefore, perfect for infrequent access.

The simplest call to action from the above is to check out your Amazon DynamoDB spend in your Cost and Usage Report by using this query from the Well-Architected Query Library. If your storage cost exceeds 50% of your throughput cost, then look at the DynamoDB Standard-IA table class.
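If you have not set up the Cost and Usage Report, a rough version of the same check can be sketched against the Cost Explorer API. This is an approximation only: the usage type names vary by Region and billing mode, so the string matching below is an assumption you should adjust for your account.

```python
import boto3

ce = boto3.client("ce")

# Pull last month's DynamoDB cost grouped by usage type.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-12-01", "End": "2022-01-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "SERVICE", "Values": ["Amazon DynamoDB"]}},
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

storage = throughput = 0.0
for group in response["ResultsByTime"][0]["Groups"]:
    usage_type = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    # Approximate split: storage vs. read/write usage types.
    if "TimedStorage" in usage_type:
        storage += cost
    elif any(s in usage_type for s in ("ReadCapacity", "WriteCapacity", "Requests")):
        throughput += cost

if throughput and storage > 0.5 * throughput:
    print("Storage exceeds 50% of throughput cost - consider Standard-IA")
```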

Amazon S3 Glacier Instant Retrieval and the new Amazon S3 Intelligent-Tiering Archive Instant Access tier

Opportunity: Save up to 68% on storage costs for rarely accessed data that requires milliseconds retrieval

At re:Invent, we launched Amazon S3 Glacier Instant Retrieval, an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires milliseconds retrieval. S3 Glacier Instant Retrieval is the ideal storage class if you have data that is accessed once a quarter and requires milliseconds retrieval times, such as medical images, sports broadcasting highlights, and photos and videos uploaded to a photo sharing website. For example, as an end user of a photo sharing website, you upload photos that you expect to treasure forever. You’ll share them with your friends and family, and a photo will likely be accessed a few times within the first few weeks, but rarely after that. However, you may come back to that website a few months or even years later, and you wouldn’t want to wait long to retrieve the photo. This scenario is an ideal use case for S3 Glacier Instant Retrieval.
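One way to adopt the class for data like this is a lifecycle rule that transitions objects once their busy period has passed. The sketch below assumes a bucket name, prefix, and 90-day threshold purely for illustration:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the "photos/" prefix to S3 Glacier Instant
# Retrieval 90 days after creation. Bucket, prefix, and day count are
# placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-photo-sharing-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-photos",
                "Status": "Enabled",
                "Filter": {"Prefix": "photos/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER_IR"}],
            }
        ]
    },
)
```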

In deciding which Amazon S3 storage class best fits your workload, consider the access patterns and retention time of your data. Many workloads have changing, unpredictable, or unknown access patterns, and that is why Amazon S3 Intelligent-Tiering is used by many customers as their default storage class to automatically save on storage costs. And, at re:Invent, we announced that S3 Intelligent-Tiering now automatically includes a new Archive Instant Access tier with cost savings of up to 68% for rarely accessed data that needs milliseconds retrieval. If you are using S3 Intelligent-Tiering today, you can use AWS Cost Explorer to measure the additional savings from the Archive Instant Access tier.
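If you want new uploads to land in S3 Intelligent-Tiering by default, so they can move into the Archive Instant Access tier automatically once they go unaccessed, you can set the storage class at upload time. The bucket, key, and object body below are placeholders for illustration:

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into S3 Intelligent-Tiering; objects that are not accessed
# are moved to lower-cost tiers automatically, with no retrieval fees.
s3.put_object(
    Bucket="my-analytics-bucket",          # placeholder bucket name
    Key="datasets/events-2021.parquet",    # placeholder key
    Body=b"example object data",           # placeholder content
    StorageClass="INTELLIGENT_TIERING",
)
```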

You can use S3 Storage Class Analysis to help decide which objects to transition to the right storage class to optimize cost. Another helpful tool is this AWS Well-Architected Labs query that looks at metadata and request activity to suggest S3 buckets that are infrequently or rarely accessed. To complement this data, you can also review your Amazon S3 usage in Amazon S3 Storage Lens and visit the “5 Ways to reduce data storage costs using Amazon S3 Storage Lens” blog to identify other cost savings opportunities.

Amazon Elastic Block Store (Amazon EBS) Snapshots Archive

Opportunity: Lower your snapshot storage costs by up to 75%

Amazon EBS Snapshots Archive is a lower-cost storage tier that stores a full copy of your point-in-time Amazon EBS snapshots. The key phrase here is ‘full copy’. This is different from the EBS Snapshots standard tier, where new snapshots are incremental and reference other snapshots in the lineage, so you only pay for the changes. Archive includes the full snapshot, and you pay a lower rate.

At a high level, good candidates for Archive are snapshots whose changed blocks exceed 25% of the size of the full snapshot. Let’s break this down: if you have a 100 GB volume with a snapshot lifecycle policy, the first snapshot needs to stay in the standard tier. Following that, if you only have small changes to the blocks, i.e., less than 25% changed, then you should stay in the standard tier. Any more than 25% unique bytes and it becomes more cost efficient to move, as long as you want to retain the snapshot for 90 days or more. See more details on the Amazon EBS Snapshots page.
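To make the 25% rule of thumb concrete, here is a small back-of-the-envelope calculation. The prices used are assumed us-east-1 list prices at the time of writing and should be verified for your Region:

```python
# Back-of-the-envelope comparison for a 100 GB volume.
# Prices are assumed us-east-1 list prices - verify for your Region.
STANDARD_PRICE = 0.05    # $/GB-month, EBS Snapshots standard tier
ARCHIVE_PRICE = 0.0125   # $/GB-month, EBS Snapshots Archive tier

full_snapshot_gb = 100
changed_fraction = 0.30  # 30% of blocks changed since the previous snapshot

# Standard tier stores only the incremental (changed) blocks.
standard_cost = full_snapshot_gb * changed_fraction * STANDARD_PRICE  # $1.50/month
# Archive stores a full copy of the snapshot, but at the lower rate.
archive_cost = full_snapshot_gb * ARCHIVE_PRICE                       # $1.25/month

print(f"Standard: ${standard_cost:.2f}/month, Archive: ${archive_cost:.2f}/month")
# Break-even sits at ARCHIVE_PRICE / STANDARD_PRICE = 25% changed blocks,
# which is where the 25% rule of thumb comes from.
```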

With customers having hundreds or thousands of snapshots in their accounts, the big question is: how can you identify the prime candidates for EBS Snapshots Archive? To get started, let’s look at when you would want to keep a full snapshot.

We often see customers needing full snapshots in standard scenarios as part of a patch or update process. This is usually when they are trying to capture an old or gold version of an instance that they may want to archive, or when they are required to keep a copy for compliance. Common examples include:

  1. If you are moving a copy of your volumes into Amazon S3 for backups, then you can move these snapshots into Archive instead, for ease and a lower cost.
  2. If you are creating a standalone AMI and want to keep a gold version of it, or you create a snapshot of the volumes as a backup before you make the AMI.
  3. If you have an unattached volume and you need to keep the data for compliance. Since the volume is unattached and its data won’t change, the snapshot won’t be part of a snapshot lineage, so it can be snapshotted and moved to the archive tier.
  4. If it’s the end of a project and you need to keep the snapshots around for legal/compliance reasons, then this is a good candidate since it’s a point-in-time view.

None of these are likely to be part of a lifecycle policy and so, as single snapshots, they are prime candidates for EBS Snapshots Archive.
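Once you have identified a candidate, moving it to the archive tier is a single API call. This is a sketch rather than a full archiving workflow, and the snapshot ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2")

# Move a standalone, point-in-time snapshot to the EBS Snapshots Archive tier.
# "snap-0123456789abcdef0" is a placeholder snapshot ID.
ec2.modify_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",
    StorageTier="archive",
)
```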

If you would like to see more about how your snapshots and EBS volumes are connected, you can do so in the AWS console. Or, set up the AWS Well-Architected Optimization Data Collector lab inventory module to pull together a data set of all your snapshots and EBS volumes, and use this query to combine them.

There are some things to keep an eye out for when looking at EBS Snapshots Archive. Snapshots in Archive do not have any references to other snapshots in the lineage. Archived snapshots have a minimum retention period of 90 days. You will incur a cost of $0.03/GB for restores, with typical restore times of 24-72 hours. To find out more, check out the archiving guide.
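If you later need the data back, a restore is also a single call. The snapshot ID and the 10-day temporary restore window below are example values only:

```python
import boto3

ec2 = boto3.client("ec2")

# Restore an archived snapshot to the standard tier for 10 days.
# Set PermanentRestore=True instead for a permanent restore.
ec2.restore_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",  # placeholder snapshot ID
    TemporaryRestoreDays=10,
)
```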

Graviton2 Support for AWS Fargate

Opportunity: Get up to 40% better price-performance for your Serverless Containers

2021 had lots of great Graviton announcements: AWS Lambda Functions Powered by AWS Graviton2, AWS Graviton2-based instances for Amazon Neptune, and the AWS Graviton Ready partner solutions. However, in this section, I want to focus on AWS Fargate, which is joining the club of AWS managed services that can run on Graviton2, giving you up to 40% better price-performance. To read more about how to change your Fargate tasks, I would recommend the AWS Blog ‘Announcing AWS Graviton2 Support for AWS Fargate – Get up to 40% Better Price-Performance for Your Serverless Containers’.

The move to Graviton2 is simple as long as your application is ARM64 compatible and you are using Fargate Platform Version (PV) 1.4.0 or later. To check compatibility, see the AWS Graviton Getting Started GitHub repository, and use the AWS Fargate platform versions section in the AWS documentation to learn how to migrate. To take advantage of these savings, you only have to make a small change to your task definition, setting the cpuArchitecture parameter from X86_64 to ARM64, and boom! You are done!
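For illustration, here is a minimal sketch of registering a Fargate task definition that targets Graviton2 via the runtimePlatform parameter. The family name, container image, and CPU/memory sizing are placeholder values, and the image you use must be built for ARM64:

```python
import boto3

ecs = boto3.client("ecs")

# Register a Fargate task definition that runs on Graviton2 (ARM64).
ecs.register_task_definition(
    family="my-arm64-service",             # placeholder family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    runtimePlatform={
        "cpuArchitecture": "ARM64",        # was X86_64 before the change
        "operatingSystemFamily": "LINUX",
    },
    containerDefinitions=[
        {
            "name": "app",
            # Placeholder image; it must be ARM64-compatible (this one is multi-arch).
            "image": "public.ecr.aws/docker/library/nginx:latest",
            "essential": True,
        }
    ],
)
```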

Now, if you are not able to utilize Graviton2 for AWS Fargate yet, and you still want to optimize your cost for Amazon Elastic Container Service, fear not! There are lots of other ways you can save. This handy blog, ‘Cost Optimization Checklist for Amazon ECS and AWS Fargate’, will walk you through methods such as rightsizing, autoscaling, and utilizing Spot.

So now is a great time for you to take a look at your DynamoDB, S3, EBS, and Fargate spend. See if you can take advantage of these features, as none of them require a fundamental change to the way your application is deployed and managed. Let us know how you get on!

Conclusion

In summary, the key with these announcements, and the general use of AWS, is ensuring you use the right service for your needs. Choosing the right size, tier, or engine for your resource can really impact your cost optimization. Hopefully this blog has highlighted some new options you have to help you on your Cloud Financial Management journey.