Amazon ElastiCache features

Why Amazon ElastiCache?

Amazon ElastiCache is a fully managed, Valkey-, Memcached-, and Redis OSS-compatible service that delivers real-time, cost-optimized performance and up to 99.99% availability for modern applications. ElastiCache is ideal for high-performance use cases such as data caching, web and mobile apps, healthcare apps, financial apps, gaming, ad tech, IoT, media streaming, session stores, leaderboards, machine learning (ML), and microservices-based applications. Refer to our Amazon ElastiCache use cases to learn how ElastiCache can help.

ElastiCache speeds up database and application performance, scaling to hundreds of trillions of requests per day with microsecond response time. Benefits include enhanced security, reliability, scalability, and performance when compared to open source alternatives. It also unlocks cost savings for read-heavy workloads and provides cost-optimization features such as data tiering for memory-intensive workloads.  

ElastiCache now supports Valkey, an open source project that is a drop-in replacement for Redis OSS and is priced up to 33% lower than other supported engines.

Serverless

With Amazon ElastiCache Serverless, you can create a highly available cache in under a minute without infrastructure provisioning or configuration. You can create an ElastiCache Serverless cache in a few steps by specifying a cache name in the AWS Management Console, AWS Software Development Kit (AWS SDK), or AWS Command Line Interface (AWS CLI).
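
For illustration, here is a minimal sketch of creating a serverless cache with the AWS SDK for Python (boto3); the cache name, engine choice, and Region are placeholder values, not prescribed settings.

```python
import boto3

# Hypothetical example: create a Valkey-based serverless cache.
elasticache = boto3.client("elasticache", region_name="us-east-1")

response = elasticache.create_serverless_cache(
    ServerlessCacheName="my-serverless-cache",   # placeholder name
    Engine="valkey",
    Description="Example serverless cache",
)
print(response["ServerlessCache"]["Status"])      # typically "creating" at first
```

The equivalent AWS CLI call is `aws elasticache create-serverless-cache --serverless-cache-name my-serverless-cache --engine valkey`.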

Watch Introducing Amazon ElastiCache Serverless for a brief overview.

ElastiCache Serverless removes the complex, time-consuming process of capacity planning by continuously monitoring a cache’s compute, memory, and network use and instantly scaling vertically and horizontally to meet demand.

With our pay-for-use billing model, you do not have to worry about how quickly ElastiCache Serverless scales back capacity after you scale down a workload. You only pay for the data you store and the compute your application uses. Visit the ElastiCache pricing page to learn more.

You can use ElastiCache Serverless for Valkey starting as low as $6 per month, priced 33% lower than other supported engines.

Easy to use

With ElastiCache Serverless, you can create a new serverless cache in under a minute using the console, AWS CLI, or AWS SDKs, without needing to manage infrastructure or capacity. If you are designing your own cluster, resources are preconfigured with appropriate parameters and settings, and cache parameter groups give you granular control for fine-tuning your environment.

Refer to our documentation to learn how to quickly get started with ElastiCache.

ElastiCache is a fully managed service. We automate time-consuming management tasks—such as capacity planning, software patch management, failure detection, and recovery—allowing you to pursue higher value application development. You get built-in access to the underlying in-memory database environment, making it straightforward to use ElastiCache with your existing Valkey, Memcached, and Redis OSS tools and applications. With ElastiCache Serverless, all minor version updates, performance enhancements, and security patches are automatically applied with no configuration required and without application disruption.

You can use the console for Amazon Relational Database Service (Amazon RDS) and Amazon Aurora to create an ElastiCache cluster and attach it to your relational database. By doing so, you can accelerate application performance with faster reads and reduce costs. Learn more about creating and attaching an ElastiCache cluster in Amazon RDS and in Aurora.

Amazon CloudWatch metrics provide insight into your ElastiCache resources at no additional charge. You can use the console to view over 40 key operational metrics for your instances, including compute, utilized memory, cache hit ratio, active connections, replication, and commands. To learn more about monitoring your cache cluster, refer to our documentation on monitoring CloudWatch metrics for ElastiCache.
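
As a sketch of programmatic monitoring (the cluster ID below is a placeholder, and CPUUtilization is just one of the available metrics), you can query these metrics from CloudWatch directly:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Pull the last hour of CPU utilization for a self-designed cache node,
# averaged over 5-minute periods.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ElastiCache",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "CacheClusterId", "Value": "my-cache-cluster-001"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```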

ElastiCache publishes messages about notable events. ElastiCache Serverless events, including cache creation, deletion, and configuration updates, are sent to Amazon EventBridge. When working with self-designed cache clusters, ElastiCache sends events to Amazon Simple Notification Service (Amazon SNS).
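
As a hedged sketch of consuming these events (the rule name and SNS topic ARN are placeholders; the EventBridge event source for ElastiCache is assumed to be aws.elasticache):

```python
import json

import boto3

events = boto3.client("events")

# Create an EventBridge rule that matches ElastiCache events and forwards
# them to an existing SNS topic (placeholder ARN).
events.put_rule(
    Name="elasticache-events",
    EventPattern=json.dumps({"source": ["aws.elasticache"]}),
)
events.put_targets(
    Rule="elasticache-events",
    Targets=[{"Id": "notify-ops", "Arn": "arn:aws:sns:us-east-1:123456789012:ops-alerts"}],
)
```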

Benefit from the ability to tag your ElastiCache resources and snapshots for tracking and billing purposes. You can use AWS Cost Explorer to attribute costs to resources and resource groups to create and maintain collections of resources that share a common set of tags. To learn more about tagging your ElastiCache resources, refer to the documentation on ElastiCache tagging.
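
For example, tags can be applied through the API; the ARN below is a placeholder that you would replace with your own account ID, Region, and resource name:

```python
import boto3

elasticache = boto3.client("elasticache")

# Placeholder ARN for a self-designed cluster.
cluster_arn = "arn:aws:elasticache:us-east-1:123456789012:cluster:my-cache-cluster"

elasticache.add_tags_to_resource(
    ResourceName=cluster_arn,
    Tags=[
        {"Key": "team", "Value": "payments"},
        {"Key": "cost-center", "Value": "1234"},
    ],
)

# Confirm the tags currently applied to the resource.
print(elasticache.list_tags_for_resource(ResourceName=cluster_arn)["TagList"])
```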

ElastiCache provides built-in support for JSON documents in addition to data structures included in Valkey and Redis OSS. You can simplify application development by using the built-in commands designed and optimized for JSON documents. ElastiCache supports partial JSON document updates as well as powerful searching and filtering using the JSONPath query language. JSON support is available when using ElastiCache version 7.2 for Valkey and ElastiCache version 6.2 for Redis OSS and above.
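
A minimal sketch with the redis-py client (the endpoint is a placeholder, and the example assumes a cluster mode disabled cache with in-transit encryption enabled):

```python
import redis

r = redis.Redis(
    host="my-cache.example.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    ssl=True,
    decode_responses=True,
)

# Store a JSON document, then update individual fields in place.
r.json().set("user:42", "$", {"name": "Ana", "plan": "basic", "visits": 1})
r.json().set("user:42", "$.plan", "premium")    # partial document update
r.json().numincrby("user:42", "$.visits", 1)    # increment a numeric field

# JSONPath lets you read just the fields you need.
print(r.json().get("user:42", "$.plan"))        # ['premium']
```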

Performance and scalability

ElastiCache helps improve application performance and increase throughput for read-heavy workloads by removing the need to access disk-based databases for frequently accessed data. ElastiCache can scale to millions of operations per second with microsecond response times.

ElastiCache Serverless automatically and elastically scales to meet application performance demands. ElastiCache Serverless continuously monitors the memory, compute, and network bandwidth used on the cache by your application. It enables the cache to scale up in place, while also scaling out in parallel, to ensure the cache can support the traffic needs of your application. Learn more about scaling ElastiCache clusters.

When designing your own cache, ElastiCache auto scaling gives you the ability to automatically increase or decrease the desired number of shards or replicas to maintain steady, predictable performance at the lowest possible cost. ElastiCache uses AWS Application Auto Scaling to manage scaling and CloudWatch metrics to determine when it is time to scale up or down.
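
As a sketch of registering shard auto scaling through Application Auto Scaling (the replication group ID, capacity bounds, and target value are placeholders to adapt to your workload):

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "replication-group/my-replication-group"  # placeholder

# Register the number of shards (node groups) as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="elasticache",
    ResourceId=resource_id,
    ScalableDimension="elasticache:replication-group:NodeGroups",
    MinCapacity=2,
    MaxCapacity=10,
)

# Track average engine CPU on primaries, targeting roughly 60% utilization.
autoscaling.put_scaling_policy(
    PolicyName="shard-cpu-target-tracking",
    ServiceNamespace="elasticache",
    ResourceId=resource_id,
    ScalableDimension="elasticache:replication-group:NodeGroups",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
        },
    },
)
```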

Availability and reliability

ElastiCache offers a 99.99% Service Level Agreement (SLA) when using a multi-AZ or serverless configuration. ElastiCache Serverless automatically stores data redundantly across multiple Availability Zones, with no user configuration required. When designing your own cache cluster, you can take advantage of multiple AWS Availability Zones by creating replicas in multiple Availability Zones to achieve high availability and scale read traffic. If a primary node is lost, AWS automatically detects the failure and fails over to a read replica, providing higher availability without the need for manual intervention. Read more about high availability using replication groups and minimizing downtime in ElastiCache with multiple Availability Zones.
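
For a self-designed cluster, Multi-AZ with automatic failover is enabled at creation time. A minimal sketch (the replication group name, node type, and node count are placeholders):

```python
import boto3

elasticache = boto3.client("elasticache")

# One primary plus two read replicas, spread across Availability Zones,
# with automatic failover enabled.
elasticache.create_replication_group(
    ReplicationGroupId="my-ha-cache",                    # placeholder
    ReplicationGroupDescription="Multi-AZ cache with automatic failover",
    Engine="valkey",
    CacheNodeType="cache.r7g.large",
    NumCacheClusters=3,
    MultiAZEnabled=True,
    AutomaticFailoverEnabled=True,
)
```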

Global Datastore in ElastiCache provides fully managed, fast, reliable, and secure replication across AWS Regions. With Global Datastore, you can write to your ElastiCache cluster in one Region and have the data available to be read from two other cross-Region replica clusters, enabling low-latency reads and disaster recovery across AWS Regions. In the unlikely event of Regional degradation, one of the healthy cross-Region replica clusters can be promoted to become the primary cluster with full read and write capabilities.

ElastiCache helps protect your data by creating snapshots of your clusters. You can set up automatic snapshots or initiate manual backups in a few steps in the console or through simple API calls. Using these snapshots, or any Valkey or Redis OSS RDB–compatible snapshot stored on Amazon Simple Storage Service (Amazon S3), you can then seed new ElastiCache clusters.

You can also export your snapshots to an Amazon S3 bucket of your choice for disaster recovery, analysis, or cross-Region backup and restore. Read more about ElastiCache backup and restore to protect your data.
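
A sketch of both steps with boto3 (the replication group, snapshot names, and S3 bucket are placeholders; the bucket must already exist and grant ElastiCache the documented export permissions):

```python
import boto3

elasticache = boto3.client("elasticache")

# Take a manual snapshot of an existing replication group.
elasticache.create_snapshot(
    ReplicationGroupId="my-replication-group",   # placeholder
    SnapshotName="my-manual-snapshot",
)

# Once the snapshot reaches the "available" state, export a copy to S3
# for disaster recovery, analysis, or cross-Region restore.
elasticache.copy_snapshot(
    SourceSnapshotName="my-manual-snapshot",
    TargetSnapshotName="my-manual-snapshot-export",
    TargetBucket="my-backup-bucket",
)
```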

ElastiCache continuously monitors the health of your instances. If a node fails or experiences prolonged performance degradation, ElastiCache automatically restarts or replaces the node and its associated processes.

Security and compliance

ElastiCache allows you to run your resources in Amazon Virtual Private Cloud (Amazon VPC). Amazon VPC allows you to isolate your ElastiCache resources by specifying the IP ranges you wish to use for your nodes and to connect to other applications inside the same Amazon VPC. You can also use this service to configure firewall settings that control network access to your resources. Read more about Amazon VPC and ElastiCache security.

ElastiCache supports encryption in transit, which allows you to encrypt all communications between clients and your ElastiCache server as well as within the ElastiCache service boundary. ElastiCache also supports encryption at rest, which allows you to encrypt data stored on disk and backups stored in Amazon S3. Learn more about encryption and ElastiCache data security. ElastiCache Serverless always encrypts data at rest and in transit using TLS.

Additionally, ElastiCache provides AWS Key Management Service (AWS KMS) integration that allows you to use your own AWS KMS key for encryption. Further, you can use the Valkey and Redis OSS AUTH command for an added level of authentication. You don't have to manage the lifecycle of certificates because ElastiCache automatically manages the issuance, renewal, and expiration of certificates.

ElastiCache supports authentication with AWS Identity and Access Management (IAM) identities, Valkey or Redis OSS AUTH, and role-based access control (RBAC).

With IAM Authentication, you can authenticate a connection to ElastiCache using IAM identities to strengthen your security model and simplify many administrative security tasks. Valkey or Redis OSS authentication tokens, or passwords, enable Valkey or Redis OSS to require a password before allowing clients to run commands, thereby improving data security.
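
As a sketch of the AUTH/RBAC path with redis-py (the endpoint, user name, and token are placeholders; the cache is assumed to be cluster mode disabled with in-transit encryption enabled, and IAM authentication instead uses a signed, short-lived token generated separately):

```python
import redis

r = redis.Redis(
    host="my-cache.example.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    ssl=True,                                     # in-transit encryption (TLS)
    username="app-user",                          # RBAC user; use "default" with AUTH tokens
    password="my-strong-auth-token",              # placeholder credential
    decode_responses=True,
)

r.set("greeting", "hello")
print(r.get("greeting"))
```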

ElastiCache supports compliance with programs such as SOC 1, SOC 2, SOC 3, ISO, MTCS, C5, PCI, HIPAA, and FedRAMP. See AWS Services in Scope by Compliance Program for the current list of supported compliance programs.

Cost effective

With ElastiCache, you only pay for the resources you consume, with no upfront costs or long-term commitments. With ElastiCache Serverless, you are charged for the data you store and the compute your application uses; when designing your own cluster, you are charged hourly based on the number of nodes, node type, and pricing model selected. You can further optimize costs on ElastiCache Serverless for Valkey with a 33% lower price and a 90% lower minimum data storage of 100 MB. For self-designed, node-based ElastiCache for Valkey clusters, you can benefit from a 20% lower cost per node. Visit the ElastiCache pricing page to learn more.

You can optimize your relational database costs with in-memory caching using ElastiCache. You can save up to 55% in cost and gain up to 80x faster read performance using ElastiCache with Amazon RDS for MySQL (compared to Amazon RDS for MySQL alone).

You can use data tiering for ElastiCache as a lower-cost way to scale your clusters up to hundreds of terabytes of capacity. Data tiering provides a price-performance option by using lower-cost SSDs in each cluster node in addition to storing data in memory.

It is ideal for workloads that access up to 20% of their overall dataset regularly and for applications that can tolerate additional latency when accessing data on SSD. ElastiCache data tiering is available when using ElastiCache version 7.2 for Valkey and above and ElastiCache version 6.2 for Redis OSS and above on AWS Graviton2-based R6gd nodes. R6gd nodes have nearly 5x more total capacity (memory + SSD) and can help you achieve over 60% savings when running at maximum utilization compared to R6g nodes (memory only).
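
Data tiering is enabled when you create the cluster on a supported node type. A hedged sketch (names, sizes, and shard counts are placeholders):

```python
import boto3

elasticache = boto3.client("elasticache")

# Cluster mode enabled replication group on r6gd nodes with data tiering.
elasticache.create_replication_group(
    ReplicationGroupId="my-tiered-cache",                 # placeholder
    ReplicationGroupDescription="Cache with data tiering",
    Engine="valkey",
    CacheNodeType="cache.r6gd.xlarge",
    DataTieringEnabled=True,
    NumNodeGroups=2,
    ReplicasPerNodeGroup=1,
    AutomaticFailoverEnabled=True,
)
```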

ElastiCache reserved nodes provide you with a significant discount over on-demand usage when you commit to a one-year or three-year term. With reserved nodes, you can make a no upfront, partial upfront, or all upfront payment to create a reservation to run your node in a specific Region. These reservations are available in one-year or three-year terms and offer a significant discount off the ongoing hourly usage charge. ElastiCache reserved nodes offer size flexibility within a node family and AWS Region, meaning the discounted reserved node rate is automatically applied to usage of all sizes in the same node family. Read more about ElastiCache reserved nodes.

FAQs

ElastiCache is a web service that makes it easy to deploy and run Valkey, Memcached, and Redis OSS protocol-compliant server nodes in the cloud. ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, fully managed, in-memory system instead of relying entirely on slower disk-based systems.

ElastiCache simplifies and offloads the management, monitoring, and operation of in-memory environments, enabling your engineering resources to focus on developing applications. With ElastiCache, you can improve load and response times to user actions and queries and reduce the cost associated with scaling web applications.

Yes. ElastiCache Serverless allows customers to add a cache in under a minute and instantly scales capacity based on application traffic patterns. You can get started by specifying a cache name using the AWS Management Console, AWS SDKs, or AWS CLI. Visit our ElastiCache documentation to learn more.

ElastiCache is fully managed and automates common administrative tasks required to operate a distributed in-memory key-value environment.

With ElastiCache Serverless, you can create a highly available and scalable cache in less than a minute, removing the need to provision, plan for, and manage cache cluster capacity. ElastiCache Serverless automatically and redundantly stores data across three Availability Zones and provides a 99.99% availability Service Level Agreement (SLA). Through integration with CloudWatch monitoring, ElastiCache provides enhanced visibility into key performance metrics associated with your cache resources.

ElastiCache is protocol-compliant with Valkey, Memcached, and Redis OSS, so code, applications, and popular tools that you use with your existing Valkey, Memcached, and Redis OSS environments work seamlessly with the service. With support for clustered configurations in ElastiCache, you get the benefits of a fast, scalable, and easy-to-use managed service that can meet the needs of your most demanding applications. With ElastiCache, you pay only for what you use, with no minimum fee, upfront costs, or long-term commitments.

In-memory caching improves application performance by storing frequently accessed data items in memory so that subsequent reads can be significantly faster than reading from the primary database that may default to disk-based storage. ElastiCache in-memory caching can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, Q&A portals) or compute-intensive workloads (such as a recommendation engine).

In-memory caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations.
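
A common way to apply this is the cache-aside pattern, sketched below with redis-py; the endpoint is a placeholder, and get_user_from_database() is a hypothetical stand-in for a query against your primary database:

```python
import json

import redis

r = redis.Redis(
    host="my-cache.example.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    ssl=True,
    decode_responses=True,
)

def get_user_from_database(user_id: str) -> dict:
    # Placeholder for a read from your primary, disk-based database.
    return {"id": user_id, "name": "example"}

def get_user(user_id: str) -> dict:
    """Cache-aside read: check the cache first, fall back to the database."""
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit: in-memory read
    user = get_user_from_database(user_id)      # cache miss: slower read
    r.set(key, json.dumps(user), ex=300)        # populate cache with a 5-minute TTL
    return user
```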