Amazon ElastiCache features
Amazon ElastiCache is a fully managed, Redis- and Memcached-compatible service that delivers real-time, cost-optimized performance and up to 99.99% availability for modern applications. ElastiCache is ideal for high-performance use cases such as data caching, web and mobile apps, healthcare and financial applications, gaming, ad tech, IoT, media streaming, session stores, leaderboards, machine learning (ML), and microservices-based applications. Refer to our Amazon ElastiCache for Redis use cases and Amazon ElastiCache for Memcached use cases to learn how ElastiCache can help.
ElastiCache speeds up database and application performance, scaling to hundreds of millions of operations per second with microsecond response time. Benefits include enhanced security, reliability, scalability, and performance when compared to open source alternatives. It also unlocks cost savings for read-heavy workloads and provides cost-optimization features like data tiering for memory-intensive workloads. Learn more about ElastiCache features and benefits below.
Get started in under a minute
With Amazon ElastiCache Serverless, you can create a highly available cache in under a minute without infrastructure provisioning or configuration. You can create an ElastiCache Serverless cache in a few steps by specifying a cache name in the AWS Management Console, AWS Software Development Kit (SDK), or AWS Command Line Interface (CLI).
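As a sketch, creating and checking a serverless cache from the AWS CLI takes two commands; the cache name `my-cache` is a placeholder, and the commands assume configured AWS credentials:

```shell
# Create a Redis-compatible serverless cache; "my-cache" is a placeholder name.
aws elasticache create-serverless-cache \
    --serverless-cache-name my-cache \
    --engine redis

# Check when the cache reports an "available" status.
aws elasticache describe-serverless-caches \
    --serverless-cache-name my-cache \
    --query "ServerlessCaches[0].Status"
```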
No capacity planning
ElastiCache Serverless removes the complex, time-consuming process of capacity planning by continuously monitoring a cache’s compute, memory, and network use and instantly scaling vertically and horizontally to meet demand.
Pay-for-use billing model
With our pay-for-use billing model, you do not have to worry about how quickly ElastiCache Serverless scales capacity back down after your workload decreases. You only pay for the data you store and the compute your application uses. Visit the ElastiCache pricing page to learn more.
Easy to use
Quickly get started
With ElastiCache Serverless, you can simply create a new serverless cache in under a minute using the console, AWS CLI, or AWS SDKs, without needing to manage infrastructure or capacity. If you are designing your own cluster, resources are preconfigured with the appropriate parameters and settings, and cache parameter groups enable granular control for fine-tuning of your Redis or Memcached environment.
Fully managed Redis and Memcached
ElastiCache is a fully managed service. We automate time-consuming management tasks—such as capacity planning, software patch management, failure detection, and recovery—allowing you to pursue higher value application development. You get built-in access to the underlying in-memory database environment, making it straightforward to use ElastiCache with your existing Redis and Memcached tools and applications. With ElastiCache Serverless, all minor version updates, performance enhancements, and security patches are automatically applied with no configuration required and without application disruption.
Add a cache to your relational database
You can use the console for Amazon Relational Database Service (Amazon RDS) and Amazon Aurora to create an ElastiCache cluster and attach it to your relational database. By doing so, you can accelerate application performance with faster reads and reduce costs. Learn more about creating and attaching an ElastiCache cluster in Amazon RDS and Aurora documentation.
Monitoring
Amazon CloudWatch metrics provide insights to your ElastiCache resources at no additional charge. You can use the console to view over 40 key operational metrics for your instances, including compute, utilized memory, cache hit ratio, active connections, replication, and commands. To learn more about monitoring your cache cluster, refer to our documentation on monitoring CloudWatch metrics for ElastiCache for Redis and CloudWatch metrics for ElastiCache for Memcached.
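One of those metrics, the cache hit ratio, is derived from the CacheHits and CacheMisses counters that ElastiCache publishes to CloudWatch. As a minimal sketch (the counter values here are illustrative, not fetched from CloudWatch):

```python
def cache_hit_ratio(hits: int, misses: int) -> float:
    """Cache hit ratio from summed CacheHits/CacheMisses counters.

    Returns 0.0 when there has been no traffic, to avoid division by zero.
    """
    total = hits + misses
    return hits / total if total else 0.0

# Example: 9,500 hits and 500 misses over a period -> 0.95 (a 95% hit ratio).
print(cache_hit_ratio(9500, 500))   # 0.95
```

In practice you would feed this function the `Sum` statistic of the two metrics over the same period, retrieved with a CloudWatch client such as boto3.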
Event notifications
ElastiCache publishes messages about notable events. ElastiCache Serverless events, including cache creation, deletion, and cache configuration updates, are sent to Amazon EventBridge. When working with self-designed cache clusters, ElastiCache sends events to Amazon Simple Notification Service (Amazon SNS).
Tagging
Benefit from the ability to tag your ElastiCache resources and Redis or Memcached snapshots for tracking and billing purposes. You can use AWS Cost Explorer to attribute costs to resources and Resource Groups to create and maintain collections of resources that share a common set of tags. To learn more about tagging your ElastiCache resources, refer to the documentation on ElastiCache for Redis tagging and ElastiCache for Memcached tagging.
Performance and scalability
Microsecond response times
ElastiCache helps improve application performance and increase throughput for read-heavy workloads by removing the need to access disk-based databases for frequently accessed data. ElastiCache can scale to millions of operations per second with microsecond response times.
High throughput and low latency
ElastiCache for Redis version 7.1 delivers up to 100% more throughput and 50% lower P99 latency compared to ElastiCache for Redis version 7.0. You can achieve over 1 million requests per second per node, or 500 million requests per second per cluster, on r7g.4xlarge nodes or larger.
ElastiCache for Redis version 7.1 provides enhanced I/O threads that deliver significant improvements to throughput and latency at scale through multiplexing, presentation layer offloading, and more. Enhanced I/O threads are ideal for throughput-bound workloads with multiple client connections, and their benefits scale with the level of workload concurrency. These improvements are illustrated in the diagram, showing work pushed to dedicated threads.
To get started with ElastiCache for Redis version 7.1, create a new cluster or upgrade an existing cluster using the ElastiCache console, at no additional cost. To learn more, visit the ElastiCache for Redis supported versions documentation and read our ElastiCache for Redis version 7.1 blog post.
Scale clusters to match demand
ElastiCache Serverless automatically and elastically scales to meet application performance demands. ElastiCache Serverless continuously monitors the memory, compute, and network bandwidth used on the cache by your application. It enables the cache to scale up in place, while also scaling out in parallel, to ensure the cache can support the traffic needs of your application. Learn more about scaling ElastiCache for Redis clusters and scaling ElastiCache for Memcached clusters.
Application auto scaling
When designing your own cache, ElastiCache for Redis auto scaling gives you the ability to automatically increase or decrease the desired shards or replicas in your ElastiCache for Redis service to maintain steady, predictable performance at the lowest possible cost. ElastiCache for Redis uses AWS Auto Scaling to manage scaling and CloudWatch metrics to determine when it is time to scale up or down.
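As a sketch of the CLI side of this, ElastiCache for Redis auto scaling is configured through Application Auto Scaling; the replication group name, capacity limits, and CPU target below are placeholder values:

```shell
# Register the replication group's shard count as a scalable target
# ("my-repl-group" and the capacity limits are placeholders).
aws application-autoscaling register-scalable-target \
    --service-namespace elasticache \
    --resource-id replication-group/my-repl-group \
    --scalable-dimension elasticache:replication-group:NodeGroups \
    --min-capacity 1 \
    --max-capacity 10

# Target-tracking policy: add or remove shards to hold engine CPU near 60%.
aws application-autoscaling put-scaling-policy \
    --service-namespace elasticache \
    --resource-id replication-group/my-repl-group \
    --scalable-dimension elasticache:replication-group:NodeGroups \
    --policy-name cpu-target-60 \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
        }
    }'
```

The `Replicas` scalable dimension can be registered the same way to scale read replicas rather than shards.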
Availability and reliability
High availability and multi-availability zones
ElastiCache offers a 99.99% Service Level Agreement (SLA) when using a multi-availability zone (multi-AZ) or serverless configuration. ElastiCache Serverless automatically stores data redundantly across multiple AZs, with no user configuration required. When designing your own cache cluster, you can take advantage of multiple AWS AZs by creating replicas in multiple AZs to achieve high availability and scale read traffic. In the case of primary node loss, AWS automatically detects the failure and fails over to a read replica to provide higher availability without the need for manual intervention. Read more about high availability using replication groups and how you can minimize downtime in ElastiCache for Redis with multi-AZ.
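A self-designed multi-AZ replication group can be sketched from the AWS CLI as follows; the group name and node type are placeholders:

```shell
# Multi-AZ replication group: one primary plus two replicas spread across AZs,
# with automatic failover enabled ("my-repl-group" is a placeholder name).
aws elasticache create-replication-group \
    --replication-group-id my-repl-group \
    --replication-group-description "Multi-AZ Redis cache" \
    --engine redis \
    --cache-node-type cache.r7g.large \
    --num-cache-clusters 3 \
    --multi-az-enabled \
    --automatic-failover-enabled
```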
Cross-Region disaster recovery with Global Datastore
Global Datastore in ElastiCache for Redis provides fully managed, fast, reliable, and secure replication across AWS Regions. With Global Datastore, you can write to your ElastiCache for Redis cluster in one Region and have the data available to be read from two other cross-Region replica clusters to enable low-latency reads and disaster recovery across AWS Regions. In the unlikely event of Regional degradation, one of the healthy cross-Region replica clusters can be promoted to become the primary cluster with full read and write capabilities.
Instance monitoring and repair
ElastiCache continuously monitors the health of your instances. If a node fails or experiences prolonged performance degradation, ElastiCache automatically restarts or replaces the node and its associated processes.
Backup, restore, and export
ElastiCache for Redis helps protect your data by creating snapshots of your clusters. You can set up automatic snapshots or initiate manual backups in a few steps in the console or through simple API calls. Using these snapshots, or any Redis RDB–compatible snapshot stored on Amazon Simple Storage Service (Amazon S3), you can then seed new ElastiCache for Redis clusters.
You can also export your snapshots to an Amazon S3 bucket of your choice for disaster recovery, analysis, or cross-Region backup and restore. Read more about ElastiCache for Redis backup and restore to protect your data.
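A manual backup and S3 export can be sketched with two CLI calls; the snapshot and bucket names are placeholders, and the export assumes the bucket grants ElastiCache the required permissions:

```shell
# Take a manual snapshot of a replication group (names are placeholders).
aws elasticache create-snapshot \
    --replication-group-id my-repl-group \
    --snapshot-name my-backup-2024-01-01

# Export the snapshot to an S3 bucket you own, e.g. for cross-Region restore
# or offline analysis.
aws elasticache copy-snapshot \
    --source-snapshot-name my-backup-2024-01-01 \
    --target-snapshot-name my-backup-2024-01-01-export \
    --target-bucket my-elasticache-exports
```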
Security and compliance
Amazon VPC support
ElastiCache allows you to run your resources in Amazon Virtual Private Cloud (Amazon VPC). Amazon VPC allows you to isolate your ElastiCache resources by specifying the IP ranges you wish to use for your nodes, and to connect to other applications inside the same Amazon VPC. You can also use this service to configure firewall settings that control network access to your resources. Read more about Amazon VPC and ElastiCache for Redis security and Amazon VPC and ElastiCache for Memcached security.
Encryption in transit and at rest
ElastiCache supports encryption in transit, which allows you to encrypt all communications between clients and your ElastiCache server, as well as within the ElastiCache service boundary. ElastiCache also supports encryption at rest, which allows you to encrypt your disk usage and backups in Amazon S3. Learn more about encryption and ElastiCache for Redis data security and ElastiCache for Memcached data security. ElastiCache Serverless always encrypts data at rest and in transit using Transport Layer Security (TLS).
Additionally, ElastiCache provides AWS Key Management Service (AWS KMS) integration that allows you to use your own AWS KMS key for encryption. Further, you can use the Redis AUTH command for an added level of authentication. You do not have to manage the lifecycle of certificates, as ElastiCache for Redis automatically manages the issuance, renewal, and expiration of certificates.
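Connecting a client to a cluster with in-transit encryption and Redis AUTH can be sketched with redis-cli; the endpoint and auth token below are placeholders, and `--tls` requires a redis-cli build (6.0+) compiled with TLS support:

```shell
# Verify a TLS + AUTH connection; the endpoint and token are placeholders.
redis-cli -h my-cluster.example.use1.cache.amazonaws.com -p 6379 \
    --tls -a my-auth-token ping
```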
Redis authentication and access control
ElastiCache for Redis supports authentication with AWS Identity and Access Management (IAM) using IAM identities, Redis AUTH, and role-based access control (RBAC).
With IAM Authentication, you can authenticate a connection to ElastiCache for Redis using AWS IAM identities to strengthen your security model and simplify many administrative security tasks. Redis authentication tokens, or passwords, enable Redis to require a password before allowing clients to run commands, thereby improving data security.
Compliance program support
ElastiCache supports compliance with programs such as SOC 1, SOC 2, SOC 3, ISO, MTCS, C5, PCI, HIPAA, and FedRAMP. See AWS Services in Scope by Compliance Program for the current list of supported compliance programs.
AWS PrivateLink support
You can use AWS PrivateLink to privately access ElastiCache from your Amazon VPC. PrivateLink allows you to privately access ElastiCache API operations without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Read more about ElastiCache for Redis API and interface VPC endpoints and ElastiCache for Memcached API and interface VPC endpoints.
Pay only for what you use
With ElastiCache, you only pay for the resources you consume, with no upfront costs or long-term commitments. With ElastiCache Serverless, you are charged for data stored and compute consumed; when designing your own cluster, you are charged hourly based on the number of nodes, node type, and pricing model you select. Visit the ElastiCache pricing page to learn more.
Cost optimize your relational workloads
You can optimize your relational database costs with in-memory caching using ElastiCache. You can save up to 55% in cost and gain up to 80x faster read performance using ElastiCache with Amazon RDS for MySQL (compared to Amazon RDS for MySQL alone).
Data tiering
You can use data tiering for ElastiCache for Redis as a lower-cost way to scale your clusters up to hundreds of terabytes of capacity. Data tiering provides a price-performance option for Redis workloads by using lower-cost solid state drives (SSDs) in each cluster node in addition to storing data in memory.
It is ideal for workloads that access up to 20% of their overall dataset regularly and for applications that can tolerate additional latency when accessing data on SSD. ElastiCache data tiering is available when using Redis version 6.2 and above on Graviton2-based R6gd nodes. R6gd nodes have nearly 5x more total capacity (memory + SSD) and can help you achieve over 60% savings when running at maximum utilization compared to R6g nodes (memory only).
Reserved nodes
ElastiCache reserved nodes provide you with a significant discount over on-demand usage when you commit to a one-year or three-year term. With reserved nodes, you can make a no upfront, partial upfront, or all upfront payment to create a reservation to run your node in a specific Region. Read more about ElastiCache for Redis reserved nodes and ElastiCache for Memcached reserved nodes.
What is ElastiCache used for?
ElastiCache is a web service that makes it easy to deploy and run Redis or Memcached protocol-compliant server nodes in the cloud. ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, fully managed, in-memory system, instead of relying entirely on slower disk-based systems.
ElastiCache simplifies and offloads the management, monitoring, and operation of in-memory environments, enabling your engineering resources to focus on developing applications. With ElastiCache, you can improve load and response times to user actions and queries and reduce the cost associated with scaling web applications.
Is ElastiCache serverless?
Yes. ElastiCache Serverless allows customers to add a cache in under a minute and instantly scales capacity based on application traffic patterns. You can easily get started by specifying a cache name using the console, SDKs, or AWS CLI. Visit our ElastiCache documentation to learn more.
What are the advantages of ElastiCache?
ElastiCache is fully managed and automates common administrative tasks required to operate a distributed in-memory key-value environment.
With ElastiCache Serverless, you can create a highly available and scalable cache in less than a minute, removing the need to provision, plan for, and manage cache cluster capacity. ElastiCache Serverless automatically and redundantly stores data across three Availability Zones and provides a 99.99% availability Service Level Agreement (SLA). Through integration with CloudWatch monitoring, ElastiCache provides enhanced visibility into key performance metrics associated with your cache resources.
ElastiCache is protocol-compliant with Redis and Memcached, so code, applications, and popular tools that you use with your existing Redis or Memcached environments seamlessly work with the service. With the support of clustered configuration in ElastiCache, you get the benefits of a fast, scalable, and easy-to-use managed service that can meet the needs of your most demanding applications. With ElastiCache, you pay only for what you use with no minimum fee, upfront costs, or long-term commitments.
How can ElastiCache in-memory caching help my applications?
In-memory caching improves application performance by storing frequently accessed data items in memory, so that subsequent reads can be significantly faster than reading from the primary database that may default to disk-based storage. ElastiCache in-memory caching can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, Q&A portals) or compute-intensive workloads (such as a recommendation engine).
In-memory caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations.
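The read pattern described above is commonly called cache-aside: check the cache first, fall back to the primary database on a miss, then populate the cache with a time-to-live. A minimal sketch follows; an in-memory dict stands in for a Redis client here so the example is self-contained, and in production the dict operations would map to Redis `GET`/`SETEX` calls:

```python
import time

class CacheAside:
    """Minimal cache-aside sketch: serve hits from memory, load misses
    from the source of truth, and cache the result with a TTL.
    An in-memory dict stands in for a Redis client."""

    def __init__(self, ttl_seconds: float = 300.0):
        self._store = {}          # key -> (value, expiry timestamp)
        self.ttl = ttl_seconds
        self.hits = 0
        self.misses = 0

    def get(self, key, loader):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            self.hits += 1
            return entry[0]       # cache hit: skip the slow backend
        self.misses += 1
        value = loader(key)       # cache miss: read the primary database
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

# Usage: the loader runs only on a miss.
cache = CacheAside(ttl_seconds=60)
db_reads = []

def slow_db_read(key):
    db_reads.append(key)          # stands in for an expensive database query
    return f"row-for-{key}"

print(cache.get("user:1", slow_db_read))  # miss -> reads the database
print(cache.get("user:1", slow_db_read))  # hit  -> served from memory
print(len(db_reads))                      # 1: the database was read once
```

The TTL bounds staleness: after it expires, the next read falls through to the database again and refreshes the cached value.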
How do I set up and get started with ElastiCache?
It’s straightforward to get started with ElastiCache. If you are not already signed up for ElastiCache, you can click the “Get started” button from the ElastiCache overview page to complete the sign-up process. You must have an AWS account; if you do not already have one, you will be prompted to create one when you begin the ElastiCache sign-up process.
Upon sign-up, new AWS customers receive 750 hours of ElastiCache cache.t2.micro or cache.t3.micro node usage for free for up to 12 months as part of the AWS Free Tier.
After you sign up for ElastiCache, refer to the getting started guide for ElastiCache for Redis and getting started guide for ElastiCache for Memcached to learn how you can launch a cluster within minutes by using the console, AWS CLI, or ElastiCache APIs.