Amazon ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases. The service simplifies and offloads the management, monitoring and operation of in-memory cache environments, enabling your engineering resources to focus on developing applications. Using Amazon ElastiCache, you can not only improve load and response times to user actions and queries, but also reduce the cost associated with scaling web applications.
Amazon ElastiCache automates common administrative tasks required to operate a distributed cache environment. Using Amazon ElastiCache, you can add a caching layer to your application architecture in a matter of minutes via a few clicks of the AWS Management Console. Once a cache cluster is provisioned, Amazon ElastiCache automatically detects and replaces failed cache nodes, providing a resilient system that mitigates the risk of overloaded databases, which slow website and application load times. Through integration with Amazon CloudWatch monitoring, Amazon ElastiCache provides enhanced visibility into key performance metrics associated with your cache nodes. Amazon ElastiCache is protocol-compliant with Memcached and Redis, so code, applications, and popular tools that you use today with your existing Memcached or Redis environments will work seamlessly with the service. As with all Amazon Web Services, there are no up-front investments required, and you pay only for the resources you use.
Q: What is in-memory caching and how does it help my applications?
The in-memory caching provided by Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing and Q&A portals) or compute-intensive workloads (such as a recommendation engine). In-memory caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally-intensive calculations.
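The read path described above is commonly implemented as the cache-aside pattern. The sketch below uses a plain dict as a stand-in for a Memcached or Redis client, and `slow_query` as a stand-in for an I/O-intensive database query; both names are ours, not part of any ElastiCache API.

```python
import time

# A plain dict stands in for a Memcached/Redis client in this sketch;
# a real deployment would use a client library pointed at the cluster endpoints.
cache = {}

def slow_query(key):
    """Stand-in for an I/O-intensive database query or expensive computation."""
    time.sleep(0.01)  # simulate disk/network latency
    return f"result-for-{key}"

def get_with_cache(key):
    # Cache-aside: check the in-memory cache first, fall back to the slow
    # source on a miss, then populate the cache so subsequent reads are
    # served from memory at low latency.
    if key in cache:
        return cache[key]
    value = slow_query(key)
    cache[key] = value
    return value
```

The first read of a key pays the full query cost; repeat reads are served from memory, which is where the latency and throughput gains come from.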
Q: What does Amazon ElastiCache manage on my behalf?
Amazon ElastiCache manages the work involved in setting up a distributed in-memory cache, from provisioning the server resources you request to installing the caching software. Once your cache environment is up and running, the service automates common administrative tasks such as failure detection and recovery, and software patching. Amazon ElastiCache provides detailed monitoring metrics associated with your Cache Nodes, enabling you to diagnose and react to issues very quickly. For example, you can set up thresholds and receive alarms if one of your Cache Nodes is overloaded with requests.
Q: What are Amazon ElastiCache Cache Nodes and Cache Clusters?
A Cache Node is the smallest building block of an Amazon ElastiCache deployment. It is a fixed-size chunk of secure, network-attached RAM. Each Cache Node runs an instance of the Memcached or Redis protocol-compliant service and has its own DNS name and port. Multiple types of Cache Nodes are supported, each with a different amount of associated memory.
Q: Which engines does Amazon ElastiCache support?
Amazon ElastiCache for Memcached currently supports Memcached 1.4.5 and 1.4.14.
Amazon ElastiCache for Redis currently supports Redis 2.6.13, 2.8.6 and 2.8.19.
Q: How do I get started with Amazon ElastiCache?
If you are not already signed up for Amazon ElastiCache, you can click the "Sign Up Now" button on the Amazon ElastiCache detail page and complete the sign-up process. You must have an Amazon Web Services account; if you do not already have one, you will be prompted to create one when you begin the Amazon ElastiCache sign-up process. After you are signed up for ElastiCache, please refer to the Amazon ElastiCache documentation, which includes our Getting Started Guide.
Once you have familiarized yourself with Amazon ElastiCache, you can launch a Cache Cluster within minutes by using the AWS Management Console or Amazon ElastiCache APIs.
Q: How do I create a Cache Cluster?
Cache Clusters are simple to create, using the AWS Management Console, Amazon ElastiCache APIs, or Command Line Tools. To launch a Cache Cluster using the AWS Management Console, click on the "Launch Cache Cluster" button on the "Amazon ElastiCache" tab. From there, all you need to specify is your Cache Cluster Identifier, Node Type, and Number of Nodes to create a Cache Cluster with the amount of memory you require. Alternatively, you can create your Cache Cluster using the CreateCacheCluster API or elasticache-create-cache-cluster command. If you do not specify an Availability Zone when creating a Cache Cluster, AWS will place it automatically based upon your memory requirements and available capacity.
Q: What Cache Node Types can I select?
Amazon ElastiCache supports Cache Nodes of the following types:
Current Generation Cache Nodes:
- cache.m3.medium: 2.78 GB
- cache.m3.large: 6.05 GB
- cache.m3.xlarge: 13.3 GB
- cache.m3.2xlarge: 27.9 GB
- cache.r3.large: 13.5 GB
- cache.r3.xlarge: 28.4 GB
- cache.r3.2xlarge: 58.2 GB
- cache.r3.4xlarge: 118 GB
- cache.r3.8xlarge: 237 GB
- cache.t2.micro: 555 MB
- cache.t2.small: 1.55 GB
- cache.t2.medium: 3.22 GB
Previous Generation Cache Nodes:
- cache.m1.small: 1.3 GB
- cache.m1.medium: 3.35 GB
- cache.m1.large: 7.1 GB
- cache.m1.xlarge: 14.6 GB
- cache.m2.xlarge: 16.7 GB
- cache.m2.2xlarge: 33.8 GB
- cache.m2.4xlarge: 68 GB
- cache.t1.micro: 213 MB
- cache.c1.xlarge: 6.6 GB
Each Node Type above lists the memory available to Memcached or Redis after taking Amazon ElastiCache System Software overhead into account. The total amount of memory in a Cache Cluster is an integer multiple of the memory available for the Cache Node Type selected. For example, a Cache Cluster consisting of ten Cache Nodes of 6 GB each will provide 60 GB of total memory.
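The arithmetic above can be sketched as follows; the per-node figures come from the table, and the helper name is ours:

```python
# Usable memory per node type (GB), after ElastiCache system software
# overhead, copied from the table above (a small subset for illustration).
NODE_MEMORY_GB = {
    "cache.m1.large": 7.1,
    "cache.m1.xlarge": 14.6,
    "cache.r3.large": 13.5,
}

def cluster_memory_gb(node_type, node_count):
    """Total cluster memory is the per-node usable memory times node count."""
    return NODE_MEMORY_GB[node_type] * node_count

# Ten cache.m1.large nodes, for example, yield 71 GB of total cache memory.
total = cluster_memory_gb("cache.m1.large", 10)
```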
Q: How do I access my Cache Nodes?
Once your Cache Cluster is available, you can retrieve your Cache Node endpoints using the following steps on the AWS Management Console:
- Navigate to the "Amazon ElastiCache" tab.
- Click on the "(Number of) Nodes" link and navigate to the "Nodes" tab.
- Click on the "Copy Node Endpoint(s)" button.
Alternatively, you can use the DescribeCacheClusters API to retrieve the Endpoint list.
You can then configure your Memcached or Redis client with this endpoint list and use your favorite programming language to add or delete data from your ElastiCache Nodes. In order to allow network requests to your Cache Nodes, you will need to authorize access. For a detailed explanation to get started, please refer to our Getting Started Guide.
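As a sketch of the configuration step, the copied endpoint list is a set of `hostname:port` strings that you typically split into host/port pairs before handing them to your Memcached or Redis client's constructor. The endpoint names below are hypothetical, not real cluster endpoints:

```python
def parse_endpoints(endpoint_list):
    """Split 'host:port' endpoint strings into (host, port) pairs
    suitable for passing to a Memcached or Redis client."""
    servers = []
    for endpoint in endpoint_list:
        host, _, port = endpoint.rpartition(":")
        servers.append((host, int(port)))
    return servers

# Hypothetical endpoints of a three-node cluster, as copied from the console:
endpoints = [
    "mycache.0001.use1.cache.amazonaws.com:11211",
    "mycache.0002.use1.cache.amazonaws.com:11211",
    "mycache.0003.use1.cache.amazonaws.com:11211",
]
servers = parse_endpoints(endpoints)
```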
Q: What is a maintenance window? Will my Cache Nodes be available during software maintenance?
You can think of the Amazon ElastiCache maintenance window as an opportunity to control when software patching occurs, in the event patching is requested or required. If a "maintenance" event is scheduled for a given week, it will be initiated and completed at some point during the 60-minute maintenance window you identify.
Your Cache Nodes could incur some downtime during your maintenance window if software patching is scheduled. Please refer to Cache Engine Version Management for more details. Patching can be user-requested (for example, a cache software upgrade) or required (if we identify a security vulnerability in the system or caching software). Software patching occurs infrequently (typically once every few months) and should seldom require more than a fraction of your maintenance window. If you do not specify a preferred weekly maintenance window when creating your Cache Cluster, a 60-minute default value is assigned. If you wish to modify when maintenance is performed on your behalf, you can do so by modifying your Cache Cluster in the AWS Management Console or by using the ModifyCacheCluster API. Each of your Cache Clusters can have a different preferred maintenance window, if you so choose.
Q: How will I be charged and billed for my use of Amazon ElastiCache?
You pay only for what you use and there is no minimum fee. Pricing is per Cache Node-hour consumed for each Node Type. Partial Node-hours consumed are billed as full hours. There is no charge for data transfer between Amazon EC2 and Amazon ElastiCache within the same Availability Zone. While standard Amazon EC2 Regional Data Transfer charges apply when transferring data between an Amazon EC2 instance and an Amazon ElastiCache Node in different Availability Zones of the same Region, you are only charged for the Data Transfer in or out of the Amazon EC2 instance. There is no Amazon ElastiCache Data Transfer charge for traffic in or out of the Amazon ElastiCache Node itself. For more information, please visit the pricing page.
Q: When does billing of my Amazon ElastiCache Nodes begin and end?
Billing commences for a Cache Node as soon as the Cache Node is available. Billing continues until the Cache Node is terminated, which would occur upon deletion.
Q: What defines billable ElastiCache Node hours?
Node hours are billed for any time your Cache Nodes are running in an "Available" state. If you no longer wish to be charged for your Cache Node, you must terminate it to avoid being billed for additional Node hours.
Q: Do your prices include taxes?
Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of the Asia Pacific (Tokyo) Region is subject to Japanese Consumption Tax.
Q: What is a Reserved Cache Node?
With Reserved Cache Nodes, you can make a one-time, up-front payment to create a one- or three-year reservation to run your Cache Node in a specific Region, and receive a significant discount off the ongoing hourly usage charge. There are three Reserved Cache Node types (Light, Medium, and Heavy Utilization Reserved Cache Nodes) that enable you to balance the amount you pay upfront with your effective hourly price.
Q: How are Reserved Cache Nodes different from On-Demand Cache Nodes?
Functionally, Reserved Cache Nodes and On-Demand Cache Nodes are exactly the same. The only difference is how your Cache Node(s) are billed; with Reserved Cache Nodes, you make a one-time up-front payment and receive a lower ongoing hourly usage rate (compared with On-Demand Cache Nodes) for the duration of the term.
Q: How do I purchase and create Reserved Cache Nodes?
You can use the "Purchase Reserved Cache Nodes" option in the AWS Management Console. Alternatively, you can use the API tools to list the reservations available for purchase with the DescribeReservedCacheNodesOfferings API method and then purchase a cache node reservation by calling the PurchaseReservedCacheNodesOffering method.
Creating a Reserved Cache Node is no different than launching an On-Demand Cache Node. You simply specify the Cache Node class and Region for which you made the reservation. So long as your reservation purchase was successful, Amazon ElastiCache will apply the reduced hourly rate for which you are eligible to the new Cache Node.
Q: Will there always be reservations available for purchase?
Yes. Reserved Cache Nodes are purchased for the Region rather than for the Availability Zone. This means that even if capacity is limited in one Availability Zone, reservations can still be purchased in that Region and used in a different Availability Zone within that Region.
Q: How many Reserved Cache Nodes can I purchase?
You can purchase up to 20 Reserved Cache Nodes. If you wish to run more than 20 Cache Nodes please complete the Amazon ElastiCache Cache Node request form.
Q: What if I have an existing Cache Node that I’d like to convert to a Reserved Cache Node?
Simply purchase a Cache Node reservation with the same Cache Node class, within the same Region as the Cache Node you are currently running and would like to reserve. If the reservation purchase is successful, Amazon ElastiCache will automatically apply your new hourly usage charge to your existing Cache Node.
Q: If I sign up for a Reserved Cache Node, when does the term begin? What happens to my Cache Node when the term ends?
Pricing changes associated with a Reserved Cache Node are activated once your request is received while the payment authorization is processed. You can follow the status of your reservation on the AWS Account Activity page or by using the DescribeReservedCacheNodes API. If the one-time payment cannot be successfully authorized by the next billing period, the discounted price will not take effect.
When your reservation term expires, your Reserved Cache Node will revert to the appropriate On-Demand hourly usage rate for your Cache Node class and Region.
Q: How do I control which Cache Nodes are billed at the Reserved Cache Node rate?
The Amazon ElastiCache APIs for creating, modifying, and deleting Cache Nodes do not distinguish between On-Demand and Reserved Cache Nodes, so that you can seamlessly use both. When computing your bill, our system will automatically apply your Reservation(s), such that all eligible Cache Nodes are charged at the lower hourly Reserved Cache Node rate.
Q: Can I move a Reserved Cache Node from one Region or Availability Zone to another?
Each Reserved Cache Node is associated with a specific Region, which is fixed for the lifetime of the reservation and cannot be changed. Each reservation can, however, be used in any of the available AZs within the associated Region.
Q: Can I cancel a reservation?
The one-time payment for Reserved Cache Nodes is not refundable. However, you can choose to terminate your Cache Node at any time, at which point you will not incur any hourly usage charges if you are using Light or Medium Utilization Reserved Cache Nodes.
Q: How do I control access to Amazon ElastiCache?
When not using VPC, Amazon ElastiCache allows you to control access to your Cache Clusters through Cache Security Groups. A Cache Security Group acts like a firewall, controlling network access to your Cache Cluster. By default, network access is turned off to your Cache Clusters. If you want your applications to access your Cache Cluster, you must explicitly enable access from hosts in specific EC2 security groups. This process is called authorizing ingress.
To allow network access to your Cache Cluster, create a Cache Security Group and link the desired EC2 security groups (which in turn specify the EC2 instances allowed) to it. The Cache Security Group can be associated with your Cache Cluster at the time of creation, or using the "Modify" option on the AWS Management Console.
Please note that IP-range based access control is currently not enabled for Cache Clusters. All clients to a Cache Cluster must be within the EC2 network, and authorized via security groups as described above.
When using VPC, please see here for more information.
Q: Can programs running on servers in my own data center access Amazon ElastiCache?
No. Currently, all clients to an ElastiCache Cluster must be within the Amazon EC2 network, and authorized via security groups as described here.
Q: Can programs running on EC2 instances in a VPC access Amazon ElastiCache?
Yes, EC2 instances in a VPC can access Amazon ElastiCache if the ElastiCache cluster was created within the VPC. Details on how to create an Amazon ElastiCache cluster within a VPC are given here.
Q: What is Amazon Virtual Private Cloud (VPC), and why would I use it with Amazon ElastiCache?
Amazon VPC lets you create a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud, where you can exercise complete control over aspects such as private IP address ranges, subnets, routing tables and network gateways. With Amazon VPC, you can define a virtual network topology and customize the network configuration to closely resemble a traditional IP network that you might operate in your own datacenter.
One of the scenarios where you may want to use Amazon ElastiCache in a VPC is if you want to run a public-facing web application, while still maintaining non-publicly accessible backend servers in a private subnet. You can create a public-facing subnet for your webservers that has access to the Internet, and place your backend infrastructure in a private-facing subnet with no Internet access. Your backend infrastructure could include RDS DB Instances and an Amazon ElastiCache Cluster providing the caching layer. For more information about Amazon VPC, refer to the Amazon Virtual Private Cloud User Guide.
Q: How do I create an Amazon ElastiCache Cluster in VPC?
For a walk through example of creating an Amazon ElastiCache Cluster in VPC, refer to the Amazon ElastiCache User Guide.
Following are the pre-requisites necessary to create a Cache Cluster within a VPC:
- You need to have a VPC set up with at least one subnet. For information on creating Amazon VPC and subnets refer to the Getting Started Guide for Amazon VPC.
- You need to have a Cache Subnet Group defined for your VPC.
- You need to have a VPC Security Group defined for your VPC (or you can use the default provided).
- In addition, you should allocate adequately large CIDR blocks to each of your subnets so that there are enough spare IP addresses for Amazon ElastiCache to use during maintenance activities such as cache node replacement.
Q: How do I create an Amazon ElastiCache Cluster in an existing VPC?
Creating an Amazon ElastiCache Cluster in an existing VPC follows the same steps as for a newly created VPC. Please see the preceding question for more details.
Q: How do I connect to an ElastiCache Node in VPC?
Amazon ElastiCache Nodes, deployed within a VPC, can be accessed by EC2 Instances deployed in the same VPC. If these EC2 Instances are deployed in a public subnet with associated Elastic IPs, you can access the EC2 Instances via the internet.
Amazon ElastiCache Nodes, deployed within a VPC, can never be accessed from the Internet or from EC2 Instances outside the VPC.
We strongly recommend you use the DNS Name to connect to your ElastiCache Node as the underlying IP address can change (e.g., after a cache node replacement).
Q: What is a Cache Subnet Group and why do I need one?
A Cache Subnet Group is a collection of subnets that you must designate for your Amazon ElastiCache Cluster in a VPC. A Cache Subnet Group is created using the Amazon ElastiCache Console. Each Cache Subnet Group should have at least one subnet. Amazon ElastiCache uses the Cache Subnet Group to select a subnet, and IP addresses from that subnet are associated with the Cache Node Endpoints. Amazon ElastiCache then creates Elastic Network Interfaces with those IP addresses and associates them with the cache nodes.
Please note that we strongly recommend you use the DNS Names to connect to your cache nodes, as the underlying IP addresses can change (e.g., after cache node replacement).
Q: Can I change the Cache Subnet Group of my ElastiCache Cluster?
An existing Cache Subnet Group can be updated to add more subnets, either for existing Availability Zones or for new Availability Zones added since the creation of the ElastiCache Cluster. However, changing the Cache Subnet Group of a deployed Cache Cluster is not currently allowed.
Q: How is using Amazon ElastiCache inside a VPC different from using it outside?
The basic functionality of Amazon ElastiCache remains the same whether VPC is used or not. Amazon ElastiCache manages automatic failure detection, recovery, scaling, auto discovery, and software patching whether your ElastiCache Cluster is inside or outside a VPC.
Similarly, an Amazon ElastiCache Cluster, inside or outside a VPC, is never allowed to be accessed from the Internet. Within a VPC, nodes of an ElastiCache cluster only have a private IP address (within a subnet that you define). Outside of a VPC, the access to the ElastiCache cluster can be controlled using Cache Security Groups as described here.
Q: Can I move my existing ElastiCache Cluster from outside VPC into my VPC?
No, you cannot move an existing Amazon ElastiCache Cluster from outside VPC into a VPC. You will need to create a new Amazon ElastiCache Cluster inside the VPC.
Q: Can I move my existing ElastiCache Cluster from inside VPC to outside VPC?
Currently, direct migration of ElastiCache Cluster from inside to outside VPC is not supported. You will need to create a new Amazon ElastiCache Cluster outside VPC.
Q: How do I control network access to my Cache Cluster?
Amazon ElastiCache allows you to control access to your Cache Cluster and therefore the Cache Nodes using Cache Security Groups in non-VPC deployments. A Cache Security Group acts like a firewall controlling network access to your Cache Node. By default, network access is turned off to your Cache Nodes. If you want your applications to access your Cache Node, you can set your Cache Security Group to allow access from EC2 Instances with specific EC2 Security Group membership or IP ranges. This process is called ingress. Once ingress is configured for a Cache Security Group, the same rules apply to all Cache Nodes associated with that Cache Security Group. Cache Security Groups can be configured with the “Cache Security Groups” section of the Amazon ElastiCache Console or using the Amazon ElastiCache APIs.
In VPC deployments, access to your cache nodes is controlled using the VPC Security Group and the Cache Subnet Group. The VPC Security Group is the VPC equivalent of the Cache Security Group.
Q: What precautions should I take to ensure that my ElastiCache Nodes in VPC are accessible by my application?
You are responsible for modifying routing tables and networking ACLs in your VPC to ensure that your ElastiCache Nodes are reachable from your client instances in the VPC. To learn more see the Amazon ElastiCache Documentation.
Q: Can I use Cache Security Groups to configure the cache clusters that are part of my VPC?
No, Cache Security Groups are not used when operating in a VPC; they apply only to non-VPC deployments. When creating a cache cluster in a VPC, you will need to use VPC Security Groups.
Q: Can I associate a regular EC2 security group with a cache cluster that is launched within a VPC?
No, you can only associate VPC security groups that are part of the same VPC as your Cache Cluster.
Q: Can Cache Nodes of an ElastiCache cluster span multiple subnets?
Yes, Cache Nodes of an Amazon ElastiCache cluster can span multiple subnets as long as the subnets are part of the same Cache Subnet Group that was associated with the ElastiCache Cluster at creation time.
Q: What is a Cache Parameter Group?
A Cache Parameter Group acts as a "container" for engine configuration values that can be applied to one or more Cache Clusters. If you create a Cache Cluster without specifying a Cache Parameter Group, a default Cache Parameter Group is used. This default group contains engine defaults and Amazon ElastiCache system defaults optimized for the Cache Cluster you are running. However, if you want your Cache Cluster to run with your custom-specified engine configuration values, you can simply create a new Cache Parameter Group, modify the desired parameters, and modify the Cache Cluster to use the new Cache Parameter Group. Once associated, all Cache Clusters that use a particular Cache Parameter Group get all the parameter updates to that Cache Parameter Group. For more information on configuring Cache Parameter Groups, please refer to the Amazon ElastiCache User Guide.
Q: How do I choose the right configuration parameters for my Cache Cluster(s)?
Amazon ElastiCache by default chooses the optimal configuration parameters for your Cache Cluster taking into account the Node Type's memory/compute resource capacity. However, if you want to change them, you can do so using our configuration management APIs. Please note that changing configuration parameters from recommended values can have unintended effects, ranging from degraded performance to system crashes, and should only be attempted by advanced users who wish to assume these risks. For more information on changing parameters, please refer to the Amazon ElastiCache User Guide.
Q: How do I see the current setting for my parameters for a given Cache Parameter Group?
You can use the AWS Management Console, Amazon ElastiCache APIs, or Command Line Tools to see information about your Cache Parameter Groups and their corresponding parameter settings.
Q: What kinds of data can I cache using Amazon ElastiCache for Memcached?
You can cache a variety of objects using the service, from the content in persistent data stores (such as Amazon RDS, SimpleDB, or self-managed databases hosted on EC2) to dynamically generated web pages (with Nginx, for example), or transient session data that may not require a persistent backing store. You can also use it to implement high-frequency counters to deploy admission control in high-volume web applications.
Q: Can I use Amazon ElastiCache for Memcached with an AWS persistent data store such as Amazon SimpleDB or Amazon RDS?
Yes, Amazon ElastiCache is an ideal front-end for data stores like Amazon SimpleDB and Amazon RDS, providing a high-performance middle tier for applications with extremely high request rates and/or low latency requirements.
Q: I use Memcached today. How do I migrate to Amazon ElastiCache?
Amazon ElastiCache is protocol-compliant with Memcached. Therefore, you can use standard Memcached operations like get, set, incr and decr in exactly the same way as you would in your existing Memcached deployments. Amazon ElastiCache supports both the text and binary protocols. It also supports most of the standard stats results, which can also be viewed as graphs via CloudWatch. As a result, you can switch to using Amazon ElastiCache without recompiling or re-linking your applications - the libraries you use will continue to work. To configure the cache servers your application accesses, all you will need to do is to update your application's Memcached config file to include the endpoints of the servers (Cache Nodes) we provision for you. You can simply use the "Copy Node Endpoints" option on the AWS Management Console or the "DescribeCacheClusters" API to get a list of the endpoints. As with any migration process, we recommend thorough testing of your new Amazon ElastiCache deployment before completing the cut over from your current solution.
Please note that Amazon ElastiCache currently allows access only from the Amazon EC2 network, so in order to use the service, you should have your application servers in Amazon EC2.
Q: How do my applications locate the Cache Nodes?
Amazon ElastiCache uses DNS entries to allow client applications to locate cache servers (Cache Nodes). The DNS name for a Cache Node remains constant, but the IP address of a Cache Node can change over time, for example, when Cache Nodes are auto replaced after a failure. See this FAQ for recommendations to deal with Cache Node failures.
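Because a node's IP address can change after replacement, clients should connect by DNS name and re-resolve on connection failure rather than caching the resolved IP. A minimal sketch of that retry logic follows; the resolver and connector are injected (and simulated) here so the example is self-contained, whereas a real client would use something like `socket.gethostbyname` and an actual socket connection:

```python
def connect_by_dns(hostname, resolve, connect):
    """Resolve the node's DNS name at connection time and retry once on
    failure, picking up the new IP if the node was replaced."""
    for _ in range(2):
        ip = resolve(hostname)
        try:
            return connect(ip)
        except ConnectionError:
            continue  # re-resolve: the node may have been replaced
    raise ConnectionError(f"could not connect to {hostname}")

# Simulated resolver: the node's IP changes between lookups, as happens
# when ElastiCache replaces a failed node behind the constant DNS name.
ips = iter(["10.0.0.1", "10.0.0.2"])

def resolve(hostname):
    return next(ips)

def connect(ip):
    if ip != "10.0.0.2":
        raise ConnectionError("stale IP no longer reachable")
    return ip

result = connect_by_dns("mynode.cache.amazonaws.com", resolve, connect)
```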
Q: How do I choose an appropriate number of Cache Nodes for my application?
Though there is no precise answer to this question, with Amazon ElastiCache you don't need to worry about getting the number of Cache Nodes exactly right, as you can easily add or remove Nodes later. Consider the following two inter-related aspects when choosing your initial configuration:
- The total memory required for your cache to achieve your target cache-hit rate, and
- The number of Cache Nodes required to maintain acceptable application performance without overloading the database backend in the event of Cache Node failure(s).
The amount of memory required depends on the size of your data set and the access patterns of your application. To improve fault tolerance, once you have a rough idea of the total memory required, divide that memory across enough Cache Nodes that your application can survive the loss of one or two of them. For example, if your memory requirement is 14 GB, you may want to use two cache.m1.large nodes instead of one cache.m1.xlarge node. It is important to ensure that other systems, such as databases, are not overloaded if the cache-hit rate is temporarily reduced during failure recovery of one or more Cache Nodes. Please refer to the Amazon ElastiCache User Guide for more details.
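The sizing guidance above reduces to simple arithmetic, sketched below with the document's 14 GB example (the helper names are ours):

```python
import math

def nodes_needed(total_memory_gb, node_memory_gb):
    """Number of nodes of a given size required to hold the working set."""
    return math.ceil(total_memory_gb / node_memory_gb)

def memory_lost_fraction(node_count, failed_nodes=1):
    """Fraction of cache memory (and roughly of cache hits) lost when
    `failed_nodes` out of `node_count` equally sized nodes fail."""
    return failed_nodes / node_count

# 14 GB on two cache.m1.large nodes (7.1 GB each): losing one node drops
# half the cache, versus losing the entire cache with one larger node.
two_node_plan = nodes_needed(14, 7.1)
```

The second helper is the fault-tolerance argument in miniature: more, smaller nodes bound the cache-hit drop (and hence the database load spike) from a single node failure.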
Q: Can a Cache Cluster span multiple Availability Zones?
Yes. When creating a cache cluster or adding nodes to an existing cluster, you can choose the Availability Zones for the new nodes. You can either specify the number of nodes in each Availability Zone or select "Spread Nodes Across Zones". If the cluster is in a VPC, nodes can only be placed in Availability Zones that are part of the selected Cache Subnet Group. For additional details, please see the ElastiCache VPC documentation.
Q: How many Cache Nodes can I run in Amazon ElastiCache?
You can run a maximum of 20 Cache Nodes per region. If you need more Cache Nodes, please fill in the ElastiCache Limit Increase Request form.
Q: How does Amazon ElastiCache respond to Cache Node failure?
The service will detect the Cache Node failure and react with the following automatic steps:
- Amazon ElastiCache will repair the Cache Node by acquiring new service resources, and will then redirect the Cache Node's existing DNS name to point to the new service resources. Thus, the DNS name for a Cache Node remains constant, but the IP address of a Cache Node can change over time.
- If you associated an SNS topic with your Cache Cluster, when the new Cache Node is configured and ready to be used, Amazon ElastiCache will send an SNS notification to let you know that Cache Node recovery occurred. This allows you to optionally arrange for your applications to force the Memcached client library to attempt to reconnect to the repaired Cache Nodes. This may be important, as some Memcached libraries will stop using a server (Cache Node) indefinitely if they encounter communication errors or timeouts with that server.
Q: If I determine that I need a larger cache to support my application, how do I increase the total memory with Amazon ElastiCache?
You can add more Cache Nodes to your existing Cache Cluster by using the "Add Node" option on the "Nodes" tab for your Cache Cluster in the AWS Management Console, or by calling the ModifyCacheCluster API.
Q: With which other AWS services is Amazon ElastiCache best used?
Amazon ElastiCache is ideally suited as a front-end for Amazon Web Services like Amazon RDS and Amazon SimpleDB, providing extremely low latency for high performance applications and offloading some of the request volume while these services provide long lasting data durability. The service can also be used to improve application performance in conjunction with Amazon EC2 and EMR.
Q: Is Amazon ElastiCache better suited to any specific programming language?
Memcached client libraries are available for many, if not all, of the popular programming languages. See the list of available clients at the Memcached project page here. If you encounter any issues with specific Memcached clients when using Amazon ElastiCache, please engage us via the Amazon ElastiCache community forum.
Q: What popular Memcached libraries are compatible with Amazon ElastiCache?
Amazon ElastiCache does not require specific client libraries and works with existing Memcached client libraries without recompilation or application re-linking (Memcached 1.4.5 and later); examples include libMemcached (C) and libraries based on it (e.g. PHP, Perl, Python), spyMemcached (Java) and fauna (Ruby).
Q: What is Auto Discovery?
Auto Discovery is a feature that saves developers time and effort while reducing the complexity of their applications. Auto Discovery enables automatic discovery of cache nodes by clients when they are added to or removed from an Amazon ElastiCache cluster. Previously, to handle cluster membership changes, developers had to update the list of cache node endpoints manually. Depending on how the client application is architected, a client re-initialization, typically by shutting down the application and restarting it, was needed, resulting in downtime. Auto Discovery eliminates this complexity. With Auto Discovery, in addition to being backwards-compatible with the Memcached protocol, Amazon ElastiCache provides clients with information on cache cluster membership. A client capable of processing this additional information reconfigures itself, without re-initialization, to use the current nodes of an Amazon ElastiCache cluster.
Q: How does Auto Discovery work?
An Amazon ElastiCache cluster can be created with nodes that are addressable via named endpoints. With Auto Discovery, the Amazon ElastiCache cluster is also given a unique Configuration Endpoint, a DNS record that is valid for the lifetime of the cluster. This DNS record contains the DNS names of the nodes that belong to the cluster. Amazon ElastiCache will ensure that the Configuration Endpoint always points to at least one such “target” node. A query to the target node then returns the endpoints for all the nodes of the cluster. After this, you can connect to the cluster nodes just as before and use Memcached protocol commands such as get, set, incr and decr. For more details, see here. To use Auto Discovery, you will need an Auto Discovery capable client; clients for Java and PHP are available for download from the Amazon ElastiCache console. Upon initialization, the client will automatically determine the current members of the Amazon ElastiCache cluster using the Configuration Endpoint. When you make changes to your cache cluster by adding or removing nodes, or when a node is replaced upon failure, the Auto Discovery client detects the changes automatically and you do not need to re-initialize your clients manually.
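Concretely, a query to the target node returns a payload containing a configuration version number followed by a space-separated node list, where each entry has the form host|ip|port. A minimal sketch of parsing such a payload (the hostnames, IPs, and version number below are illustrative placeholders, not from a real cluster):

```python
def parse_cluster_config(payload: str):
    """Parse the node-list payload returned by the Configuration Endpoint.

    The payload is two lines: a configuration version number, then a
    space-separated node list where each node is host|ip|port.
    """
    version_line, nodes_line = payload.strip().split("\n")[:2]
    version = int(version_line)
    nodes = []
    for entry in nodes_line.split(" "):
        host, ip, port = entry.split("|")
        nodes.append((host, ip, int(port)))
    return version, nodes

# Illustrative payload; real hostnames come from your own cluster.
sample = ("12\n"
          "node1.example.cache.amazonaws.com|10.0.0.1|11211 "
          "node2.example.cache.amazonaws.com|10.0.0.2|11211")
version, nodes = parse_cluster_config(sample)
```

A client would re-run this parse whenever it refreshes cluster membership, comparing the version number to decide whether the node list changed.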
Q: How can I get started using Auto Discovery?
To get started, download the Amazon ElastiCache Cluster Client by clicking the “Download ElastiCache Cluster Client” link on the Amazon ElastiCache console. Before you can download, you must have an Amazon ElastiCache account; if you do not already have one, you can sign up from the Amazon ElastiCache detail page. After you download the client, you can begin setting up and activating your Amazon ElastiCache cluster by visiting the Amazon ElastiCache console. More details can be found here.
Q: If I continue to use my own Memcached clients with my ElastiCache cluster – will I be able to get this feature?
No, you will not get the Auto Discovery feature with the existing Memcached clients. To use the Auto Discovery feature a client must be able to use a Configuration Endpoint and determine the cluster node endpoints. You may either use the Amazon ElastiCache Cluster Client or extend your existing Memcached client to include the Auto Discovery command set.
Q: What are the minimum hardware / software requirements for Auto Discovery?
To take advantage of Auto Discovery, an Auto Discovery capable client must be used to connect to an Amazon ElastiCache Cluster. Amazon ElastiCache currently supports Auto Discovery capable clients for both Java and PHP. These can be downloaded from the Amazon ElastiCache console. Our customers can create clients for any other language by building upon the popular Memcached clients available.
Q: How do I modify or write my own Memcached client to support auto-discovery?
You can take any Memcached Client Library and add support for Auto Discovery. If you would like to add or modify your own client to enable Auto Discovery, please refer to the Auto Discovery command set documentation.
Q: Are you planning to add support for more languages?
Yes, we are looking at Ruby next and may add more languages after that.
Q: Can I continue to work with my existing Memcached client if I don’t need Auto-discovery?
Yes, Amazon ElastiCache is still Memcached protocol-compliant and does not require you to change your clients. However, to take advantage of the Auto Discovery feature, the Memcached client capabilities had to be enhanced. If you choose not to use the Amazon ElastiCache Cluster Client, you can continue to use your own clients or modify your own client library to understand the Auto Discovery command set.
Q: Can I have heterogeneous clients when using Auto Discovery?
Yes, the same Amazon ElastiCache cluster can be connected to through an Auto Discovery capable client and a traditional Memcached client at the same time. Amazon ElastiCache remains 100% Memcached compliant.
Q: Can I stop using Auto Discovery?
Yes, you can stop using Auto Discovery at any time. You can disable Auto Discovery by specifying the mode of operation during Amazon ElastiCache Cluster Client initialization. Also, since Amazon ElastiCache remains 100% Memcached protocol-compliant, you may use any Memcached client as before.
Amazon ElastiCache allows you to control if and when the Memcached protocol-compliant software powering your Cache Cluster is upgraded to new versions supported by Amazon ElastiCache. This provides you with the flexibility to maintain compatibility with specific Memcached versions, test new versions with your application before deploying in production, and perform version upgrades on your own terms and timelines. Version upgrades involve some compatibility risk, thus they will not occur automatically and must be initiated by you. This approach to cache software patching puts you in the driver's seat of version upgrades, but still offloads the work of patch application to Amazon ElastiCache. You can learn more about version management by reading the FAQs that follow. Alternatively, you can refer to the Amazon ElastiCache User Guide. While Cache Engine Version Management functionality is intended to give you as much control as possible over how patching occurs, we may patch your Cache Cluster on your behalf if we determine there is any security vulnerability in the system or cache software.
Q: How do I specify which supported Memcached Version my Cache Cluster should run?
You can specify any currently supported version (minor and/or major) when creating a new Cache Cluster. If you wish to initiate an upgrade to a supported engine version release, you can do so using the "Modify" option for your Cache Cluster. Simply specify the version you wish to upgrade to via the "Cache Engine Version" field. The upgrade will then be applied on your behalf either immediately (if the "Apply Immediately" option is checked) or during the next scheduled maintenance window for your Cache Cluster.
Q: Can I test my Cache Cluster against a new version before upgrading?
Yes. You can do so by creating a new Cache Cluster with the new Cache Engine Version. You can point your development/staging application to this Cache Cluster, test it and decide whether or not to upgrade your original Cache Cluster.
Q: Does Amazon ElastiCache provide guidelines for supporting new Memcached version releases and/or deprecating versions that are currently supported?
Over time, we plan to support additional Memcached versions for Amazon ElastiCache, both major and minor. The number of new version releases supported in a given year will vary based on the frequency and content of the Memcached version releases and the outcome of a thorough vetting of the release by our engineering team. However, as a general guidance, we aim to support new Memcached versions within 3-5 months of their General Availability release.
In terms of deprecation policy:
We intend to support major Memcached version releases, including 1.4, for 2 years after they are initially supported by Amazon ElastiCache.
We intend to support minor Memcached version releases (e.g., 1.4.5) for at least 1 year after they are initially supported by Amazon ElastiCache.
After a Memcached major or minor version has been "deprecated", we expect to provide a three month grace period for you to initiate an upgrade to a supported version prior to an automatic upgrade being applied during your scheduled maintenance window.
Q: Which version of the Memcached wire protocol does Amazon ElastiCache support?
Amazon ElastiCache supports the Memcached text and binary protocol as of version 1.4.5 of Memcached.
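As a concrete illustration of the text protocol, a `set` command consists of a header line (`set <key> <flags> <exptime> <bytes>`) followed by the data block, each terminated by CRLF. A minimal sketch of building such a command (not an official client; any real client library handles this for you):

```python
def build_set_command(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    """Build a Memcached text-protocol 'set' command.

    Header line: set <key> <flags> <exptime> <byte-count>, then the data
    block, each terminated by CRLF.
    """
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

cmd = build_set_command("greeting", b"hello")
# cmd == b"set greeting 0 0 5\r\nhello\r\n"
```

The binary protocol encodes the same operations in a fixed 24-byte header instead of text lines; protocol-compliant clients can use either against ElastiCache.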
Amazon ElastiCache for Redis is a web service that makes it easy to deploy and run Redis protocol-compliant server nodes in the cloud. The service enables the management, monitoring and operation of a Redis node; creation, deletion and modification of the node can be carried out through the ElastiCache console, the command line interface or the web service APIs. Amazon ElastiCache for Redis supports Redis Master / Slave replication.
Q: Is Amazon ElastiCache for Redis protocol-compliant with open source Redis?
Yes, Amazon ElastiCache for Redis is protocol-compliant with open source Redis. Code, applications, drivers and tools a customer uses today with their existing standalone Redis data store will continue to work with ElastiCache for Redis and no code changes will be required for existing Redis deployments migrating to ElastiCache for Redis unless noted. We currently support Redis 2.6.13, 2.8.6 and 2.8.19.
Q: What are Amazon ElastiCache for Redis nodes, clusters, and replications groups?
An ElastiCache for Redis node is the smallest building block of an Amazon ElastiCache for Redis deployment. Each node supports the Redis protocol and has its own DNS name and port. Multiple node types are supported, each with varying amounts of CPU capacity and associated memory. A node may take on a primary or a read replica role. A primary node can be replicated to multiple read replica nodes. An ElastiCache for Redis cluster is a collection of one or more nodes of the same role; the primary node belongs to the primary cluster, and each read replica node belongs to a read replica cluster. At this time a cluster can only have one node; in the future, we will increase this limit. A cluster manages a logical key space, where each node is responsible for a part of the key space. Most of your management operations are performed at the cluster level. An ElastiCache for Redis replication group encapsulates the primary and read replica clusters for a Redis installation. A replication group has exactly one primary cluster and zero or more read replica clusters. All nodes within a replication group (and consequently within each cluster) are of the same node type and have the same parameter and security group settings.
Q: Does Amazon ElastiCache for Redis support Redis persistence?
Yes, you can achieve persistence in multiple ways:
- Snapshotting your Redis data using the Backup and Restore feature. Please see here for details.
- You can attach a Redis slave running on an EC2 instance to an ElastiCache for Redis primary node. The Redis slave can be used to generate RDB snapshots and / or AOF append logs as needed, and you may transfer these files to S3 for durability. See here for how to set this up.
- Your application can call the Redis SYNC command on an ElastiCache for Redis node to retrieve the node’s contents.
Q: How can I migrate from Amazon ElastiCache for Memcached to Amazon ElastiCache for Redis and vice versa?
We currently do not support automatically migrating from Memcached to Redis or vice versa. You may, however, use a Memcached client to read from a Memcached cluster and use a Redis client to write to a Redis cluster. Similarly, you may read from a Redis cluster using a Redis client and use a Memcached client to write to a Memcached cluster. Make sure to consider the differences in data format, and cluster configuration between the two engines.
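The dual-client pattern above can be sketched engine-agnostically: read each key from the source cache and write it to the destination. Here the two clusters are abstracted as a get callable and a set callable; in practice these would be methods on real Memcached and Redis client objects (assumptions here), and data-format differences would need handling per key type.

```python
def copy_keys(keys, source_get, dest_set):
    """Copy each key's value from the source cache to the destination cache.

    source_get / dest_set stand in for calls on real Memcached and Redis
    client objects; keys missing from the source are skipped.
    """
    copied = 0
    for key in keys:
        value = source_get(key)
        if value is not None:
            dest_set(key, value)
            copied += 1
    return copied

# Demonstrated with plain dicts standing in for the two clusters.
memcached_data = {"a": "1", "b": "2"}
redis_data = {}
copied = copy_keys(["a", "b", "missing"], memcached_data.get,
                   lambda k, v: redis_data.__setitem__(k, v))
```

Note that because a cache may evict entries at any time, a copy like this captures a point-in-time subset, not a guaranteed-complete migration.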
Q: Does Amazon ElastiCache for Redis support Multi-AZ operation?
Yes, with Amazon ElastiCache for Redis you can create a read replica in another AWS Availability Zone. Upon a failure of the primary node, we will provision a new primary node. In scenarios where the primary node cannot be provisioned, you can decide which read replica to promote to be the new primary. For more details on how to handle node failures see here.
Q: What options does Amazon ElastiCache for Redis provide for node failures?
Amazon ElastiCache for Redis will repair the node by acquiring new service resources, and will then redirect the node's existing DNS name to point to the new service resources. Thus, the DNS name for a Redis node remains constant, but the IP address of a Redis node can change over time. If you have a replication group with one or more read replicas and Multi-AZ is enabled, then in case of primary node failure ElastiCache will automatically detect the failure, select a replica, and promote it to become the new primary. It will also propagate the DNS change so that you can continue to use the primary endpoint; after the promotion, it will point to the newly promoted primary. For more details see the Multi-AZ section of this FAQ. When the Redis replication option is selected with Multi-AZ disabled, in case of primary node failure you will be given the option to initiate a failover to a read replica node. The failover target can be in the same zone or another zone. To fail back to the original zone, promote the read replica in the original zone to be the primary. You may choose to architect your application to force the Redis client library to reconnect to the repaired Redis server node. This can help, as some Redis libraries will stop using a server indefinitely when they encounter communication errors or timeouts.
Q: How does failover work?
For Multi-AZ enabled replication groups, the failover behavior is described at the Multi-AZ section of this FAQ.
If you choose not to enable Multi-AZ, Amazon ElastiCache monitors the primary node and, in case the node becomes unavailable or unresponsive, Amazon ElastiCache for Redis will repair it by acquiring new service resources, and will then redirect the node's existing DNS name to point to the new service resources. Thus, the DNS name for a Redis node remains constant, but the IP address of a Redis node can change over time. However, if the primary node cannot be healed (and Multi-AZ is disabled), you will have the choice to promote one of the read replicas to be the new primary. See here for how to select a new primary. The DNS record of the primary’s endpoint will be updated to point to the promoted read replica node. A new node in the original primary’s AZ will then be created as a read replica in the replication group and will follow the new primary.
Q: Are my read replicas available during a primary node failure?
Yes, during a primary node failure, the read replicas continue to service requests. After the primary node is restored, either as a healed node or as a promoted read replica, there is a brief period during which the read replicas will not serve any requests as they sync the cache information from the primary.
Q: How do I configure parameters of my Amazon ElastiCache for Redis nodes?
You can configure your Redis installation using a cache parameter group, which must be specified for a Redis cluster. All read replica clusters use the parameter group of their primary cluster. A Redis parameter group acts as a “container” for Redis configuration values that can be applied to one or more Redis primary clusters. If you create a Redis primary cluster without specifying a cache parameter group, a default parameter group is used. This default group contains defaults for the node type you plan to run. However, if you want your Redis primary cluster to run with specified configuration values, you can simply create a new cache parameter group, modify the desired parameters, and modify the primary Redis cluster to use the new parameter group.
Q: Can I access Redis through the Amazon ElastiCache console?
Yes, Redis appears as an Engine option in the ElastiCache console. You can create a new Redis cache cluster with the Launch Wizard by choosing the Redis engine. You can also modify or delete an existing Redis cluster using the ElastiCache console.
Q: Can Amazon ElastiCache for Redis clusters be created in an Amazon VPC?
Yes, just as you can create Memcached clusters within a VPC, you can create Redis clusters within a VPC as well. If your account has a default VPC, your Redis clusters will be created within it. Using the ElastiCache console, you can specify a different VPC when you create your cluster.
Q: Is Redis password functionality supported in Amazon ElastiCache for Redis?
No, Amazon ElastiCache for Redis does not support Redis passwords. This is because of the inherent limitations of passwords stored in a configuration file. Instead of relying on Redis passwords, ElastiCache for Redis clusters are associated with an EC2 security group, and only clients within this security group have access to the Redis server.
Read Replicas serve two purposes in Redis:
- Failure Handling
- Read Scaling
When you run a Cache Node with a Read Replica, the “primary” serves both writes and reads. The Read Replica acts as a “standby” which is “promoted” in failover scenarios. After failover, the standby becomes the primary and accepts your cache operations. Read Replicas also make it easy to elastically scale out beyond the capacity constraints of a single Cache Node for read-heavy cache workloads.
Q: When would I want to consider using a Redis read replica?
There are a variety of scenarios where deploying one or more read replicas for a given primary node may make sense. Common reasons for deploying a read replica include:
- Scaling beyond the compute or I/O capacity of a single primary node for read-heavy workloads. This excess read traffic can be directed to one or more read replicas.
- Serving read traffic while the primary is unavailable. If your primary node cannot take I/O requests (e.g. due to I/O suspension for backups or scheduled maintenance), you can direct read traffic to your read replicas. For this use case, keep in mind that the data on the read replica may be “stale” since the primary instance is unavailable. A read replica can also be used to warm-start a replacement for a failed primary.
- Data protection scenarios; in the unlikely event of primary node failure, or if the Availability Zone in which your primary node resides becomes unavailable, you can promote a read replica in a different Availability Zone to become the new primary.
Q: How do I deploy a read replica node for a given primary cache node?
You can create a read replica in minutes using the CreateReplicationGroup API or a few clicks of the Amazon ElastiCache Management Console. When creating a replication group, you specify the MasterCacheClusterIdentifier. The MasterCacheClusterIdentifier is the cache cluster identifier of the “primary” cache cluster from which you wish to replicate. You then create the read replica cluster within the replication group by calling the CreateCacheCluster API, specifying the ReplicationGroupIdentifier and the CacheClusterIdentifier of the master cluster. As with a standard cache cluster, you can also specify the Availability Zone. When you initiate the creation of a read replica, Amazon ElastiCache takes a snapshot of your primary cache cluster and begins replication. As a result, you will experience a brief I/O suspension on your primary cache cluster as the snapshot occurs. The I/O suspension typically lasts on the order of one minute.
The read replicas are as easy to delete as they are to create; simply use the Amazon ElastiCache Management Console or call the DeleteCacheCluster API (specifying the CacheClusterIdentifier for the read replica you wish to delete).
Q: Can I create both a primary and read replicas at the same time?
Yes. You can create a new cache cluster along with read replicas in minutes using the CreateReplicationGroup API or using the “Launch Cache Cluster” wizard at the Amazon ElastiCache Management Console and selecting “Multi-AZ Replication”. When creating the replication group, specify an identifier for the replication group, the total number of desired clusters in the replication group, along with cache creation parameters such as cache node type, cache engine version, etc. You can also specify the Availability Zone for each cluster in the replication group.
Q: How do I connect to my read replica(s)?
You can connect to a read replica just as you would connect to a primary cache node, using the DescribeCacheClusters API or AWS Management Console to retrieve the endpoint(s) for your read replica(s). If you have multiple read replicas, it is up to your application to determine how read traffic will be distributed amongst them.
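One common approach is a small router in your application that directs writes to the primary endpoint and round-robins reads across the replica endpoints. A minimal sketch (the endpoint strings are placeholders; real endpoints come from DescribeCacheClusters):

```python
import itertools

class ReadWriteRouter:
    """Send writes to the primary endpoint; rotate reads across replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        # Fall back to the primary for reads if no replicas exist.
        self._reads = itertools.cycle(replicas or [primary])

    def endpoint_for_write(self):
        return self.primary

    def endpoint_for_read(self):
        return next(self._reads)

router = ReadWriteRouter("primary.example:6379",
                         ["replica1.example:6379", "replica2.example:6379"])
```

Your application would open a connection to whichever endpoint the router returns for each operation; remember that reads from replicas may lag the primary due to asynchronous replication.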
Q: How many read replicas can I create for a given primary cache node?
At this time, Amazon ElastiCache allows you to create up to five (5) read replicas for a given primary cache node.
Q: What happens to read replicas if failover occurs?
In the event of a failover, any associated and available read replicas should automatically resume replication once failover has completed, acquiring updates from the newly promoted primary.
Q: Can I create a read replica of another read replica?
Creating a read replica of another read replica is not supported.
Q: Can I promote my read replica into a “standalone” primary cache node?
No, this is not supported. Instead, you may snapshot your ElastiCache for Redis node (you may select the primary or any of the read-replicas). You can then use the snapshot to seed a new ElastiCache for Redis primary. Additionally, you may attach a Redis slave, running in your EC2 instance, to your ElastiCache for Redis primary node and take an RDB snapshot. You may then create an ElastiCache for Redis primary from that snapshot. Refer to the Amazon ElastiCache User Guide for more details.
Q: Will my read replica be kept up-to-date with its primary cache node?
Updates to a primary cache node will automatically be replicated to any associated read replicas. However, with Redis’s asynchronous replication technology, a read replica can fall behind its primary cache node for a variety of reasons. Typical reasons include:
- Write I/O volume to the primary cache node exceeds the rate at which changes can be applied to the read replica
- Network partitions or latency between the primary cache node and a read replica
Read replicas are subject to the strengths and weaknesses of Redis replication. If you are using read replicas, you should be aware of the potential for lag between a read replica and its primary cache node, or “inconsistency”. Click here for guidance on how to find out the “inconsistency” of your read replica.
Q: How do I gain visibility into active read replica(s)?
You can use the standard DescribeCacheClusters API to return a list of all the cache clusters you have deployed (including read replicas), or simply click on the "Cache Clusters" tab of the Amazon ElastiCache Management Console.
Amazon ElastiCache monitors the replication status of your read replicas and updates the Replication State field to Error if replication stops for any reason. You can review the details of the associated error thrown by the Redis engine by viewing the Replication Error field and take an appropriate action to recover from it. You can learn more about troubleshooting replication issues in the Troubleshooting a Read Replica problem section of the Amazon ElastiCache User Guide. If a replication error is fixed, the Replication State changes to Replicating.
Amazon ElastiCache allows you to gain visibility into how far a read replica has fallen behind its primary through the Amazon CloudWatch metric ("Replica Lag") available via the AWS Management Console or Amazon CloudWatch APIs.
Q: My read replica has fallen significantly behind its primary cache node. What should I do?
As discussed in the previous questions, “inconsistency” or lag between a read replica and its primary cache node is common with Redis asynchronous replication. If an existing read replica has fallen too far behind to meet your requirements, you can reboot it. Keep in mind that replica lag may naturally grow and shrink over time, depending on your primary cache node’s steady-state usage pattern.
Q: How do I delete a read replica? Will it be deleted automatically if its primary cache node is deleted?
You can easily delete a read replica with a few clicks of the AWS Management Console or by passing its cache cluster identifier to the DeleteCacheCluster API. If you want to delete the read replica in addition to the primary cache node, you must use the DeleteReplicationGroup API or AWS Management Console.
Q: How much do read replicas cost? When does billing begin and end?
A read replica is billed as a standard cache node and at the same rates. Just like a standard cache node, the rate per “Cache Node hour” for a read replica is determined by the cache node class of the read replica – please see Amazon ElastiCache detail page for up-to-date pricing. You are not charged for the data transfer incurred in replicating data between your primary cache node and read replica. Billing for a read replica begins as soon as the read replica has been successfully created (i.e. when status is listed as “active”). The read replica will continue being billed at standard Amazon ElastiCache cache node hour rates until you issue a command to delete it.
Q: What happens during failover and how long does it take?
Initiated failover is supported by Amazon ElastiCache so that you can resume cache operations as quickly as possible. When failing over, Amazon ElastiCache simply flips the DNS record for your cache node to point at the read replica, which is in turn promoted to become the new primary. We encourage you to follow best practices and implement cache node connection retry at the application layer. Start-to-finish, failover typically completes within three to six minutes.
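The application-layer retry recommended above can be sketched as follows. The backoff values are illustrative assumptions, and the flaky operation stands in for a real Redis command; the key point is to retry (and reconnect, so the endpoint's DNS is re-resolved) rather than give up while the DNS record is being re-pointed.

```python
import time

def with_retry(operation, attempts=3, base_delay=0.5):
    """Retry an operation that may fail while the endpoint's DNS is
    re-pointed during failover.

    Re-resolving the endpoint on each attempt (i.e. reconnecting rather
    than reusing a cached connection) lets the client find the new primary.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Fake operation that fails once, standing in for a Redis command
# issued against the primary endpoint during failover.
calls = {"n": 0}
def flaky_get():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("endpoint not yet re-pointed")
    return "value"

result = with_retry(flaky_get)
```

Because failover typically completes within a few minutes, the retry budget (attempts times backoff) should be sized to cover that window or combined with a fallback path.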
Q: Can I create a read replica in another region as my primary?
No. Your read replica may be provisioned only in the same Region as your primary cache node, in either the same or a different Availability Zone.
Q: Can I see which Availability Zone my primary is currently located in?
Yes, you can gain visibility into the location of the current primary by using the AWS Management Console or DescribeCacheClusters API.
Q: After failover, my primary is now located in a different Availability Zone than my other AWS resources (e.g. EC2 instances). Should I be concerned about latency?
Availability Zones are engineered to provide low latency network connectivity to other Availability Zones in the same Region. In addition, you may want to consider architecting your application and other AWS resources with redundancy across multiple Availability Zones so your application will be resilient in the event of an Availability Zone failure.
An ElastiCache for Redis replication group consists of a primary and up to five read replicas. Redis asynchronously replicates the data from the primary to the read replicas. During certain types of planned maintenance, or in the unlikely event of ElastiCache node failure or Availability Zone failure, Amazon ElastiCache will automatically detect the failure of a primary, select a read replica, and promote it to become the new primary. ElastiCache also propagates the DNS changes of the promoted read replica, so if your application is writing to the primary node endpoint, no endpoint change will be needed.
Q: What are the benefits of using Multi-AZ?
The main benefits of running your ElastiCache for Redis deployment in Multi-AZ mode are enhanced availability and a reduced need for administration. If an ElastiCache for Redis primary node failure occurs, the impact on your ability to read from and write to the primary is limited to the time it takes for automatic failover to complete. When Multi-AZ is enabled, ElastiCache node failover is automatic and requires no administration. You no longer need to monitor your Redis nodes and manually initiate a recovery in the event of a primary node disruption.
Q: How does Multi-AZ work?
You can use Multi-AZ if you are using ElastiCache for Redis and have a replication group consisting of a primary node and one or more read replicas. If the primary node fails, ElastiCache will automatically detect the failure, select one from the available read replicas, and promote it to become the new primary. ElastiCache will propagate the DNS changes of the promoted replica so that your application can keep writing to the primary endpoint. ElastiCache will also spin up a new node to replace the promoted read replica in the same Availability Zone of the failed primary. In case the primary failed due to temporary Availability Zone disruption, the new replica will be launched once that Availability Zone has recovered.
Q: Can I have replicas in the same Availability Zone as the primary?
Yes. Note that placing both the primary and the replica(s) in the same Availability Zone will not make your ElastiCache for Redis replication group resilient to an Availability Zone disruption.
Q: What events would cause Amazon ElastiCache to fail over to a read replica?
Amazon ElastiCache will failover to a read replica in the event of any of the following:
- Loss of availability in primary’s Availability Zone
- Loss of network connectivity to primary
- Compute unit failure on primary
Q: When should I use Multi-AZ?
Using Redis replication in conjunction with Multi-AZ provides increased availability and fault tolerance. Such deployments are a natural fit for use in production environments.
Q: How do I create an ElastiCache for Redis replication group with Multi-AZ enabled?
You can create an ElastiCache for Redis primary and read replicas by clicking Launch Cache Cluster on the ElastiCache Management Console. You can also do so by calling the CreateReplicationGroup API. For existing replication groups (Redis 2.8.6 and 2.8.19), you can enable Multi-AZ by choosing a replication group and clicking Modify on the ElastiCache Management Console or by using the ModifyReplicationGroup API. Switching a replication group to Multi-AZ is not disruptive to your Redis data and does not interfere with your nodes' ability to serve requests.
Q: Which read replica will be promoted in case of primary node failure?
If there is more than one read replica, the read replica with the smallest asynchronous replication lag to the primary will be promoted.
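If Multi-AZ is disabled and you must choose a failover target yourself, the same smallest-lag rule can be applied client-side. A minimal sketch (the replica identifiers and lag readings are illustrative, e.g. as they might be gathered from the ReplicaLag CloudWatch metric):

```python
def pick_failover_target(replica_lags):
    """Return the replica id with the smallest replication lag (seconds)."""
    if not replica_lags:
        raise ValueError("no read replicas available")
    return min(replica_lags, key=replica_lags.get)

# Illustrative ReplicaLag readings per replica, in seconds.
lags = {"replica-a": 4.0, "replica-b": 0.5, "replica-c": 2.1}
target = pick_failover_target(lags)
```

Promoting the least-lagged replica minimizes the window of recent writes lost to asynchronous replication.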
Q: How much does it cost to use Multi-AZ?
Multi-AZ is free of charge. You only pay for the ElastiCache nodes that you use.
Q: What are the performance implications of Multi-AZ?
ElastiCache currently uses the Redis engine’s native, asynchronous replication and is subject to its strengths and limitations. In particular, when a read replica connects to a primary for the first time, or if the primary changes, the read replica does a full synchronization of the data from the primary, imposing load on itself and the primary. For additional details regarding Redis replication please see here.
Q: What cache node types support Multi-AZ?
All available cache node types in ElastiCache support Multi-AZ except T1 and T2 families.
Q: Will I be alerted when automatic failover occurs?
Yes, Amazon ElastiCache will create an event to inform you that automatic failover occurred. You can use the DescribeEvents API to return information about events related to your ElastiCache node, or click the Events section of the ElastiCache Management Console.
Q: After failover, my primary is now located in a different Availability Zone than my other AWS resources (for example, EC2 instances). Should I be concerned about latency?
Availability Zones are engineered to provide low latency network connectivity to other Availability Zones in the same region. You may consider architecting your application and other AWS resources with redundancy across multiple Availability Zones so your application will be resilient in the event of an Availability Zone disruption.
Q: Where can I get more information about Multi-AZ?
For more information about Multi-AZ, see ElastiCache documentation.
Backup and Restore is a feature that allows customers to create snapshots of their ElastiCache for Redis clusters. ElastiCache stores the snapshots, allowing users to subsequently use them to restore Redis clusters.
Q: What is a snapshot?
A snapshot is a copy of your entire Redis cluster at a specific moment.
Q: Why would I need snapshots?
Creating snapshots can be useful in case of data loss caused by node failure, as well as in the unlikely event of a hardware failure. Another common reason to use backups is archiving. Snapshots are stored in Amazon S3, which is durable storage, so even a power failure won’t erase your data.
Q: What can I do with a snapshot?
You can use snapshots to warm start an ElastiCache for Redis cluster with preloaded data.
Q: How does Backup and Restore work?
When a backup is initiated, ElastiCache takes a snapshot of a specified Redis cluster that can later be used for recovery or archiving. You can initiate a backup at any time, or set a recurring daily backup with a retention period of up to 35 days.
When you choose a snapshot to restore, a new ElastiCache for Redis cluster will be created and populated with the snapshot’s data. This way you can create multiple ElastiCache for Redis clusters from a specified snapshot.
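A minimal sketch of this workflow with the AWS CLI (the cluster and snapshot names are hypothetical):

```shell
# Take a manual snapshot of an existing Redis cluster.
aws elasticache create-snapshot \
    --cache-cluster-id my-redis \
    --snapshot-name my-redis-snap

# Later, restore the snapshot into a brand-new cluster. The snapshot
# itself is not modified, so the same snapshot can seed multiple clusters.
aws elasticache create-cache-cluster \
    --cache-cluster-id my-redis-restored \
    --snapshot-name my-redis-snap
```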
Currently, ElastiCache uses Redis’ native mechanism to create and store an RDB file as the snapshot.
Q: Where are my snapshots stored?
The snapshots are stored in S3.
Q: How can I get started using Backup and Restore?
You can enable the Backup and Restore feature through the AWS Management Console, through the ElastiCache APIs (the CreateCacheCluster, ModifyCacheCluster, and ModifyReplicationGroup APIs), or through the CLI. You can deactivate and reactivate the feature at any time.
Q: How do I specify which Redis cluster and node to backup?
Backup and Restore creates snapshots on a cluster basis. You can specify which ElastiCache for Redis cluster to back up through the AWS Management Console, the CLI, or the CreateSnapshot API. In a Replication Group, you can choose to back up the primary or any of the read-replica clusters. We recommend enabling backup on one of the read replicas to mitigate any latency effect on the Redis primary.
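For example, in a replication group you can point CreateSnapshot at a replica instead of the primary (the replica cluster ID below is a placeholder):

```shell
# Snapshot a read replica ("my-redis-002" is hypothetical) rather than
# the primary, so the BGSAVE work does not add latency to writes.
aws elasticache create-snapshot \
    --cache-cluster-id my-redis-002 \
    --snapshot-name replica-backup
```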
Q: Does ElastiCache for Memcached support Backup and Restore?
No, snapshots are available only for ElastiCache for Redis.
Q: How can I specify when a backup will take place?
Through the AWS Management Console, CLI, or APIs you can specify when to start a single backup or a recurring backup. You can:
- Take a snapshot right now (through the “Create Snapshot” console button or the CreateSnapshot API)
- Set up an automatic daily backup that takes place during your preferred backup window. You can configure this when creating or modifying a cluster via the console, or through the CreateCacheCluster, ModifyCacheCluster, or ModifyReplicationGroup APIs.
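The second option can be sketched with the AWS CLI as follows (the cluster name, retention, and window values are illustrative):

```shell
# Enable automatic daily backups: keep snapshots for 7 days and start
# them during the 05:00-06:00 UTC window.
aws elasticache modify-cache-cluster \
    --cache-cluster-id my-redis \
    --snapshot-retention-limit 7 \
    --snapshot-window 05:00-06:00 \
    --apply-immediately
```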
Q: What is a backup window and why do I need it?
The preferred backup window is the user-defined period of time during which your ElastiCache for Redis cluster backup will start. This is helpful if you want to back up at a certain time of day, or to avoid backups during periods of particularly high utilization.
Q: What is the performance impact of taking a snapshot?
While taking a snapshot, you may encounter increased latencies for a brief period at the node. Snapshots use Redis’s built-in BGSAVE and are subject to its strengths and limitations. In particular, the Redis process forks; the parent continues to serve requests while the child saves the data to disk and then exits. The fork increases memory usage for the duration of the snapshot generation. When this usage exceeds the available memory of the cache node, swapping can be triggered, further slowing down the node. For this reason, we recommend generating snapshots on one of the read replicas (instead of the primary). We also suggest setting the reserved-memory parameter to minimize swap usage. See the ElastiCache documentation for more details.
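One way to set the reserved-memory parameter via the CLI (the parameter group name is hypothetical, and the 256 MB value is purely illustrative; size it for your workload):

```shell
# Reserve memory for the BGSAVE fork overhead by setting the
# reserved-memory parameter (value is in bytes) in the cluster's
# custom cache parameter group.
aws elasticache modify-cache-parameter-group \
    --cache-parameter-group-name my-redis-params \
    --parameter-name-values \
        "ParameterName=reserved-memory,ParameterValue=268435456"
```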
Q: Can I create a snapshot from an ElastiCache for Redis read replica?
Yes. Creating a snapshot from a read replica is the best way to backup your data while minimizing performance impact.
Q: In what regions is the Backup and Restore feature available?
The Backup and Restore feature is available in all regions where the ElastiCache service is available.
Q: Can I copy snapshots from one region to another?
Not at this point.
Q: How much does it cost to use Backup and Restore?
Amazon ElastiCache provides storage space for one snapshot free of charge for each active ElastiCache for Redis cluster. Additional storage is charged based on the space used by the snapshots, at $0.085/GB per month (the same price in all regions). Data transfer for using the snapshots is free of charge.
Q: What is the retention period?
The retention period is the time span during which automatic snapshots are retained. For example, if the retention period is set to 5 days, a snapshot taken today will be retained for 5 days before being deleted. You can copy one or more automatic snapshots and store them as manual snapshots so that they won’t be deleted after the retention period is over.
Q: How do I manage the retention of my automated snapshots?
You can use the AWS Management Console or the ModifyCacheCluster API to manage how long your automated backups are retained by modifying the SnapshotRetentionLimit parameter. To turn off automated backups altogether, set the retention period to 0 (not recommended).
Q: What happens to my snapshots if I delete my ElastiCache for Redis cluster?
When you delete an ElastiCache for Redis cluster, your manual snapshots are retained. You will also have an option to create a final snapshot before the cluster is deleted. Automatic cache snapshots are not retained.
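A sketch of deleting a cluster while capturing that final snapshot (the names below are placeholders):

```shell
# Delete a cluster, but take one last snapshot of it first.
aws elasticache delete-cache-cluster \
    --cache-cluster-id my-redis \
    --final-snapshot-identifier my-redis-final
```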
Q: What cache nodes types support backup and restore capability?
All ElastiCache for Redis node types except t1.micro and the t2 family support backup and restore.
Q: Can I use my own RDB snapshots stored in S3 to warm start an ElastiCache for Redis cluster?
Yes. You can specify the S3 location of your RDB file during cluster creation through the Launch Cache Cluster Wizard in the console or through the CreateCacheCluster API.
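For example, via the CLI (the cluster name, node type, and S3 bucket/key are hypothetical; ElastiCache must have read access to the RDB object):

```shell
# Seed a brand-new cluster from your own RDB file in S3.
aws elasticache create-cache-cluster \
    --cache-cluster-id warm-started-redis \
    --engine redis \
    --cache-node-type cache.m3.medium \
    --num-cache-nodes 1 \
    --snapshot-arns arn:aws:s3:::my-bucket/my-dump.rdb
```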
Q: Can I use the Backup and Restore feature if I am running ElastiCache in a VPC?
Yes. Backup and Restore is supported for ElastiCache for Redis clusters running in a VPC.
Q: I have multiple AWS accounts using ElastiCache for Redis. Can I use ElastiCache snapshots from one account to warm start an ElastiCache for Redis cluster in a different one?
Not at this point.