Your applications may need to store and retrieve data that is not well suited to a traditional database in order to respond to a request, or compute a response, quickly. For example, say your application receives some user input and needs to check whether a similar request was made within the past hour before responding. One way to approach this is to cache your users' requests for an hour and have your application check that cache whenever a new request arrives. You may also have other cached data that helps compute a value used in the response. This type of cached information is transient and requires extremely low latency.
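The hour-long request check described above can be sketched as follows. This is a minimal, process-local stand-in (a dict with timestamps) for what a real cache would do; the names `seen_requests` and `already_seen`, and the key format, are illustrative, and a production system would use a shared store such as Redis instead.

```python
import time

TTL_SECONDS = 3600  # remember requests for one hour

# request key -> timestamp of when the request was first seen
seen_requests = {}

def already_seen(key, now=None):
    """Return True if `key` was seen within the last hour.

    Otherwise record it (or refresh an expired entry) and return False.
    """
    now = time.time() if now is None else now
    first_seen = seen_requests.get(key)
    if first_seen is not None and now - first_seen < TTL_SECONDS:
        return True
    seen_requests[key] = now
    return False
```

The first call for a given key returns `False` and records it; a repeat within the hour returns `True`; once the entry is older than an hour it is treated as new again.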
While it’s possible to employ a cache within the application node itself, that cache cannot survive the node’s failure. Local caches are also isolated on individual nodes and cannot be shared across a cluster of application servers. Distributed caches, on the other hand, provide low latency and higher availability when deployed with read replicas, and they give all of your application servers shared access to the cached data. Other use cases for a distributed cache include sharing information across the various applications in your system: a centralized location for your data can power multiple applications with the same data at low latency.
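With a distributed cache, the hour-long duplicate-request check becomes a single atomic operation that all application servers share. In Redis this maps onto the SET command with the NX option (set only if the key is absent) and EX (expire after the given number of seconds); the key name below is illustrative:

```
SET request:user42:search 1 EX 3600 NX
```

Redis replies OK when the key was newly set (first request in the hour) and a nil reply when the key already exists (a duplicate). Because the existence check and the write happen in one command on the shared cache, two application servers cannot both treat the same request as new.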
Today most key/value stores, such as Memcached and Redis, can hold terabytes of data. Redis also offers high availability and optional persistence, which make it an excellent choice for storing data when disk-based durability is not a hard requirement. To learn more, visit Amazon ElastiCache for Redis.