Overview
In-memory data caching can be one of the most effective strategies to improve your overall application performance and to reduce your database costs.
Caching can be applied to any type of database, including relational databases such as Amazon RDS or NoSQL databases such as Amazon DynamoDB, MongoDB, and Apache Cassandra. The best part of caching is that it's minimally invasive to implement, yet it can dramatically improve your application's performance in terms of both scale and speed.
Below you will find some of the caching strategies and implementation approaches that can be taken to address the limitations and challenges associated with disk-based databases.
Further Reading: Technical Whitepaper on Database Caching Strategies Using Redis
Database Challenges
When building distributed applications that require low latency and scalability, there are a number of challenges that disk-based databases can pose to your applications. A few common ones include:
- Slow query processing: While there are a number of query optimization techniques and schema designs that can help boost query performance, the data retrieval speed from disk plus the added query processing time generally puts your query response times in double-digit millisecond speeds, at best. This assumes you have a steady load and your database is performing optimally.
- Cost to scale: Whether the data is distributed in a disk-based NoSQL database or vertically scaled up in a relational database, scaling for extremely high reads can be costly and may require a number of database read-replicas to match what a single in-memory cache node can deliver in terms of requests per second.
- The need to simplify data access: While relational databases provide excellent means of modeling data relationships, they aren't optimal for data access. There are instances where your applications may want to access the data in a particular structure or view to simplify data retrieval and increase application performance.
Before implementing database caching, many architects and engineers spend great effort squeezing as much performance as they can out of their database. And while doing so with reasonable expectations is great, it's counterproductive to try to solve a problem with the wrong tools. For example, tuning a database query to lower its latency is a best practice, but trying to defy the laws of physics associated with retrieving data from disk is a waste of time.
How Caching Helps
A database cache supplements your primary database by removing unnecessary pressure on it, typically in the form of frequently accessed read data. The cache itself can live in a number of areas, including within your database, within your application, or as a standalone layer.
The three most common types of database caches are the following:
- Database Integrated Caches: Some databases such as Amazon Aurora offer an integrated cache that is managed within the database engine and has built-in write-through capabilities. When the underlying data changes on the database table, the database updates its cache automatically, which is great. There is nothing within the application tier required to leverage this cache. Where integrated caches fall short is in their size and capabilities. Integrated caches are typically limited to the available memory allocated to the cache by the database instance and cannot be leveraged for other purposes, such as sharing data with other instances.
- Local Caches: A local cache stores your frequently used data within your application itself (a minimal sketch appears after this list). This not only speeds up data retrieval but also removes the network traffic associated with retrieving it, making data retrieval faster than in other caching architectures. A major disadvantage is that each of your application nodes has its own resident cache, working in a disconnected manner. The information stored within an individual cache node, whether it's cached database data, web sessions, or user shopping carts, cannot be shared with other local caches. This creates challenges in a distributed environment where information sharing is critical to support scalable dynamic environments. And since most applications utilize multiple app servers, if each server has its own cache, coordinating the values across them becomes a major challenge.
In addition, when outages occur, the data in the local cache is lost and must be rehydrated, effectively negating the cache. The majority of these drawbacks are mitigated with remote caches. A remote cache (or "side cache") is a separate instance (or multiple instances) dedicated to storing the cached data in-memory.
When network latency is a concern, a two-tier caching strategy can be applied that leverages a local and a remote cache together. We won't discuss this strategy in detail, but it is typically used only when absolutely needed, as it adds complexity. For most applications, the added network overhead of a remote cache is of little concern, given that a request to it is generally fulfilled with sub-millisecond performance.
- Remote Caches: Remote caches are stored on dedicated servers and are typically built upon key/value NoSQL stores such as Redis and Memcached. They provide hundreds of thousands of requests per second, up to a million, per cache node. Many solutions, such as Amazon ElastiCache for Redis, also provide the high availability needed for critical workloads.
The average request to a remote cache is fulfilled with sub-millisecond latency, orders of magnitude faster than a request to a disk-based database. At these speeds, local caches are seldom necessary. And since a remote cache works as a connected cluster that can be leveraged by all your disparate systems, it is ideal for distributed environments.
With remote caches, the orchestration between caching the data and managing its validity is handled by the applications and/or processes that leverage the cache. The cache itself is not directly connected to the database but is used adjacent to it. We'll focus our attention on leveraging remote caches, and specifically Amazon ElastiCache for Redis, for caching relational database data.
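To make the local cache idea above concrete, here is a minimal sketch of an in-process cache with a time-to-live (TTL). The `query_database` helper is a hypothetical stand-in for your real data access code.

```python
import time

# A minimal local (in-process) cache keyed by lookup key.
# Each entry stores a (value, expiration timestamp) pair.
_local_cache = {}

def query_database(key):
    # Hypothetical stand-in; replace with your real database query.
    return {"id": key, "fetched_at": time.time()}

def get_with_local_cache(key, ttl_seconds=60):
    entry = _local_cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:
            return value          # cache hit: no network round trip at all
        del _local_cache[key]     # entry expired; fall through to the database
    value = query_database(key)   # cache miss: fetch from the primary database
    _local_cache[key] = (value, time.time() + ttl_seconds)
    return value
```

Because `_local_cache` lives inside a single process, every application server builds up its own independent copy, which is exactly the coordination problem described above.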
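For a remote cache, a common way to handle that orchestration is the cache-aside pattern: the application checks the cache first and, on a miss, queries the database and populates the cache. Below is a minimal sketch using the redis-py client; the endpoint hostname, key naming, and `get_user_from_db` helper are illustrative assumptions, not a definitive implementation.

```python
import json
import redis

# Connect to the remote cache; the host below is a hypothetical
# placeholder for your ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.example.cache.amazonaws.com", port=6379)

def get_user_from_db(user_id):
    # Hypothetical stand-in for a real relational database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id, ttl_seconds=300):
    cache_key = f"user:{user_id}"
    cached = cache.get(cache_key)      # 1. check the remote cache first
    if cached is not None:
        return json.loads(cached)      # cache hit: sub-millisecond path
    user = get_user_from_db(user_id)   # 2. on a miss, query the database
    # 3. populate the cache with a TTL so stale entries eventually expire
    cache.set(cache_key, json.dumps(user), ex=ttl_seconds)
    return user
```

Setting a TTL (the `ex` argument) bounds how stale a cached entry can become; beyond that, keeping cached data valid when the underlying rows change is entirely the application's responsibility, as noted above.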
To learn more about Caching Patterns, please visit the Implementation Considerations Page.