Overview
In-memory data caching can be one of the most effective strategies to improve your overall application performance and to reduce your database costs.
Caching can be applied to any type of database, including relational databases such as Amazon RDS or NoSQL databases such as Amazon DynamoDB, MongoDB, and Apache Cassandra. The best part of caching is that it's minimally invasive to implement, and doing so can dramatically improve your application's performance in terms of both scale and speed.
Below you will find some of the caching strategies and implementation approaches that can be taken to address the limitations and challenges associated with disk-based databases.
Further Reading: Technical Whitepaper on Database Caching Strategies Using Redis
Database Challenges
When building distributed applications that require low latency and scalability, there are a number of challenges that disk-based databases can pose to your applications. A few common ones include:
- Slow query processing: While there are a number of query optimization techniques and schema designs that can help boost query performance, the data retrieval speed from disk plus the added query processing time will generally put your query response times in the double-digit millisecond range, at best. This assumes you have a steady load and your database is performing optimally.
- Cost to scale: Whether the data is distributed in a disk-based NoSQL database or vertically scaled up in a relational database, scaling for extremely high reads can be costly and may require a number of database read-replicas to match what a single in-memory cache node can deliver in terms of requests per second.
- The need to simplify data access: While relational databases provide excellent means for modeling data relationships, they aren't optimal for data access. There are instances where your applications may want to access the data in a particular structure or view to simplify data retrieval and increase application performance.
Before implementing database caching, many architects and engineers spend great effort squeezing as much performance as they can out of their database. Doing so with reasonable expectations is great, but it's counterproductive to try to solve a problem with the wrong tools. For example, tuning your database to lower query latency is a best practice, but trying to defy the laws of physics associated with retrieving data from disk is a waste of time.
How Caching Helps
A database cache supplements your primary database by removing unnecessary pressure on it, typically in the form of frequently accessed read data. The cache itself can live in a number of areas including your database, application or as a standalone layer.
The three most common types of database caches are the following:
- Database Integrated Caches: Some databases such as Amazon Aurora offer an integrated cache that is managed within the database engine and has built-in write-through capabilities. When the underlying data changes on the database table, the database updates its cache automatically, which is great. There is nothing within the application tier required to leverage this cache. Where integrated caches fall short is in their size and capabilities. Integrated caches are typically limited to the available memory allocated to the cache by the database instance and cannot be leveraged for other purposes, such as sharing data with other instances.
- Local Caches: A local cache stores your frequently used data within your application. This not only speeds up your data retrieval but also removes network traffic associated with retrieving data, making data retrieval faster than other caching architectures. A major disadvantage is that among your applications, each node has its own resident cache working in a disconnected manner. The information stored within an individual cache node, whether it's cached database data, web sessions, or user shopping carts, cannot be shared with other local caches. This creates challenges in a distributed environment where information sharing is critical to support scalable dynamic environments. And since most applications utilize multiple app servers, if each server has its own cache, coordinating the values across these caches becomes a major challenge.
In addition, when outages occur, the data in the local cache is lost and will need to be rehydrated, effectively negating the cache. The majority of these cons are mitigated with remote caches. A remote cache (or "side cache") is a separate instance (or multiple instances) dedicated to storing the cached data in-memory.
When network latency is a concern, a two-tier caching strategy can be applied that leverages a local and remote cache together. We won't discuss this strategy in detail, but it is typically used only when absolutely needed, as it adds complexity. For most applications, the added network overhead associated with a remote cache is of little concern, given that a request to it is generally fulfilled with sub-millisecond latency.
- Remote Caches: Remote caches are stored on dedicated servers and are typically built upon key/value NoSQL stores such as Redis and Memcached. They provide hundreds of thousands and up to a million requests per second per cache node. Many solutions, such as Amazon ElastiCache for Redis, also provide the high availability needed for critical workloads.
Also, the average latency of a request to a remote cache is sub-millisecond, orders of magnitude faster than a request to a disk-based database. At these speeds, local caches are seldom necessary. And since a remote cache works as a connected cluster that can be leveraged by all your disparate systems, it is ideal for distributed environments.
With remote caches, the orchestration between caching the data and managing the validity of the data is managed by your applications and/or processes leveraging it. The cache itself is not directly connected to the database but used adjacently to it. We’ll focus our attention on leveraging remote caches and specifically Amazon ElastiCache for Redis for caching relational database data.
To learn more about Caching Patterns, please visit the Implementation Considerations Page.
Relational Database Caching Techniques
Many of the techniques we'll review can be applied to any type of database. However, we'll focus on relational databases, as they are the most common database caching use case.
The basic paradigm when querying data from a relational database from an application includes executing SQL statements and iterating over the returned ResultSet object cursor to retrieve the database rows. When you want to cache the returned data, there are a few techniques you can apply, but it's best to choose a method that simplifies your data access pattern and/or optimizes your application's architecture goals.
To visualize this further, we'll examine snippets of sample code for the purposes of explaining the database caching logic. We'll also use the Jedis Redis client library for connecting to Redis, although any Java Redis library, including Lettuce and Redisson, would work. It's also worth noting that some application frameworks may encapsulate some of the database caching techniques below. With that said, it's still important to understand the implementation details, especially when not utilizing those higher-level abstractions.
Let's assume we’ve issued a query against a customer database for a particular record and from there we’ll examine caching strategies that could be leveraged. Assume the following SQL query returns a record:
SELECT FIRST_NAME, LAST_NAME, EMAIL, CITY, STATE, ADDRESS, COUNTRY FROM CUSTOMERS WHERE CUSTOMER_ID = '1001';
. . .
Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery(query);
while (rs.next()) {
    Customer customer = new Customer();
    customer.setFirstName(rs.getString("FIRST_NAME"));
    customer.setLastName(rs.getString("LAST_NAME"));
    // and so on ...
}
. . .
Iterating over the ResultSet cursor lets you retrieve the fields and values from the database rows, and from that point the application can choose where and how to leverage that data. Because this isn't an application design discussion, we'll de-emphasize the application code and put the emphasis on the caching logic.
Let’s also assume you are not using an application framework that can be used to abstract your caching implementation. With all that stated, the question then is, how best to cache the returned database data?
Given the above scenario, you have many options, let’s evaluate a few.
Cache the database SQL ResultSet
Cache a serialized ResultSet object containing the fetched database row.
- Pro: When data retrieval logic is abstracted (e.g. in a Data Access Object, or "DAO" layer), the consuming code expects only a ResultSet object and does not need to be made aware of its origin. Whether the ResultSet originated from the database or was deserialized from the cache, the result is the same, a ResultSet, and is ready to be iterated over, greatly reducing integration logic. This pattern can also be applied to any relational database.
- Con: Data retrieval still requires extracting values from the ResultSet object cursor and does not further simplify data access; it only reduces data retrieval latency.
Note that when caching the row, it's important that it be serializable. The example below uses a CachedRowSet implementation for this purpose. When using Redis, the row is stored as a byte array value.
. . .
// rs contains the ResultSet
if (rs != null) {
    // let's write-through to the cache
    CachedRowSet cachedRowSet = new CachedRowSetImpl();
    cachedRowSet.populate(rs, 1);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutput out = new ObjectOutputStream(bos);
    out.writeObject(cachedRowSet);
    byte[] redisRSValue = bos.toByteArray();
    jedis.set(key.getBytes(), redisRSValue);
    jedis.expire(key.getBytes(), ttl);
}
. . .
The above snippet converts the CachedRowSet object into a byte array, then stores that byte array as a Redis byte array value. The key used above is the actual SQL statement converted into bytes. The nice thing about storing the SQL statement as the key for this technique is that it helps enable a transparent caching abstraction layer that hides the implementation details. The other added benefit is that you don't need to create any additional mappings between a custom key ID and the executed SQL statement. The last statement executes an expire command to apply a TTL to the stored key. This code follows our write-through logic in that upon querying the database, the cached value is stored immediately afterward.
For lazy population, you would initially query the cache prior to executing the query against the database. A good tip for hiding the implementation details is to utilize the DAO pattern and expose a generic method for your application to retrieve the data. For example, since your key is the actual SQL statement, your method signature could look like the following:
public ResultSet getResultSet(String key); // key is the SQL statement
The code calling (consuming) this method only expects a ResultSet object regardless of what the underlying implementation details are for the interface. Under the hood, the getResultSet method would execute a GET for the SQL key and if present, de-serialize and convert into a ResultSet.
public ResultSet getResultSet(String key) {

    byte[] redisResultSet = jedis.get(key.getBytes());
    ResultSet rs = null;

    if (redisResultSet != null) {
        // if a cached value exists, de-serialize it and return it
        try {
            ByteArrayInputStream bis = new ByteArrayInputStream(redisResultSet);
            ObjectInput in = new ObjectInputStream(bis);
            rs = (CachedRowSet) in.readObject();
        }
        ...
    } else {
        // get the ResultSet from the database, store it in the rs object, then cache it
        ...
    }
    ...
    return rs;
}
If the data was not present in the cache, you would query the database for it, then cache it before returning. As mentioned earlier, a best practice would be to apply an appropriate TTL on the KEYs as well.
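To make that cache-miss path concrete, the elided else branch could look something like the following sketch. It assumes the jedis client, the JDBC connection, and a ttl value (in seconds) are already in scope, and it omits exception handling for brevity:

// sketch of the cache-miss branch: query the database, then write-through to the cache
CachedRowSet cachedRowSet = new CachedRowSetImpl();
Statement stmt = connection.createStatement();
cachedRowSet.populate(stmt.executeQuery(key)); // the key is the SQL statement itself

// serialize the row set and store it in Redis with a TTL
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutput out = new ObjectOutputStream(bos);
out.writeObject(cachedRowSet);
jedis.set(key.getBytes(), bos.toByteArray());
jedis.expire(key.getBytes(), ttl);
rs = cachedRowSet;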
For all other caching techniques below, you will want to establish a naming convention for your Redis keys. A good naming convention is one that is easily predictable to applications and developers. A hierarchical structure separated by colons is a common naming convention for keys, such as object:type:id.
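As a simple illustration, a small hypothetical helper can keep key construction consistent across your code base:

// hypothetical helper following the object:type:id convention,
// e.g. buildKey("customer", "id", "1001") returns "customer:id:1001"
public static String buildKey(String object, String type, String id) {
    return object + ":" + type + ":" + id;
}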
Cache select fields and values in a custom format
Cache a subset of a fetched database row into a custom structure that can be consumed by your applications.
- Pro: This approach is very easy to implement. You essentially store specific retrieved fields and values into a structure such as JSON or XML and then SET that structure into a Redis String. The format you choose should be something that conforms to your application's data access pattern.
- Con: Your application is leveraging different types of objects when querying for particular data (i.e. Redis String and database results when needed). In addition, you would be required to parse through the entire structure to retrieve individual attributes associated with it.
. . .
// rs contains the ResultSet
while (rs.next()) {
    Customer customer = new Customer();
    Gson gson = new Gson();
    JsonObject customerJSON = new JsonObject();

    customer.setFirstName(rs.getString("FIRST_NAME"));
    customerJSON.add("first_name", gson.toJsonTree(customer.getFirstName()));

    customer.setLastName(rs.getString("LAST_NAME"));
    customerJSON.add("last_name", gson.toJsonTree(customer.getLastName()));

    // and so on ...

    jedis.set("customer:id:" + customer.getCustomerID(), customerJSON.toString());
}
. . .
The above snippet stores specific customer attributes into a customer JSON object and caches that JSON object into a Redis string.
For data retrieval, you could implement a generic method that accepts a customer key (e.g. customer:id:1001) and a SQL statement string argument. It would also return whatever structure (JSON, XML, etc.) your application requires, abstracting the underlying details, similar to what we described earlier.
Upon initial request, the application would execute a GET on the customer key and if value is present, return it and complete the call. If it’s not present, query the database for the record, write-through a JSON representation of the data to the cache and return.
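A minimal sketch of that flow follows. The method name and the fetchCustomerAsJson database helper are illustrative only, and jedis and ttl are assumed to be in scope:

public String getCustomerJson(String key, String sql) {
    // e.g. key = "customer:id:1001"
    String customerJSON = jedis.get(key);
    if (customerJSON != null) {
        return customerJSON; // cache hit: return the value and complete the call
    }
    customerJSON = fetchCustomerAsJson(sql); // cache miss: query the database
    jedis.setex(key, ttl, customerJSON);     // write-through a JSON representation with a TTL
    return customerJSON;
}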
Cache select fields and values into an aggregate Redis data structure
Cache the fetched database row into a specific data structure that can simplify the application's data access.
- Pro: When converting the ResultSet into a format that simplifies access, such as a Redis Hash, your application is able to leverage that data more effectively. This technique simplifies your data access pattern by reducing the need to iterate over a ResultSet or parse a custom structure like a JSON object stored in a String. In addition, aggregate data structures such as Lists, Sets, and Hashes in Redis can be SET and GET in a form that closely matches your application's native data structures.
- Con: Your application is leveraging different types of objects when querying for particular data (i.e. Redis Hash and database results when needed).
. . .
// rs contains the ResultSet
while (rs.next()) {
    Customer customer = new Customer();
    Map<String, String> map = new HashMap<String, String>();

    customer.setFirstName(rs.getString("FIRST_NAME"));
    map.put("firstName", customer.getFirstName());

    customer.setLastName(rs.getString("LAST_NAME"));
    map.put("lastName", customer.getLastName());

    // and so on ...

    jedis.hmset("customer:id:" + customer.getCustomerID(), map);
}
. . .
The above code creates a HashMap object that is used to store the customer data. The map is populated with the database data and SET into a Redis Hash.
For data retrieval, you could implement a generic method that accepts a customer id (the key) and a SQL statement argument. It would also return a HashMap to the caller. The implementation, just as the other examples, could be written to hide the details of where the map is originating from. First your application would query the cache for the customer data using the customer id key. If the data is not present, the SQL statement would execute and retrieve the data from the database. Upon retrieval, you may also store a HASH representation of that customer to lazy load.
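A minimal sketch of that retrieval method follows, assuming jedis and ttl are in scope; loadCustomerAsMap is an illustrative database helper:

public Map<String, String> getCustomerMap(String key, String sql) {
    // HGETALL returns an empty map when the key does not exist
    Map<String, String> map = jedis.hgetAll(key);
    if (map != null && !map.isEmpty()) {
        return map; // cache hit
    }
    map = loadCustomerAsMap(sql); // cache miss: query the database and build the map
    jedis.hmset(key, map);        // store the HASH representation for the next request
    jedis.expire(key, ttl);
    return map;
}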
Unlike JSON, the added benefit of storing your data as a Hash in Redis is that you can query individual attributes within it. Say for a given request you do not want all the data stored within a particular customer map but just the first name; this is supported in Redis, along with various other features such as adding and deleting individual attributes in a map.
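For example, with the Jedis client:

// fetch a single attribute without retrieving the whole map
String firstName = jedis.hget("customer:id:1001", "firstName");

// add/update or delete individual attributes in place
jedis.hset("customer:id:1001", "firstName", "Jane");
jedis.hdel("customer:id:1001", "lastName");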
Cache serialized application object entities
Cache your application object entities in a serialized form that can be consumed by your applications.
- Pro: Utilize application objects in their native application state with simple serializing/de-serializing techniques. This can rapidly accelerate application performance by minimizing data transformation logic.
- Con: Advanced application development use case.
. . .
Customer customer = (Customer) object;
ByteArrayOutputStream bos = new ByteArrayOutputStream();
ObjectOutput out = null;
try {
    out = new ObjectOutputStream(bos);
    out.writeObject(customer);
    out.flush();
    byte[] objectValue = bos.toByteArray();
    jedis.set(key.getBytes(), objectValue);
    jedis.expire(key.getBytes(), ttl);
} catch (IOException e) {
    // handle the serialization failure
}
. . .
The above code converts the customer object into a byte array, then stores that value in Redis. The key would be the byte representation of the customer identifier (e.g. customer:id:1001).
As with the other examples, when instantiating an object or hydrating one with state, you could create a generic method that accepts the customer id (the key) and a SQL statement argument. It would also return a customer object. First, your application would query the cache for the serialized customer object using the customer id. If the data is not present, the SQL statement would execute, and the application would consume the data, hydrate the customer entity object, then lazy load the serialized representation of it into the cache.
public Customer getObject(String key); // key is the customer id
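A minimal sketch of that implementation follows, assuming jedis and ttl are in scope; loadCustomerFromDatabase is an illustrative helper, and exception handling is simplified for brevity:

public Customer getObject(String key) throws Exception {
    byte[] cachedObject = jedis.get(key.getBytes());
    if (cachedObject != null) {
        // cache hit: de-serialize the byte array back into a Customer object
        ObjectInput in = new ObjectInputStream(new ByteArrayInputStream(cachedObject));
        return (Customer) in.readObject();
    }
    // cache miss: hydrate the entity from the database, then lazy load it into the cache
    Customer customer = loadCustomerFromDatabase(key);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutput out = new ObjectOutputStream(bos);
    out.writeObject(customer);
    jedis.set(key.getBytes(), bos.toByteArray());
    jedis.expire(key.getBytes(), ttl);
    return customer;
}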
Amazon ElastiCache Compared to Self-Managed Redis
Redis is an open source, in-memory data store that has become the most popular key/value engine in the market. Much of its popularity is due to its support for a variety of data structures as well as other features including Lua scripting support and Pub/Sub messaging capability. Other added benefits include high availability topologies with support for read replicas and the ability to persist data.
Amazon ElastiCache offers a fully managed service for Redis. This means that all the administrative tasks associated with managing your Redis cluster, including monitoring, patching, backups, and automatic failover, are managed by Amazon. This allows you to focus on your business and your data rather than on operations.
Other benefits of using Amazon ElastiCache for Redis over self-managing your cache environment include:
- An enhanced Redis engine that is fully compatible with the open source version but also provides added stability and robustness.
- Easily modifiable parameters, such as eviction policies, buffer limits, etc.
- Ability to scale and resize your cluster to terabytes of data.
- Hardened security enabling you to isolate your cluster within Amazon VPC.
Please see the resources below for more information:
For more information on Amazon ElastiCache for Redis, click here.
For the full Redis command list, click here.
Get started with Amazon ElastiCache
It's easy to get started with caching in the cloud with a fully-managed service like Amazon ElastiCache. It removes the complexity of setting up, managing and administering your cache, and frees you up to focus on what brings value to your organization. Sign up today for Amazon ElastiCache.