AWS Database Blog

Amazon DynamoDB Accelerator (DAX): A Read-Through/Write-Through Cache for DynamoDB

Joseph Idziorek is a product manager at Amazon Web Services.

AWS recently launched Amazon DynamoDB Accelerator (DAX), a highly available, in-memory cache for Amazon DynamoDB. If you’re currently using DynamoDB or considering DynamoDB, DAX can offer you response times in microseconds and millions of requests per second.

When developers start using DAX, they tell us that the performance is great, and they love that DAX is API compatible with DynamoDB. This means they no longer have to set up and develop against a side cache. Instead of learning another database system with a new set of APIs and data types—and then rewriting applications to do the two-step dance needed for cache look-ups, population, and invalidation—you can simply point your existing DynamoDB application at the DAX endpoint. What used to take weeks and months now takes only moments with DAX.

How does DAX accomplish this? When you’re developing against DAX, instead of pointing your application at the DynamoDB endpoint, you point it at the DAX endpoint, and DAX handles the rest. As a read-through/write-through cache, DAX seamlessly intercepts the API calls that an application normally makes to DynamoDB so that both read and write activity are reflected in the DAX cache. For you, the API calls are the same, so there’s no need to rewrite the application.
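
Concretely, the switch can be as small as constructing a different client. The following is a minimal sketch in Python; it assumes the amazon-dax-client package, and the cluster endpoint, region, table, and key are placeholders (constructor arguments can vary by client version).

```python
import botocore.session
import amazondax

session = botocore.session.get_session()

# A low-level client whose requests are served through the DAX cluster.
# The endpoint below is a placeholder for your cluster's own endpoint.
dax = amazondax.AmazonDaxClient(
    session,
    region_name='us-east-1',
    endpoints=['my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com:8111'],
)

# The call shape is identical to a plain DynamoDB client call, so the
# application code itself doesn't change.
response = dax.get_item(
    TableName='Movies',
    Key={'year': {'N': '2015'}, 'title': {'S': 'The Big New Movie'}},
)
```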

In this post, we take a step back and describe what a read-through/write-through cache is, and how it’s different from other caching strategies. We also discuss the design considerations for these different caching strategies.

Side-cache
When you’re using a cache for a backend data store, a side-cache is perhaps the most commonly known approach. Canonical examples include Redis and Memcached. These are general-purpose caches that are decoupled from the underlying data store and can help with both read and write throughput, depending on the workload and durability requirements.

For read-heavy workloads, side-caches are typically used as follows (a code sketch follows the list):

  1. For a given key-value pair, the application first tries to read the data from the cache. If the cache is populated with the data (cache hit), the value is returned. If not, on to step 2.
  2. Because the desired key-value pair was not found in the cache, the application then fetches the data from the underlying data store.
  3. To ensure that the data is present when the application needs to fetch the data again, the key-value pair from step 2 is then written to the cache.
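
Here is a minimal sketch of those three steps in Python, using Redis as the side-cache and DynamoDB as the data store. The table, key scheme, and TTL are illustrative, not prescribed values.

```python
import json

import boto3
import redis

cache = redis.Redis(host='localhost', port=6379)
table = boto3.resource('dynamodb').Table('Movies')

def get_movie(title, year):
    cache_key = f'movie:{year}:{title}'

    # Step 1: try the cache first.
    cached = cache.get(cache_key)
    if cached is not None:  # cache hit
        return json.loads(cached)

    # Step 2: cache miss, so fetch from the underlying data store.
    item = table.get_item(Key={'title': title, 'year': year}).get('Item')

    # Step 3: populate the cache (with a TTL) so the next read is a hit.
    if item is not None:
        cache.set(cache_key, json.dumps(item, default=str), ex=300)
    return item
```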

Although the number of use cases for caching is growing, especially for Redis, the predominant use is supporting read-heavy workloads with repeat reads. Developers also use Redis to better absorb spikes in writes. One of the more popular patterns is to write directly to Redis and then asynchronously invoke a separate workflow to de-stage the data to a separate data store (for example, DynamoDB), as sketched below.
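
A sketch of that write-buffering pattern, under the same assumptions as the previous example (the Redis list used as a queue and the worker loop are illustrative choices, not a prescribed design):

```python
import json

import boto3
import redis

cache = redis.Redis(host='localhost', port=6379)
table = boto3.resource('dynamodb').Table('Events')

def record_event(event):
    # Fast, non-durable write: push onto a Redis list used as a queue.
    cache.rpush('events:pending', json.dumps(event))

def destage_worker():
    # Runs in a separate process or thread, draining the queue into DynamoDB.
    while True:
        _, raw = cache.blpop('events:pending')  # blocks until an item arrives
        table.put_item(Item=json.loads(raw))
```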

There are a few design points to note here. First, writes to the cache are non-durable and only eventually consistent with the data store, so there is a possibility of data loss. Some applications in IoT, for example, can tolerate this trade-off. In addition, there are penalties in the form of multiple round trips (a cache miss costs a trip to the cache and then another to the database) and connection handling for two separate systems.

Read-through cache
Different from a side-cache, in which you must write application logic to fetch and populate items in the cache, a read-through cache sits in-line with the database. On a cache miss, it fetches the item from the underlying data store; on a cache hit, it returns the item directly from the cache. DAX is a read-through cache because it is API compatible with DynamoDB read APIs and caches GetItem, BatchGetItem, Scan, and Query results if they don’t currently reside in DAX. A read-through cache is effective for read-heavy workloads.

The following steps outline the process for a read-through cache (a conceptual sketch follows the list):

  1. Given a key-value pair, the application first tries to read the data from DAX. If the cache is populated with the data (cache hit), the value is returned. If not, on to step 2.
  2. If there was a cache miss, DAX transparently fetches the key-value pair from DynamoDB.
  3. To make the data available for any subsequent reads, the key-value pair is then populated in the DAX cache.
  4. DAX then returns the value to the application.
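
DAX’s implementation is internal to the service, but conceptually the read-through flow looks like the following sketch. The dict-like `cache` and the hashable cache key are illustrative stand-ins.

```python
def read_through_get(cache, table, key):
    """Conceptual read-through. `cache` is any dict-like store; `table` is a
    boto3 DynamoDB Table; `key` is the item's primary key as a dict."""
    cache_key = tuple(sorted(key.items()))

    item = cache.get(cache_key)
    if item is not None:
        return item  # step 1: cache hit

    # Step 2: cache miss, so fetch the item from DynamoDB.
    item = table.get_item(Key=key).get('Item')

    if item is not None:
        cache[cache_key] = item  # step 3: populate for subsequent reads

    return item  # step 4: return the value to the application
```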

This pattern of loading data into the cache only when the item is requested is often referred to as lazy loading. The advantage of this approach is that data populated in the cache has been requested and therefore has a higher likelihood of being requested again; the alternative is to fill up the cache with data that is potentially never read. The disadvantage of lazy loading is the cache-miss penalty on the first read, when the data must be retrieved from the table rather than directly from the cache.

Write-through cache
Similar to a read-through cache, a write-through cache also sits in-line with the database and updates the cache as data is written to the underlying data store. DAX is also a write-through cache because it caches (or updates) items with PutItem, UpdateItem, DeleteItem, and BatchWriteItem API calls as the data is written to or updated in DynamoDB. DAX is updated only if DynamoDB is successfully updated first (all of this is transparent to the application).

The following steps outline the process for a write-through cache (again with a conceptual sketch after the list):

  1. For a given key-value pair, the application writes to the DAX endpoint.
  2. DAX intercepts the write and then writes the key-value pair to DynamoDB.
  3. Upon a successful write, DAX hydrates the DAX cache with the new value so that any subsequent reads for the same key-value pair result in a cache hit. If the write is unsuccessful, the exception is returned to the application.
  4. The acknowledgement of a successful write is then returned to the application.
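
Again as a conceptual sketch rather than DAX’s actual internals, the write-through flow orders the cache update strictly after the database write. The `key_attrs` parameter is a hypothetical way to derive the cache key from the item.

```python
def write_through_put(cache, table, item, key_attrs=('title', 'year')):
    # Step 2: write to DynamoDB first. On failure, put_item raises, the
    # exception propagates to the application, and the cache is untouched.
    table.put_item(Item=item)

    # Step 3: hydrate the cache only after the write succeeds, so subsequent
    # reads of the same key are cache hits.
    cache_key = tuple(sorted((k, item[k]) for k in key_attrs))
    cache[cache_key] = item

    # Step 4: acknowledge the successful write to the application.
    return True
```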

Write-through caches are advantageous, especially in conjunction with a read-through cache. They greatly simplify the use of caches: you no longer need to write or test cache population or invalidation logic. Because a write-through cache automatically caches each update, it introduces a slight amount of latency compared to writing directly to the underlying data store. The advantage is that the data written is consistent with the underlying data store and is immediately available for reads. Like a read-through cache, a write-through cache benefits read-heavy workloads and doesn’t help with latency or throughput for write-heavy workloads.

Some workloads in the IoT or ad tech space have a considerable amount of data that is written once and never read. In these scenarios, it often doesn’t make sense to use a write-through cache. If the cache is populated with data that is never read, utilization and the cache hit/miss ratio stay low, reducing the utility of the cache. To work around this issue (pun intended), you can employ a write-around pattern in which writes go directly to DynamoDB. Only the data that is read, and thus has a higher potential to be read again, is cached.
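
A short sketch of write-around, assuming a plain DynamoDB client for writes and a DAX client (as shown earlier) for reads; the table and key are placeholders:

```python
def put_event(dynamodb_client, item):
    # Writes bypass the cache entirely, so write-once data is never cached.
    dynamodb_client.put_item(TableName='Events', Item=item)

def get_event(dax_client, key):
    # Reads still go through DAX, so only data that is actually read (and is
    # therefore more likely to be read again) ends up in the cache.
    return dax_client.get_item(TableName='Events', Key=key)
```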

Write-back (or write-behind) cache
Whereas both read-through and write-through caches address read-heavy workloads, a write-back (or write-behind) cache is designed to address write-heavy workloads. Note that DAX is not currently a write-back cache; we’ve included this section for completeness. In this scenario, items are written to the cache and then asynchronously de-staged to the underlying data store.

If DAX were to function as a write-back cache, the process would be as follows (a conceptual sketch follows the list):

  1. The item is written to the cache.
  2. The item is acknowledged by the cache, and success is returned to the application.
  3. As a background process, items are de-staged and written to DynamoDB.
  4. DynamoDB acknowledges the write back to the cache.
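
To contrast with the flows above, here is a conceptual write-back sketch. DAX does not behave this way; the in-process queue merely stands in for a cache’s internal de-staging machinery.

```python
import queue

pending_writes = queue.Queue()

def write_back_put(cache, cache_key, item):
    cache[cache_key] = item   # step 1: write to the cache only
    pending_writes.put(item)  # hand the item off for asynchronous de-staging
    return True               # step 2: acknowledge before DynamoDB has the item

def destage_loop(table):
    # Steps 3 and 4: a background worker drains the queue into DynamoDB.
    # Items still queued when the cache fails are lost, which is the
    # data-loss risk discussed below.
    while True:
        item = pending_writes.get()
        table.put_item(Item=item)
        pending_writes.task_done()
```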

The main drawback of write-back caches is the risk of data loss: because writes are acknowledged before they reach the underlying data store, items can be lost if the cache fails first. For use cases where data loss is unacceptable, you can typically mitigate this risk by increasing the number of replicas in the cache.

Cache eviction
DAX handles cache evictions in three different ways. First, it uses a Time-to-Live (TTL) value that denotes the absolute period of time that an item is available in the cache. Second, when the cache is full, a DAX cluster uses a Least Recently Used (LRU) algorithm to decide which items to evict. Third, with the write-through functionality, DAX evicts older values as new values are written through DAX. This helps keep the DAX item cache consistent with the underlying data store using a single API call.
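
DAX’s eviction internals aren’t public, but the first two mechanisms are easy to illustrate with a toy cache that combines a TTL check with LRU ordering. The capacity and TTL here are whatever you configure, not DAX defaults.

```python
import time
from collections import OrderedDict

class LruTtlCache:
    def __init__(self, capacity, ttl_seconds):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self.items = OrderedDict()  # key -> (expiry_time, value)

    def get(self, key):
        entry = self.items.get(key)
        if entry is None:
            return None
        expires, value = entry
        if time.time() > expires:  # TTL eviction: the item has aged out
            del self.items[key]
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return value

    def put(self, key, value):
        # A new value simply replaces the older one, as with write-through.
        self.items[key] = (time.time() + self.ttl, value)
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:  # LRU eviction when full
            self.items.popitem(last=False)   # drop the least recently used
```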

Summary
As a read-through/write-through cache, DAX greatly simplifies the process of adding in-memory acceleration for read-heavy workloads to existing or new DynamoDB applications. If you require response times in microseconds or have unpredictable spikes in your DynamoDB requests for repeat reads, DAX can make your life a lot simpler. DAX is now generally available. For more information, see Amazon DynamoDB Accelerator (DAX).