AWS Database Blog

Measuring database performance of Amazon MemoryDB for Redis

Contributed by Jean Guyader, Sr. Software Engineering Manager and Kevin McGehee, Principal Software Engineer.

Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service that delivers ultra-fast performance. It’s compatible with Redis, a popular open-source data store, which enables you to quickly build applications using the same flexible and friendly Redis data structures, APIs, and commands you’re familiar with. With MemoryDB, all your data is stored in memory, which enables you to achieve low latency and high throughput. In addition to storing your entire dataset in memory, MemoryDB uses a distributed transactional log to provide Multi-AZ durability, consistency, and recoverability.

You can use MemoryDB to durably store user session information, chat and message queues, streaming IoT data, gaming leaderboards, and more. For example, customers building microservices architectures across industries like media and entertainment, IoT, and web and mobile are using MemoryDB as a primary database to durably store messages between microservices for ultra-fast processing. Customers in the banking and finance industries have replaced their existing cache and database setups with MemoryDB to store payment processing data.
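For instance, the gaming leaderboard use case maps naturally onto a Redis sorted set, which MemoryDB supports like any other Redis data structure. A minimal sketch with redis-cli (the endpoint, key, and member names below are placeholders, not from this post):

```shell
# Hypothetical cluster endpoint -- substitute your own.
ENDPOINT=clustercfg.my-memorydb.xxxxxx.memorydb.us-east-1.amazonaws.com

# Record scores; ZADD inserts a member or updates its score.
redis-cli -h "$ENDPOINT" --tls ZADD leaderboard 1500 "player:alice"
redis-cli -h "$ENDPOINT" --tls ZADD leaderboard 1720 "player:bob"

# Fetch the top three players, highest score first.
redis-cli -h "$ENDPOINT" --tls ZREVRANGE leaderboard 0 2 WITHSCORES
```

Because the sorted set is kept in memory and replicated through the transactional log, reads like ZREVRANGE stay fast while writes remain durable.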

In this post, you learn about the performance (latency and throughput) of MemoryDB running on the latest generation of ARM-based R6g instances so that you can use MemoryDB to build high-performance applications.

Performance analysis

We launched MemoryDB clusters using a variety of node types to measure the performance of different workloads. Each cluster consisted of one primary node with one read replica and was pre-populated with sample data prior to the test runs. We ran the Redis default performance measurement tool (redis-benchmark) with 3 million keys, without any command pipelining, and used eight Amazon Elastic Compute Cloud (Amazon EC2) instances in the same Availability Zone as the primary node to direct traffic to the MemoryDB clusters. The two traffic profiles used below are representative of common customer workload patterns. Because a MemoryDB cluster is a Multi-AZ distributed system, you may observe some level of variance from the numbers in the tables below in identical setups.
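As a rough sketch, a comparable redis-benchmark invocation using the parameters described above might look like the following (the endpoint is a placeholder; the exact flags we used are not published in this post):

```shell
# Hypothetical cluster endpoint -- substitute your own.
ENDPOINT=clustercfg.my-memorydb.xxxxxx.memorydb.us-east-1.amazonaws.com

# 100 client connections (-c), 512-byte values (-d), keys drawn from a
# 3 million keyspace (-r), no command pipelining (-P 1), TLS enabled
# (MemoryDB encrypts data in transit by default).
redis-benchmark -h "$ENDPOINT" -p 6379 --tls \
  -c 100 -d 512 -r 3000000 -P 1 \
  -t set,get
```

Varying -c (for example, 200 clients) and -d (for example, 100-byte values) reproduces the two traffic profiles compared in the tables that follow.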

With 100 client connections and 512-byte values, we saw microsecond read latencies and single-digit millisecond write latencies across all node types. Because MemoryDB provides Multi-AZ durability using a distributed transactional log, write latency is expected to be higher than read latency. Latencies increase as we move from read-only and mixed workloads to write-only workloads, and throughput follows the opposite trend, decreasing as latency rises. The following table summarizes our findings.

Workload type               | R6g node type | Throughput (requests per second) | Latency p50 (ms) | Latency p90 (ms)
Read only                   | 16xlarge      | 322,274                          | 0.3              | 0.4
                            | 12xlarge      | 314,979                          | 0.3              | 0.4
                            | 8xlarge       | 301,615                          | 0.3              | 0.4
                            | 4xlarge       | 300,497                          | 0.3              | 0.4
                            | 2xlarge       | 301,757                          | 0.3              | 0.4
                            | xlarge        | 161,569                          | 0.6              | 0.7
                            | large         | 156,518                          | 0.6              | 0.8
Write only                  | 16xlarge      | 24,892                           | 4.1              | 4.6
                            | 12xlarge      | 29,487                           | 3.4              | 3.8
                            | 8xlarge       | 27,263                           | 3.7              | 4.2
                            | 4xlarge       | 28,499                           | 3.5              | 4.1
                            | 2xlarge       | 22,313                           | 4.5              | 5.1
                            | xlarge        | 25,695                           | 3.9              | 4.7
                            | large         | 22,178                           | 4.5              | 5.8
Mixed (80% read, 20% write) | 16xlarge      | 108,373                          | 0.3              | 3.7
                            | 12xlarge      | 115,311                          | 0.2              | 3.4
                            | 8xlarge       | 99,105                           | 0.2              | 4.2
                            | 4xlarge       | 113,632                          | 0.2              | 3.5
                            | 2xlarge       | 89,313                           | 0.3              | 4.4
                            | xlarge        | 84,479                           | 0.4              | 4.1
                            | large         | 75,822                           | 0.4              | 5.0

With 200 client connections and 100-byte values, we see higher throughput with minimal impact on latency. For read-only workloads, throughput increases only slightly (up to 10%) as we approach the Redis requests-per-second limit for a single instance. For mixed and write-only workloads, we observe substantially higher throughput (up to 111% for mixed and up to 143% for write-only) because the larger number of clients allows higher concurrency against the backend Multi-AZ distributed transactional log. The following table summarizes these findings.

Workload type               | R6g node type | Throughput (requests per second) | Latency p50 (ms) | Latency p90 (ms)
Read only                   | 16xlarge      | 350,953                          | 0.5              | 0.7
                            | 12xlarge      | 345,655                          | 0.5              | 0.7
                            | 8xlarge       | 302,305                          | 0.6              | 0.8
                            | 4xlarge       | 325,719                          | 0.6              | 0.8
                            | 2xlarge       | 324,710                          | 0.6              | 0.8
                            | xlarge        | 163,025                          | 1.2              | 1.4
                            | large         | 160,534                          | 1.2              | 1.4
Write only                  | 16xlarge      | 46,560                           | 4.2              | 4.6
                            | 12xlarge      | 54,879                           | 3.5              | 4.0
                            | 8xlarge       | 47,495                           | 4.1              | 4.5
                            | 4xlarge       | 52,216                           | 3.7              | 4.1
                            | 2xlarge       | 54,213                           | 3.6              | 4.1
                            | xlarge        | 45,646                           | 4.3              | 5.3
                            | large         | 45,284                           | 4.2              | 5.3
Mixed (80% read, 20% write) | 16xlarge      | 190,628                          | 0.3              | 4.0
                            | 12xlarge      | 211,098                          | 0.4              | 3.2
                            | 8xlarge       | 175,788                          | 0.4              | 4.2
                            | 4xlarge       | 188,390                          | 0.4              | 3.8
                            | 2xlarge       | 188,543                          | 0.4              | 3.7
                            | xlarge        | 116,908                          | 1.0              | 4.7
                            | large         | 99,204                           | 1.0              | 5.7

Based on these results, MemoryDB delivers low latency (microsecond reads and single-digit millisecond writes) and high throughput across a variety of node types and workloads.

Summary

We're excited for you to build high-performance applications with MemoryDB. To get started, you can create a MemoryDB cluster in minutes through the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs. To migrate to MemoryDB, you can use the snapshot and restore capability, or use AWS Database Migration Service (AWS DMS) to move data from any AWS DMS-supported source to MemoryDB as a target with minimal downtime. To learn more, refer to the MemoryDB documentation. If you have any questions or feedback, reach out to us at memorydb-help@amazon.com.
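As one way to get started, a minimal AWS CLI call to create a cluster might look like the following sketch. All names and values here are placeholders (the subnet group and ACL must already exist in your account); check the aws memorydb create-cluster reference for the full set of required options.

```shell
# Create a single-shard MemoryDB cluster with one replica,
# matching the one-primary/one-replica topology used in this post.
aws memorydb create-cluster \
  --cluster-name my-memorydb-cluster \
  --node-type db.r6g.large \
  --num-shards 1 \
  --num-replicas-per-shard 1 \
  --subnet-group-name my-subnet-group \
  --acl-name open-access
```

Cluster creation is asynchronous; you can poll aws memorydb describe-clusters until the cluster status becomes available before connecting.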


About the author

Karthik Konaparthi is a Senior Product Manager on the Amazon In-Memory Databases team, based in Seattle, WA. He is passionate about all things data and spends his time working with customers to understand their requirements and build exceptional products. In his spare time, he enjoys traveling to new places and spending time with his family.

Vijay Michael Joseph is a Senior Software Engineer on the Amazon In-Memory Databases team, based in Vancouver, BC. He has spent a large part of his career optimizing systems, from device drivers to video game engines to in-memory databases. Outside of work, he loves tinkering in his little sandbox of Raspberry Pis, ESP32s, and Arduinos.