Amazon MemoryDB for Redis Features
Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service that delivers ultra-fast performance. It is purpose-built for modern applications with microservices architectures.
Amazon MemoryDB is compatible with Redis, a popular open source data store, enabling customers to quickly build applications using the same flexible and friendly Redis data structures, APIs, and commands that they already use today. With Amazon MemoryDB, all of your data is stored in memory, which enables you to achieve microsecond read and single-digit millisecond write latency and high throughput. Amazon MemoryDB also stores data durably across multiple Availability Zones (AZs) using a distributed transactional log to enable fast failover, database recovery, and node restarts. Delivering both in-memory performance and Multi-AZ durability, Amazon MemoryDB can be used as a high-performance primary database for your microservices applications, eliminating the need to separately manage both a cache and durable database.
Redis is a fast, open source, in-memory, key-value data store. Developers use Redis to achieve sub-millisecond response times, enabling millions of requests per second for real-time applications in industries like gaming, ad-tech, financial services, healthcare, and IoT. In 2021, Redis was named Stack Overflow’s “most loved database” for the fifth consecutive year.
Redis offers flexible APIs, commands, and data structures like streams, sets, and lists to build agile and versatile applications. MemoryDB maintains compatibility with open source Redis and supports the same set of Redis data types, parameters, and commands that you are familiar with. This means that the code, applications, drivers, and tools you already use today with Redis can be used with MemoryDB so you can quickly build applications.
MemoryDB stores your entire dataset in memory to deliver microsecond read latency, single-digit millisecond write latency, and high throughput. It can handle more than 13 trillion requests per day and support peaks of 160 million requests per second.
Developers building with microservices architectures require ultra-high performance, as these applications can involve interactions with many service components per user interaction or API call. With MemoryDB, you get the extremely low latency needed to deliver real-time performance to end users.
Amazon MemoryDB for Redis includes Enhanced IO Multiplexing, which delivers significant improvements to throughput and latency at scale. Enhanced IO Multiplexing is ideal for throughput-bound workloads with multiple client connections, and its benefits scale with the level of workload concurrency. As an example, when using an r6g.4xlarge node and running 5,200 concurrent clients, you can achieve up to 46% higher throughput (read and write operations per second) and up to 21% lower P99 latency, compared with MemoryDB for Redis 6. For these types of workloads, a node's network IO processing can become a limiting factor in the ability to scale. With Enhanced IO Multiplexing, each dedicated network IO thread pipelines commands from multiple clients into the Redis engine, taking advantage of Redis' ability to efficiently process commands in batches, as illustrated in the following diagram:
Enhanced IO Multiplexing is automatically available when using Redis 7, at no additional cost. No application or service configuration changes are required to use MemoryDB for Redis Enhanced IO Multiplexing.
For more information, see the documentation.
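The batching effect described above can be sketched as a toy round-robin drain: one IO worker pulls pending commands from several client connections and hands them to the engine as a single batch. This is an illustrative model only, not MemoryDB's actual implementation.

```python
from collections import deque

def drain_round_robin(client_queues, batch_size):
    """Merge pending commands from several client connections into one
    batch for the engine, round-robin across clients. A toy model of
    how a dedicated network IO thread can pipeline commands from many
    clients into Redis at once; not MemoryDB's actual implementation."""
    queues = deque(q for q in client_queues if q)
    batch = []
    while queues and len(batch) < batch_size:
        q = queues.popleft()
        batch.append(q.popleft())
        if q:                      # client still has pending commands
            queues.append(q)       # keep it in the rotation
    return batch

# Three clients, each with queued commands
clients = [deque(["GET a", "GET b"]),
           deque(["SET x 1"]),
           deque(["INCR n", "GET n"])]
batch = drain_round_robin(clients, batch_size=4)
```

Because the worker interleaves commands from all clients, a single batch amortizes per-syscall and per-dispatch overhead across connections, which is why the benefit grows with workload concurrency.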
In addition to storing your entire data set in memory, MemoryDB uses a distributed transactional log to provide data durability, consistency, and recoverability. MemoryDB stores data across multiple AZs so you can achieve fast database recovery and restart. You can use MemoryDB as a single, primary database service for your workloads requiring low-latency and high throughput instead of separately managing a cache for speed and an additional relational or nonrelational database for reliability.
You can scale your MemoryDB cluster to meet fluctuating application demands: horizontally by adding or removing nodes, or vertically by moving to larger or smaller node types. MemoryDB supports write scaling with sharding and read scaling by adding replicas. Your cluster continues to stay online and support read and write operations during resizing operations.
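To make the sharding model concrete: Redis Cluster, with which MemoryDB is compatible, places each key into one of 16,384 hash slots using a CRC16 checksum of the key, and hash tags (the part of a key inside `{...}`) let related keys land on the same shard. The sketch below computes slots the way the Redis Cluster specification describes; it is illustrative and not MemoryDB service code.

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16,384 hash slots, honoring {hash tags}:
    if the key contains a non-empty {...} section, only that section
    is hashed, so related keys can be co-located on one shard."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

slot_a = key_slot("user:{42}:profile")
slot_b = key_slot("user:{42}:cart")   # same hash tag, same slot, same shard
```

When you add or remove shards, slots (and the keys in them) are redistributed across nodes, which is how horizontal write scaling works without changing application code.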
Easy to use
Getting started with MemoryDB is easy. Just launch a new MemoryDB cluster using the AWS Management Console, AWS CLI, or SDK. MemoryDB database instances are pre-configured with parameters and settings appropriate for the node type you select. You can launch a cluster and connect your application within minutes without additional configuration.
Monitoring and metrics
MemoryDB provides Amazon CloudWatch metrics for your database instances. You can use the AWS Management Console to view over 35 key operational metrics for your cluster, including compute, memory, storage, throughput, active connections, and more.
Automatic software patching
MemoryDB automatically keeps your clusters up to date with new software patches, and you can easily upgrade your clusters to the latest versions of Redis.
MemoryDB runs in Amazon VPC, which allows you to isolate your database in your own virtual network and connect to your on-premises IT infrastructure using industry-standard, encrypted IPsec VPNs. In addition, using MemoryDB’s VPC configuration, you can configure firewall settings and control network access to your database instances.
With MemoryDB, data at rest is encrypted using keys you create and control through AWS Key Management Service (KMS). Clusters created with AWS Graviton2 node types also include always-on 256-bit DRAM encryption. MemoryDB supports encryption in transit using Transport Layer Security (TLS).
Using the AWS Identity and Access Management (IAM) features integrated with Amazon MemoryDB, you can control the actions that your IAM users and groups can take on Amazon MemoryDB resources. For example, you can configure your IAM policies to help ensure that certain users have read-only access, while an administrator can create, modify, and delete resources. For more information about API-level permissions, refer to Using AWS IAM Policies for Amazon MemoryDB.
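As an illustration of the read-only pattern described above, a policy could allow only the describe and list actions. The action names below follow the `memorydb:` service prefix but are an illustrative sketch; check the exact action list in the IAM reference for MemoryDB before use.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MemoryDBReadOnly",
      "Effect": "Allow",
      "Action": ["memorydb:Describe*", "memorydb:List*"],
      "Resource": "*"
    }
  ]
}
```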
Authentication and authorization
MemoryDB uses Redis Access Control Lists (ACLs) to control both authentication and authorization for your cluster. ACLs enable you to define different permissions for different users in the same cluster.
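The two halves of an ACL rule set can be modeled as a key-pattern allow list (the `~pattern` rules) plus a command allow list (the `+command` rules). The toy checker below illustrates that semantics only; it is not the engine's ACL parser, and the user and rules shown are hypothetical examples.

```python
from fnmatch import fnmatchcase

def make_user(key_patterns, allowed_commands):
    """Toy model of a Redis ACL user: glob key patterns (the ~pattern
    rules) plus an allow list of commands (the +command rules).
    Illustrative only; not the real ACL implementation."""
    allowed = {c.lower() for c in allowed_commands}
    def can_run(command, key):
        return command.lower() in allowed and any(
            fnmatchcase(key, p) for p in key_patterns)
    return can_run

# A hypothetical read-only user confined to its own keyspace, roughly
# what ACL SETUSER reporting on ~app:* +get would express
reporting = make_user(["app:*"], ["GET"])
```

A rule set like this lets two applications share one cluster while each user can touch only its own commands and keys.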
Integration with Kubernetes
AWS Controllers for Kubernetes (ACK) for Amazon MemoryDB enables you to define and use MemoryDB resources directly from your Kubernetes cluster. This lets you take advantage of MemoryDB to support your Kubernetes applications without needing to define MemoryDB resources outside of the cluster or run and manage in-memory database capabilities within the cluster. You can download the MemoryDB ACK container image from Amazon ECR and refer to the documentation for installation guidance. You can also visit the blog for more detailed information.
Note: ACK for Amazon MemoryDB is now generally available. Send us your feedback on our GitHub page.
Amazon MemoryDB for Redis enables machine learning (ML) and generative artificial intelligence (AI) models to work with data stored in Amazon MemoryDB in real time, without moving your data. With Amazon MemoryDB, you can store, search, index, and query vector embeddings within Redis data structures.
Vectors are numerical representations of unstructured data, such as text, images, and video, created from ML models that help capture the semantic meaning of the underlying data. You can store vector embeddings from ML and AI models, such as those from Amazon Bedrock and Amazon SageMaker in your Amazon MemoryDB database. Read our documentation to learn more about vector search on Amazon MemoryDB.
With the preview of vector search for MemoryDB, you can store millions of vector embeddings and perform tens of thousands of queries per second (QPS) at greater than 99% recall with single-digit millisecond vector search and update latencies.
Vector search for MemoryDB is suited to use cases where peak performance and scale are the most important selection criteria. You can use vector search to power real-time ML and generative AI applications in use cases such as retrieval-augmented generation (RAG) for chatbots, fraud detection, real-time recommendations, and document retrieval.
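At its core, a vector similarity query scores stored embeddings against a query vector and returns the closest matches. The sketch below does this exactly, by brute-force cosine similarity; a real vector index answers the same question approximately, trading a little recall for large speedups at scale (hence the "greater than 99% recall" figure above). The document names and vectors are made-up illustrations.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, embeddings, k=2):
    """Exact k-nearest-neighbor search by cosine similarity.
    Illustrative brute force; a production index is approximate."""
    scored = sorted(embeddings.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Hypothetical document embeddings (in practice, produced by an ML model)
docs = {
    "doc:refund":   [0.9, 0.1, 0.0],
    "doc:shipping": [0.1, 0.9, 0.1],
    "doc:returns":  [0.8, 0.2, 0.1],
}
result = top_k([1.0, 0.0, 0.0], docs, k=2)  # two docs nearest the query
```

In a RAG pipeline, the returned document identifiers are then fetched and supplied to the model as grounding context.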
MemoryDB offers data tiering as a lower cost way to scale your clusters up to hundreds of terabytes of capacity. Data tiering provides a price-performance option for MemoryDB by utilizing lower-cost solid state drives (SSDs) in each cluster node in addition to storing data in memory. It is ideal for workloads that access up to 20% of their overall dataset regularly, and for applications that can tolerate additional latency when accessing data on SSDs.
When using clusters with data tiering, MemoryDB is designed to automatically and transparently move the least recently used items from memory to locally attached NVMe SSDs when available memory capacity is consumed. When you access an item stored on SSD, MemoryDB moves it back to memory before serving the request. MemoryDB data tiering is available on Graviton2-based R6gd nodes. R6gd nodes have nearly 5x more total capacity (memory + SSD) and can help you achieve over 60% storage cost savings when running at maximum utilization compared to R6g nodes (memory only). Assuming 500-byte String values, you can typically expect an additional 450µs latency for read requests to data stored on SSD compared to read requests to data in memory.
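The spill-and-promote behavior described above can be modeled as an LRU cache backed by a slower tier: when the hot tier is full, the least recently used item moves down, and reading it moves it back up. This is a simplified toy model with made-up capacities, not MemoryDB's tiering engine.

```python
from collections import OrderedDict

class TieredStore:
    """Toy model of data tiering: when memory is full, the least
    recently used item spills to a slower tier; reading it promotes
    it back to memory. Illustrative only."""
    def __init__(self, memory_slots):
        self.memory_slots = memory_slots
        self.memory = OrderedDict()   # hot tier, kept in LRU order
        self.ssd = {}                 # cold tier

    def put(self, key, value):
        self.memory[key] = value
        self.memory.move_to_end(key)
        if len(self.memory) > self.memory_slots:
            cold_key, cold_val = self.memory.popitem(last=False)
            self.ssd[cold_key] = cold_val   # spill LRU item

    def get(self, key):
        if key in self.memory:
            self.memory.move_to_end(key)    # refresh recency
            return self.memory[key]
        value = self.ssd.pop(key)           # slower read from cold tier,
        self.put(key, value)                # then promote back to memory
        return value

store = TieredStore(memory_slots=2)
store.put("a", 1); store.put("b", 2); store.put("c", 3)  # "a" spills
```

The extra read latency for tiered data in the paragraph above corresponds to the cold-tier path in `get`: the item is fetched from SSD and moved back into memory before the request is served.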
MemoryDB offers reserved nodes that allow you to save up to 55% over on-demand node prices in exchange for a usage commitment over a one- or three-year term. Reserved nodes are complementary to MemoryDB on-demand nodes and give businesses flexibility to help reduce costs. MemoryDB provides three reserved node payment options — No Upfront, Partial Upfront, and All Upfront — that enable you to balance the amount you pay upfront with your effective hourly price.
MemoryDB reserved nodes offer size flexibility within a node family and AWS Region. This means that the discounted reserved node rate is applied automatically to usage of all sizes in the same node family. Size flexibility reduces the time you need to spend managing your reserved nodes, and because you're no longer tied to a specific database node size, you can get the most out of your discount even as your database needs change.