AWS Database Blog

Category: Amazon MemoryDB

Monitor server-side latency for Amazon MemoryDB for Valkey

Amazon MemoryDB is a Valkey- and Redis OSS-compatible, durable, in-memory database service that delivers ultra-fast performance. With MemoryDB, data is stored in memory with Multi-AZ durability, which enables you to achieve microsecond read latency, single-digit millisecond write latency, and high throughput. MemoryDB is often used for building durable microservices and latency-sensitive database workloads such as […]
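As a quick companion to that post, here is a minimal sketch of pulling server-side latency datapoints for a MemoryDB cluster from Amazon CloudWatch with boto3. The metric name and the ClusterName dimension used here are assumptions; verify them against the metrics your cluster actually publishes in the AWS/MemoryDB namespace. The cluster name and Region are placeholders.

```python
# Minimal sketch: query CloudWatch for MemoryDB server-side latency datapoints.
# The metric name and dimension below are assumptions -- confirm them against
# the metrics your cluster publishes in the AWS/MemoryDB namespace.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # adjust Region

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/MemoryDB",                    # MemoryDB metric namespace
    MetricName="SuccessfulReadRequestLatency",   # assumed metric name
    Dimensions=[{"Name": "ClusterName", "Value": "my-valkey-cluster"}],  # placeholder cluster
    StartTime=start,
    EndTime=end,
    Period=300,                                  # 5-minute datapoints
    Statistics=["Average", "Maximum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```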

Get started with Amazon MemoryDB for Valkey

Today, Amazon MemoryDB announces support for Valkey version 7.2, with instance-hour pricing 30% lower than Amazon MemoryDB for Redis OSS. With MemoryDB for Valkey, there is no charge for the first 10 TB of data written per month; data written beyond 10 TB is billed at $0.04/GB. Valkey is an […]
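As a rough, unofficial illustration of the data-written pricing quoted above, the following sketch estimates the monthly charge for data written: the first 10 TB per month is free, and anything beyond that is billed at $0.04/GB. The TB-to-GB conversion and the example figures are assumptions for illustration only; consult the MemoryDB pricing page for the authoritative calculation.

```python
# Rough illustration of the data-written pricing quoted above; not an official
# AWS billing calculation (instance hours and other charges are billed separately).
FREE_TIER_GB = 10_000   # 10 TB free per month, treating 1 TB as 1,000 GB (assumption)
RATE_PER_GB = 0.04      # $0.04 per GB written beyond the free tier

def estimate_data_written_charge(gb_written_per_month: float) -> float:
    """Return the estimated monthly charge for data written, in USD."""
    billable_gb = max(0.0, gb_written_per_month - FREE_TIER_GB)
    return billable_gb * RATE_PER_GB

print(estimate_data_written_charge(8_000))    # within the free tier -> 0.0
print(estimate_data_written_charge(15_000))   # 5,000 GB billable -> 200.0
```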

Amazon ElastiCache and Amazon MemoryDB announce support for Valkey

As of October 8, 2024, we’ve added support for Valkey 7.2 on Amazon ElastiCache and Amazon MemoryDB, our fully managed in-memory services. In this post, we discuss AWS contributions to Valkey, the AWS commitment to making Valkey more accessible for ElastiCache and MemoryDB customers, and how customers can start using it in their applications.

Improve speed and reduce cost for generative AI workloads with a persistent semantic cache in Amazon MemoryDB

In this post, we present the concepts needed to use a persistent semantic cache in MemoryDB with Knowledge Bases for Amazon Bedrock, and the steps to create a chatbot application that uses the cache. We use MemoryDB as the caching layer for this use case because it delivers the fastest vector search performance at the highest recall rates among popular vector databases on AWS. We use Knowledge Bases for Amazon Bedrock as a vector database because it implements and maintains the RAG functionality for our application without the need to write additional code.
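To make the semantic-cache idea concrete, here is a minimal sketch (not the post’s implementation) of the lookup-then-populate flow against MemoryDB using redis-py and the Knowledge Bases for Amazon Bedrock RetrieveAndGenerate API. The index name, key prefix, field names, and distance threshold are illustrative; the question embedding is assumed to be computed elsewhere (for example with an Amazon Bedrock embeddings model); and a vector index over the cache:* hashes is assumed to already exist (created with FT.CREATE).

```python
# Minimal semantic-cache sketch for MemoryDB. Index name "idx:semantic-cache",
# key prefix "cache:", field names, and the distance threshold are illustrative.
# Assumes a COSINE vector index over hashes with the "cache:" prefix was created
# ahead of time with FT.CREATE, and that the embedding is computed elsewhere.
import hashlib

import boto3
import numpy as np
import redis
from redis.commands.search.query import Query

r = redis.Redis(host="my-memorydb-endpoint", port=6379, ssl=True)  # placeholder endpoint
bedrock_agent = boto3.client("bedrock-agent-runtime")

def answer_with_semantic_cache(question, embedding, kb_id, model_arn, max_distance=0.2):
    vec = np.asarray(embedding, dtype=np.float32).tobytes()

    # 1. Look for a semantically similar question already in the cache.
    query = (
        Query("*=>[KNN 1 @embedding $vec AS distance]")
        .return_fields("answer", "distance")
        .dialect(2)
    )
    hits = r.ft("idx:semantic-cache").search(query, query_params={"vec": vec})
    if hits.docs and float(hits.docs[0].distance) <= max_distance:
        return hits.docs[0].answer                      # cache hit

    # 2. Cache miss: ask Knowledge Bases for Amazon Bedrock (RetrieveAndGenerate).
    response = bedrock_agent.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    )
    answer = response["output"]["text"]

    # 3. Store the new question/answer pair so future similar questions hit the cache.
    key = "cache:" + hashlib.sha256(question.encode()).hexdigest()
    r.hset(key, mapping={"embedding": vec, "answer": answer})
    return answer
```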

Power real-time vector search capabilities with Amazon MemoryDB

In today’s rapidly advancing world of generative artificial intelligence (AI), businesses across diverse industries are transforming customer experiences through the power of real-time search. By harnessing the untapped potential of unstructured data, ranging from text to images and videos, organizations can redefine the standards of engagement and personalization. A key component of this […]
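For readers who want to see what real-time vector search on MemoryDB looks like in code, here is a minimal sketch using redis-py, assuming the RediSearch-style FT.CREATE and FT.SEARCH commands that MemoryDB vector search exposes. The index name, key prefix, field names, and embedding dimension are illustrative.

```python
# Minimal sketch: create a vector index in MemoryDB and run a KNN query.
# Index name, key prefix, field names, and embedding dimension are illustrative.
import numpy as np
import redis
from redis.commands.search.query import Query

r = redis.Redis(host="my-memorydb-endpoint", port=6379, ssl=True)  # placeholder endpoint

# Create an HNSW vector index over hashes with the "doc:" key prefix.
r.execute_command(
    "FT.CREATE", "idx:docs",
    "ON", "HASH",
    "PREFIX", "1", "doc:",
    "SCHEMA",
    "embedding", "VECTOR", "HNSW", "6",
    "TYPE", "FLOAT32",
    "DIM", "1536",                 # embedding size is an assumption
    "DISTANCE_METRIC", "COSINE",
)

# Insert a document with its embedding stored as raw float32 bytes.
vec = np.random.rand(1536).astype(np.float32)
r.hset("doc:1", mapping={"title": "hello", "embedding": vec.tobytes()})

# Run a 5-nearest-neighbor query against the index.
query = (
    Query("*=>[KNN 5 @embedding $vec AS score]")
    .return_fields("title", "score")
    .dialect(2)
)
results = r.ft("idx:docs").search(query, query_params={"vec": vec.tobytes()})
for doc in results.docs:
    print(doc.id, doc.title, doc.score)
```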

Key considerations when choosing a database for your generative AI applications

In this post, we explore the key factors to consider when selecting a database for your generative AI applications. We focus on high-level considerations and service characteristics that are relevant to fully managed databases with vector search capabilities currently available on AWS. We examine how these databases differ in terms of their behavior and performance, and provide guidance on how to make an informed decision based on your specific requirements.

Solutions for building modern applications with Amazon ElastiCache and Amazon MemoryDB for Redis

In-memory databases are ideal for applications that require microsecond response times and high throughput, such as caching, gaming, session stores, geo-spatial services, queuing, real-time data analytics, and feature stores for machine learning (ML). In this In-Memory Database Bluebook, we provide you with a list of Amazon ElastiCache and Amazon MemoryDB for Redis code samples and […]
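As a taste of the caching samples referenced above, here is a minimal cache-aside sketch that reads from the in-memory cache first and falls back to the primary database on a miss. The endpoint, table, and key names are illustrative, and a psycopg2-style database connection is assumed.

```python
# Minimal cache-aside sketch: check the in-memory cache first, fall back to the
# primary database on a miss, then populate the cache with a TTL.
# Endpoint, table, and key names are illustrative; db_conn is assumed to be a
# psycopg2-style connection.
import json

import redis

cache = redis.Redis(host="my-cache-endpoint", port=6379, ssl=True)  # placeholder endpoint

def get_user(user_id, db_conn, ttl_seconds=300):
    key = f"user:{user_id}"

    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit

    # Cache miss: read from the primary database.
    with db_conn.cursor() as cur:
        cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()

    if row is None:
        return None

    user = {"id": row[0], "name": row[1], "email": row[2]}
    cache.set(key, json.dumps(user), ex=ttl_seconds)   # populate the cache with a TTL
    return user
```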

The role of vector databases in generative AI applications

August 2024: This post has been updated to reflect advances in technology and new features AWS has released, to help you on your generative AI journey. Generative artificial intelligence (AI) has captured our imagination and is transforming industries with its ability to answer questions, write stories, create art, and generate code. AWS customers are increasingly asking […]

Migrate data from Amazon Aurora PostgreSQL to Amazon MemoryDB for Redis using AWS DMS

A common challenge customers face as their business grows is providing the same level of service to their end users. Most often, databases become bottlenecks as usage outgrows capacity. Caching strategies can help improve performance by offloading frequently used data to a cache like Redis, but this adds the overhead of keeping your cache up to date. […]