AWS Database Blog
Category: Amazon ElastiCache
Monitor server-side latency for Amazon ElastiCache for Valkey
Modern applications are often built as a collection of microservices, and the latency of a single component can affect the performance of the entire system. Monitoring latency is critical for maintaining optimal performance, enhancing user experience, and ensuring system reliability. In this post, we explore ways to monitor latency, detect anomalies, and troubleshoot high-latency issues effectively for your self-designed (node-based) ElastiCache clusters.
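As a minimal sketch of the kind of server-side check the post discusses, the following boto3 snippet reads a per-command latency metric for a node-based cluster from CloudWatch. The cluster ID and the choice of metric (GetTypeCmdsLatency, reported in microseconds) are illustrative assumptions, not the post's exact procedure.

```python
# Minimal sketch (assumed names): read a per-command latency metric for a
# node-based ElastiCache cluster from CloudWatch. "my-valkey-cluster" and the
# metric choice are illustrative placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ElastiCache",
    MetricName="GetTypeCmdsLatency",  # average latency of GET-type commands, in microseconds
    Dimensions=[{"Name": "CacheClusterId", "Value": "my-valkey-cluster"}],
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=300,                       # 5-minute buckets
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), "microseconds")
```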
From caching to real-time analytics: Essential use cases for Amazon ElastiCache for Valkey
Valkey is an open-source, distributed, in-memory key-value data store that offers high-performance data retrieval and storage capabilities, making it an ideal choice for building scalable, low-latency modern applications. Originating as a fork of Redis OSS following recent licensing changes, Valkey maintains full compatibility with its predecessor while providing developers with a high-performance alternative. Valkey […]
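The most common of these use cases is straightforward caching. The sketch below shows a cache-aside read using the redis-py client, which is protocol-compatible with Valkey; the endpoint, key naming, and load_from_database helper are hypothetical placeholders, not code from the post.

```python
# Illustrative cache-aside sketch against a Valkey endpoint using redis-py
# (protocol-compatible with Valkey). The endpoint and load_from_database()
# helper are hypothetical placeholders.
import json

import redis

cache = redis.Redis(host="my-cache.example.amazonaws.com", port=6379, ssl=True)

def load_from_database(user_id: str) -> dict:
    # Placeholder for the real database read.
    return {"id": user_id, "name": "example"}

def get_user(user_id: str) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)             # cache hit
    user = load_from_database(user_id)        # cache miss: read the source of truth
    cache.set(key, json.dumps(user), ex=300)  # populate with a 5-minute TTL
    return user
```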
Amazon ElastiCache version 8.0 for Valkey brings faster scaling and improved memory efficiency
Today, we are adding support for Valkey 8.0 on Amazon ElastiCache. ElastiCache version 8.0 for Valkey brings faster scaling for ElastiCache Serverless and memory optimizations for node-based clusters. In this post, we discuss these improvements and how you can benefit from them.
Use Amazon ElastiCache as a cache for Amazon Keyspaces (for Apache Cassandra)
In this post, we show you how to use Amazon ElastiCache as a write-through cache for an application that uses an Amazon Keyspaces (for Apache Cassandra) table to store data about book awards. We use a Cassandra Python client driver to access Amazon Keyspaces programmatically and a Redis client to connect to the ElastiCache cluster.
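A rough sketch of that write-through pattern might look like the following: write the record to the Amazon Keyspaces table first, then update the cache so reads stay consistent. The keyspace, table, column names, and endpoints are assumptions, and the TLS/SigV4 configuration the post sets up for Amazon Keyspaces is omitted here for brevity.

```python
# Rough write-through sketch (names and endpoints are hypothetical):
# write to the Amazon Keyspaces table first, then update the cache.
import json

import redis
from cassandra.cluster import Cluster

cassandra = Cluster(["cassandra.us-east-1.amazonaws.com"], port=9142)  # TLS/SigV4 setup omitted
session = cassandra.connect("catalog")                                 # hypothetical keyspace
cache = redis.Redis(host="my-cache.example.amazonaws.com", port=6379, ssl=True)

def save_award(award_id: str, record: dict) -> None:
    # 1. Write through to the system of record.
    session.execute(
        "INSERT INTO book_awards (award_id, data) VALUES (%s, %s)",
        (award_id, json.dumps(record)),
    )
    # 2. Keep the cache consistent with the write.
    cache.set(f"award:{award_id}", json.dumps(record), ex=3600)
```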
Optimize Amazon Aurora PostgreSQL auto scaling performance with automated cache pre-warming
When clients start running queries on new Amazon Aurora replicas, they notice longer runtimes for the first few executions of a query; this is due to the replica's cold cache. As the database runs more queries, the cache gets populated and clients see faster runtimes. In this post, we focus on how to address the cold cache so that clients connecting through a load-balanced endpoint get a consistent experience, regardless of whether the replicas are scaled automatically or manually. We also look at other caching solutions, such as Amazon ElastiCache, a fully managed caching service compatible with Memcached, Redis OSS, and Valkey, that can further improve the overall experience for latency-sensitive applications and, in some situations (such as higher cache hit rates), lead to less frequent auto scaling events for the Aurora read replicas.
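One common building block for warming a cold replica cache, though not necessarily the exact automation the post builds, is PostgreSQL's pg_prewarm extension, invoked here with psycopg2. The endpoint, credentials, and table name are placeholders, and the sketch assumes the extension was already created on the writer instance.

```python
# Sketch only: warm a replica's buffer cache with pg_prewarm via psycopg2.
# Endpoint, credentials, and table name are hypothetical; assumes the
# pg_prewarm extension already exists (created on the writer).
import psycopg2

conn = psycopg2.connect(
    host="my-aurora-replica.cluster-ro-example.us-east-1.rds.amazonaws.com",
    dbname="appdb",
    user="app",
    password="example-password",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT pg_prewarm('orders');")  # load the table's pages into shared buffers
    print(cur.fetchone()[0], "blocks loaded")
```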
Get started with Amazon ElastiCache for Valkey
Today, Amazon ElastiCache announces support for Valkey version 7.2, with Serverless priced 33% lower and self-designed (node-based) clusters priced 20% lower than other supported engines. With ElastiCache Serverless for Valkey, customers can create a cache in under a minute and get started for as low as $6 per month. Valkey is an open source, high-performance key-value datastore […]
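As a rough illustration of that getting-started flow, the boto3 sketch below creates an ElastiCache Serverless cache running Valkey and polls until it is available; the cache name is a placeholder and the status/endpoint handling reflects assumptions about the response shape rather than steps taken from the post.

```python
# Minimal sketch (assumed names): create an ElastiCache Serverless cache
# running Valkey with boto3, then poll until it is available.
import time

import boto3

elasticache = boto3.client("elasticache")

elasticache.create_serverless_cache(
    ServerlessCacheName="my-valkey-cache",  # hypothetical cache name
    Engine="valkey",
    MajorEngineVersion="7",
)

while True:
    cache = elasticache.describe_serverless_caches(
        ServerlessCacheName="my-valkey-cache"
    )["ServerlessCaches"][0]
    if cache["Status"].lower() == "available":
        print("Endpoint:", cache["Endpoint"]["Address"])
        break
    time.sleep(15)
```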
Amazon ElastiCache and Amazon MemoryDB announce support for Valkey
As of October 8, 2024, we’ve added support for Valkey 7.2 on Amazon ElastiCache and Amazon MemoryDB, our fully managed in-memory services. In this post, we discuss the AWS contributions to Valkey, the AWS commitment to making Valkey more accessible for ElastiCache and MemoryDB customers, and how customers can start using it in their applications.
New – Size flexibility for Amazon ElastiCache reserved nodes
Amazon ElastiCache, a fully managed, Redis OSS- and Memcached-compatible caching service, now supports size flexibility for all of its reserved node offerings, so your reserved node discount applies across differently sized node types beyond the size specified in your reservation. With flexible reserved nodes, you no longer need to commit to a specific node size when purchasing a reservation, which reduces the overhead of capacity planning and lets you right-size your clusters as your workloads and capacity needs change. In this post, we explain how you can use this new size flexibility feature to take advantage of discounted pricing on your ElastiCache clusters.
Introducing Valkey GLIDE, an open source client library for Valkey and Redis open source
We’re excited to announce Valkey General Language Independent Driver for the Enterprise (GLIDE), an open source, permissively licensed (Apache 2.0) Valkey client library. In this post, we discuss the benefits of Valkey GLIDE.
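For a sense of what using the client looks like, here is a short sketch of basic GLIDE usage in Python; the class names are adapted from the GLIDE examples and may differ across versions, so treat them as assumptions rather than the library's definitive API.

```python
# Sketch of basic Valkey GLIDE usage in Python. Class names follow the GLIDE
# examples at the time of writing and should be treated as assumptions.
import asyncio

from glide import GlideClient, GlideClientConfiguration, NodeAddress

async def main() -> None:
    config = GlideClientConfiguration([NodeAddress("localhost", 6379)])
    client = await GlideClient.create(config)
    await client.set("greeting", "hello from GLIDE")
    value = await client.get("greeting")
    print(value)
    await client.close()

asyncio.run(main())
```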
Deploy Amazon ElastiCache for Redis clusters using AWS CDK and TypeScript
In this post, we show you all the prerequisites and steps to deploy an Amazon ElastiCache cluster using AWS CDK and TypeScript. We also show you how to deploy resources using Amazon ElastiCache Serverless for Redis.
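The post itself walks through TypeScript; to stay consistent with the Python examples above, here is an analogous Python CDK sketch that defines a minimal node-based ElastiCache for Redis cluster. The stack, VPC, and node-type choices are illustrative assumptions, not the post's configuration.

```python
# Analogous Python CDK sketch (the post uses TypeScript): a minimal
# node-based ElastiCache for Redis cluster. Names and sizes are illustrative.
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_elasticache as elasticache
from constructs import Construct

class CacheStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # VPC with private subnets for the cache nodes.
        vpc = ec2.Vpc(self, "CacheVpc", max_azs=2)

        subnet_group = elasticache.CfnSubnetGroup(
            self, "CacheSubnets",
            description="Subnets for the ElastiCache cluster",
            subnet_ids=[s.subnet_id for s in vpc.private_subnets],
        )

        # Single-node Redis cluster (L1 construct).
        elasticache.CfnCacheCluster(
            self, "RedisCluster",
            engine="redis",
            cache_node_type="cache.t4g.micro",
            num_cache_nodes=1,
            cache_subnet_group_name=subnet_group.ref,
        )

app = App()
CacheStack(app, "ElastiCacheDemoStack")
app.synth()
```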