    Redis Cloud - Annual

    Sold by: Redis 
    Deployed on AWS
    *Only for accepting private offers.* From the creators of Redis, Redis Cloud on AWS is a fully managed serverless database-as-a-service trusted by thousands of customers for sub-millisecond latency, scalability to hundreds of terabytes and hundreds of millions of ops/sec, up to a 99.999% uptime SLA, and best-in-class support.

    Overview

    This product listing is for savings plans on annual commits for Redis Cloud.

    TRY REDIS® CLOUD PAY-AS-YOU-GO WITH 14-DAY FREE TRIAL HERE: https://aws.amazon.com/marketplace/pp/prodview-mwscixe4ujhkq 

    From the creators of Redis: unlike other hosted Redis services, Redis Enterprise adds enterprise features to Redis OSS that enable Redis to serve as a real-time data platform and a primary database. Go beyond caching with enterprise modules, clustering, Active-Active geo-replication, and more, enabling the following capabilities:

    1. Modern Apps Support: Enables multi-model capabilities such as search, JSON, TimeSeries, Bloom filters, and Graph
    2. Scalability: Deploy multiple Redis instances on a single cluster node, enabling hundreds of millions of ops/sec
    3. Durable and Disaster Resilient: Protect against data loss and enable fast recovery in case of complete outages
    4. Global Low Latency: Up to 99.999% uptime SLA for global applications with Active-Active
    5. Attractive TCO: Auto Tiering supports large datasets for cost-effectiveness

    Redis Cloud for AWS Marketplace includes:

    • Redis Cloud as a line item on a unified AWS bill, with spend counting toward your AWS commit.
    • Our team of experienced engineers proactively acts as an extension of your team, detecting, diagnosing, and troubleshooting potential issues.
    • All enterprise features that are available with an annual subscription.

    For questions and custom pricing, please contact us at aws@redis.com.

    Highlights

    • Developers: Only Redis Enterprise provides true multi-model database capabilities such as Search, JSON, Graphs, Bloom Filters, TimeSeries, and AI.
    • DevOps: True high availability (up to 99.999%), geo-distribution enabling local latency (sub-1ms) at global scale for terabytes of data, and attractive TCO.
    • Cloud Architects: Flexible hybrid and multi-cloud implementations for maximum flexibility and optionality, with no data or vendor lock-in.

    Details

    Sold by: Redis
    Delivery method: Software as a Service (SaaS)
    Deployed on AWS


    Features and programs

    Trust Center

    Access real-time vendor security and compliance information through their Trust Center powered by Drata. Review certifications and security standards before purchase.

    Buyer guide

    Gain valuable insights from real users who purchased this product, powered by PeerSpot.

    Financing for AWS Marketplace purchases

    AWS Marketplace now accepts line of credit payments through the PNC Vendor Finance program. This program is available to select AWS customers in the US, excluding NV, NC, ND, TN, & VT.

    Pricing

    Redis Cloud - Annual

    Pricing is based on the duration and terms of your contract with the vendor, and additional usage. You pay upfront or in installments according to your contract terms with the vendor. This entitles you to a specified quantity of use for the contract duration. Usage-based pricing is in effect for overages or additional usage not covered in the contract. These charges are applied on top of the contract price. If you choose not to renew or replace your contract before the contract end date, access to your entitlements will expire.
    Additional AWS infrastructure costs may apply. Use the AWS Pricing Calculator to estimate your infrastructure costs.

    12-month contract (1)

    Dimension: Redis Cloud
    Description: Contact Redis for private offer
    Cost/12 months: $100,000.00

    Additional usage costs (1)

    The following dimensions are not included in the contract terms and will be charged based on your usage.

    Dimension: Redis Cloud Data Transfer
    Cost/unit: $0.01

    Vendor refund policy

    Please contact the seller's support team for refund details.

    Custom pricing options

    Request a private offer to receive a custom quote.


    Legal

    Vendor terms and conditions

    Upon subscribing to this product, you must acknowledge and agree to the terms and conditions outlined in the vendor's End User License Agreement (EULA).

    Content disclaimer

    Vendors are responsible for their product descriptions and other product content. AWS does not warrant that vendors' product descriptions or other product content are accurate, complete, reliable, current, or error-free.


    Delivery details

    Software as a Service (SaaS)

    SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.

    Resources

    Support

    Vendor support

    Enjoy our 24/7 support via our online helpdesk or phone.

    AWS infrastructure support

    AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


    Accolades

    Top 10 in Databases & Analytics Platforms, Databases, Generative AI
    Top 10 in Analytic Platforms, Databases & Analytics Platforms, Databases


    Overview

    AI generated from product descriptions

    • Multi-Model Database Capabilities: Supports multiple data models including Search, JSON, Graphs, Bloom Filters, TimeSeries, and AI capabilities.
    • High Availability and Uptime: Provides up to 99.999% uptime SLA with Active-Active geo-replication for global applications.
    • Scalability and Performance: Enables deployment of multiple Redis instances on a single cluster node, supporting hundreds of millions of operations per second with sub-millisecond latency.
    • Data Durability and Disaster Recovery: Protects against data loss and enables fast recovery in case of complete outages with a durable, disaster-resilient architecture.
    • Cost Optimization: Includes Auto Tiering to support large datasets cost-effectively while maintaining performance.

    Contract

    Standard contract: No

    Customer reviews

    Ratings and reviews

    4.5 out of 5
    107 ratings

    5 star: 72%
    4 star: 24%
    3 star: 2%
    2 star: 0%
    1 star: 2%

    11 AWS reviews | 96 external reviews
    External reviews are from G2 and PeerSpot.
    Rituraj NSIT

    Caching has improved response times and reduces database load for high-traffic applications

    Reviewed on Apr 08, 2026
    Review from a verified AWS customer

    What is our primary use case?

    My main use case for Redis is caching to improve application performance and reduce database load.

    One specific example from my backend services is using Redis to cache frequently accessed data like product details. Instead of querying the database every time, the application first checks Redis. If data is present, it returns instantly, which significantly reduces the database load and improves response time.

    Apart from cache, I have also used Redis for session storage and rate limiting. It helps in managing user sessions efficiently and controlling traffic spikes, which improves overall system reliability.

    What is most valuable?

    Redis stands out for its extremely fast in-memory performance, support for rich data structures such as string, hash, and list, and features such as TTL for automatic expiration. It is also very useful for caching, session management, and rate limiting. I rely mostly on the fast memory performance combined with caching, which helps reduce database load and improve response time for frequently accessed data.

    Redis has played a key role in improving system scalability and performance. By offloading frequent reads from the database and enabling fast in-memory cache access, it reduced latency, improved throughput, and helped maintain stability during peak loads.

    What needs improvement?

    Redis is very reliable, but it could be improved in areas such as monitoring, debugging, and visibility into memory use. Better built-in tools for observability would help teams manage it more effectively at scale. Managing memory efficiently and troubleshooting issues can sometimes require additional tooling, so these areas can also be improved.

    One practical challenge I experienced is managing memory efficiently. Since Redis is in-memory, we need to carefully configure eviction policies and monitor usage. Debugging cache-related issues such as stale data or cache invalidation can sometimes be tricky. Additionally, tuning memory usage and eviction policies needs to be planned very carefully.

    For how long have I used the solution?

    I have been using Redis for the last two years.

    What do I think about the stability of the solution?

    Redis is quite stable.

    What do I think about the scalability of the solution?

    Redis is very scalable. It supports both vertical and horizontal scaling, and with features such as clustering and replication, it can handle high traffic and a large database very effectively.

    How are customer service and support?

    The customer support I have experienced has been good overall. Since Redis is quite stable and well-documented, we have not needed much support, but when required, the response has been helpful.

    Which solution did I use previously and why did I switch?

    Before choosing Redis, we mainly relied on database-level caching or direct queries. As the application scaled, it started impacting performance, so we switched to Redis for its speed and better caching capabilities.

    Before Redis, we relied on the normal database, but before we considered Redis, we looked at a few alternatives such as Memcached. Redis stood out because of its richer data structures and additional features such as persistence and pub/sub features.

    What was our ROI?

    We have seen a strong ROI after implementing Redis. We reduced the database read load by around 30 to 40 percent and improved API response time by 20 to 30 percent, specifically for frequently accessed endpoints.

    What's my experience with pricing, setup cost, and licensing?

    The pricing is reasonable for the performance provided. Since we use it as a managed service, there is no licensing complexity, and setup costs were minimal. Most of the cost depends on the use cases and scaling, which was beneficial for us.

    What other advice do I have?

    Redis is very reliable and easy to integrate. Its simplicity combined with the performance makes it a great choice for backend developers.

    My advice would be to first clearly define your use cases, specifically for caching or real-time scenarios, and also pay attention to memory management. Choose the right eviction policies and implement proper monitoring from the beginning. Plan for memory optimization, set appropriate TTLs, and implement strong monitoring and alerting for stability at any scale.

    Redis is a powerful and reliable tool for improving application performance. Its speed and flexibility make it a great choice for modern backend systems. It significantly improves performance and scalability with proper planning. It works very effectively for high-traffic applications. I would rate this product an 8 out of 10.

    Which deployment model are you using for this solution?

    Public Cloud


    Varuns Ug

    Caching has accelerated complex workflows and delivers low latency for high-traffic microservices

    Reviewed on Apr 03, 2026
    Review from a verified AWS customer

    What is our primary use case?

    I have used Redis for around four years. I have completed several projects using Redis. At Paytm, I used it for caching and performance optimization, and then I used it at MakeMyTrip for a multi-layer caching architecture.

    At MakeMyTrip, I am using Redis for a multi-layer caching architecture. In one of my recent projects, I used Redis as a distributed L2 cache for storing frequently accessed data and reducing downstream service calls, which significantly improves latency and system throughput. In a hotel cancellation policy system, I aggregate data from several microservices including inventory, partner system, and internal policy service. These calls are expensive and add latency. I cache the final computed policy response in Redis with a TTL of around five minutes.

    For the five-minute TTL for the cache, the decision was based on balancing data freshness and performance. The cancellation policy does not change very frequently, but when it does, it must reflect responsively and quickly. I analyzed the update frequency versus the request volume, and five minutes provided a good trade-off. Most reads can be served from cache while keeping the data sufficiently fresh. I complemented the TTL with event-driven invalidation for critical updates. In cases where policy changes, I do not have to wait for the TTL to expire.

    Apart from caching, I have completed several other use cases of Redis at MakeMyTrip. One was rate limiting, where I use Redis to control traffic at a per-user or per-partner level to protect downstream services. I leverage Redis fast atomic operations to maintain counters and enforce limits without adding latency. I also use Redis for temporary state management, especially in scenarios where I need to store short-lived intermediate data between multi-step flows. I use Redis as an in-memory solution and it is very fast. Another aspect I focus on is cache design and observability to ensure proper key structuring, monitor cache hit-miss ratios, and tune TTL based on traffic patterns. This helps me continuously optimize performance and avoid issues such as stale data or cache stampede.
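    The counter-based rate limiting described above is commonly built on Redis's atomic INCR, with an EXPIRE set when a new window opens. The sketch below is a hypothetical illustration, not the reviewer's code: an in-process dict stands in for Redis so it runs standalone, and the key format and limits are invented. With redis-py you would call r.incr(key) and, when the returned count is 1, r.expire(key, window).

```python
import time

# Stand-in for Redis counters; key -> [window_expires_at, count].
_counters = {}

def incr_with_window(key, window):
    """Emulates INCR plus EXPIRE-on-first-increment for a fixed window."""
    now = time.monotonic()
    entry = _counters.get(key)
    if entry is None or entry[0] <= now:
        _counters[key] = [now + window, 1]  # a fresh window starts
        return 1
    entry[1] += 1
    return entry[1]

def allow_request(user_id, limit=5, window=60):
    """Fixed-window limit: at most `limit` requests per `window` seconds."""
    count = incr_with_window(f"rate:{user_id}", window)
    return count <= limit
```

    With limit=3, the first three calls for a user pass and the fourth within the same window is rejected; because the increment is atomic in Redis, this enforces limits without race conditions across service instances.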

    What is most valuable?

    A few features of Redis that I use on a day-to-day basis and feel are among the best are extremely low latency and high throughput. Since Redis is in-memory, it makes it ideal for cases such as caching and rate limiting where response time is critical. TTL expiry support is very useful in Redis as it allows me to automatically evict stale data without manual cleanup, which is something I use heavily in my caching strategy. Another point I can mention is that the rich data structures such as strings, hashes, and even sorted sets are very powerful. I have used strings for caching responses and counters, whereas I have used hashes for storing structured objects. One more feature I can tell you about is atomic operations. Redis guarantees atomicity for operations such as incrementing a counter, which is very useful for rate limiting and avoiding race conditions in distributed systems. Finally, I want to emphasize that Redis is easy to scale and integrate, whether through clustering or using a distributed cache across microservices.

    Redis has impacted my organization positively. For metrics, in one of my core systems, introducing Redis as a distributed cache helped me achieve around an 80% cache hit rate, which reduced repeated downstream service calls. API latency also improved from around two seconds to approximately 450 milliseconds at P99. It also helped reduce the load on dependent services and databases, which improved overall system reliability.

    What needs improvement?

    There are some points where I feel Redis can be improved. One issue is cache invalidation. Keeping cache data consistent with the source of truth can be tricky, especially in distributed systems. I address this using a combination of TTL-based expiry and event-driven invalidation, but it still requires careful design. Another point I want to add is memory management. Since Redis is in-memory, storing large and improperly structured data can quickly increase memory usage and costs. I had to optimize key design, data size, and eviction policies such as LRU to manage it effectively.

    For how long have I used the solution?

    I have been working in my current field for around four and a half years.

    What do I think about the stability of the solution?

    In my experience, Redis is highly stable.

    What do I think about the scalability of the solution?

    Redis scalability in my environment is quite good. It is highly scalable. I scale Redis horizontally using clustering and sharding, where data is distributed across multiple nodes to handle higher traffic and larger data sets. This helps avoid bottlenecks and ensures consistent performance even as load increases. I use replica nodes to handle read traffic and improve availability. For high throughput scenarios, this allows me to offload reads from the primary node and maintain low latency.

    How are customer service and support?

    Regarding customer support, I have not directly engaged with Redis customer support very often, mainly because I use it as a managed service and most operational issues are handled internally by my infrastructure team. From an application perspective, Redis has proven to be quite stable and predictable. Most issues I encounter, such as cache misses or memory pressure, I handle through monitoring, tuning, and design improvements. The documentation and community support for Redis are very strong, making troubleshooting quicker. For deeper infrastructure-level issues, my platform team typically coordinates with cloud provider support.

    Which solution did I use previously and why did I switch?

    Before Redis, I primarily relied on direct database queries and some in-memory caching solutions such as Guava. The main issue was that this approach increased latency and added higher loads on downstream services and databases, especially for frequently accessed or aggregated data. In some cases, repeated calls to multiple microservices made APIs slow and less reliable during peak traffic. Switching to Redis solved these issues effectively.

    What was our ROI?

    The return on investment with Redis is clearly evident. For example, from a system perspective, Redis helped me achieve around an 80% cache hit rate, which reduces repeated downstream calls, as I mentioned earlier. It improved API latency from two seconds to 450 milliseconds for P99. From a productivity standpoint, it significantly reduced manual troubleshooting and performance firefighting. Many latency and load issues were absorbed by the caching layer, and in some workflows, automation and caching together reduced manual intervention by about 60 to 80%. This allowed my team to focus on building features instead of handling operational issues.

    What's my experience with pricing, setup cost, and licensing?

    I have not been directly involved in the pricing aspect, but I have seen that the costs are primarily driven by memory consumption and cluster size, since Redis operates in-memory. Because of that, I am quite careful about optimizing data size and choosing appropriate TTLs to avoid unnecessary cache bloat. I was not directly involved in pricing decisions, but I did contribute to cost efficiency through better cache design and memory optimization.

    Which other solutions did I evaluate?

    I had a few options to consider before choosing Redis, but one option was to rely more on database-level optimizations such as indexing or query tuning, which did not solve the problems related to repeated reads and high latency. In-memory caches such as Guava worked well locally but do not scale across multiple instances since they are not sharded. As for distributed caching, I also considered Memcached. However, Redis stood out because of its richer data structures, built-in TTL support, atomic operations, and better flexibility for use cases such as rate limiting and structured caching.

    What other advice do I have?

    My advice for others looking into using Redis is to design caching carefully. Focus on good key data structures, appropriate TTLs, and a clear invalidation strategy because cache consistency is often the biggest challenge I face in Redis. Be mindful of memory use since Redis is in-memory, and optimize data size and eviction policies accordingly.

    I have shared most of my experience with Redis previously. Overall, I want to say that Redis truly adds value, especially for low latency and high throughput use cases. Redis is extremely powerful, but to realize its full potential, it requires careful design around data and traffic patterns. I would rate this solution an 8 out of 10.

    Which deployment model are you using for this solution?

    Hybrid Cloud


    Ravi Raushan Kumar

    Caching and session design has improved performance and now supports high-traffic workloads

    Reviewed on Mar 27, 2026
    Review from a verified AWS customer

    What is our primary use case?

    My main use case for Redis is caching frequently accessed data to improve performance and reduce database load. For example, I cache API responses and user-related data so that repeated requests can be served quickly without hitting the database every time. I use TTL to automatically expire stale data and ensure caching freshness. In some cases, I also use Redis for session management and handling short-lived data efficiently.

    I have used Redis for session management in a back-end system, where the main idea was to store user session data in Redis instead of keeping it in memory on a single server, which helps me scale across multiple instances. When a user logs in, we generate a session ID or token and store session-related data like user ID and metadata in Redis, and this session is associated with a TTL. It automatically expires after a certain period of time or after a certain time of inactivity. On each request, the session ID is validated by fetching data from Redis, which is very fast due to its in-memory nature, ensuring low latency and allowing us to handle the highest traffic efficiently. This approach helps us achieve horizontal scalability and avoids issues concerning session stickiness. Additionally, we ensure security by expiring inactive sessions or occasionally refreshing TTL for active users.

    Apart from caching and session management, I worked on interesting challenges using Redis, particularly around caching consistency and handling stale data. Initially, we faced issues where cached data would become outdated after database updates, and to solve this, we implemented a cache-aside strategy where we explicitly invalidated or updated the cache whenever the underlying data changed. Another scenario was handling cache misses during high traffic: to avoid multiple requests hitting the database simultaneously, we introduced techniques such as staggered TTLs and, in some cases, locking to ensure only one request rebuilds the cache. We also tuned eviction policies and memory usage to ensure Redis remains performant under load. These experiences helped me understand how to use Redis not just as a cache, but as a critical component in system performance and scalability. For maintaining the high-traffic system, we also explored using Redis for rate limiting and short-lived counters, which further reduced the load on our core system.

    What is most valuable?

    The best features Redis offers are the ones that stand out most based on real-world usage. First is its in-memory performance, as Redis is extremely fast, making it ideal for caching and session management where low latency is critical. Second, it supports multiple data structures such as strings, hashes, lists, and sets, which are very powerful. I have used hashes for storing session data and structured objects efficiently. Another key feature is TTL, which allows automatic expiration of keys; this is very useful for managing sessions and ensuring cache freshness, as stale cache data gets cleaned up without manual intervention. I also find Redis very useful for distributed systems because it acts as a centralized store that multiple services can access consistently. Overall, its simplicity, speed, and flexibility make it a very effective tool for performance and scalability improvement.

    Using data structures such as hashes in Redis made the implementation much cleaner and more efficient. For session management, instead of storing the entire session as a serialized object, we used a Redis hash where each field represents a session attribute such as user ID, login time, and roles. This allowed us to update specific fields without rewriting the whole object, which improved performance and flexibility. Hashes are also memory efficient compared to storing multiple keys, helping us optimize memory usage when handling a large number of sessions. A specific scenario where TTL helped was with session expiration; instead of building a separate cleanup job to remove inactive sessions, we simply set a TTL on each session key, allowing Redis to automatically remove the expired sessions. This reduces operational overhead and avoids stale session buildup. Without TTL, we would have needed a background scheduler or a cron job to help clean up expired sessions, which adds complexity and potential failure points. Redis handled it natively and very efficiently.
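    The hash-per-session layout with a TTL described above can be sketched as follows. This is a hypothetical stand-in that keeps sessions in a local dict so it runs without a server; with redis-py the equivalent calls would be r.hset(name, mapping=fields) to create the hash, r.hset(name, field, value) to update one attribute, and r.expire(name, ttl) for the expiry. The field names and the 30-minute TTL are illustrative assumptions.

```python
import time

SESSION_TTL = 1800  # 30 minutes of inactivity (illustrative policy)

# Stand-in for Redis hashes with a TTL on the whole key:
# sid -> (expires_at, {field: value}).
_sessions = {}

def create_session(sid, user_id, roles):
    fields = {"user_id": user_id, "roles": roles}
    _sessions[sid] = (time.monotonic() + SESSION_TTL, fields)

def update_field(sid, field, value):
    """Update one attribute without rewriting the whole session object."""
    entry = _sessions.get(sid)
    if entry and entry[0] > time.monotonic():
        entry[1][field] = value
        return True
    _sessions.pop(sid, None)  # expired: Redis would have evicted the key
    return False

def get_session(sid):
    entry = _sessions.get(sid)
    if entry and entry[0] > time.monotonic():
        return entry[1]
    _sessions.pop(sid, None)
    return None
```

    Because any application instance can look up the session by ID, this centralized layout is what removes the session-stickiness problem the review mentions.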

    Using Redis has had a specific positive impact on our system performance and scalability. The biggest improvement is in response time; by caching frequently accessed data, we reduce the API latency from database level milliseconds to sub-millisecond responses in many cases. It also helps significantly reduce the database load, especially during peak traffic, improving overall system stability and preventing bottlenecks. From a scalability perspective, Redis enables us to handle higher traffic without needing to scale the database proportionally, making the system more cost-efficient.

    What needs improvement?

    Overall, Redis is a powerful and reliable tool, but there are a few areas for improvement. One limitation is that Redis is memory-based, so scaling can become expensive compared to disk-based systems. While it offers persistence options, it is not always ideal for large datasets where cost efficiency is critical. Another area is cache consistency; Redis itself does not enforce consistency with the primary database, so developers need to carefully design cache invalidation strategies. More built-in mechanisms or patterns to simplify this would be helpful.

    Additional areas where Redis could improve include monitoring, security, and ease of use in large-scale ecosystems. From a monitoring perspective, while Redis provides basic metrics, deep visibility into issues such as memory fragmentation, hot keys, or latency spikes often requires external tools; more built-in, user-friendly options would make diagnosing production issues quicker. Regarding security, Redis has improved over time, but historically, it required careful configurations; features such as authentication and encryption exist but are not always enabled by default, posing a risk if not properly set up. A strong, secure by default configuration would be beneficial. In terms of ease of use, while Redis is straightforward for basic use cases, managing clusters and persistence strategies can become complex at scale, so better abstractions or tooling for distributed setups and operations would make it more developer-friendly.

    For how long have I used the solution?

    I have been using Redis for the last three years, and it is a part of my back-end development work where I mainly use it as a caching layer to improve my application's performance and reduce database load.

    What other advice do I have?

    My main advice for those looking into using Redis is to focus on the use case; Redis excels where low latency is critical, such as caching, session management, or real-time features, rather than using it as a primary database for everything. Pay close attention to the caching design, especially cache invalidation and TTL strategies; poorly designed caches can lead to stale data or inconsistency. Plan for scalability and failure scenarios early; decide how you will handle Redis downtime. If possible, consider using a managed service such as those from Amazon Web Services to reduce operational overhead and focus more on application logic.

    I find Redis particularly valuable because of how versatile it is. Many people think it is only a key-value pair cache, but its support for atomic operations and different data structures makes it useful for solving various real-world problems. For example, features such as atomic increment operations are extremely useful for building things such as rate limiting or counters without worrying about race conditions. Another underrated aspect is how simple yet powerful TTL and expiration handling are, eliminating the need for complex cleanup logic, which can otherwise introduce bugs or operational overhead. I also think more people should leverage Redis for lightweight distributed coordination, such as using Redis for distributed locks or request deduplication, which can simplify system design when multiple services are involved.

    Using Redis has definitely helped us improve cost efficiency. One of the main impacts was reducing the load on primary databases: since a large portion of read requests is served from Redis, we did not need to scale the database as aggressively, which saved costs on compute and storage. We also observed fewer database connections and queries, leading to lower CPU and I/O usage, which reduced the need for high-end database instances. For example, during peak traffic, instead of increasing database capacity, Redis absorbed most of the repeated requests, helping us delay or even avoid additional infrastructure provisioning, which directly reduced costs. Of course, Redis itself adds some cost since it requires memory, but the overall savings from reduced database load and improved efficiency outweighed that cost in our case.

    Overall, my experience with Redis has been very positive, and it has played a key role in improving performance, scalability, and system responsiveness in our back-end system. What stands out to me is its simplicity combined with powerful capabilities; it is easy to get started with but also flexible enough to handle more advanced uses such as caching, session management, and real-time processing. The key is to use it thoughtfully, specifically regarding caching design and understanding its potential. When used correctly, it delivers significant value, and it is definitely a tool I would continue to use in future systems. I would rate my overall experience with Redis as a nine out of ten.

    Pawan M.

    Low-Latency Key Store That Excels at Session Management

    Reviewed on Mar 12, 2026
    Review provided by G2
    What do you like best about the product?
    It's one of the best software options out there for a low-latency key store. It comes in handy for managing and maintaining session information in the security layer of a distributed web application.
    What do you dislike about the product?
    The Redis Sentinel-based distributed deployment in a container runtime is a bit difficult to configure.
    What problems is the product solving and how is that benefiting you?
    It is a very low-latency key store that keeps data in memory, and its built-in TTL features are highly useful for applications that need high-speed access to application state.
    Yash D.

    Effortless Scalability with Active-Active Geo-Replication

    Reviewed on Dec 20, 2025
    Review provided by G2
    What do you like best about the product?
    I really like the Active-Active geo-distribution feature in Redis Software. It mirrors writes from our Haryana data center to our Mumbai replica in under 50ms and automatically handles failovers. This setup handled double the normal traffic during Diwali, managing over 2 million session tokens without a hiccup. Redis Software's Redis on Flash also significantly cut our RAM usage by 60%, which is a huge benefit for our large datasets. The initial setup was surprisingly smooth too, with the installer getting a 3-node cluster running on our Kubernetes setup in less than an hour using a guided UI, greatly reducing the manual configuration workload. This lets our team focus more on the application layer rather than infrastructure headaches.
    What do you dislike about the product?
    I find the steep licensing (~₹50K/node/year) challenging for SMBs after the trial compared to fully open-source stacks. The UI dashboard lags on clusters with more than 100 nodes—kubectl metrics outperform Redis Insight for real-time CI/CD monitoring, which forces us to use hybrid tooling during deployments.
    What problems is the product solving and how is that benefiting you?
    Redis Software eliminates OOM crashes on large datasets and saves 60% RAM with Redis on Flash. It provides sub-1ms p99 latencies for sessions and writes, and Active-Active geo-distribution syncs between data centers in under 50ms, making operations seamless.
    View all reviews