Clients receive errors when querying an ElastiCache Redis cluster node that has used all available memory and is unable to free memory under the maxmemory-policy configured for the cluster node.

The maximum memory available on an ElastiCache Redis node for cache data and other overhead is determined by the value of the maxmemory parameter. This value is specific to the cache node type and cannot be modified.

When cache memory usage exceeds the value of the maxmemory parameter for the cache node type, ElastiCache Redis applies the maxmemory-policy that is set in the cache node's parameter group.

You can set the maxmemory-policy for a cache node parameter group to any of the following values. The default value is volatile-lru, which attempts to free memory by removing the least recently used keys first, but only keys for which an expiration time (TTL value) has been set:

  • noeviction: return an error when the memory limit has been reached and the client tries to execute a command that could result in more memory being used.
  • allkeys-lru: evict keys by removing the least recently used (LRU) keys first, in order to make room for the new data.
  • volatile-lru: evict keys by removing the least recently used (LRU) keys first, but only among keys that have an expire set, in order to make room for the new data.
    Note: When this policy is in use on a cache node that does not contain any keys with a TTL value, the policy becomes functionally equivalent to the noeviction policy.
  • allkeys-random: evict random keys in order to make room for the new data.
  • volatile-random: evict random keys in order to make room for the new data, but only evict keys with an expire set.
  • volatile-ttl: in order to make room for the new data, evict only keys with an expire set, and try to evict keys with a shorter time to live (TTL) first.
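The interaction between volatile-lru and keys without TTL values can be illustrated with a small sketch. The class below is not Redis code; it is a toy model (class name and structure are invented for illustration) showing why volatile-lru degrades to noeviction when no key has an expire set:

```python
class VolatileLRUCache:
    """Toy model of the volatile-lru policy (illustrative only):
    when the key limit is reached, evict the least recently used key
    *that has a TTL set*; if no key has a TTL, the write is rejected,
    which mirrors noeviction behavior."""

    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = {}       # key -> value
        self.has_ttl = {}    # key -> whether an expire is set
        self.last_used = {}  # key -> logical clock of last access
        self.clock = 0       # stand-in for real access timestamps

    def get(self, key):
        self.clock += 1
        if key in self.data:
            self.last_used[key] = self.clock
            return self.data[key]
        return None

    def set(self, key, value, ttl=None):
        self.clock += 1
        if key not in self.data and len(self.data) >= self.max_keys:
            # Eviction candidates are only the keys with a TTL set.
            candidates = [k for k in self.data if self.has_ttl[k]]
            if not candidates:
                # No evictable keys: behaves like noeviction.
                raise MemoryError("OOM command not allowed")
            victim = min(candidates, key=lambda k: self.last_used[k])
            for d in (self.data, self.has_ttl, self.last_used):
                del d[victim]
        self.data[key] = value
        self.has_ttl[key] = ttl is not None
        self.last_used[key] = self.clock
```

For example, on a full cache whose keys all lack TTL values, any write of a new key raises the OOM error, no matter how stale the existing keys are.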

To prevent clients from receiving “OOM command not allowed…” error messages, ensure that you are using a cache node type and maxmemory-policy value that are appropriate for your usage scenario. If your scenario does not involve caching keys with TTL values, avoid the policies that are restricted to evicting only keys with TTL values (volatile-lru, volatile-random, and volatile-ttl). On a node with no TTL keys, those policies are functionally equivalent to the noeviction policy, because the cluster node is unable to free memory by evicting keys; set one of the other maxmemory-policy values instead. Alternatively, you might need to upgrade to a larger cache node type, which prevents OOM error messages and also ensures that no keys are inadvertently deleted.
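Because default parameter groups cannot be modified, changing maxmemory-policy requires a custom parameter group. The following AWS CLI sketch shows the general shape; the group name, family, and chosen policy are assumptions for illustration:

```shell
# Hypothetical parameter group name and family; substitute your own.
aws elasticache create-cache-parameter-group \
    --cache-parameter-group-name my-redis-params \
    --cache-parameter-group-family redis2.8 \
    --description "Custom maxmemory-policy"

# Set maxmemory-policy in the custom group (allkeys-lru shown as an example).
aws elasticache modify-cache-parameter-group \
    --cache-parameter-group-name my-redis-params \
    --parameter-name-values \
        "ParameterName=maxmemory-policy,ParameterValue=allkeys-lru"
```

After modifying the group, associate it with your cluster so that the new policy takes effect on the cache nodes.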




Published: 2016-01-27