AWS Architecture Blog
Doing Constant Work to Avoid Failures
Amazon Route 53’s DNS Failover feature allows fast, automatic rerouting at the DNS layer based on the health of endpoints. Endpoints are actively monitored from multiple locations, and either application or connectivity issues can trigger failover.
Trust No One
One of the goals in designing the DNS Failover feature was making it resilient to large-scale outages. Should a large subset of targets, such as an entire Availability Zone, fall off the map, the last thing we’d want to do is accumulate a backlog of processing work, delaying the failover. We avoid incurring delays in such cases by making sure that the amount of work we do is bounded regardless of runtime conditions. This means a bounded workload during all stages of processing: data collection, handling, transmission, aggregation, and use. Each of these stages requires some thought so that it cannot cause delays under any circumstances. On top of its own performance, each stage of the pipeline needs to bound the cardinality of its output so that it cannot cause trouble for the next stage. For extra protection, that next stage should not trust its input to be bounded; it should implement safety throttling of its own, preferably isolating input by type and tenant, or by whatever dimension of the input may vary due to a bug or misconfiguration, such as extra upstream stages being configured unexpectedly, each producing its own output.
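To make that last point concrete, here is a minimal sketch in Go of a stage that does not trust its input. The names and the perTenantBudget cap are invented for illustration and are not Route 53’s actual code; the point is simply that each tenant gets a fixed budget of items per processing interval, and anything over the budget is dropped rather than queued, so a misbehaving upstream stage cannot inflate this stage’s work.

```go
package main

import "fmt"

const perTenantBudget = 1000 // hypothetical cap on items per tenant per interval

// stageThrottle caps how many items each tenant may push into a stage
// during one processing interval; excess items are dropped, not queued.
type stageThrottle struct {
	counts map[string]int
}

func newStageThrottle() *stageThrottle {
	return &stageThrottle{counts: make(map[string]int)}
}

// accept reports whether the item fits within the tenant's budget.
func (t *stageThrottle) accept(tenant string) bool {
	if t.counts[tenant] >= perTenantBudget {
		return false
	}
	t.counts[tenant]++
	return true
}

// reset is called once per interval by the stage's own timer, so the
// budget never accumulates across intervals.
func (t *stageThrottle) reset() {
	t.counts = make(map[string]int)
}

func main() {
	th := newStageThrottle()
	accepted, dropped := 0, 0
	for i := 0; i < 2500; i++ { // a burst from a single misbehaving tenant
		if th.accept("tenant-a") {
			accepted++
		} else {
			dropped++
		}
	}
	fmt.Println("accepted:", accepted, "dropped:", dropped) // accepted: 1000 dropped: 1500
	th.reset() // the next interval starts with a fresh budget
}
```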
Let’s dig into how this works out for our health checking service “data plane,” the part that does all the work once health checks have been configured through the “control plane.” As an aside, we deploy our health checking data plane and control plane in a manner similar to the methods described previously.
Health Checkers Example
The frontline component of the system is a fleet of health checker clusters, running in multiple EC2 regions. Each health check is performed by multiple checkers from multiple regions, repeated at the configured frequency. On failure, we do not retry; retrying immediately would increase the number of checks performed whenever a large proportion of the targets assigned to a given checker fail. Instead, we simply perform the next check as normally scheduled (30 seconds later, by default).
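As a rough illustration, here is what a checker that holds to this schedule could look like. The target list, the plain TCP connect probe, and all names are placeholders rather than the actual checker implementation: every target is probed once per round, a failure is only recorded, and the next attempt happens at the next scheduled round.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

const checkInterval = 30 * time.Second // the default frequency mentioned above

type result struct {
	target  string
	healthy bool
	checked time.Time
}

// checkOnce performs a single connect check; a failure never triggers
// an extra attempt, so the per-round work stays constant.
func checkOnce(target string) result {
	conn, err := net.DialTimeout("tcp", target, 5*time.Second)
	if err == nil {
		conn.Close()
	}
	return result{target: target, healthy: err == nil, checked: time.Now()}
}

func main() {
	targets := []string{"example.com:443", "example.org:443"} // placeholder targets
	ticker := time.NewTicker(checkInterval)
	defer ticker.Stop()

	for {
		for _, t := range targets {
			r := checkOnce(t)
			fmt.Printf("%s healthy=%v at %s\n", r.target, r.healthy, r.checked.Format(time.RFC3339))
		}
		<-ticker.C // wait for the next scheduled round, regardless of failures
	}
}
```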
The checkers communicate the results to the aggregators as a steady stream, asynchronously from the checks themselves. It wouldn’t matter if, for some reason, we were making more checks: the stream is configured separately and does not inflate dynamically.
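A minimal sketch of that decoupling, assuming an in-memory table of latest states (illustrative names only): checks update the table whenever they complete, while a separate sender ships a copy of it on its own fixed cadence, so doing more checks never inflates the outbound stream.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// stateTable holds the latest known health per target; checkers write into
// it, and the sender reads a copy on its own schedule.
type stateTable struct {
	mu     sync.Mutex
	states map[string]bool
}

func (s *stateTable) set(target string, healthy bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.states[target] = healthy
}

// snapshot copies the current table for transmission.
func (s *stateTable) snapshot() map[string]bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	out := make(map[string]bool, len(s.states))
	for k, v := range s.states {
		out[k] = v
	}
	return out
}

func main() {
	table := &stateTable{states: make(map[string]bool)}

	// Checkers update the table whenever a check completes...
	go func() {
		for i := 0; ; i++ {
			table.set("target-a", i%5 != 0) // simulated check results
			time.Sleep(100 * time.Millisecond)
		}
	}()

	// ...while the sender transmits on its own fixed schedule, so a higher
	// check rate never changes the size or rate of the outbound stream.
	sendTicker := time.NewTicker(time.Second)
	defer sendTicker.Stop()
	for range sendTicker.C {
		snap := table.snapshot()
		fmt.Println("sending", len(snap), "states") // stand-in for the real transmission
	}
}
```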
Transmitting incremental changes, as opposed to the complete current state, means the transmission size grows whenever there is a spike in the number of status changes. So, to meet the goal, we can either make sure that the transmission pipe does not choke when absolutely everything changes, or we can simply transmit the full snapshot of the current state every time. It turns out that, in our case, the latter is feasible because we optimize the transmission format for batching (we use a few bits per state and pack them densely), so we happily do that.
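To illustrate why the full snapshot stays cheap, here is a sketch that packs one bit per health check; the real format uses a few bits per state, so the exact layout differs. The point is that the transmission size depends only on the number of checks, not on how many statuses changed.

```go
package main

import "fmt"

// packStates encodes one bit per check, indexed by position in the slice.
func packStates(healthy []bool) []byte {
	packed := make([]byte, (len(healthy)+7)/8)
	for i, h := range healthy {
		if h {
			packed[i/8] |= 1 << uint(i%8)
		}
	}
	return packed
}

// unpackStates recovers the full snapshot on the receiving side.
func unpackStates(packed []byte, n int) []bool {
	healthy := make([]bool, n)
	for i := range healthy {
		healthy[i] = packed[i/8]&(1<<uint(i%8)) != 0
	}
	return healthy
}

func main() {
	// A million checks pack into roughly 125 KB per transmission, every
	// cycle, regardless of how many statuses changed since the last one.
	states := make([]bool, 1_000_000)
	states[42] = true
	packed := packStates(states)
	fmt.Println("snapshot size (bytes):", len(packed))
	fmt.Println("check 42 healthy:", unpackStates(packed, len(states))[42])
}
```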
Aggregators on the DNS hosts process large streams of incoming states (we are talking about millions each second) and are very sensitive to processing time, so we need to be sure that neither the content nor the count of the messages affects that processing time. Here, again, we protect ourselves from spikes in the rate of incoming messages by performing aggregation only once a second, per batch of relevant messages. For each batch, only the last result sent by each checker is used and those received previously are discarded. This contrasts with running aggregation for every received message and depending on the constancy of that incoming stream. It is also in contrast to queuing up messages to smooth out bursts of incoming requests. Queues help with this, but only when the bursts are not sustained. Ensuring constancy of work regardless of bursts works better.
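A sketch of that cadence, with illustrative names rather than the actual aggregator code: receiving a message only overwrites the latest-per-checker slot, and the real aggregation work runs once per second over at most one message per checker, no matter how many messages arrived in between.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// aggregator keeps only the most recent message from each checker.
type aggregator struct {
	mu     sync.Mutex
	latest map[string][]byte
}

// receive is cheap and constant-time: it replaces any earlier message from
// the same checker, so unprocessed older results are simply discarded.
func (a *aggregator) receive(checkerID string, msg []byte) {
	a.mu.Lock()
	a.latest[checkerID] = msg
	a.mu.Unlock()
}

// aggregateOncePerSecond does the real work on a fixed timer, over at most
// one message per checker, no matter how many arrived in the last second.
func (a *aggregator) aggregateOncePerSecond() {
	for range time.Tick(time.Second) {
		a.mu.Lock()
		batch := a.latest
		a.latest = make(map[string][]byte)
		a.mu.Unlock()

		// Combine the per-checker views into health decisions here.
		fmt.Println("aggregating", len(batch), "checker snapshots")
	}
}

func main() {
	agg := &aggregator{latest: make(map[string][]byte)}
	go agg.aggregateOncePerSecond()

	// Simulate a burst: many messages per second from a handful of checkers.
	for i := 0; ; i++ {
		agg.receive(fmt.Sprintf("checker-%d", i%8), []byte{byte(i)})
		time.Sleep(time.Millisecond)
	}
}
```

The design choice here is to spend memory on one slot per checker rather than on a queue of unbounded depth; stale results are simply superseded instead of piling up.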
In addition to coping with potential incoming message bursts, aggregators need to be able to process messages in a well-bounded time. One of the mistakes we made here originally was not testing the performance of one of the code paths handling some specific, unexpected message content. Due to another unexpected condition, at one point one of the checkers sent a large number of those unexpected states to the aggregators. On their own, these states would be harmless, only triggering some diagnostics. However, the side effect was a significant processing slowdown in the aggregators because of the poorly performing code path. Fortunately, because of the constant work design, this simply resulted in skipping some messages during processing and only extended the failure detection latency by a couple of seconds. Of course, we fixed the poorly performing path after this; it would have been better to catch it in testing.
Altogether, for each pipeline stage, the cardinality or rate of the input is bounded, and the cost of processing all types of input is uniformly bounded as well. As a result, each stage does a constant amount of work and will not fall over or cause delays based on external events, which is exactly the goal.
– Vadim Meleshuk