
Application Performance - Measuring performance
Measuring performance is the starting point for identifying opportunities to improve the speed of web applications.
- Send a significant portion of your traffic (at least 20%) to each CDN to measure its actual performance at scale (see the traffic-splitting sketch after this list). CDNs like CloudFront perform better at scale when warmed (e.g. cache populated, TCP connection pool warmed, and DNS entries cached by ISPs).
- Conduct the benchmark testing under the same conditions for all CDNs: send a similar amount of traffic to each CDN, at the same time, and to the same user base.
- Run the test for a relevant amount of time. For example, if you deliver long-tail content, CDNs might need days to populate their caches and ramp their Cache Hit Ratio up to a stable state.
- Configure the CDNs with the same capabilities, such as compression, image optimization, protocols (HTTP/3 vs HTTP/2, TLS 1.3 vs TLS 1.2, IPv6 vs IPv4), and origin acceleration (e.g. Origin Shield).
- Are the CDNs tested under the same conditions, with similar configurations and optimizations?
- How many data points are collected for each dimension? More data points lead to more accurate benchmarking (CDN warm-up, statistical error, etc.).
- Do the tested objects have the same characteristics as your application's (e.g. static vs dynamic, small vs large objects, popular vs unpopular objects, etc.)?
- Some of these tools offer the option of RUM testing on your own CDNs. Consider this option, as it better reflects the performance your users will experience with your CDNs.
- Use the combination of performance metrics (latency, throughput, P50 vs P90) and availability measures most relevant to your application (see the percentile sketch after this list). For example, large object downloads are more sensitive to throughput than to latency.
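To illustrate the traffic-splitting recommendation above, here is a minimal Python sketch of deterministic weighted routing across CDNs. The hostnames, weights, and `pick_cdn` helper are hypothetical; in practice this split is often done with weighted DNS records rather than application code.

```python
import hashlib

# Hypothetical CDN endpoints and traffic weights (each >= 20%, so every
# CDN receives enough traffic to be measured at scale).
CDN_WEIGHTS = {
    "d111111abcdef8.cloudfront.net": 0.4,
    "cdn-b.example.com": 0.3,
    "cdn-c.example.com": 0.3,
}

def pick_cdn(user_id: str) -> str:
    """Deterministically map a user to a CDN.

    Hashing the user ID (instead of picking randomly per request) keeps
    each user on the same CDN, so per-CDN caches stay warm and the user
    bases being compared remain stable for the duration of the test.
    """
    # Map the hash to a point in [0, 1) and walk the cumulative weights.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000 / 10_000
    cumulative = 0.0
    for host, weight in CDN_WEIGHTS.items():
        cumulative += weight
        if bucket < cumulative:
            return host
    return next(iter(CDN_WEIGHTS))  # guard against float rounding

print(pick_cdn("user-1234"))  # e.g. "cdn-b.example.com"
```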
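As a sketch of the metrics point, the following Python snippet compares P50 and P90 latency per CDN; the RUM samples are made-up values for illustration.

```python
import statistics

# Hypothetical RUM samples: per-CDN request latencies in milliseconds.
latency_ms = {
    "cdn-a": [112, 98, 130, 145, 101, 99, 160, 120, 115, 108],
    "cdn-b": [105, 140, 95, 210, 100, 98, 300, 110, 102, 99],
}

for cdn, samples in latency_ms.items():
    # quantiles(n=100) returns the 99 percentile cut points:
    # index 49 is P50 (the median) and index 89 is P90.
    pct = statistics.quantiles(samples, n=100)
    print(f"{cdn}: P50={pct[49]:.0f} ms  P90={pct[89]:.0f} ms")
```

P90 exposes tail latency that an average or P50 would hide. For large-object delivery, you would compare throughput (bytes transferred divided by transfer time) in the same way.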
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS.