The problem detection and problem lifecycle presentation in Dynatrace are excellent. For example, when traffic increases from ten requests per minute to fifty, or even from one thousand to twenty thousand requests per minute, I can see how each service's response time changes with the traffic and what the CPU time is for that particular request or API. This gives insight into all the components responsible for response time.
Another valuable feature is the ability to compare performance data against one week earlier or against the same time window on a previous day. This comparison capability is very helpful.
The ease of use is remarkable. While we have similar compute monitoring on the Azure side, such as Azure Monitoring and Azure App Services with compute utilization, Dynatrace links compute with services, and services with code and other components. This makes problem management, alert management, and identifying what the development team needs to fix much easier.
Through Dynatrace feedback, we were able to identify bottlenecks, plan performance user stories, optimize execution time, and make numerous corrections. Problem lifecycle management has improved, and MTTR has come down noticeably. Our MTTD, or mean time to detect, previously ran into hours, but after implementing Dynatrace, even the L1 team can detect and pinpoint bottlenecks. Not only has the time decreased, but the accuracy and quality of monitoring have improved significantly.
Problem detection and the overall quality of monitoring have improved with the help of Dynatrace. We are saving time and workforce effort, and visibility has increased: both technical and non-technical teams now have greater insight into how the application is performing.