Efficiently find and fix problems, improve application health, and deliver better customer experiences
The three foundational observability signals are metrics, logs (semi-structured event data), and traces (records of a request's flow from beginning to end across all of its dependencies). These signals are produced by the environments you monitor, such as containers, microservices, and applications. The goal is to give DevOps engineers and Site Reliability Engineers an integrated experience: surface critical events and use all of the signals together to isolate issues to the containerized applications and microservices involved, wherever they run. Amazon OpenSearch Service combines log and trace analytics into a single solution.
Observability operations
Amazon OpenSearch Service provides new capabilities to help solve your observability problems.
Features
Use open interfaces, including OpenTelemetry, Fluentd, Fluent Bit, Logstash, Data Prepper, and more, to collect, route, and transform telemetry data. You can search and analyze large volumes of semi-structured data with native capabilities. You can visualize, monitor, and alert (including anomaly detection) with the observability features of OpenSearch Dashboards, and conduct interactive analysis and visualization with Piped Processing Language (PPL), a query interface.
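As a minimal sketch of the PPL query interface, the following Python snippet sends a query to a domain's PPL REST endpoint. The domain endpoint, credentials, index pattern (logs-*), and field names (level, serviceName) are illustrative assumptions; recent OpenSearch versions expose PPL at the _plugins/_ppl path.

```python
import requests
from requests.auth import HTTPBasicAuth

# Hypothetical domain endpoint and credentials; substitute your own.
OPENSEARCH_ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"

# PPL query: count error-level log lines per service, busiest services first.
# The index pattern and field names are assumptions for illustration.
ppl_query = """
source = logs-*
| where level = 'ERROR'
| stats count() as errors by serviceName
| sort - errors
"""

response = requests.post(
    f"{OPENSEARCH_ENDPOINT}/_plugins/_ppl",
    json={"query": ppl_query},
    auth=HTTPBasicAuth("admin", "admin-password"),  # or SigV4 signing for IAM auth
    timeout=30,
)
response.raise_for_status()
print(response.json())  # columnar result: schema plus data rows
```

The response comes back in columnar form (a schema plus data rows) that you can feed into further analysis or a visualization in OpenSearch Dashboards.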
Collect
First, you need to collect data for analysis. Collection includes gathering, enriching, filtering, transforming, and normalizing data from multiple sources.
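As one illustration of collection through open interfaces, the sketch below instruments a Python service with the OpenTelemetry SDK and exports spans over OTLP. It assumes the opentelemetry-sdk and OTLP exporter packages are installed and that a collector such as Data Prepper or the OpenTelemetry Collector is listening on localhost:4317; the service and span names are examples.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Identify the service so its signals can be correlated later.
resource = Resource.create({"service.name": "checkout-service"})  # example name

# Export spans in batches to a local OTLP-compatible collector (assumed endpoint).
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Emit one span; in a real service this wraps each handled request.
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "12345")  # enrich with business context
```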
Detect
Customers often don't detect issues as soon as they begin; there is a lag between when an issue starts and when you are notified, and you want to reduce that lag as much as possible. Detection should be proactive and multi-faceted, such as alarms on telemetry. Anomaly detection is a key tool, as is the ability to link related alarms together to reduce alarm fatigue. Visualization and monitoring are also core to detection, which Amazon OpenSearch Service provides through OpenSearch Dashboards. You can even analyze the data interactively with tools like PPL.
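As a hedged sketch of proactive detection, the following creates an anomaly detector through the anomaly detection plugin's REST API to watch for unusual spikes in server errors. The domain endpoint, credentials, index pattern, field name, and intervals are assumptions for illustration.

```python
import requests
from requests.auth import HTTPBasicAuth

OPENSEARCH_ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical

# Detector definition: track the sum of HTTP 5xx responses in 10-minute buckets.
# The index pattern, field name, and intervals are illustrative assumptions.
detector = {
    "name": "http-5xx-anomalies",
    "description": "Flag unusual spikes in server errors",
    "time_field": "@timestamp",
    "indices": ["logs-*"],
    "feature_attributes": [
        {
            "feature_name": "server_errors",
            "feature_enabled": True,
            "aggregation_query": {
                "server_errors": {"sum": {"field": "http.response.5xx_count"}}
            },
        }
    ],
    "detection_interval": {"period": {"interval": 10, "unit": "Minutes"}},
    "window_delay": {"period": {"interval": 1, "unit": "Minutes"}},
}

response = requests.post(
    f"{OPENSEARCH_ENDPOINT}/_plugins/_anomaly_detection/detectors",
    json=detector,
    auth=HTTPBasicAuth("admin", "admin-password"),  # or SigV4 signing for IAM auth
    timeout=30,
)
response.raise_for_status()
print(response.json()["_id"])  # detector ID to start and attach alerts to
```

Once the detector is started, its findings can feed alerts so you are notified closer to the moment an issue begins.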
Investigate
Investigation is where people spend the most time during an operational event, and it usually involves multiple people. It is the largest contributor to Mean Time to Identify (MTTI) and Mean Time to Recovery (MTTR). Cutting through the chaos and understanding what to focus on remains a difficult task. Use logs, metrics, and traces, correlated with one another, to conduct root cause analysis quickly, whether your workloads run on AWS, on premises, or on other clouds. Collaborate on investigations and document your analysis with OpenSearch Dashboards notebooks.
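The following is a minimal sketch of one correlation step: given the ID of a slow trace found in Trace Analytics, pull every log line that carries the same traceId across services. It assumes log documents include a traceId field (as they do when emitted by OpenTelemetry-instrumented applications and routed through Data Prepper); the host, index pattern, and field names are illustrative.

```python
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],  # hypothetical
    http_auth=("admin", "admin-password"),  # or AWSV4SignerAuth for IAM auth
    use_ssl=True,
)

trace_id = "0af7651916cd43dd8448eb211c80319c"  # taken from a slow trace in Trace Analytics

# Pull every log line that participated in that request, across all services.
results = client.search(
    index="logs-*",
    body={
        "size": 100,
        "sort": [{"@timestamp": {"order": "asc"}}],
        "query": {"term": {"traceId": trace_id}},
    },
)

for hit in results["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("serviceName"), src.get("message"))
```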
Remediate
After you identify the cause of a failure, you need to remediate it. There is nothing worse than trying to fix something and making the situation worse. Don't forget to do a post-event analysis to determine how you could have prevented the failure in the first place. Document proposed changes so you can prevent the issue from recurring. Your goal should be to ensure the same issue never happens again, and, if it does, that you can identify and remediate it automatically.
Application Performance Monitoring
Application Performance Monitoring (APM) is often the first maturity level of observability. But APM alone is not enough. Is your application actually performing as expected, even if your application monitoring dashboard is all green? Are your customers getting the user experience they need? What's the usage of your application? Which parts of your application are hitting scale limits? From which geographic region are you seeing the biggest growth? Which trends can you visualize and plan for? With the right metrics in hand, you can deploy new code or change your infrastructure with confidence, because you can see the impact of those changes. Observability advances APM to answer these additional questions.