Increases efficiency, helps with customer satisfaction, and enhances collaboration
What is our primary use case?
The primary use case of Datadog within our organization is to provide a comprehensive solution that caters to the diverse needs of our internal customers. We have implemented Datadog as a centralized platform for monitoring, analyzing, and optimizing various aspects of our operations. With a robust suite of functionalities, Datadog empowers us to meet the dynamic requirements of over 40 internal customers efficiently.
Through Datadog, we offer a wide array of services to our internal stakeholders, allowing them to access and leverage its capabilities to enhance performance, troubleshoot issues, and make data-driven decisions. The tool's versatility enables different teams within our organization to monitor and track distinct metrics, such as application performance, infrastructure health, and logs, tailored to their specific requirements.
Moreover, Datadog serves as a pivotal component in our organizational ecosystem by streamlining processes, enhancing collaboration, and fostering a culture of data-driven decision-making. By harnessing the power of Datadog, our internal customers can proactively address issues, optimize resources, and ultimately improve operational efficiency across the board.
In essence, the primary use case of Datadog in our organization revolves around empowering our internal customers with a comprehensive and feature-rich solution that enables them to monitor, analyze, and optimize various aspects of our operations seamlessly and effectively. This strategic implementation of Datadog plays a vital role in enhancing our overall performance, fostering transparency, and driving continuous improvement within our organization.
How has it helped my organization?
Datadog has significantly contributed to enhancing the overall effectiveness and efficiency of our organization through various key improvements. One of the standout benefits has been the accelerated resolution of issues. By leveraging Datadog's monitoring and alerting capabilities, we have been able to swiftly detect, diagnose, and address issues before they escalate, resulting in minimized downtime and enhanced operational continuity.
Moreover, the implementation of Datadog has had a tangible positive impact on customer satisfaction. With improved visibility into our systems and applications, coupled with proactive monitoring and performance optimization, we have been able to deliver a more reliable and seamless experience to our customers. This has translated into higher customer satisfaction scores and strengthened relationships with our stakeholders.
Another notable improvement brought about by Datadog is the streamlining of our toolset. By identifying and removing multiple unused or redundant features and tools, Datadog has helped optimize our workflows and resources. This decluttering of unnecessary functionalities has not only increased operational efficiency but also streamlined our processes, allowing us to focus on the tools and features that truly add value to our operations.
In summary, Datadog's impact on our organization has been profound, enhancing our ability to resolve issues rapidly, improving customer satisfaction levels, and streamlining our toolset for increased efficiency and focus. These improvements have led to a more robust and resilient operational environment, enabling us to better meet the needs of our internal and external stakeholders.
What is most valuable?
Within our organization, we have found the Agents feature in Datadog to be exceptionally valuable due to its rich set of functionalities and capabilities. The Agents play a crucial role in our monitoring and data collection processes, providing a comprehensive and reliable means to gather crucial performance metrics and insights across our systems and applications.
One of the key reasons why the agents feature stands out as particularly valuable is its versatility. The Agents offer a wide range of monitoring and data collection options, allowing us to capture diverse metrics and performance data with precision. This flexibility enables us to tailor our monitoring strategy to meet the specific needs of different teams and use cases within our organization.
Moreover, the agents feature in Datadog enhances the overall observability of our infrastructure and applications. By deploying Agents strategically across our environment, we can gather real-time metrics, logs, and traces, enabling us to monitor the health, performance, and behavior of our systems comprehensively. This deep level of observability empowers us to proactively identify issues, optimize performance, and make informed decisions based on accurate and timely data.
Furthermore, the agents feature in Datadog plays a pivotal role in driving actionable insights and facilitating efficient troubleshooting. With the detailed data collected by the Agents, we can perform in-depth analysis, detect anomalies, and troubleshoot issues quickly and effectively. This proactive approach to monitoring and analysis ultimately enhances our operational efficiency and resilience.
In essence, the agents feature in Datadog stands out as a valuable asset within our organization due to its robust functionality, versatility, and role in providing comprehensive monitoring and observability capabilities. By leveraging the power of the Agents feature, we can effectively monitor, analyze, and optimize our systems and applications to ensure seamless operations and performance excellence.
What needs improvement?
In assessing areas for potential improvement, one key aspect where Datadog could enhance its service is in the realm of billing CSV reports. Presently, the billing CSV reports provide insights into billing-related information yet are somewhat limited in functionality, typically offering reports with only three columns. Expanding the capabilities of the billing CSV reports to include more detailed and customizable information would greatly benefit users by allowing them to gain a deeper understanding of their usage, costs, and billing trends within Datadog.
Additionally, in considering features for inclusion in the next release of Datadog, the development of more robust and customizable billing CSV reports could be a significant enhancement. By allowing users to tailor their billing reports to specific metrics, timeframes, and parameters of interest, Datadog could provide greater transparency and control over billing data, enabling users to make informed decisions regarding resource allocation, cost optimization, and budget planning.
Moreover, the inclusion of features such as cost forecasting, budget tracking, and customizable alerts related to billing thresholds could further empower users to manage their expenses effectively and proactively monitor and control costs within Datadog. These additions would not only enhance user experience and satisfaction but also contribute to a more holistic and actionable approach to financial management within the Datadog platform.
By refining the functionality of billing CSV reports and incorporating advanced features for cost analysis, forecasting, and monitoring, Datadog can elevate its service offering and provide users with enhanced tools for optimizing their usage, expenses, and financial oversight within the platform.
For how long have I used the solution?
I've used the solution for over three years.
What do I think about the scalability of the solution?
Datadog is easy to scale. However, pricing scales with usage, so be sure to measure what you need and avoid pushing all logs into the solution, or your price will skyrocket quickly.
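One way to avoid shipping every log is to filter at the agent itself, so low-value lines never leave the host. A minimal sketch of an agent-side exclusion rule in datadog.yaml; the rule name and pattern below are illustrative, not from our actual setup:

```yaml
# datadog.yaml -- drop low-value logs on the host so they are never ingested
logs_enabled: true
logs_config:
  processing_rules:
    - type: exclude_at_match
      name: drop_debug_and_healthchecks   # illustrative rule name
      pattern: "DEBUG|GET /healthz"       # illustrative pattern
```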
Which solution did I use previously and why did I switch?
We use multiple APM tools to have both price and value correlations relevant to the teams using them.
What's my experience with pricing, setup cost, and licensing?
Request a test account during the POC phase to determine if the tool is the right fit; all providers do that for free.
Which other solutions did I evaluate?
We ran POCs with more than five products. I can't name them due to the related NDA.
Which deployment model are you using for this solution?
Public Cloud
Easy, more reliable, and transparent monitoring
What is our primary use case?
We use the solution to monitor and investigate issues with production services at work. We periodically review the service catalog view for our various applications, and I use it to identify any anomalies in service metrics, changes in user behavior evident via API calls, and/or spikes in errors.
We use monitors to trigger alerts for on-call engineers to act upon. The monitors have set thresholds for request latency, error rates, and throughput.
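As a rough illustration of the kind of monitor we mean, here is a sketch using Datadog's Python API client; the service name, threshold values, and Slack handle are hypothetical placeholders, not our production configuration:

```python
# Sketch: a metric-alert monitor on request latency that notifies on-call via Slack.
from datadog import initialize, api

initialize(api_key="<API_KEY>", app_key="<APP_KEY>")

api.Monitor.create(
    type="metric alert",
    # Alert when the 5-minute average request duration exceeds 500 ms.
    query="avg(last_5m):avg:trace.http.request.duration{service:checkout} > 0.5",
    name="[checkout] avg request latency above 500 ms",  # hypothetical service
    message="Request latency is elevated on checkout. @slack-oncall-alerts",
    tags=["team:platform"],
    options={"thresholds": {"critical": 0.5, "warning": 0.3}},
)
```

Similar monitors on error rates and throughput round out the on-call coverage described above.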
We also use automated rules to block bad actors based on request volume or patterns.
How has it helped my organization?
Datadog has made setting up monitors easier, more reliable, and more transparent. This has helped standardize our on-call process and set all of our on-call engineers up for success.
It has also standardized the way we evaluate issues with our applications by encouraging all teams to use the service catalog.
It makes it easier for our platforms and QA teams to get other engineering teams up to speed with managing their own applications' performance.
Overall, Datadog has been very helpful for us.
What is most valuable?
The service catalog view is very helpful for periodic reviews of our application. It has also standardized the way we evaluate issues with our applications. Having one page with an easy-to-scan view of app metrics, error patterns, package vulnerabilities, etc., is very helpful and reduces friction for our full-stack engineers.
Monitors have also been very valuable when setting up our on-call processes. It makes it easy to set up and adjust alerting to keep our teams aware of anything going wrong.
What needs improvement?
Datadog is great overall. One thing to improve would be making it easier to see common patterns across traces. I sometimes end up in a trace but have a hard time finding the common features of errors/requests similar to that trace. This could be easier to get to; then again, it may simply be an education issue on our part.
Another thing that could be improved is that the service list page sometimes refreshes slowly, and I accidentally click the wrong environment because the sort order changes late.
For how long have I used the solution?
I've used the solution for about a year.
What do I think about the stability of the solution?
It is very stable. I have not seen any issues with Datadog.
What do I think about the scalability of the solution?
How are customer service and support?
I've had no specific experience with technical support.
How would you rate customer service and support?
Which solution did I use previously and why did I switch?
We used Honeycomb before. We switched since Datadog offered more tooling.
How was the initial setup?
Each application has been easy to instrument.
What about the implementation team?
We implemented the solution in-house.
What was our ROI?
Engineers save an unquantifiable amount of time by having one standard view for all applications and monitors.
What's my experience with pricing, setup cost, and licensing?
I am not exposed to this aspect of Datadog.
Which other solutions did I evaluate?
We did not evaluate other options.
Which deployment model are you using for this solution?
Public Cloud
Very good custom metrics, dashboards, and alerts
What is our primary use case?
Our primary use case for Datadog involves utilizing its dashboards, monitors, and alerts to monitor several key components of our infrastructure.
We track the performance of AWS-managed Airflow pipelines, focusing on metrics like data freshness, data volume, pipeline success rates, and overall performance.
In addition, we monitor Looker dashboard performance to ensure data is processed efficiently. Database performance is also closely tracked, allowing us to address any potential issues proactively. This setup provides comprehensive observability and ensures that our systems operate smoothly.
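For metrics Datadog doesn't collect out of the box, pipeline tasks can emit custom metrics through DogStatsD on the local agent. A minimal sketch, assuming a locally running agent; the metric and tag names are illustrative, not our actual configuration:

```python
# Sketch: emitting pipeline-health metrics (freshness, volume, success) via DogStatsD.
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)

def report_pipeline_run(pipeline: str, rows_loaded: int, lag_seconds: float, ok: bool) -> None:
    tags = [f"pipeline:{pipeline}"]
    statsd.gauge("etl.data_freshness_seconds", lag_seconds, tags=tags)   # data freshness
    statsd.increment("etl.rows_loaded", value=rows_loaded, tags=tags)    # data volume
    status = "success" if ok else "failure"
    statsd.increment("etl.runs", tags=tags + [f"status:{status}"])       # success rate
```

Dashboards and monitors can then be built on these series like any other Datadog metric.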
How has it helped my organization?
Datadog has significantly improved our organization by providing a centralized platform to monitor all our key metrics across various systems. This unified observability has streamlined our ability to oversee infrastructure, applications, and databases from a single location.
Furthermore, the ability to set custom alerts has been invaluable, allowing us to receive real-time notifications when any system degradation occurs. This proactive monitoring has enhanced our ability to respond swiftly to issues, reducing downtime and improving overall system reliability. As a result, Datadog has contributed to increased operational efficiency and minimized potential risks to our services.
What is most valuable?
The most valuable features we’ve found in Datadog are its custom metrics, dashboards, and alerts. The ability to create custom metrics allows us to track specific performance indicators that are critical to our operations, giving us greater control and insights into system behavior.
The dashboards provide a comprehensive and visually intuitive way to monitor all our key data points in real-time, making it easier to spot trends and potential issues. Additionally, the alerting system ensures we are promptly notified of any system anomalies or degradations, enabling us to take immediate action to prevent downtime.
Beyond the product features, Datadog’s customer support has been incredibly timely and helpful, resolving any issues quickly and ensuring minimal disruption to our workflow. This combination of features and support has made Datadog an essential tool in our environment.
What needs improvement?
One key improvement we would like to see in a future Datadog release is the inclusion of certain metrics that are currently unavailable. Specifically, the ability to monitor CPU and memory utilization of AWS-managed Airflow workers, schedulers, and web servers would be highly beneficial for our organization. These metrics are critical for understanding the performance and resource usage of our Airflow infrastructure, and having them directly in Datadog would provide a more comprehensive view of our system’s health. This would enable us to diagnose issues faster, optimize resource allocation, and improve overall system performance. Including these metrics in Datadog would greatly enhance its utility for teams working with AWS-managed Airflow.
For how long have I used the solution?
I've used the solution for four months.
What do I think about the stability of the solution?
The stability of Datadog has been excellent. We have not encountered any significant issues so far.
The platform performs reliably, and we have experienced minimal disruptions or downtime. This stability has been crucial for maintaining consistent monitoring and ensuring that our observability needs are met without interruption.
What do I think about the scalability of the solution?
Datadog is generally scalable, allowing us to handle and display thousands of custom metrics efficiently. However, we’ve encountered some limitations in the table visualization view, particularly when working with around 10,000 data points. In those cases, the search functionality doesn’t always return all valid results, which can hinder detailed analysis.
How are customer service and support?
Datadog's customer support plays a crucial role in easing the initial setup process. Their team is proactive in assisting with metric configuration, providing valuable examples, and helping us navigate the setup challenges effectively. This support significantly mitigates the complexity of the initial setup.
Which solution did I use previously and why did I switch?
We used New Relic before.
How was the initial setup?
The initial setup of Datadog can be somewhat complex, primarily due to the learning curve associated with configuring each metric field correctly for optimal data visualization. It often requires careful attention to detail and a good understanding of each option to achieve the desired graphs and insights.
What about the implementation team?
We implemented the solution in-house.
Good centralized pipeline tracking and error logging with very good performance
What is our primary use case?
Our primary use case is custom and vendor-supplied web application log aggregation, performance tracing and alerting.
We run a mix of AWS EC2, Azure serverless, and colocated VMWare servers to support higher education web applications.
Managing a hybrid multi-cloud solution across hundreds of applications is always a challenge.
Datadog agents on each web host and native integrations with GitHub, AWS, and Azure get all of our instrumentation and error data in one place for easy analysis and monitoring.
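For context, the per-host agent setup amounts to a small config file plus the platform integrations. A minimal sketch of a datadog.yaml (standard Agent options; the tag values shown are illustrative):

```yaml
# datadog.yaml -- minimal per-host agent configuration
api_key: <YOUR_API_KEY>
site: datadoghq.com
tags:
  - env:production     # illustrative tags
  - role:web
logs_enabled: true     # collect and forward logs from this host
apm_config:
  enabled: true        # accept traces from instrumented apps
```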
How has it helped my organization?
Using Datadog across all of our apps, we were able to consolidate a number of alerting and error-tracking apps, and Datadog ties them all together in cohesive dashboards.
Whether the app is vendor-supplied or we built it ourselves, the depth of tracing, profiling, and hooking into logs is all obtainable and tunable. Both legacy .NET Framework and Windows Event Viewer and cutting-edge .NET Core with streaming logs all work.
The breadth of coverage for any app type or situation is really incredible. It feels like there's nothing we can't monitor.
What is most valuable?
When it comes to Datadog, several features have proven particularly valuable. For example, the centralized pipeline tracking and error logging provide a comprehensive view of our development and deployment processes, making it much easier to identify and resolve issues quickly.
Synthetic testing has been a game-changer, allowing us to catch potential problems before they impact real users.
Real user monitoring gives us invaluable insights into actual user experiences, helping us prioritize improvements where they matter most. And the ability to create custom dashboards has been incredibly useful, allowing us to visualize key metrics and KPIs in a way that makes sense for different teams and stakeholders.
Together, these features form a powerful toolkit that helps us maintain high performance and reliability across our applications and infrastructure, ultimately leading to better user satisfaction and more efficient operations.
What needs improvement?
They need an expansion of the Android and iOS apps to provide a simplified CI/CD pipeline history view.
I like the idea of monitoring on the go. That said, it seems the options are still a bit limited out of the box.
While the documentation is very good considering all the frameworks and technology Datadog covers, there are areas - specifically .NET Profiling and Tracing of IIS hosted apps - that need a lot of focus to pick up on the key details needed.
In some cases, the screenshots don't match the text as updates are made. I spent longer than I should have figuring out how to correlate logs to traces, mostly related to environment variables.
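For anyone hitting the same wall: in our case the correlation came down to the .NET tracer's unified-service-tagging environment variables. A hedged sketch (the service and version values are placeholders, not our real ones):

```ini
# Environment variables for the Datadog .NET tracer
DD_ENV=production
DD_SERVICE=campus-portal    # placeholder service name
DD_VERSION=1.4.2            # placeholder version
DD_LOGS_INJECTION=true      # stamps dd.trace_id / dd.span_id into supported loggers
```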
For how long have I used the solution?
I've used the solution for about three years.
What do I think about the stability of the solution?
We have been impressed with the uptime and the clean, light resource usage of the agents.
What do I think about the scalability of the solution?
The solution has been very scalable and very customizable.
How are customer service and support?
Support is always helpful in tuning our committed costs and alerting us when we start spending outside the on-demand budget.
Which solution did I use previously and why did I switch?
We used a mix of a custom error email system, SolarWinds, UptimeRobot, and GitHub Actions. We switched to find one platform that could give deep app visibility regardless of Linux, Windows, or containers, cloud or on-prem hosted.
How was the initial setup?
The implementation is generally simple. That said, .NET Profiling of IIS and aligning logs to traces and profiles was a challenge.
What about the implementation team?
The solution was implemented in-house.
What was our ROI?
Our ROI has been significant time saved by the development team assessing bugs and performance issues.
What's my experience with pricing, setup cost, and licensing?
Set up live trials to assess cost scaling. Small decisions around how monitors are used can impact cost scaling.
Which other solutions did I evaluate?
New Relic was considered. LogicMonitor was chosen over Datadog for our network and campus server management use cases.
What other advice do I have?
We are excited to explore the new offerings around LLM further and continue to expand our presence in Datadog.
Which deployment model are you using for this solution?
Hybrid Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Microsoft Azure
Consolidates alerts, offers comprehensive views, and has synthetic testing
What is our primary use case?
Our primary use case is custom and vendor-supplied web application log aggregation, performance tracing and alerting.
We run a mix of AWS EC2, Azure serverless, and colocated VMWare servers to support higher education web applications.
We're managing a hybrid multi-cloud solution across hundreds of applications, which is always a challenge. There are Datadog agents on each web host and native integrations with GitHub, AWS, and Azure, which gets all of our instrumentation and error data in one place for easy analysis and monitoring.
How has it helped my organization?
Through the use of Datadog across all of our apps, we were able to consolidate a number of alerting and error-tracking apps, and Datadog ties them all together in cohesive dashboards. Whether the app is vendor-supplied or we built it ourselves, the depth of tracing, profiling, and hooking into logs is all obtainable and tunable. Both legacy .NET Framework and Windows Event Viewer and cutting-edge .NET Core with streaming logs all work. The breadth of coverage for any app type or situation is really incredible. It feels like there's nothing we can't monitor.
What is most valuable?
When it comes to Datadog, several features have proven particularly valuable.
The centralized pipeline tracking and error logging provide a comprehensive view of our development and deployment processes, making it much easier to identify and resolve issues quickly.
Synthetic testing has been a game-changer, allowing us to catch potential problems before they impact real users. Real user monitoring gives us invaluable insights into actual user experiences, helping us prioritize improvements where they matter most. And the ability to create custom dashboards has been incredibly useful, allowing us to visualize key metrics and KPIs in a way that makes sense for different teams and stakeholders.
Together, these features form a powerful toolkit that helps us maintain high performance and reliability across our applications and infrastructure, ultimately leading to better user satisfaction and more efficient operations.
What needs improvement?
I'd like to see an expansion of the Android and iOS apps to have a simplified CI/CD pipeline history view.
I like the idea of monitoring on the go; however, it seems the options are still a bit limited out of the box. While the documentation is very good considering all the frameworks and technology Datadog covers, there are areas - specifically .NET Profiling and Tracing of IIS-hosted apps - that need a lot of focus to pick up on the key details needed.
Sometimes, the screenshots don't match the text as updates are made. I spent longer than I should have figuring out how to correlate logs to traces, mostly related to environment variables.
For how long have I used the solution?
I've used the solution for about three years.
What do I think about the stability of the solution?
We have been impressed with the uptime and clean and light resource usage of the agents.
What do I think about the scalability of the solution?
The product is very scalable and very customizable.
How are customer service and support?
Technical support is always helpful in tuning our committed costs and alerting us when we start spending outside the on-demand budget.
Which solution did I use previously and why did I switch?
We used a mix of a custom error email system, SolarWinds, UptimeRobot, and GitHub Actions. We switched to find one platform that could give deep app visibility regardless of Linux, Windows, or containers, cloud or on-prem hosted.
How was the initial setup?
The setup is generally simple. .NET Profiling of IIS and aligning logs to traces and profiles was a challenge.
What about the implementation team?
We implemented the solution in-house.
What was our ROI?
ROI is reflected in significant time saved by the development team assessing bugs and performance issues.
What's my experience with pricing, setup cost, and licensing?
Set up live trials to assess cost scaling. Small decisions around how monitors are used can impact cost scaling.
Which other solutions did I evaluate?
New Relic was considered. LogicMonitor was chosen over Datadog for our network and campus server management use cases.
What other advice do I have?
We're excited to explore the new offerings around LLM further and continue to expand our presence in Datadog.
Which deployment model are you using for this solution?
Hybrid Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Microsoft Azure
Good synthetic testing, centralized pipeline tracking and error logging
What is our primary use case?
Our primary use case is custom and vendor-supplied web application log aggregation, performance tracing and alerting.
We run a mix of AWS EC2, Azure serverless, and colocated VMWare servers to support higher education web applications.
Managing a hybrid multi-cloud solution across hundreds of applications is always a challenge. Datadog agents on each web host and native integrations with GitHub, AWS, and Azure get all of our instrumentation and error data in one place for easy analysis and monitoring.
How has it helped my organization?
Through the use of Datadog across all of our apps, we were able to consolidate a number of alerting and error-tracking apps, and Datadog ties them all together in cohesive dashboards. Whether the app is vendor-supplied or we built it ourselves, the depth of tracing, profiling, and hooking into logs is all obtainable and tunable. Both legacy .NET Framework and Windows Event Viewer and cutting-edge .NET Core with streaming logs all work. The breadth of coverage for any app type or situation is really incredible. It feels like there's nothing we can't monitor.
What is most valuable?
When it comes to Datadog, several features have proven particularly valuable.
The centralized pipeline tracking and error logging provide a comprehensive view of our development and deployment processes, making it much easier to identify and resolve issues quickly.
Synthetic testing has been a game-changer, allowing us to catch potential problems before they impact real users. Real user monitoring gives us invaluable insights into actual user experiences, helping us prioritize improvements where they matter most. And the ability to create custom dashboards has been incredibly useful, allowing us to visualize key metrics and KPIs in a way that makes sense for different teams and stakeholders.
Together, these features form a powerful toolkit that helps us maintain high performance and reliability across our applications and infrastructure, ultimately leading to better user satisfaction and more efficient operations.
What needs improvement?
I'd like to see an expansion of the Android and iOS apps to have a simplified CI/CD pipeline history view. I like the idea of monitoring on the go; however, it seems the options are still a bit limited out of the box.
While the documentation is very good considering all the frameworks and technology Datadog covers, there are areas - specifically .NET Profiling and Tracing of IIS-hosted apps - that need a lot of focus to pick up on the key details needed. In some cases, the screenshots don't match the text as updates are made. I feel I spent longer than I should have figuring out how to correlate logs to traces, mostly related to environment variables.
For how long have I used the solution?
I've used the solution for about three years.
What do I think about the stability of the solution?
We have been impressed with the uptime and clean and light resource usage of the agents.
What do I think about the scalability of the solution?
The solution is very scalable and very customizable.
How are customer service and support?
Sales service is always helpful in tuning our committed costs and alerting us when we start spending outside the on-demand budget.
Which solution did I use previously and why did I switch?
We used a mix of a custom error email system, SolarWinds, UptimeRobot, and GitHub Actions. We switched to find one platform that could give deep app visibility regardless of Linux, Windows, or containers, cloud or on-prem hosted.
How was the initial setup?
The setup is generally simple. That said, .NET Profiling of IIS and aligning logs to traces and profiles was a challenge.
What about the implementation team?
The solution was implemented in-house.
What was our ROI?
I'd count our ROI as significant time saved by the development team assessing bugs and performance issues.
What's my experience with pricing, setup cost, and licensing?
It's a good idea to set up live trials to assess cost scaling. Small decisions around how monitors are used can have big impacts on cost scaling.
Which other solutions did I evaluate?
New Relic was considered. LogicMonitor was chosen over Datadog for our network and campus server management use cases.
What other advice do I have?
We are excited to dig further into the new offerings around LLM and continue to grow our footprint in Datadog.
Which deployment model are you using for this solution?
Hybrid Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Microsoft Azure
Easy dashboard creation and alarm monitoring with a good ROI
What is our primary use case?
We use the solution to monitor production service uptime/downtime, latency, and log storage.
Our entire monitoring infrastructure runs off Datadog, so all our alarms are configured with it. We also use it for tracing API performance and identifying the biggest regression points.
Finally, we use it to compare performance on SEO metrics versus competitors. This is a primary use case, as SEO dictates our position in Google traffic, which generates a large portion of our customer views, so it is a vital part of the business that we rely on Datadog for.
How has it helped my organization?
The product improved the organization primarily by providing consistent data with virtually zero downtime. This was a problem we had with an old provider. It also eased an otherwise massive migration involving hundreds of alarms.
The training provided was crucial, along with having a dedicated team that can forward our requests to and from Datadog efficiently. Without that, we may have never transitioned to Datadog in the first place since it is always hard to lead a migration for an entire company.
What is most valuable?
The API tracing has been massive for debugging latency regressions and improving the performance of our least performant APIs. Through tracing, we managed to find the slowest step of an API, improve its latency, and iterate on the process until we had our desired timings. This is important for improving our SEO, as LCP and INP are taken directly from the numbers we see in Datadog for our API timings.
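To illustrate the step-level tracing described above: wrapping each stage of a request in its own span makes the slowest step stand out in the flame graph. A sketch using Datadog's ddtrace library for Python; the function, span, and service names are hypothetical, not our actual code:

```python
# Sketch: per-step custom spans so the slowest stage of an API is visible in APM.
from ddtrace import tracer

def run_db_query(query):   # placeholder for the real data-access step
    return [query]

def rank_results(rows):    # placeholder for the real ranking step
    return rows

def search_listings(query):
    # Each step gets its own child span; the widest child in the flame
    # graph is the latency bottleneck to attack first.
    with tracer.trace("listing.search", service="listings-api", resource="GET /search"):
        with tracer.trace("listing.search.db"):
            rows = run_db_query(query)
        with tracer.trace("listing.search.rank"):
            return rank_results(rows)
```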
The ease of dashboard creation and alarm monitoring has helped us not only stay competitive but be industry leaders in performance.
What needs improvement?
The product can be improved by allowing APIs to be grouped using variables; that way, any API whose path contains a unique ID could be grouped together.
Furthermore, SEO monitoring has been crucial for us yet difficult to set up, as comparing alarms between us and competitors is a tough feat. The data is not always consistent, so we have been experimenting with ways to remove the noise in Datadog, but it's been taking a while.
Finally, Datadog should have a feature that reports stale alarms based on activity.
For how long have I used the solution?
I've used the solution for six months.
What do I think about the stability of the solution?
It's very stable, and we have not experienced any downtime issues with Datadog.
What do I think about the scalability of the solution?
Datadog scales well; it has not seemed to slow as our volume has grown.
How are customer service and support?
We haven't talked to the support team.
How would you rate customer service and support?
Which solution did I use previously and why did I switch?
We switched to Datadog as we used to have a provider with very inconsistent logging. Our alarms would often fail to fire when our services were down because the provider had a logging problem.
How was the initial setup?
The initial setup was somewhat complex due to the built-in monitoring for services. It is not always comprehensive and has to be studied, as opposed to other metrics platforms that simply surface all your endpoints, which you can then trace with Grafana.
What about the implementation team?
We implemented the solution through an in-house team.
What was our ROI?
What's my experience with pricing, setup cost, and licensing?
Users must try to understand the way Datadog alarms work off the bat so that they can minimize the requirements for expensive features like custom metrics.
It can sometimes be tempting to use them; however, it is not always necessary as you migrate to Datadog, as it is a provider that treats alarms somewhat differently than you may be used to.
Which other solutions did I evaluate?
We have evaluated New Relic, Grafana, Splunk, and many more in our quest to find the best monitoring provider.
Which deployment model are you using for this solution?
Hybrid Cloud
A great tool with an easy setup and helpful error logs
What is our primary use case?
We currently have an error monitor to monitor errors on our prod environment. Once we hit a certain threshold, we get an alert on Slack. This helps address issues the moment they happen before our users notice.
We also utilize synthetic tests on many pages of our site. They're easy to set up and are great for pinpointing when a shipped bug takes down a less-visited page that we wouldn't otherwise be immediately aware of. It's a great extra check to make sure the code we ship is free of bugs.
How has it helped my organization?
The synthetic tests have been invaluable. We use them to check various pages and ensure functionality across multiple areas. Furthermore, our error monitoring alerts have been crucial in letting us know of problems the moment they pop up.
Datadog has been a great tool, and all of our teams utilize many of its features. We have regular mob sessions where we look at our Datadog error logs and see what we can address as a team. It's been great at providing more insight into our users and logging errors that can be fixed.
What is most valuable?
The error logs have been super helpful in breaking down issues affecting our users. Our monitors let us know once we hit a certain threshold as well, which is good for momentary blips and issues with third-party providers or rollouts that we have in the works. Just last week, we had a roll-out where various features were broken due to a change in our backend API. Our Datadog logs instantly notified us of the issues, and we could troubleshoot everything much more easily than just testing blind. This was crucial to a successful rollout.
What needs improvement?
I honestly can't think of anything that can be improved. We've started using more and more features from our Datadog account and are really grateful for all of the different ways we can track and monitor our site.
We did have an issue where a synthetic test was set up before the holiday break, and we were quickly charged a significant amount. Our team worked with Datadog, and they were able to help us out since it was inadvertent on our end and was a user error. That was greatly appreciated and something that helped start our relationship with the Datadog team.
For how long have I used the solution?
We've been using Datadog for several months. We started with the synthetic tests and now use it for error handling and in many other ways.
What do I think about the stability of the solution?
Stability has been great. We've had no issues so far.
What do I think about the scalability of the solution?
The solution is very easy to scale. We've used it on multiple clients.
How are customer service and support?
We had a dev who had set up a synthetic test that was running every five minutes in every single region over the holiday break last year. The Datadog team was great and very understanding, and we were able to work this out with them.
How would you rate customer service and support?
Which solution did I use previously and why did I switch?
We didn't have any previous solution. At a previous company, I used Sentry; however, I find Datadog to be much easier, plus the inclusion of synthetic tests is awesome.
How was the initial setup?
The documentation was great and our setup was easy.
What about the implementation team?
We implemented the solution in-house.
What was our ROI?
This has had a great ROI as we've been able to address critical bugs that have been found via our Datadog tools.
What's my experience with pricing, setup cost, and licensing?
The setup cost was minimal. The documentation is great and the product is very easy to set up.
Which other solutions did I evaluate?
We also looked at other providers and settled on Datadog. It's been great to use across all our clients.
Which deployment model are you using for this solution?
Private Cloud
Unified platform with customizable dashboards and AI-driven insights
What is our primary use case?
Our primary use case for this solution is comprehensive cloud monitoring across our entire infrastructure and application stack.
We operate in a multi-cloud environment, utilizing services from AWS, Azure, and Google Cloud Platform.
Our applications are predominantly containerized and run on Kubernetes clusters. We have a microservices architecture with dozens of services communicating via REST APIs and message queues.
The solution helps us monitor the performance, availability, and resource utilization of our cloud resources, databases, application servers, and front-end applications.
It's essential for maintaining high availability, optimizing costs, and ensuring a smooth user experience for our global customer base. We particularly rely on it for real-time monitoring, alerting, and troubleshooting of production issues.
How has it helped my organization?
Datadog has significantly improved our organization by providing us with great visibility across the entire application stack. This enhanced observability has allowed us to detect and resolve issues faster, often before they impact our end-users.
The unified platform has streamlined our monitoring processes, replacing several disparate tools we previously used. This consolidation has improved team collaboration and reduced context-switching for our DevOps engineers.
The customizable dashboards have made it easier to share relevant metrics with different stakeholders, from developers to C-level executives. We've seen a marked decrease in our mean time to resolution (MTTR) for incidents, and the historical data has been invaluable for capacity planning and performance optimization.
Additionally, the AI-driven insights have helped us proactively identify potential issues and optimize our infrastructure costs.
What is most valuable?
We've found the Application Performance Monitoring (APM) feature to be the most valuable, as it provides great visibility on trace-level data. This granular insight allows us to pinpoint performance bottlenecks and optimize our code more effectively.
The distributed tracing capability has been particularly useful in our microservices environment, helping us understand the flow of requests across different services and identify latency issues.
Additionally, the log management and analytics features have greatly improved our ability to troubleshoot issues by correlating logs with metrics and traces.
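As one concrete illustration of that correlation (a Python sketch only, assuming a ddtrace-instrumented service; our stack spans several languages), Datadog's ddtrace library can patch the standard logging module so every record carries the active trace and span IDs:

```python
# Sketch: injecting trace IDs into Python logs for log/trace correlation.
import logging
from ddtrace import patch

patch(logging=True)  # adds dd.trace_id / dd.span_id attributes to log records

FORMAT = ("%(asctime)s %(levelname)s [%(name)s] "
          "[dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] "
          "%(message)s")
logging.basicConfig(format=FORMAT, level=logging.INFO)

log = logging.getLogger(__name__)
log.info("order processed")  # carries the active trace's IDs when emitted inside a span
```

With the IDs in place, Log Management can pivot from a log line straight to the corresponding APM trace and back.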
The infrastructure monitoring capabilities, especially for our Kubernetes clusters, have helped us optimize resource allocation and reduce costs.
What needs improvement?
While Datadog is an excellent monitoring solution, it could be improved by building more features to replace alerting apps like OpsGenie and PagerDuty. Specifically, we'd like to see more advanced incident management capabilities integrated directly into the platform. This could include features like sophisticated on-call scheduling, escalation policies, and incident response workflows.
Additionally, we'd appreciate more customizable machine learning-driven anomaly detection to help us identify unusual patterns more accurately. Improved support for serverless architectures, particularly for monitoring and tracing AWS Lambda functions, would be beneficial.
Enhanced security monitoring and threat detection capabilities would also be valuable, potentially reducing our reliance on separate security information and event management (SIEM) tools.
For how long have I used the solution?
I've used the solution for two years.