Great logging, session replays, and alerting
What is our primary use case?
Our primary use cases include:
- Alert on errors customers encounter in our product. We've set up log-based alerts that go to Slack to tell us when a certain error threshold is hit (a rough sketch of one such monitor follows this list).
- Investigate slow page load times. We have pages in our app that are loading slowly and the logs help us figure out which queries are taking the longest time.
- Metrics. We collect metrics on product usage.
- Session replays. We watch session replays to see what a user was doing when a page took a long time to load or hit an error. This is helpful.
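For the Slack alerting mentioned above, the configuration looks roughly like the sketch below. This is a minimal example assuming the datadogpy client; the service name, threshold, and Slack handle are placeholders rather than our real values.

    from datadog import initialize, api

    # Placeholder keys -- in practice these come from a secrets manager.
    initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

    # Fire when more than 50 error logs are seen for the service in 5 minutes,
    # and notify the on-call Slack channel via Datadog's Slack integration.
    api.Monitor.create(
        type="log alert",
        query='logs("status:error service:web-app").index("*").rollup("count").last("5m") > 50',
        name="[web-app] error threshold crossed",
        message="Error volume is above threshold. @slack-oncall-alerts",
        tags=["team:product"],
        options={"thresholds": {"critical": 50}},
    )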
How has it helped my organization?
It's helped us find bugs that customers are experiencing before they're reported to us. Customers often don't report errors, so catching them ourselves lets us investigate before more users run into them.
Datadog has helped us investigate slow page load times and even see the specific queries that take the longest.
Logging lets us see the context around an error. For example, we can see whether a backend service had an error before it surfaced on the frontend.
Dashboards are helpful to review occasionally for a higher-level overview of what's happening.
What is most valuable?
The most valuable aspects include:
- Logging. Being able to view detailed logs helps debug issues.
- Session replays. They are helpful for seeing what a customer was doing before they saw an error or had a slow page load.
- Alerting. This is an important part of our on-call process: alerts go to Slack when an error threshold is crossed. Alerts/monitors are easy to configure so they only fire when we want them to.
- Dashboards. It's helpful to pull up dashboards that show our most common errors or page performance. It's a good way to see how the app is performing from a bird's-eye view.
What needs improvement?
The UI has a lot going on. It should be simpler and have a better way to onboard someone new to using Datadog.
The log query syntax can be confusing. Usually, I filter by finding a facet in a log and choosing to filter on it, but I wouldn't know how to write the filter myself.
The monitor/alert syntax is also somewhat hard to understand.
Overall, it should be easier to learn how to use the product while you're using the product. Perhaps tooltips or a link to learn more about whichever section you're using would help.
For how long have I used the solution?
I've used the solution for two years.
Which solution did I use previously and why did I switch?
We did not previously use a different solution.
Which other solutions did I evaluate?
We did not evaluate other options.
Lots of features with a rapid log search and an easy setup process
What is our primary use case?
We use the solution for logs, infrastructure metrics, and APM. We have many different teams using it across both product and data engineering.
How has it helped my organization?
The solution has improved our observability by giving us rapid log search, correlation between hosts, logs, and APM, and tons of features in one place.
What is most valuable?
I enjoy the rapid log search. It's such a pleasure to quickly find what you're looking for. The ease of graph building is also nice, and MUCH easier than Prometheus.
What needs improvement?
It is far too easy to run up huge unexpected costs. The billing model is not flexible enough to handle cases where you temporarily have thousands of nodes. It is not cost-effective for monitoring big data jobs. We had to switch to open-source Grafana plus Prometheus for those.
It would be cool to have an OpenTelemetry agent that automatically instruments everything for APM in the next release.
For how long have I used the solution?
I've used the solution for three years.
What do I think about the stability of the solution?
I'd rate the stability ten out of ten.
What do I think about the scalability of the solution?
I'd rate the scalability ten out of ten.
Which solution did I use previously and why did I switch?
We did not previously use a different solution.
How was the initial setup?
The setup is very straightforward. You just install the Helm chart, and boom, you're done.
What about the implementation team?
We handled the setup in-house.
What's my experience with pricing, setup cost, and licensing?
Be careful about pricing. Make sure you understand the billing model, and be aware that there are multiple billing models available. Set up alarms to alert you to cost overruns before they get too bad.
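As a concrete example of that advice, a usage alarm can be scripted. This is only a sketch assuming the datadogpy client; the datadog.estimated_usage.* metric name and the threshold are assumptions to swap for whichever usage metrics and budget apply to your account.

    from datadog import initialize, api

    initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

    # Warn before a usage spike becomes a billing surprise. The metric name is
    # an assumption -- check which datadog.estimated_usage.* metrics your
    # account exposes and pick a threshold that matches your budget.
    api.Monitor.create(
        type="metric alert",
        query="sum(last_4h):sum:datadog.estimated_usage.logs.ingested_events{*} > 500000000",
        name="Log ingestion trending over budget",
        message="Estimated log ingestion is running hot. @slack-platform-costs",
    )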
Which other solutions did I evaluate?
We've never evaluated other solutions.
What other advice do I have?
It's a great product. However, you have to pay for quality.
Which deployment model are you using for this solution?
Public Cloud
Great dashboards, lots of integrations, and helps trace data between components
What is our primary use case?
We use the product for instrumentation, observability, monitoring, and alerting of our system.
We have multiple environments and a variety of pieces of infrastructure, including servers, databases, load balancers, caches, etc., and we need to be able to monitor all of these pieces while also retaining visibility into how they interact with each other.
Tracing data between components and user interactions that trigger these data flows is particularly important for understanding where problems arise and how to resolve them quickly.
How has it helped my organization?
It provides a lot of options for integrations and tooling to observe what is happening within the system, making diagnosis and triage easier/faster.
Each user can set up their own dashboards and share them with other users on the team. We can create monitors based on various patterns that we care about and have them notify us through platforms such as Slack or PagerDuty when an event triggers an alert.
Being able to rapidly become aware of problems based on the symptoms being observed, with entry points into the tool to quickly identify where to investigate further, is important for our team and our users.
What is most valuable?
The most valuable aspects of the solution include:
- Log search, to help triage specific problems that we get notified about (whether by alerts we have configured or by users who have contacted us).
- APM traces, to view how user interactions trace through the various layers of our infrastructure and services so we can reproduce and identify the source of problems.
- General performance/system dashboards, to regularly monitor for stability or deviation.
- Alerting, to be automatically informed when a problem occurs.
We also use the incident tools for tracking production incidents.
What needs improvement?
In some ways, the tool has a pretty steep learning curve. Discovering the various capabilities available, then learning how to utilize them for particular use cases can be challenging. Thankfully, there is a good amount of documentation with some good examples (more are always welcome), and support is very helpful.
While Datadog has started adding more correlation mapping between services and parts of our system, it is still tricky to understand the ultimate root cause when multiple views/components spike. Additionally, there are lots of views and insights available that are hard to find or discover. One of the best ways to discover them is to just click around a lot and get familiar with the views that are useful, but that takes time and isn't ideal in the middle of fighting a fire.
For how long have I used the solution?
I've used the solution for about four years.
What do I think about the stability of the solution?
What do I think about the scalability of the solution?
It seems to scale well. Performance for aggregating or searching is usually very fast.
How are customer service and support?
Technical support is helpful and pretty responsive.
Which solution did I use previously and why did I switch?
We did not use a different solution.
What was our ROI?
It's hard to say what ROI would be as I have not managed our system without it to compare to.
What's my experience with pricing, setup cost, and licensing?
I don't manage licensing.
Which other solutions did I evaluate?
We did not evaluate other options.
What other advice do I have?
It's a great tool with new features and improvements continuously being added. It is not simple to use or set up; however, if you have the right personnel, you can get a lot of value from what Datadog has to offer.
Which deployment model are you using for this solution?
Public Cloud
Prompt support with good logging and helps with standardization
What is our primary use case?
Internally, our primary usage of Datadog centers on APM/tracing, logging, RUM (real user monitoring), synthetic testing of service/application health and state, overall general monitoring and observability, and custom dashboards for aggregate observability. We are also increasingly leveraging the more recent service catalog feature.
We have several microservices, several databases, and a few web applications (both external- and internal-facing), all contained within several environments ranging from dev and SIT to UAT and production.
How has it helped my organization?
Datadog has had a massive impact on our department. Before, we had loose logging dumped into a sea of GCP logs with haphazard custom solutions for traceability between logs and network calls. Datadog has helped standardize and normalize our processes around observability while providing fantastic tools for aggregating insight around what is monitored regularly, all wrapped in an easy-to-use UI.
Additionally, a range of user types exists within our department, each benefiting from Datadog in its own way. DevOps leverages it to easily manage infra, developers leverage it to monitor and debug services and applications, and the business leverages it for statistics.
What is most valuable?
Personally, I've found the RUM (real user monitoring) to be above and beyond what I've worked with before. Client-side monitoring has always gotten the short end of the stick, but the information collected and the ease of instrumentation provided by Datadog are second to none.
Having a live dynamic service map is also one of my favourite features; it provides real-time insights into which services/applications are connected to which.
We are also investigating the new API catalog feature set, which I believe will provide a high-value impact for real-time documentation and information about all of our shared microservices that other dev teams can use.
What needs improvement?
In production, we intend to use trace IDs generated by RUM to attach to support tickets when a user experiences a traceable network error. We want to display this trace ID to the user so that, if they contact us about a specific issue, they can give us the exact ID that was shown to them. Currently, this is not possible out of the box on the client side without inventing our own solution for capturing these trace IDs, such as shimming the native fetch or returning the ID from the service response.
For how long have I used the solution?
I've used the solution for approximately two years across our department, with around a year of it being used in practice and fully integrated into our systems.
What do I think about the stability of the solution?
Aside from one very brief bad update to RUM from the Datadog team, which broke the native 'fetch' for Node (RUM used to -- and may still -- modify the global 'fetch') and which was resolved quickly, Datadog as a whole has been highly stable.
What do I think about the scalability of the solution?
It's easy to implement and scale, provided there's a solid IaC solution in place to integrate across your system.
How are customer service and support?
The Datadog support team is prompt and helpful when we've submitted tickets. When their support team has been unsure, they've reached out internally to the relevant SME to help answer our questions.
Which solution did I use previously and why did I switch?
I've personally dabbled with some other open-source observability and monitoring solutions; however, prior to Datadog, our department did not have any solutions other than log dumps to GCP.
How was the initial setup?
The initial setup was straightforward in my own experience helping to integrate at the application and service levels; our DevOps team handled most of the infra process with minimal complaints.
What about the implementation team?
We handled the solution in-house.
What's my experience with pricing, setup cost, and licensing?
I personally am not involved in the decision around costing; however, I am aware that when we first set up Datadog, we explicitly configured our services/applications with a master switch for enabling Datadog integration, so we can dynamically enable or disable targeted environments as needed, since costs are incurred on a per-service basis for APM, logging, etc.
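To illustrate that master switch, here is a minimal sketch assuming a Python service instrumented with the ddtrace library; the DATADOG_ENABLED flag name is a hypothetical stand-in for whatever switch your configuration defines, not a Datadog setting.

    import os

    from ddtrace import patch_all, tracer

    # Hypothetical master switch: one env var per environment decides whether
    # this service reports to Datadog at all, so costly environments can be
    # turned off without a code change.
    if os.getenv("DATADOG_ENABLED", "false").lower() == "true":
        patch_all()  # auto-instrument supported libraries for APM
    else:
        tracer.enabled = False  # drop traces entirely in disabled environments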
Which other solutions did I evaluate?
I was not involved in the decision-making regarding the evaluation of other options.
What other advice do I have?
I highly recommend Datadog, and I would explore it for my own individual projects in the future, provided the cost is within reason. Otherwise, I would highly recommend it for any medium-to-large-sized org.
Which deployment model are you using for this solution?
Private Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Google
Good query filtering and dashboards to make finding data easier
What is our primary use case?
We use the solution for monitoring microservices in a complex AWS-based cloud service.
The system comprises about a dozen services. It processes real-time data from tens of thousands of internet-connected devices that provide telemetry. Thousands of user interactions are processed, along with real-time reporting of device data over transaction intervals that can last for hours or even days. The need to view and filter data over periods of several months is not uncommon.
Datadog is used for daily monitoring and R&D research as well as during incident response.
How has it helped my organization?
The query filtering and improved search abilities offered by Datadog are by far superior to other solutions we were using, such as AWS CloudWatch. We find that we can simply get at the data we need more quickly and easily than before. This has made responding to incidents or investigating issues a much more productive endeavour. We simply have fewer roadblocks in the way when we need to "get at the data". It is also used occasionally to extract data while researching requirements for new features.
What is most valuable?
Datadog dashboards are used to provide a holistic view of the system across many services. Customizable views, as well as the ability to "dive in" when we see something anomalous, have improved the workflow for handling incidents.
Log filtering, pattern detection and grouping, and extracting values from logs for plotting on graphs all help to improve our ability to visualize what is going on in the system. The custom facets allow us to tailor the solution to fit our specific needs.
What needs improvement?
There are some areas on log filtering screens where the user interface can take some getting used to. Perhaps having the option for a simple vs advanced user interface would be helpful in making new or less experienced users comfortable with making their own custom queries.
Maybe it is just how our system is configured, but finding the valid values for a key/value pair is not always intuitively obvious to me. While there is a pop-up window with historical or previously used values and saved views from previous query runs, I don't see a simple list or enumeration of the valid values for keys that have such a restriction.
For how long have I used the solution?
I've used the solution for one year.
What do I think about the stability of the solution?
The solution is very stable.
What do I think about the scalability of the solution?
The product is reasonably scalable, although costs can get out of hand if you aren't careful.
How are customer service and support?
I have not had the need to contact support.
Which solution did I use previously and why did I switch?
We did use AWS CloudWatch. It was too awkward to use effectively and simply didn't have the features we needed.
How was the initial setup?
We had someone experienced do the initial setup. However, with a little training, it wasn't too bad for the rest of us.
What about the implementation team?
We handled the setup in-house.
What's my experience with pricing, setup cost, and licensing?
Take care with how you extract custom values from logs. It's easy to do things without much thought to make your life easier and not realize how expensive it has become compared to where you started.
Which other solutions did I evaluate?
I'm not aware of evaluating other solutions.
What other advice do I have?
Overall I recommend the solution. Just be mindful of costs.
Which deployment model are you using for this solution?
Public Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Good centralization with helpful monitoring and streamlined investigation capabilities
What is our primary use case?
We utilize Datadog to monitor both some legacy products and a new PaaS solution that we are building out here at Icario, which is a microservice architecture.
All of our infrastructure is in AWS, with very few legacy pieces remaining on Rackspace. For the PaaS, we mainly utilize the K8s orchestrator, which instruments the services deployed there with the APM libraries and also gives us infra information about the cluster.
For legacy systems, we mainly utilize the Agent or the AWS integration, with APM in specific places. For now, we monitor mainly prod in legacy and the full scope in the PaaS.
How has it helped my organization?
Datadog has greatly reduced the time needed to investigate issues by putting everything into a single pane of glass, allowing us to get ahead of infra- and app-based issues before they affect the customer experience with our products.
Outside of that, the ease of management, deployment of agents, integrations, etc. has greatly helped the teams. There isn't much legwork needed by the devs to manage or deploy Datadog into their stacks, thanks to Terraform, pipelines, and the orchestrator. All in all, it has been an improvement.
What is most valuable?
The two most valuable aspects are the Terraform provider for Datadog and the K8s orchestrator. People don't take that into account when buying into a tooling product like Datadog in this age, where scalability, management, and ease of implementation are key. Other tools not having good IaC products or options is a ball drop. Orchestration for the tool's agent is good. Not having to use another tool to manage the agents and config files in multiple places/instances is a huge win!
What needs improvement?
A big problem with Datadog is the billing. They need to make the billing more user-friendly. I know it like the back of my hand at this point, yet trying to explain to the C-suite why costs went up or are what they are is many times more complicated than it needs to be. I can't even say "why" due to the lack of metadata tied to billing. For instance, with the AWS integration host ingestion, I can't say "this month these hosts got added, and that's what caused the cost to go up." The billing visibility really needs to be resolved!
For how long have I used the solution?
I've used the solution for more than four years.
What do I think about the stability of the solution?
Datadog has always been extremely stable, with outages really only ever creating delays, never actual downtime of the service, which is amazing and impressive.
What do I think about the scalability of the solution?
The solution is very scalable if implemented right and not on top of complicated architecture.
How are customer service and support?
Support is excellent. They are always looking for a resolution, and a ticket is never left unresolved unless the feature just can't exist or isn't currently possible.
Which solution did I use previously and why did I switch?
We did have New Relic, Datadog, Sumo Logic, Pingdom, and some other custom or third-party tooling. We switched because we wanted everything to be in a single pane and because Datadog is a better solution than the competitors.
How was the initial setup?
For us, setup is a mixed bag, as we support legacy apps and architectures as well as a new microservice architecture. That being said, legacy is somewhat complex just due to the nature of those app stacks and the underlying infra, configuration, and setup. Microservices are a breeze and straightforward for most of the out-of-the-box stuff.
What about the implementation team?
Our team of SRE, platform, and cloud engineers implemented the solution.
What was our ROI?
I can't really speak to ROI; however, from my perspective, we definitely get our money's worth from the product.
What's my experience with pricing, setup cost, and licensing?
Users just really need to make sure they stay on top of costs and don't let all of the engineers do as they please. Billing with Datadog can get out of hand if you let it. Not everything needs to be monitored.
Which other solutions did I evaluate?
We didn't really need to evaluate other options.
Which deployment model are you using for this solution?
Hybrid Cloud
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Amazon Web Services (AWS)
Improves monitoring and observability with actionable alerts
What is our primary use case?
We are using Datadog to improve our monitoring and observability so we can hopefully improve our customer experience and reliability.
I have been using Datadog to build better, more actionable alerts to help teams across the enterprise. Also, by using Datadog, we are hoping to improve observability into our apps, and we are taking advantage of this process to improve our tagging strategy so teams can troubleshoot incidents faster with a much-reduced mean time to resolve.
We use a lot of different resources, like Kubernetes, App Gateway, and Cosmos DB, just to name a few.
How has it helped my organization?
As soon as we started implementing Datadog into our cloud environment, people really liked how it looked and how easy it was to navigate. We could see more data in our Kubernetes environments than we ever could before.
Some people liked how the logs were color-coded, so it was easy to see what kind of log you were looking at. The ease of making dashboards has also been well received as a benefit.
People have commented that there is so much information that it takes time to digest it, get used to what you are looking at, and find what you are looking for.
What is most valuable?
The selection of monitors is a big feature I have been working with. Previously, with Azure Monitor, we couldn't do a whole lot with its alerts. The log alerts could sometimes take a while to ingest. Also, we couldn't do any math with the metrics we received from logs to make better alerts.
The metric alerts are OK but still very limited. With Datadog, we can make a wide range of different monitors that we can tweak in real time, because there is a graph of the data as you create the alert, which is very beneficial. The ease of making dashboards has saved a lot of people a lot of time. There are no KQL queries to put together the information you are looking for, and the ability to pin any info you see to a dashboard is very convenient.
RUM is another feature we are looking forward to using this upcoming tax season, as we will have a front-row view into what frustrates customers or where things go wrong in their process of using our site.
What needs improvement?
The PagerDuty integration could be a little bit better. If there were a way to format the monitors for different incident management software, that would be awesome. As of right now, it takes a lot of manipulating of PagerDuty to get the monitors from Datadog to populate all the fields we want in PagerDuty.
I love the fact you can query data without using something like KQL. However, it would also be helpful if there was a way to convert a complex KQL query into Datadog to be able to retrieve the same data - especially for very specific scenarios that some app teams may want to look for.
For how long have I used the solution?
I've used the solution for about two years.
Which solution did I use previously and why did I switch?
We previously used Azure Monitor, App Insights, and Log Analytics. We switched because it was a lot for developers and SREs to switch between three screens to try to troubleshoot, and when you add in the slow load times from Azure, it can take a while to get things done.
What's my experience with pricing, setup cost, and licensing?
I would advise taking a close look at logging costs, man-hours needed, and the amount of time it takes for people to get comfortable navigating Datadog because there is so much information that it can be overwhelming to narrow down what you need.
Which other solutions did I evaluate?
We did evaluate Dynatrace and looked into New Relic before settling on Datadog.
Which deployment model are you using for this solution?
Hybrid Cloud
Good for log ingestion and analyzing logs with easy searchability of data
What is our primary use case?
We use Datadog as our main log ingestion source, and Datadog is one of the first places we go to for analyzing logs.
This is especially true for cases of debugging, monitoring, and alerting on errors and incidents, as we send traffic logs from K8s, Amazon Web Services, and many other services at our company to Datadog. In addition, many products and teams at our company have dashboards for monitoring statistics (sometimes based on these logs directly; other times we set up queries for these metrics) to alert us if there are any errors or health issues.
How has it helped my organization?
Overall, at my company, Datadog has made it easy to search for and look up logs impressively quickly over a large volume of logs.
It seamlessly allows you to set up monitoring and alerting directly from log queries, which is convenient and makes for a good user experience. While there is a bit of a learning curve, given enough time, the majority of my company now uses Datadog as the first place to check when there are errors or bugs.
However, the cost aspect of Datadog is tricky to gauge because it's related to usage, and thus, it is hard to tell the relative value of Datadog year to year.
What is most valuable?
The feature I've found most valuable is the log search feature. It's set up with our ingestion to be a quick one-stop shop, is reliable and quick, and seamlessly integrates into building custom monitors and alerts based on log volume and timeframes.
As a result, it's easy to leverage this to triage bugs and errors, since we can pinpoint the logs around the time that they occur and get metadata/context around the issue. This is the main feature that I use the most in my workflow with Datadog to help debug and triage issues.
What needs improvement?
More helpful log search keywords/tips would improve Datadog's log dashboard. I recently struggled a lot to parse text from raw log lines that didn't seem to match directly with facets. There may be smart search capabilities for this, but it's not intuitive to learn how to leverage them, and I instead had to resort to a Python script to do some simple regex parsing (I was trying to parse "file:folder/*/*" from the logs and didn't seem to be able to do this in Datadog; maybe I'm just not familiar enough with the logs, but I didn't easily find resources on how to do this either).
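For what it's worth, the workaround looked something like the sketch below. It only illustrates the kind of one-off parsing described above; the regex and sample line are assumptions about the log shape, not anything Datadog provides.

    import re

    # Pull "file:<folder>/<sub>/<name>" style tokens out of raw log lines
    # exported from Datadog, since the facet search couldn't match them.
    FILE_TOKEN = re.compile(r"file:([\w.\-]+/[\w.\-]+/[\w.\-]+)")

    def extract_file_paths(raw_lines):
        paths = []
        for line in raw_lines:
            paths.extend(FILE_TOKEN.findall(line))
        return paths

    if __name__ == "__main__":
        sample = ["2024-06-01 INFO processed file:reports/daily/output.csv in 120ms"]
        print(extract_file_paths(sample))  # -> ['reports/daily/output.csv']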
For how long have I used the solution?
I've used the solution for 10 months.
What's my experience with pricing, setup cost, and licensing?
Beware that the cost will fluctuate (and it often only gets more expensive very quickly).
Good visibility into application performance, understanding of end-user behavior, and a single pane of glass view
What is our primary use case?
The primary use case for this solution is to enhance our monitoring visibility, determine the root cause of incidents, understand end-user behaviour from their point of view (RUM), and understand application performance.
Our technical environment consists of a local dev env where Datadog is not enabled, plus deployed environments that range from UAT testing with our product org to ephemeral stacks that our developers use to test their code off their own computers. We also have a mobile app where testing is performed.
How has it helped my organization?
Datadog has greatly improved our organization in many ways. Some of those ways include greater visibility into application performance, understanding of end-user behavior, and a single pane of glass view into our entire infrastructure.
Regarding visibility, our organization previously used New Relic, and when incidents or regressions happened, New Relic's query language was very hard to use. End-user behavior in RUM has improved our ability to know what to focus on. Lastly, the single pane of glass view with maneuvering between products has helped us truly understand root causes after incidents.
What is most valuable?
APM has been a top feature for us. I can speak for all developers here: they use it more often than other products. Due to a standard in tracing (even though it is customizable), engineers find it easier to walk a trace than to understand what went wrong when looking at logging.
Another feature that I find valuable, though it isn't the first one that comes to mind, is Watchdog. I have found it to be a good source for understanding anomalies and where we (as an organization) may need more monitoring coverage.
What needs improvement?
I am not 100% sure how this could be done, or whether it can be, but there has been a lot of education I've had to do to ramp developers up on the platform. This feels like the nature of the sheer growth in the number of products Datadog now offers.
When I first started using the Datadog platform, I thought a big pro of the company was that the ramp-up time was much quicker because you didn't have to learn a query language. I still believe that to be true when comparing the product to something like New Relic, though with the wide range of products Datadog now offers, it can be a bit intimidating for developers to know where to go to find what they want.
For how long have I used the solution?
I have been using the solution at my current company for almost four years, and have used it at my previous company as well.
Which solution did I use previously and why did I switch?
A while ago, we used New Relic, and we switched due to Datadog being a better product.
What about the implementation team?
We did the implementation in-house.
What's my experience with pricing, setup cost, and licensing?
The value compared to pricing is reasonable, though it can be a bit of a sticker shock to some.
Which other solutions did I evaluate?
We did not evaluate other options.
Which deployment model are you using for this solution?
Public Cloud
Easy to use with good speed and helpful dashboards
What is our primary use case?
We are using Datadog to improve our cloud monitoring and observability across our enterprise apps. We have integrated a lot of different resources into Datadog, like Kubernetes, App Gateways, App Service Environments, App Service Plans, and other Web App resources.
I will be using the monitoring and observability features of Datadog. Dashboards are used very heavily by teams and SREs. We really have seen that Datadog has already improved both our monitoring and our observability.
How has it helped my organization?
The ease and speed with which you can create a dashboard have been a huge improvement.
The different types of monitors we can create have been huge, too. We can do so many different things with monitors that we couldn't do before with our alerts.
Being able to click on a trace or log and drill down on it to see what happened has been great.
Some have found the learning curve a bit steep. That said, they are coming around slowly. There is just a lot of information to learn how to navigate.
What is most valuable?
The different types of monitors have been very valuable. We have been able to make our alerts (monitors) more actionable than we were able to previously.
Watchdog is a favorite feature among a lot of the devs. It catches things they didn't even know were an issue.
RUM is another feature a lot of us are looking forward to seeing how it can help us improve our customer experience during tax season.
We hope to enable the code review feature at some point so we can see what code caused the issue.
What needs improvement?
I would like to see the integration between PagerDuty and Datadog improved. The tags in Datadog don't match those in PagerDuty, and we have to work around it. Also, I would like the ability to replicate a KQL query in Datadog to be made easier or better.
I would like to see the alert communications to email or phones made better so we could hopefully move off PagerDuty and just use Datadog for that.
There are also a lot of features that we haven't budgeted for yet and I would like for us to be able to use them in the future.
For how long have I used the solution?
I've used the solution for about two years.
Which deployment model are you using for this solution?
Hybrid Cloud