
Overview
Datadog is a SaaS-based unified observability and security platform that provides full visibility into the health and performance of every layer of your environment at a glance. Datadog lets you tailor this insight to your stack by collecting and correlating data from more than 600 vendor-backed technologies and APM libraries, all in a single pane of glass. Monitor your underlying infrastructure, supporting services, and applications alongside security data in a single observability platform.
Prices are based on committed use per month over the total term of the agreement (the Total Expected Use).
Highlights
- Get started quickly from AWS Marketplace with our enhanced integration for account creation and setup. Turn-key integrations and an easy-to-install agent let you start monitoring all of your servers and resources in minutes.
- Quickly deploy modern monitoring and security in one powerful observability platform.
- Create actionable context to speed up resolution, reduce costs, mitigate security threats, and avoid downtime at any scale.
Details

Features and programs
- Trust Center
- Buyer guide
- Financing for AWS Marketplace purchases
- AWS PrivateLink
- Quick Launch
- Security credentials achieved (2)


Pricing
| Dimension | Description | Cost/month | Overage cost | 
|---|---|---|---|
| Infra Enterprise Hosts | Centralize your monitoring of systems and services (Per Host) | $27.00 | |
| APM Hosts | Optimize end-to-end application performance (Per APM Host) | $36.00 | |
| App Analytics | Analyze performance metrics (Per 1M Analyzed Spans / 15-day retention) | $2.04 | |
| Custom Metrics | Monitor your own custom business metrics (Per 100 Custom Metrics) | $5.00 | |
| Indexed Logs | Analyze and explore log data (Per 1M Log Events / 15-day retention) | $2.04 | |
| Ingested Logs | Ingest all your logs (Per 1GB Ingested Logs) | $0.10 | |
| Synthetics API Tests | Proactively monitor site availability (Per 10K test runs) | $6.00 | |
| Synthetics Browser Tests | Easily monitor critical user journeys (Per 1K test runs) | $15.00 | |
| Serverless Functions | Deprecated. Not available for new customers | $6.00 | |
| Fargate Tasks | Monitor your Fargate environment (Per Fargate Task) | $1.20 | |
The following dimensions are not included in the contract terms and are charged based on your usage.
| Dimension | Description | Cost/unit | 
|---|---|---|
| Custom dimension used for select private offers | Custom dimension used for select private offers | $1.00 | 
| consumption_unit | Additional Datadog Consumption Units | $0.01 | 
Vendor refund policy
Custom pricing options
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Software as a Service (SaaS)
SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.
Resources
Support
Vendor support
Contact our knowledgeable Support Engineers via email, live chat, or in-app messages.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.


FedRAMP
GDPR
HIPAA
ISO/IEC 27001
PCI DSS
SOC 2 Type 2
Standard contract
Customer reviews
Has helped centralize activity monitoring and generate detailed reports for leadership
What is our primary use case?
My main use case for Datadog is logging security signals and monitoring account activity and suspicious behavior within our company.
For monitoring suspicious behavior, we look for alerts on things like unusual sign-in locations, unusual sign-in times, or new multi-factor devices registered under unusual circumstances or from unusual locations.
In addition to that, we also look at patterns in how often individuals are prompted for MFA.
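A detection like the ones described above can be codified as a Datadog log monitor. The sketch below uses the official `datadog` Python package; the log query, event name, threshold, and notification handle are hypothetical placeholders and would need to match how your identity provider's logs actually arrive in Datadog.

```python
# Minimal sketch: alert when new MFA devices are registered at an unusual rate.
# Requires `pip install datadog` plus a Datadog API key and application key.
# The log query, threshold, and @-handle below are hypothetical placeholders.
from datadog import initialize, api

initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

monitor = api.Monitor.create(
    type="log alert",
    name="Unusual MFA device registrations",
    query='logs("@evt.name:mfa_device_registered").index("*").rollup("count").last("1h") > 3',
    message="More than 3 new MFA devices registered in the last hour. @security-team please review.",
    tags=["team:security", "managed-by:api"],
)
print(monitor["id"])
```

The same pattern works for sign-in location or sign-in time anomalies by changing the log query and threshold.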
What is most valuable?
The best features Datadog offers include the ability to generate reports very quickly and put in extensive filtering to get very specific information.
The report generation and filtering help me in my day-to-day work by assisting in generating reports for higher-ups and turning data into actionable items.
Since using Datadog, it has positively impacted our organization by giving us a one-stop shop where we can analyze multiple applications and services in one place.
Having a one-stop shop has made things easier for my team, and we have seen specific outcomes such as saving a lot of time.
What needs improvement?
Datadog could be improved if the menu system was a little clearer and less cluttered, making it easier to navigate.
Additionally, more documentation is always beneficial to have.
For how long have I used the solution?
I have been using Datadog for about three years.
What do I think about the stability of the solution?
Datadog is very stable.
What do I think about the scalability of the solution?
Its scalability is good, and it has kept up as our organization has grown or changed.
How are customer service and support?
I have not had to reach out to customer support, so I cannot comment on that experience.
How would you rate customer service and support?
Negative
Which solution did I use previously and why did I switch?
I did not previously use a different solution before Datadog.
What was our ROI?
While I don't have any specifics on money saved, I can say that it has definitely improved our efficiency overall.
What's my experience with pricing, setup cost, and licensing?
My experience with pricing, setup cost, and licensing for Datadog shows that the pricing is very fair and setup has been very simple and easy to do.
Which other solutions did I evaluate?
Before choosing Datadog, I did not evaluate other options.
What other advice do I have?
My advice to others looking into using Datadog is to read the documentation. I would rate this product a 9 out of 10.
Custom dashboards and alerts have made server issue detection faster
What is our primary use case?
My main use case for Datadog is monitoring our servers.
A specific example of how I'm using Datadog to monitor my servers is that we track request rates and latency and look for errors.
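To give a concrete sense of how request latency and error counts usually reach Datadog, here is a minimal sketch using the DogStatsD client from the `datadog` Python package. It assumes a Datadog Agent is listening locally on the default DogStatsD port; the metric names, tags, and the request handler itself are purely illustrative.

```python
# Minimal sketch: emit request latency and error counts through a local Datadog Agent.
# Assumes the Agent's DogStatsD listener is on the default 127.0.0.1:8125.
# Metric and tag names here are made up for illustration.
import time
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)

def handle_request():
    start = time.monotonic()
    try:
        ...  # real request handling would go here
    except Exception:
        statsd.increment("myapp.request.errors", tags=["service:web"])
        raise
    finally:
        elapsed_ms = (time.monotonic() - start) * 1000
        statsd.histogram("myapp.request.duration_ms", elapsed_ms, tags=["service:web"])
```

Metrics submitted this way show up alongside the host and integration metrics the Agent already collects, so the same dashboards and monitors can cover both.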
What is most valuable?
I really enjoy the user interface of Datadog, and it makes it easy to find what I need. In my opinion, the best features Datadog offers are the customizable dashboards and the Watchdog.
The customizable dashboards and Watchdog help me in my daily work because they're easy to find and easy to look at to get the information I need. Datadog has positively impacted my organization by making finding and resolving issues a lot easier and more efficient.
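Dashboards like the ones described here can also be created programmatically rather than in the UI. Below is a minimal sketch using the `datadog` Python package; the dashboard title, widget, and metric query are placeholder values.

```python
# Minimal sketch: create a simple timeseries dashboard via the Datadog API.
# Requires API and application keys; title, widget, and query are placeholders.
from datadog import initialize, api

initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

dashboard = api.Dashboard.create(
    title="Web servers - request overview",
    description="Example dashboard created via the API.",
    layout_type="ordered",
    widgets=[
        {
            "definition": {
                "type": "timeseries",
                "title": "Average request duration",
                "requests": [{"q": "avg:myapp.request.duration_ms{service:web}"}],
            }
        }
    ],
)
print(dashboard["url"])
```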
What needs improvement?
I think Datadog can be improved by continually finding errors and making things easy to see and customize.
For how long have I used the solution?
I have been using Datadog for one month.
What do I think about the stability of the solution?
Datadog is stable.
What do I think about the scalability of the solution?
Scalability has been good; it has been easy to put Datadog on each server that we want to monitor.
How are customer service and support?
I have not had to contact customer support yet, but I've heard they are great.
How would you rate customer service and support?
Neutral
Which solution did I use previously and why did I switch?
We previously used our own custom solution, but Datadog is a lot easier.
What was our ROI?
I'm not sure if I've seen a return on investment.
What's my experience with pricing, setup cost, and licensing?
My experience with pricing, setup cost, and licensing is that it was easy to find, easy to purchase, and easy to estimate.
Which other solutions did I evaluate?
I did not make the decision to evaluate other options before choosing Datadog.
What other advice do I have?
I would rate Datadog a nine out of ten.
I give it this rating because I think just catching some of the data delays and latency live could be a little bit better, but overall, I think it's been great.
I would recommend Datadog and say that it's easy to customize and find what you're looking for.
Has resolved user errors faster by reviewing behavior with replay features
What is our primary use case?
My main use case for Datadog involves working on projects related to our sales reps registering new clients. I've been using Datadog to pull up their sessions while they beta test the product we're rolling out, to see where their errors are occurring and what their behavior was leading up to them.
I can't think of all of the specific details, but there was a sales rep who was running into a particular error message through their sales registration process, and they weren't giving us a lot of specific screenshots or other error information to help us troubleshoot. I went into Datadog and looked at the timestamp and was able to look at the actual steps they took in our platform during their registration and was able to determine what the cause of that error was. I believe if I remember correctly, it was user error; they were clicking something incorrectly.
One thing I've seen in my main use case for Datadog is an option that our team can add on: the ability to track behavior based on the user ID. I'm not sure at this time whether our team has turned that on, but I do think it's a really valuable feature to have, especially with real user monitoring where you can watch the replay. Because we have so many users on our platform, the ability to filter those replay videos by user ID would be so much more helpful. Especially when we're testing a specific product we're rolling out, we start with smaller beta tests, so being able to filter by the user IDs of those in the beta test would be much more helpful than looking at every interaction in Datadog as a whole.
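When user tracking is enabled, the browser SDK attaches a user identity to each session, and sessions and replays can then be filtered by that identity in the RUM explorer or queried through the API. Below is a rough sketch using the `datadog-api-client` Python package; the user ID value and the assumption that sessions carry the standard `@usr.id` attribute are illustrative, not confirmed details of this setup.

```python
# Rough sketch: list RUM session events for a single user via the Datadog API.
# Requires `pip install datadog-api-client` and DD_API_KEY / DD_APP_KEY in the
# environment (plus DD_SITE if your org is not on datadoghq.com).
# Assumes the browser SDK sets a user, so sessions carry the @usr.id attribute.
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v2.api.rum_api import RUMApi

configuration = Configuration()
with ApiClient(configuration) as api_client:
    rum_api = RUMApi(api_client)
    response = rum_api.list_rum_events(
        filter_query="@type:session @usr.id:rep-1234",  # hypothetical beta-tester ID
        page_limit=25,
    )
    for event in response.data:
        print(event.id)
```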
What is most valuable?
The best features Datadog offers are the replay videos, which I really find super helpful as someone who works in QA. So much of testing is looking at the UI, and being able to look back at the actual visual steps that a user is taking is really valuable.
Datadog has impacted our organization positively in a major way, not just for me as a QA engineer with access to session replay, but for the whole team: all of us can access this data and see which parts of our system cause the most errors or the most user frustration. I can't speak for every other segment of the business because I don't know exactly how they use it, but given how beneficial it has been to me, I can imagine it is just as beneficial to everyone else in seeing which areas of the system cause more frustration versus less.
What needs improvement?
I'm not totally sure how Datadog can be improved. Because my use case is pretty specific, I haven't used or even really explored all of the features Datadog offers, so I don't know where the gaps are in terms of features that should be there but aren't.
I will go back to the ability to filter based on user ID, which is an option an organization has to set up; I would recommend presenting that as a first step during onboarding. As an organization grows, or if it is already large when it starts using Datadog, troubleshooting specific scenarios becomes harder when you're sorting through such a large amount of data.
For how long have I used the solution?
I have been working in this role for a little over a year now.
What do I think about the stability of the solution?
As far as I can tell, Datadog has been stable.
What do I think about the scalability of the solution?
I believe we have about 500 or so employees in our organization using our platform, and Datadog seems to be able to handle that load sufficiently, as far as I can tell. So I think scalability is good.
How are customer service and support?
I haven't had an instance where I've reached out to customer support for Datadog, so I do not know.
Which solution did I use previously and why did I switch?
I do not believe we used a different solution previously for this.
What was our ROI?
I cannot answer whether I have seen a return on investment; I'm not part of the leadership making that decision. Regarding time saved, in my specific use case as a QA engineer, Datadog probably didn't save me a ton of time, because there are so many replay videos I had to sort through to find the particular sales reps in our beta test group. That's why I think the ability to filter videos by user ID would be so much more helpful; features like that would provide a lot of time savings by letting you narrow down and filter the type of frustration or user interaction you're looking for. But in regard to the specific question, I don't think I'm qualified to answer it.
Which other solutions did I evaluate?
I was not part of the decision-making process before choosing Datadog, so I cannot speak to whether we evaluated other options.
What other advice do I have?
Right now our users are in the middle of the beta test. At the beginning of rolling the test out, I probably used the replay videos more just as the users were getting more familiar with the tool. They were probably running into more errors than they would be at this point now that they're more used to the tool. So it kind of ebbs and flows; at the beginning of a test, I'm probably using it pretty frequently and then as it goes on, probably less often.
It does help resolve issues faster, especially because our sales reps are used to working really quickly through the sales registration; they race through it, so they're more likely to accidentally click something incorrectly without fully paying attention, because they're just used to their flow. Being able to go back, watch the replay, and see that a person clicked one button when they intended to click another, or to identify the action that actually caused an error instead of going off their memory, makes a big difference.
I have not noticed any measurable outcomes in terms of a reduction in support tickets or faster resolution times since I started using Datadog. For the users in our beta test group, none of the issues came in as support tickets; they came as messages in Microsoft Teams with all the people in the beta group. We have seen fewer messages related to the beta test as people have become more familiar with the tool. Now that they know their flow during the beta test may differ from their usual flow, they send fewer messages, probably because they are being more careful or have figured out the inflection points that would result in an error.
My biggest piece of advice for others looking into using Datadog would be to use the filters based on user ID; it will save so much time in terms of troubleshooting specific error interactions or occurrences. I would also suggest having a simpler UI for people who are less technical. For example, when logging into Datadog, the dashboard is pretty overwhelming in terms of all of the bar charts and options; I think having a more simplified toggle for people who are not looking for all of the data options, and a more technical toggle for people who want more granular data, would be helpful.
I rate Datadog 10 out of 10.
Has improved incident response with better root cause visibility and supports flexible on-call scheduling
What is our primary use case?
We use Datadog for all of our observability needs and application performance monitoring. We recently transitioned our logs to Datadog. We also use it for incident management and on-call paging. We use Datadog for almost everything monitoring and observability related.
We use Datadog for figuring out the root cause of incidents. One of the more recent use cases was when we encountered a failure where one of our main microservices kept dying and couldn't give a response. Every request to it was getting a 500. We dug into some of the traces and logs, used the Kubernetes Explorer in Datadog, and found out that the application couldn't reach some metric due to its scaling. We were able to figure out the root cause because of the Kubernetes Event Explorer in Datadog. We pushed out a hotfix which restored the application to working condition.
Our incident response team leverages Datadog to page the relevant on-calls for whatever service is down that's owned by that team, so they can get the appropriate SMEs and bring the service back up. That's the most common use case for our incident response. All of our teams appreciate using Datadog On-Call for incident response because there are numerous notification settings to configure. The on-call schedules are very flexible, with overrides and different paging rules depending on the urgency of the matter at stake.
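The workflow described above (detect that a service is failing, then page the team that owns it) is typically wired up as a monitor whose notification message mentions the owning team's on-call handle. Here is a minimal sketch with the `datadog` Python package; the service name, APM error metric, threshold, and @-handle are hypothetical placeholders.

```python
# Minimal sketch: notify the owning team's on-call when a service's errors spike.
# The service name, error metric, threshold, and @-handle are placeholders;
# substitute whatever your traces and paging integration actually use.
from datadog import initialize, api

initialize(api_key="<DD_API_KEY>", app_key="<DD_APP_KEY>")

api.Monitor.create(
    type="metric alert",
    name="[checkout-api] elevated error rate",
    query="sum(last_5m):sum:trace.http.request.errors{service:checkout-api}.as_count() > 50",
    message=(
        "checkout-api is returning errors at an elevated rate. "
        "@oncall-checkout please check recent deploys and traces."
    ),
    options={"thresholds": {"critical": 50}},
    tags=["team:checkout", "managed-by:api"],
)
```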
What is most valuable?
As an administrator of Datadog, I really appreciate Fleet Automation. I also value the overall APM page for each service, including the default dashboards on the service page, because they provide exactly what you need to see in terms of requests, errors, and duration (latency). These two are probably my favorite features because the service page gives a perfect look at everything you'd want to see for a service immediately, and then you can scroll down and see more infrastructure-specific metrics. If it's a Java app, you can see JVM metrics. Fleet Automation really helps me as an administrator because I can see exactly what's going on with each of my agents.
My SRE team is responsible for upgrading and maintaining the agents, and with Fleet Automation, we've been able to leverage remote agent upgrades, which is fantastic because we no longer need to deploy to our servers individually, saving us considerable time. We can see all the integration errors on Fleet Automation, which is super helpful for our product teams to figure out why certain metrics aren't showing up when enabling certain integrations. On Fleet Automation, we can see each variant of the Datadog configuration we have on each host, which is very useful as we can try to synchronize all of them to the same version and configuration.
The Kubernetes Explorer in Datadog is particularly valuable. It gives us a look at each live pod's YAML, and we can see specific metrics related to each pod. I appreciate the ability to add custom Kubernetes objects to the Orchestration Explorer. It gives our team an easier time seeing pods without having to use kubectl, because sometimes you run into permission errors with that, and sometimes it's just quicker than using kubectl.
Our teams use Datadog more than they used their old observability tool. They're more production-aware, conscious of how their changes are impacting customers, how the changes they make to their application speed up or slow down their app, and the overall request flow. It's a much more developer-friendly tool than other observability tools.
What needs improvement?
Datadog needs to introduce more hard limits on cost. If we see a huge log spike, administrators should have more control over what happens in order to save costs. If a service starts logging extensively, I want the ability to automatically direct those logs into the cheapest log bucket. This should be the case with many offerings. If we're seeing too much APM usage, we need to be aware of it and able to stop it, rather than having administrators reach out to specific teams.
Datadog has become significantly slower over the last year. They could improve performance at the risk of slowing down feature work. More resources need to go into Fleet Automation because we face many problems with things such as the Ansible role to install Datadog in non-containerized hosts.
We mainly want to see performance improvements, less time spent looking at costs, the ability to trust that costs will stay reasonable, and an easier way to manage our agents. It is such a powerful tool with much potential on the horizon, but cost control, performance, and agent management need improvement. The main issues are with the administrative side rather than the actual application.
For how long have I used the solution?
I have been using Datadog for about a year and nine months.
What do I think about the stability of the solution?
We face a high number of issues with niche-specific outages that appear to be quite common. AWS metrics being delayed is something that Datadog posts on their status page. We face a relatively high number of Datadog issues, but they tend to be small and limited in scope.
What do I think about the scalability of the solution?
We have not experienced any scalability issues.
How are customer service and support?
I have interacted with support. Support quality varies significantly. Some support agents are fantastic, but some tickets take months to resolve.
How would you rate customer service and support?
Neutral
Which solution did I use previously and why did I switch?
We used Dynatrace previously, and I believe the switch was due to cost, but that decision was outside my scope as I'm not a decision-maker in that situation.
How was the initial setup?
The initial setup in Kubernetes is not particularly difficult.
What other advice do I have?
I cannot definitively say MTTR has improved as I don't have access to those numbers and don't want to make misleading statements. Developers use it significantly more than our old observability tool. We've seen some cost savings, but we have to be significantly more cost-aware with Datadog than with our previous observability tool because there's more fluctuation and variation in the cost.
One pain point is that it has caused us to spend too much time thinking about the bill. Understand that while it is an administrative hassle, it is very rewarding to developers.
On a scale of 1-10, I rate Datadog an 8 out of 10.