Room for improvement with competitive landscape balanced by useful monitoring features
What is our primary use case?
The use cases for Splunk Real User Monitoring (RUM) are almost the same as for Dynatrace: customers who want infrastructure monitoring, application monitoring, or front-end monitoring of real users are the ones actually using Splunk Real User Monitoring (RUM).
What is most valuable?
The product's best advantages and features are notable. Splunk Real User Monitoring (RUM) seemed to have advantages in 2015, but when Cisco acquired AppDynamics, they stopped upgrading for a few years. Their advantage now is that they are the only solution that can monitor on top of SAP. Apart from that, Datadog and Dynatrace have superior features. Splunk Real User Monitoring (RUM) has one great advantage: if customers have SAP users, they can monitor their SAP applications.
In terms of features, the UI, and the ability to actually do real user monitoring, the picture the UI gives and the monitoring itself are pretty good, but not as good as Dynatrace, Catchpoint, Datadog, or New Relic.
What needs improvement?
My thoughts on room for improvement relate to Cisco trying to build its own cloud solution when it already had AppDynamics; after acquiring Splunk, they changed their roadmap. It's inevitable that when a big company with resources changes its roadmap, it takes time to establish well-structured manuals and guidelines. I believe they will get better, but right now, they need some improvement.
For how long have I used the solution?
I have been selling Splunk Real User Monitoring (RUM) for almost the same duration as NetScout; it's been 3 years.
What was my experience with deployment of the solution?
In terms of deployment, Dynatrace is the easiest by far. For Datadog and Splunk Real User Monitoring (RUM), you have to work with scripts, meaning you need an engineer for deployment; Dynatrace is straightforward and really easy.
What do I think about the stability of the solution?
Regarding the stability of RUM, stability is also a problem. During the POC, we ask customers not to deploy RUM or EUM scripts in production; we encourage them to test in staging because, even if those solutions are certified and tested, customers have many different environments. Unlike NetScout or regular APM agents, RUM has many problems during the POC phase because customer environments vary widely.
What do I think about the scalability of the solution?
In terms of scalability, the solution is scalable enough, but sometimes implementation is hard. However, it doesn't take more than a month. The issue is mainly about pricing because if they want to monitor more, it costs money. Those who have a great pricing plan or volume table will gain an advantage.
How are customer service and support?
Support from Splunk is not very helpful because Splunk doesn't have a dedicated APM team; they only have one APM engineer in Korea.
If I were to rate technical support for Splunk Real User Monitoring (RUM) from 1 to 10 points, I would give it a score of 5.5. I appreciate the engineer, but since there's only one person doing everything, it's not easy. Cisco doesn't hire enough personnel, and I heard they don't have enough staff in APAC. Datadog and Dynatrace, especially Datadog, cover a huge market share in Asia, and that's why other competitors are not investing enough at this time.
How would you rate customer service and support?
Neutral
What about the implementation team?
My clients, such as Hyundai and POSCO, are using RUM; most of them use Dynatrace. For those using Splunk Real User Monitoring (RUM), it took time to implement, and our engineer had to try really hard.
What was our ROI?
Regarding whether the solution provides ROI or savings: Dynatrace, Datadog, and Splunk all have a low price per transaction, but the cost depends on the volume of data and sessions. People struggle to comprehend the total budget, which might be massive. Customers need to understand this, and I'm not sure if it's possible; maybe some companies have just decided to take the entire market and cut prices, but anyone working in front-end monitoring should know the market price to see the true value of end-user monitoring.
What other advice do I have?
I am the general manager of this company and a team leader focusing on Dynatrace, Cisco AppDynamics, and NetScout. We used to handle some other APM solutions, but not extensively; we only focus on those three.
We try to sell Splunk User Behavior Analytics, but we haven't been able to sell it so far. Splunk used to be a great company in Korea, but after Cisco acquired it, things became complicated: when two companies merge, there are issues over who will sell the product, the sales personnel or the engineers. Their organization is not fully merged yet, which leads to fewer marketing and sales activities.
Integration in Splunk Real User Monitoring is actually another problem because of vacancies left when they merged the companies and reorganized; the manuals are not complete. When you work on integration, Datadog and Dynatrace have issues too, but you can work from their manuals, and AppDynamics has many people and plenty of support. My satisfaction level is around six, compared to Dynatrace, which might be a seven or eight. The innovation is not complete, but they are working on it.
When using real user monitoring to analyze performance bottlenecks, Catchpoint gathers extensive data, and Dynatrace also gathers substantial data. However, sometimes all that data is unnecessary, because the people in charge of front-end monitoring need to see problems immediately and report them to their superiors. A simple UI is crucial; that's perhaps why Datadog has the advantage with its comprehensive UI.
Regarding pricing for RUM, it becomes another problem because Splunk, Dynatrace, and Datadog don't ask customers to use RUM for every single session, yet many large customers want to monitor every session, resulting in a big gap. For example, Korean Air, which is among the top five airlines in the world, used Datadog but changed to Dynatrace because, even with a limited budget, Dynatrace managed to persuade them to monitor just 10% of user sessions to reduce the budget, and they are satisfied with the results.
This review rates Splunk Real User Monitoring (RUM) 5 out of 10.
Provides real-time visibility for improved operational performance
What is our primary use case?
We are using Splunk Observability Cloud for monitoring and troubleshooting purposes in real time; within it we have infrastructure monitoring, application monitoring, Log Observer, and RUM and Synthetic Monitoring. For troubleshooting purposes, we install the OpenTelemetry Collector agent on some of the servers, including Intel, Windows, and UNIX servers.
I have also worked on the agent upgrade from version 0.103 to 0.1113, which is ongoing right now.
How has it helped my organization?
We are also using the dashboards and detectors in Splunk Observability Cloud. For client needs, we create dashboards, reports, and detectors. For the detectors, we mostly work on host-down situations: when a server is down, we troubleshoot using the infrastructure host-down detector and identify the root cause of the failure, such as why it was down or not reporting to Splunk Observability Cloud. We find the root cause by using that detector when the alert gets triggered and cleared.
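For context, here is a rough, hypothetical sketch of how a custom detector could be created programmatically with Python against the Splunk Observability Cloud (SignalFx) v2 API; the host-down detector described above is a built-in one, so a simple per-host CPU threshold is shown instead. The realm, token, metric name, and payload shape are my assumptions, not details from this review, and should be checked against current documentation.

```python
# Hypothetical sketch: create a simple threshold detector via the SignalFx v2 API.
# Realm, token, metric, and payload fields are placeholders/assumptions.
import requests

REALM = "us1"                        # assumption: your organization's realm
ACCESS_TOKEN = "<org-access-token>"  # an org token with API permissions

# SignalFlow program: average CPU per host, alert when it crosses a threshold.
program_text = (
    "A = data('cpu.utilization').mean(by=['host']).publish(label='A')\n"
    "detect(when(A > 90)).publish('CPU utilization is high')"
)

payload = {
    "name": "High CPU per host (example)",
    "programText": program_text,
    "rules": [
        {"detectLabel": "CPU utilization is high", "severity": "Critical"}
    ],
}

resp = requests.post(
    f"https://api.{REALM}.signalfx.com/v2/detector",
    headers={"X-SF-TOKEN": ACCESS_TOKEN, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Created detector:", resp.json().get("id"))
```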
We use the tracing features in the Splunk Observability Cloud, primarily for application performance monitoring. It helps us figure out service maps for root cause analysis. It provides visibility and helps address blind spots in data collection.
Splunk Observability Cloud offers a transparent, customized tool with real-time visibility. We use AWS, ReactJS, Python, and Java for tracing. It helps create customized dashboards and service maps based on customer requirements. It has AI that automatically generates visualizations, allowing us to create more reports based on customer needs. My seniors primarily work on creating dashboards and reports and on monitoring.
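As an illustration of the tracing side mentioned above, the following is a minimal sketch of how a Python service might emit traces that land in Splunk APM; it assumes an OpenTelemetry Collector (for example, the Splunk distribution of it) is listening locally on the default OTLP gRPC port, and the service name and attributes are made up for the example rather than taken from this review.

```python
# Minimal sketch: export traces from a Python service to a local OTel Collector,
# which forwards them to Splunk Observability Cloud. Names/endpoints are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

resource = Resource.create({
    "service.name": "payments-api",            # hypothetical service name
    "deployment.environment": "production",
})
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("checkout") as span:
    span.set_attribute("customer.tier", "gold")  # attributes become filterable tags
    # ... application work ...
```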
Their technical team is performing well. About a year ago, Splunk Observability Cloud was slow and lacked features compared to now. It didn't provide exact details for any searched server in the metrics, but the situation has improved significantly, and we can now retrieve complete data on when servers were down or up.
What is most valuable?
The best feature in Splunk Observability Cloud is the metrics; I can see logs or anything related to the servers or services we want to monitor, and the metrics work well. It provides exact details and offers unified visibility across logs, metrics, and traces.
What needs improvement?
In Splunk Observability Cloud, I notice room for improvement in synthetic monitoring. It does not provide output based on server names. It only gives a response when we input a URL. I'm not sure if this issue is specific to my organization, but it would be beneficial if server details could be retrieved directly in synthetic monitoring.
For how long have I used the solution?
I have been using this solution for two years and two months.
What do I think about the stability of the solution?
I would rate its stability an eight out of ten.
What do I think about the scalability of the solution?
I would rate its scalability an eight out of ten.
Around 100+ users access Splunk Observability Cloud in my organization, including the cloud SRE team, Windows Intel team, Linux team, and AD team.
My client base primarily consists of enterprise financial services.
How are customer service and support?
If any issues arise, we can raise a vendor case, and resolutions are provided in a timely and accurate manner.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
In my organization, we also work with Sentry, Datadog, PagerDuty, and Dynatrace. Splunk Observability Cloud offers more features than Datadog, which also provides APM monitoring, log observer, and metrics, but does not match the feature set of Splunk Observability Cloud.
How was the initial setup?
It is a bit complicated. For deploying Splunk Observability Cloud, we first need an access token, after which we connect to our AWS Cloud account and provide the access token. We must set up CloudWatch or AWS Lambda and forward the metrics or logs from all sources to AWS.
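To illustrate where the access token comes into play, here is a hedged Python sketch that pushes a single test datapoint to the ingest endpoint; the realm, endpoint path, and field names reflect my understanding of the SignalFx ingest API rather than anything stated in this review, so treat them as assumptions to verify.

```python
# Hedged sketch: send one test gauge datapoint to the Splunk Observability ingest
# endpoint using an access token. Realm, metric name, and dimensions are placeholders.
import requests

REALM = "us1"                         # assumption: your org's realm
ACCESS_TOKEN = "<ingest-access-token>"

resp = requests.post(
    f"https://ingest.{REALM}.signalfx.com/v2/datapoint",
    headers={"X-SF-Token": ACCESS_TOKEN, "Content-Type": "application/json"},
    json={
        "gauge": [
            {"metric": "deployment.smoke_test", "value": 1,
             "dimensions": {"host": "example-host", "team": "cloud-sre"}}
        ]
    },
    timeout=30,
)
resp.raise_for_status()
print("Ingest accepted the datapoint:", resp.status_code)
```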
The implementation took about 45 days.
What was our ROI?
The return on investment varies based on requirements; for smaller tasks, we can leverage our team's capabilities effectively, so I can estimate around a 20% efficiency gain.
Currently, we are providing outputs to clients within the required time frames. If a client requests any dashboard, logs, APM monitoring, or synthetic monitoring, we have been able to deliver output on time, achieving approximately an 80% efficiency in response.
What's my experience with pricing, setup cost, and licensing?
Splunk Observability Cloud is expensive.
What other advice do I have?
For operational performance, we set up monitoring within Splunk Observability Cloud for most servers with agent installation. We upgraded the OpenTelemetry Collector from version 0.82 to 0.103, and then again to a newer version, which has enhanced visibility, expanded our use cases, and improved operations.
My impressions of Splunk Observability Cloud for focusing on business-critical initiatives are positive. I manage six tools, but Splunk Observability Cloud is one of my favorites, and I aspire to build my career specializing in it because it has great features, more attention in the market, and is a relatively new tool with promising growth.
I would recommend Splunk Observability Cloud to other users for its accurate data fetching, dashboard creation, report generation, and synthetic monitoring capabilities.
I would rate Splunk Observability Cloud a nine out of ten.
Adopting global standards enhances data collection and simplifies monitoring
What is our primary use case?
The solution involves observability in general, such as Application Performance Monitoring, and generally addresses digital applications, web applications, sites, and mobile applications. I worked with it in two companies: one in the energy sector and one in the hotel sector.
The Splunk teams helped us with data collection, instrumentation, and many other options.
How has it helped my organization?
The testing and monitoring of infrastructure is useful. We also use it for many metrics and can use it effectively for troubleshooting and for detection. It's very helpful.
What is most valuable?
With Splunk Observability Cloud, I appreciate working with OpenTelemetry. The OpenTelemetry standards are especially useful for collecting data such as traces, metrics, and logs, and Splunk respects those standards, which is beneficial. Many clients work with AWS and the cloud in general across multiple solutions such as Datadog, Dynatrace, and Splunk, so working with the OpenTelemetry standard is very advantageous. Splunk Observability Cloud is very simple for users in general, including developers, DevOps, and data teams. It's more straightforward compared to Dynatrace.
There are many out-of-the-box solutions proposed by Splunk, such as dashboards for AWS instances, EC2, Fargate, and Lambda. They are very helpful for getting started, especially with monitoring, and the detectors for alerting help you understand how the platforms work.
The no-sample feature is great. It eliminates blind spots.
After completing the instrumentations, we have many dashboards and tests for monitoring infrastructure, particularly CPU and memory. We also use applicative metrics such as JVM, Java Runtime, and many other applicative metrics and testing. For troubleshooting, we can detect problems in seconds, which is particularly helpful for digital teams.
AI analytics have the potential for a lot of functionality. The detectors for alerting may prove useful.
When we deploy the instrumentation in the application, we can start using the dashboards immediately. The dashboard building is very helpful for starting work.
It's beneficial for monitoring performance and infrastructure, especially when deploying applications with multiple versions with Git. It's important to detect performance issues, such as CPU consumption or memory consumption, particularly over time in Java and Python.
For other teams, they need help and guidance to use custom metrics. For observability engineers and specialists, it's straightforward, but for others, it can be challenging.
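To make the custom-metrics point concrete, here is a small, hypothetical Python sketch using the standard OpenTelemetry metrics API to emit a custom counter toward a local collector; the metric name, attributes, and endpoint are illustrative assumptions, not details from this review.

```python
# Hypothetical sketch: emit a custom counter via OpenTelemetry metrics to a local
# collector that forwards to Splunk Observability Cloud.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.resources import Resource

reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="http://localhost:4317", insecure=True)
)
provider = MeterProvider(
    resource=Resource.create({"service.name": "orders-service"}),  # hypothetical name
    metric_readers=[reader],
)
metrics.set_meter_provider(provider)

meter = metrics.get_meter(__name__)
orders_counter = meter.create_counter(
    "orders.processed", unit="1", description="Number of orders processed"
)
# Each call adds to the counter; the attributes become dimensions in dashboards.
orders_counter.add(1, {"region": "eu-west-1", "status": "ok"})
```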
The solution overall is very valuable for me.
The time to value was immediate. Once we deployed, we started to use the dashboard directly and began detecting issues.
Saving time with automation can save us weeks. It's improving our resilience. It helps us detect issues and increase performance.
The solution has been very useful for helping us focus on business-critical initiatives.
What needs improvement?
Regarding dashboard customization, while Splunk has many dashboard building options, customers sometimes need to create specific dashboards, particularly for applicative metrics such as Java and process terms. These categories of dashboards would be very helpful for customers.
For how long have I used the solution?
I started working with Splunk Observability Cloud in 2023.
What do I think about the stability of the solution?
The system is relatively stable. We rarely have problems accessing the dashboard or the page. We encounter problems in the Splunk platform very rarely.
What do I think about the scalability of the solution?
It's very scalable. We haven't experienced any problems with the instrumentation or scalability. On a scale of one to ten, I'd rate it a ten.
We've used the solution across more than 250 people, including engineers.
How are customer service and support?
I would rate Splunk technical support at six out of ten.
When we have a problem and need to create a case, the response isn't quick. They often ask multiple questions, and it takes five or six emails to get a response. Problem resolution typically takes between two and five days, which isn't very helpful. However, sometimes we do receive quicker solutions.
How would you rate customer service and support?
Neutral
Which solution did I use previously and why did I switch?
We used legacy solutions such as Grafana and Prometheus. There are several differences between Splunk Observability Cloud and these solutions. We used Grafana as a monitoring solution, however, it's not truly observability. We used OpenSearch for logs, Prometheus for metrics, and Grafana to work with Prometheus. That said, it's not equivalent. Observability is different.
We're also familiar with Datadog and Dynatrace.
How was the initial setup?
The implementation took between two and three weeks.
For cloud deployment, it's straightforward; we can use GitLab and DevOps CI/CD. For on-premises deployment, such as Linux deployments with Satellite, it's easy, yet it requires some work to set up the configuration files.
Updates are generally needed, especially for the open telemetry version or SDK. However, regarding the platform itself, we don't need to do anything.
What was our ROI?
I worked with my company when they used the solution, so I'm not certain about the history of how long it took to detect problems. However, for mean time to detect, and mean time to respond, I'm sure it's very helpful, and we can estimate a minimum improvement of 20%.
What other advice do I have?
We're a customer and end-user.
Currently, in France, we cannot use the artificial intelligence option. While this option is enabled for the United States and many countries, it's not yet available in France. However, the solution with detectors, especially for alerting, is important for us.
I recommend it, especially for teams using legacy monitoring.
I would rate Splunk Observability Cloud nine to ten out of ten.
Seamless issue detection with user time tracking and application load analysis
What is our primary use case?
We primarily use Splunk Real User Monitoring to analyze performance bottlenecks and application transactions. It allows us to see how applications are experienced on the user side, making it easy to capture any bottlenecks or performance issues.
What is most valuable?
The most valuable features include user time tracking and the ability to analyze application load times. Splunk provides advanced notifications of roadblocks in the application, which helps us to improve and avoid impacts during high-volume days. It is very useful for identifying performance bottlenecks.
What needs improvement?
It would be beneficial to have more enhanced features with capabilities to adapt more integrated applications. Improvements in dashboard configuration, customization, and artificial intelligence functionalities are desired. There is room for improvement in customer support due to delays and standard feedback responses.
For how long have I used the solution?
I have been working with Splunk Real User Monitoring for almost two years.
What do I think about the stability of the solution?
In terms of stability, I would rate it a nine out of ten. It is a very stable solution.
What do I think about the scalability of the solution?
Splunk Real User Monitoring is definitely scalable. I would rate its scalability a nine out of ten.
How are customer service and support?
Technical support is rated an eight. There is some delay in their in-depth responses and standard answers to questions.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
I worked with Splunk alongside Dynatrace. Before Splunk, I did not use any other services.
How was the initial setup?
It takes about an hour to set up the client for real-time monitoring.
What about the implementation team?
We have a separate team for deployment, consisting of about three to four people.
What was our ROI?
We have achieved a return on investment between 10% to 20% as it helped in removing roadblocks, which could lead to more savings with wider usage.
What's my experience with pricing, setup cost, and licensing?
Splunk is a little expensive, however, it is in line with the current market pricing. I would rate the pricing an eight on a scale of one to ten, as it reflects the going rate in the market.
What other advice do I have?
I would recommend this product to other users because of its capabilities in monitoring and analytics.
I rate the overall solution eight out of ten, considering the comparison with other products like Dynatrace.
Customized dashboards streamline log monitoring needs
What is our primary use case?
Splunk is primarily used for log monitoring, where I collect all my security logs, system logs, and application logs into a centralized place. This helps me customize my monitoring models.
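As a concrete illustration of getting logs into a centralized Splunk deployment, here is a hypothetical sketch using the HTTP Event Collector (HEC); the host, token, index, and sourcetype are placeholders and assume HEC is enabled on the deployment, none of which comes from this review.

```python
# Hypothetical sketch: forward one application log event to Splunk via HEC.
# Host, token, index, and sourcetype are placeholders.
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder host
HEC_TOKEN = "<hec-token>"

event = {
    "event": {"level": "ERROR", "message": "Payment service timeout", "app": "billing"},
    "sourcetype": "_json",
    "index": "app_logs",   # assumed index name
    "host": "web-01",
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=event,
    timeout=30,
)
resp.raise_for_status()
print("HEC response:", resp.json())
```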
How has it helped my organization?
Splunk has provided me with a centralized platform to manage multiple features. Instead of using various products, Splunk offers everything in one solution, which adds value to my organization.
What is most valuable?
The most valuable feature is the ability to customize dashboards based on my queries or any other customization I may need.
What needs improvement?
I'm still exploring some features of the product. However, in future updates, I would like to see more predefined monitoring query solutions, which could be more effective.
For how long have I used the solution?
I have been using Splunk Synthetic Monitoring for almost five years, primarily focusing on log monitoring.
What do I think about the stability of the solution?
Overall, the product is stable, and I would rate it an eight out of ten.
What do I think about the scalability of the solution?
For scalability, I would give it a nine out of ten.
How are customer service and support?
Technical support is good but could be improved, particularly concerning the time taken for ticket resolution.
How would you rate customer service and support?
Neutral
Which solution did I use previously and why did I switch?
The main reason for choosing Splunk over other products is its comprehensive capabilities and flexible customization options. It is widely used and provides cloud solutions.
How was the initial setup?
The initial setup was quite straightforward, and agent installation can be done quickly. However, the entire setup process might involve multiple people due to organizational policies.
What about the implementation team?
The implementation process involved around five to ten people due to our organization's processes and need for multiple approvals.
What was our ROI?
Using Splunk has saved my organization about 30% of our budget compared to using multiple different monitoring products.
What's my experience with pricing, setup cost, and licensing?
Splunk is a bit expensive since it charges based on the indexing rate of data. However, considering the features it provides, the pricing is quite affordable compared to other monitoring solutions.
What other advice do I have?
Overall, I would recommend Splunk to anyone seeking a monitoring solution, thanks to its extensive capabilities and features.
I'd rate the solution nine out of ten.
Optimizes application performance and has an effective service map
What is our primary use case?
The main purpose of using Splunk APM is to optimize our application. We use Splunk APM primarily to understand how the application works, how it uses resources, and its response time in connection with different infra services. It is mainly used for application optimization and reviewing third-party application dependency response times.
How has it helped my organization?
Splunk APM helps us to identify long-running queries and long-running functions or methods, as well as third-party dependencies that are not responding on time. We are easily able to see the error or trace it. A developer can easily find out the issue without having to dig into the application.
We normally do not use the Tag Spotlight functionality, but our developers use it when we are trying to dig into the logs. It helps us search for the data we want to see, troubleshoot the actual problem, and visualize the data. We can see how the errors are occurring and how many reports are coming in.
Splunk APM has helped us to optimize the application performance, find out when third-party services go down, and monitor our application within our SLA. It allows us to minimize our downtime. We can send timely notifications to our users. It mainly helps us to optimize application performance, and secondly, we are able to generate alerts based on the data that we receive from Splunk.
Splunk APM helps us to find errors immediately and resolve them. We are able to find some of the errors within five minutes. It minimizes the time to identify errors. There are about 30% to 40% time savings.
What is most valuable?
The best feature is the service map that they have. I have used multiple APM solutions such as Datadog and Elastic. They have a service map, but it does not work like Splunk APM. Splunk APM provides a holistic view of the application. Unlike other APMs, Splunk's service map is quite effective.
We suggested they provide an alert based on inferred services. We told them that they have all the data, so why not have an alert on inferred services? They took our feedback and added that feature. It helps us identify if any third-party dependency is down.
What needs improvement?
There is room for improvement in the alerting system, which is complicated and has less documentation available. We sometimes encountered issues in setting up alerts. The custom detector could be more simplified to assist system engineers in setting up alerts with ease.
For how long have I used the solution?
We tested Splunk APM last year and officially started using it this year. It has been about a year.
What do I think about the stability of the solution?
Splunk APM is stable. I would rate its stability a nine out of ten, as it delivers on its promises.
What do I think about the scalability of the solution?
We have not had to scale it. Our clients are medium enterprises.
How are customer service and support?
The support is responsive, though it could use some improvement. In the past, we contacted their support about a feature. They did respond to us, but they did not explicitly inform us about the feature's absence. Instead, they directed us to try various resources or articles. They did not have a clear answer. I would rate them a five out of ten for customer service.
How would you rate customer service and support?
Neutral
Which solution did I use previously and why did I switch?
Before using Splunk APM, I used Elastic APM and Datadog. Splunk APM is better than them. Splunk's service map and support for our existing libraries were significant reasons for the switch. The previous vendor required library updates that we could not accommodate, but Splunk supported our existing setups.
How was the initial setup?
The initial setup of Splunk APM was easy and straightforward. It took around a week.
What's my experience with pricing, setup cost, and licensing?
It appears to be expensive compared to competitors.
What other advice do I have?
Splunk APM is suitable for enterprise solutions, particularly for those deeply involved in technical business. The service map and overall stability make it a robust choice for such needs.
I would rate Splunk APM a nine out of ten.
Correlates performance metrics with log data to pinpoint the exact cause of issues and offers error detection
What is our primary use case?
We use Splunk APM to monitor our applications. We integrated it into our systems to enhance our monitoring and observability capabilities, especially for our microservices.
How has it helped my organization?
APM integrates well with Splunk’s other observability solutions. These logs with application performance monitoring can significantly impact our business in several positive ways, like troubleshooting and root cause analysis.
Using these logs with APM, we can correlate performance metrics with log data. It allows us to pinpoint the exact cause of issues, such as identifying specific errors in the logs. Because of this, we achieve faster resolution: detailed logs alongside performance metrics enable quicker diagnosis and resolution of problems. It also helps us minimize downtime and improve system reliability.
Additionally, it improves our performance optimization with detailed insights and analyzing historical log data along with APM metrics. This allows us to understand long-term trends and make informed decisions about performance improvements and better user experience, like error reduction and proactive monitoring.
Splunk has reduced our mean time to resolution by 30%.
If there is any issue in Splunk, we identify the issue first and look for error messages, whether in alerts in the Splunk user interface or in the logs, that might indicate what the issue is, and then determine which part of Splunk is affected.
Then, we’ll refer to the Splunk official documentation and check the system's health. We’ll review the logs. By following these steps, I can resolve the issues with Splunk, ensuring that our monitoring and analytics capabilities remain effective.
What is most valuable?
Mainly, I like Splunk APM because it shows errors better than other tools. We use the dashboards to monitor our applications. It tells us the errors, and we can solve them quickly.
I have used APM but haven’t used Trace Analyzer, though I have some knowledge of it. We are able to implement it. We have some Trace Log Points in Splunk APM to catch the errors. We have a special graph for it where we can see the red points.
We use OpenTelemetry. OpenTelemetry and Splunk APM are similar in terms of observability and monitoring. We use it for observability standardization, which allows us to collect traces and metrics, making it easier to work with different monitoring tools, including Splunk APM. It is more flexible because it allows us to instrument our applications without being locked into a specific monitoring vendor.
It supports collecting traces, metrics, and logs from our applications, providing a comprehensive view of our performance and health endpoints. This data can be fed into Splunk APM, giving us in-depth analysis and insights about our application.
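To show how errors become visible in Splunk APM when instrumenting with OpenTelemetry, here is a small, hypothetical Python sketch that records an exception and marks the span as errored; it assumes a tracer provider is already configured to export to Splunk (as in the earlier tracing sketch), and the function and failure are invented for the example.

```python
# Hypothetical sketch: record an exception and set error status on a span so the
# failure shows up as errors (red points) on the Splunk APM dashboards.
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer(__name__)

def charge_card(order_id: str) -> None:
    with tracer.start_as_current_span("charge-card") as span:
        span.set_attribute("order.id", order_id)
        try:
            raise TimeoutError("payment gateway did not respond")  # simulated failure
        except TimeoutError as exc:
            span.record_exception(exc)                 # attaches the stack trace to the span
            span.set_status(Status(StatusCode.ERROR))  # marks the span as an error
            raise

# charge_card("order-123")  # would raise, and the errored span would be exported
```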
What needs improvement?
Splunk APM is a robust tool with many capabilities. There are always areas for potential improvement to enhance its functionality and user experience.
For Splunk APM, there could be simplified navigation, like streamlining the user interface to make navigation more intuitive for our users, especially those new to APM, which can enhance usability. We can provide more customization options for dashboards and visualizations to help users tailor the platform to their specific needs.
More integration capabilities with a wider range of third-party tools and platforms would also be beneficial. By focusing on these areas, Splunk APM can enhance its value proposition, improve user satisfaction, and better meet the evolving needs of organizations monitoring their application performance.
For how long have I used the solution?
I have been using it for a year.
What do I think about the stability of the solution?
I never had an issue with the stability. It worked fine.
Which solution did I use previously and why did I switch?
My team has used alternatives to Splunk APM, like Datadog and New Relic.
How was the initial setup?
The initial setup was easy. To fully deploy it, we had to add the SignalFx instrumentation to our applications and just deploy them. It took about 20 minutes. That's it.
What about the implementation team?
We took some help from our teams and my senior manager and also from other teams across our company. We connected and did all this together.
For deployment, one person can actually do it, but as we are junior developers, we took help from our senior manager; about three to four people were involved.
Splunk is good like this now. I don’t think any updates would be required, but there are some regular updates and upgrades of Splunk APM, like software updates, version upgrades, and all.
These provide more powerful monitoring capabilities and help ensure the system remains reliable, secure, and aligned with organizational needs. Regular updates, performance tuning, and proactive management help in maximizing these benefits of the Splunk solution.
What was our ROI?
We see the results soon after deployment; they don't come late, and that's the reason we are using Splunk APM.
Splunk has made our job easier. When we use the dashboards, there are no delays in performance. It shows error issues very clearly and monitors 24/7. It pinpoints the exact cause of issues and helps us troubleshoot them very fast.
It benefits IT staff in other teams, such as operations, improving efficiency and helping them manage IT environments more effectively. By centralizing logs and search analytics, its powerful capabilities allow IT teams to perform in-depth troubleshooting, identify root causes, and analyze complex issues with ease.
Splunk also provides real-time visibility into IT infrastructure, and we have connected with cross-functional teams around our team to work with Splunk APM. It supports proactive management, enhances security, and improves operational efficiency. It facilitates better collaboration across the team.
What's my experience with pricing, setup cost, and licensing?
The pricing is based on several factors, including the scale of deployment. The pricing model typically includes considerations like the number of hosts, features, and capabilities.
What other advice do I have?
Overall, I would rate the solution a nine out of ten.
Enables me to supervise the flow and simulate the conditions of the repository across several dashboards
What is our primary use case?
We use Splunk to monitor some devices in the company. We have several cloud groups for monitoring the energy companies in the state. The stack has several devices to monitor if you have a problem. There is a mixture of solutions.
How has it helped my organization?
The solution monitors the system in real-time. We can find the resources and investigate security incidents. Splunk and another solution, AppDynamics, monitor several devices.
We integrate Splunk with a data collection solution, and it plugs in the users to collect data at several points in the network and infrastructure. The data is indexed in Splunk, which can be visualized in different dashboards. Monitoring for fraud is critical for the company because you have to resolve many problems in the infrastructure with federal information in the dashboard.
What is most valuable?
The company has many systems that the customer pays to access. Splunk APM issued via AppDynamics helps find problems in the feed. It reduces the risk of supervising all the devices. I can supervise the flow and simulate the conditions of the repository across several dashboards to show what's happening at the moment.
What needs improvement?
The dashboards are used mainly to visualize information about the infrastructure, but it isn't easy to construct or use the dashboards. While we tried to resolve the issue by calling support, it would be easier if they had an AI co-pilot to identify the problem and help you solve it.
For how long have I used the solution?
I have been using Splunk APM.
What do I think about the scalability of the solution?
Splunk APM isn't easy to scale because you have to follow the steps and implement best practices, which can be a little awkward.
How are customer service and support?
I rate Splunk support 10 out of 10. We had good documentation, and the support team at Splunk has a lot of experience with code and the tool.
How would you rate customer service and support?
Positive
How was the initial setup?
I haven't had any problems deploying Splunk. When I installed Splunk for the first time, I thought the product was complex because I had to build the solution myself. After working with it for a while, it has become easier to do the next time.
What was our ROI?
Splunk APM is a crucial tool because it controls all the systems and solves a lot of problems.
What other advice do I have?
I rate Splunk APM 8.5 out of 10. It's an excellent solution.
It provides a holistic view and accurate information, but it is difficult to manage
What is our primary use case?
We utilize Splunk APM for security purposes, monitoring all transactions within the organization to prevent potential attacks. Additionally, we leverage Splunk APM to analyze application logs, gaining insights into application behaviour and facilitating a reduction in Mean Time To Resolution should any issues arise in the production environment.
How has it helped my organization?
OpenTelemetry provides more accurate information about an application by combining views from the customer perspective, infrastructure metrics, and application-specific data. This holistic view enables full telemetry observability, allowing us to analyze and strategize effectively for our company or clients.
What is most valuable?
Once configured correctly, the analysis and reporting Splunk APM provides are better than those of other APM tools. Once the correct fields are defined, we can create different report dashboards.
What needs improvement?
Splunk isn't an ideal tool for application performance management due to the extensive setup required. It necessitates various configurations to gather diverse information from applications, networks, or other sources. Creating the right tables and defining the appropriate fields to extract comprehensive data involves a significant amount of setup within the tool. Managing this process can be quite challenging. However, once configured, the collected information is invaluable, although not easily manageable.
Splunk falls short compared to other APM tools such as AppDynamics or Datadog. It does not collect online information in real time and relies heavily on log files. Unlike Datadog, which collects real-time application behaviour data like CPU, memory, load, and response time, Splunk requires additional configuration to obtain similar information. This makes using Splunk for APM purposes significantly more difficult compared to the automatic data collection capabilities of AppDynamics or Datadog.
For how long have I used the solution?
I have been using Splunk APM for more than a decade.
What do I think about the scalability of the solution?
Splunk APM lacks scalability, requiring the administrator to constantly monitor or create specific alerts to ensure sufficient disk space, CPU, and memory for data collection and transaction processing. This results in a tool that is challenging to manage and costly to maintain.
How are customer service and support?
Splunk support is responsive and provides quick resolutions when tickets are opened. Their service has left a positive impression on me.
How would you rate customer service and support?
Positive
How was the initial setup?
The initial deployment is complex, requiring the definition of the switch, storage, correct host, and working with certification. This necessitates at least one expensive specialist, costing approximately $5,000 per month to hire and work with our team.
What's my experience with pricing, setup cost, and licensing?
Splunk APM is expensive. Even before we begin, we need substantial infrastructure investment to collect comprehensive logs. For example, to gather log data, we must create specific tables in Splunk, starting at 50 gigabytes. In a cloud environment, this storage requirement becomes very costly.
What other advice do I have?
I would rate Splunk APM six out of ten.
Cisco recently acquired Splunk, and its roadmap for the coming year includes incorporating aspects of Splunk into AppDynamics. Cisco's intention behind combining these two tools is to showcase its commitment to open telemetry and comprehensive observability to the market and its customers.
Useful to find statistical similarities between different traces
What is our primary use case?
I use the solution in my company primarily for distributed tracing and metrics troubleshooting. I use the tool to troubleshoot incidents and find the root cause of errors when something goes wrong. I also personally use it to get a developer's understanding of what is going on in my application. Sometimes you might add a library, or a new version of a library, to your application, and that library also makes calls somewhere. Splunk APM's monitoring can show you that there is a call you are making now that you never made in the prior version of the library. In cases like these, which you may not notice just by looking at the application code from the outside, the tracing captures everything, down to the lowest levels.
How has it helped my organization?
The main benefit of the tool I have noticed is reduced time to resolve incidents. The mean time to resolve improves because you can pinpoint the root causes of issues by seeing the connections on the graph in Tag Spotlight. It is easier to pinpoint who is responsible for an incident, especially in a larger organization where teams write services that need to talk to services from other teams, rather than having to hand off incident resolution from one team to another. With the product in place, you can often find the cause much more quickly from the first instance of the problem occurring. The tool helps keep your sites up more of the time, and when a site does go down, the solution finds the root cause and gets it back up as fast as possible.
What is most valuable?
The most valuable feature of the solution, and my favorite, is always Tag Spotlight, especially considering the way they slice and dice all of Splunk APM's traces by span attributes.
I like the tool because it looks at a whole set of traces in aggregate, which means that it can find statistical similarities between different traces. Often, the cases are such that you will find some traces that show an error and have some other common attribute, which is much more apparent when you look at the feature known as Tag Spotlight rather than just looking at an overall metric. I like Tag Spotlight as it is one of the most simple to use features.
The mean time to resolve, or MTTR, improves because you can pinpoint the root causes of issues by seeing the connections on the graph in Tag Spotlight. I don't personally have metrics associated with MTTR; I am more the implementer, making certain that all the data is going in and looking at the debugging part. I am not part of the group that keeps track of the tool's MTTR.
In our company's case, we have reasonably good metrics related to the mean time to detect. I can't give a rough number for the mean time to detect, so I don't know for sure. My guess is that we often detect problems reasonably well: our company figures out that there is some problem, but we just don't know where it is, so I feel that if there is an improvement, it is mostly in the area of mean time to resolve. When it comes to mean time to detect, I think our existing metrics are probably sufficient, and adding Splunk APM mainly makes it easier to shorten the resolution time.
The tool has improved our organization's business resilience. In terms of resilience, the tool makes it possible to avoid downtime and keep things up and running. The faster you get web pages working again, the more people can actually do the things they want to do, such as trade players on their NFL Fantasy teams. In general, it produces a better business result.
What needs improvement?
In our company's case, we have some very high-throughput services that might be getting 10,000 requests per second. Currently, Splunk APM and Splunk Observability want you to send every single span for every single request that is part of those 10,000 requests per second. That may give you all the data in the back end, but a lot of cost, including CPU, memory, and network, is involved in sending that data to Splunk. My feeling is that it would be nice if there were an easier way to send only a sample of my traces, meaning I send 10 percent or 5 percent, and then Splunk would extrapolate on the back end. It is obvious that with 10 percent of traces, the real metrics are roughly ten times what was sampled, with a plus or minus margin of error. I am okay with that margin of error because when you have a high enough request rate, problems will appear even in a smaller sample population. The process is like political polling: you don't call all 150,000,000 people in the US and ask them who they are going to vote for; you take a sample of maybe 10,000 and extrapolate the findings to the rest. I feel the same should be applicable to tracing in Splunk APM.
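For what it's worth, head-based sampling of the kind described here can be configured on the application side with the standard OpenTelemetry sampler, as in the minimal Python sketch below; note that this is the generic OpenTelemetry mechanism rather than a Splunk APM feature, and it does not by itself provide the back-end extrapolation being asked for.

```python
# Minimal sketch: keep roughly 10% of traces at the application using the standard
# OpenTelemetry head-based sampler, exporting the sampled spans to a local collector.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Sample 10% of new traces; child spans follow their parent's sampling decision.
sampler = ParentBased(root=TraceIdRatioBased(0.10))

provider = TracerProvider(sampler=sampler)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)
```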
For how long have I used the solution?
I have been using Splunk APM for two years.
What do I think about the stability of the solution?
I really haven't noticed anything going wrong with the tool's stability, and I haven't seen any downtime. I don't know if my company is necessarily measuring stability ourselves, but at least for me, it is a pretty solid and growing solution.
What do I think about the scalability of the solution?
There is one issue with the tool's scalability. Our company is fairly big in terms of the number of containers we have, and we can run very large clusters. When you look at some of the charts, it will say that 30,000 time series have been reached, the limit has been hit, and it cannot show any more, or it states that particular data may not be complete. For me, that is a problem I would like to see fixed. I have spoken to Splunk's team about it, and they told me that they recognize the issue and that other people have mentioned the same problem. Once you see the scalability issue, you need to understand that it is a warning triangle; after seeing it, you realize that you cannot trust any of the numbers in the chart because it is not a complete data set. I want the tool to either tell me that it can't show me the numbers or find some way to show all the numbers in a more summarized view. The tool asks you to filter things down more, but it would be nice if it offered specific suggestions about what to filter on to get to a more reasonable number. In some cases, my company just has to have a number, considering that we have 100,000 containers. If I want to know how many containers are running, the back end currently needs to know how many different time series there are and simply says that the 30,000 limit has been reached; when that happens, I don't know whether it is 100,000 containers, 120,000 containers, or 80,000 containers.
How are customer service and support?
The technical support team for the solution is good for our company. My company has a weekly meeting with Splunk's sales support team, and if there are any issues, we bring them up for discussion. I have seen that the technical support team is super responsive.
Which solution did I use previously and why did I switch?
My company has its own internal solution, which was built ten to fifteen years ago, and it has progressed over time, but it is only ever used to support metrics and events, not for tracing. In short, it is not used for Splunk APM-related stuff, which is a big change that makes a difference for us.
How was the initial setup?
The product's deployment phase is good and very easy because it is done with OpenTelemetry for the most part. The deployment is not some custom thing where you have to deploy a particular agent that belongs to a particular company and put it on every single host. It is very easy to follow OpenTelemetry's models for the most part. Splunk is a very big contributor to OpenTelemetry, and I value that; it is one of the reasons I recommend using Splunk as a backend provider. In my company, we are more inclined to be an OpenTelemetry-compliant organization instead of going with other vendors.
What was our ROI?
I can't speak to the tool's ROI since I'm the one who gets paid; I don't have to spend money on the product.
What's my experience with pricing, setup cost, and licensing?
I don't have much insight into the costs and licensing area attached to the tool. I am the engineer and developer, not the person who writes the checks in the company. I know that my company has a Splunk Enterprise Security license which is used for logging and even for Splunk Observability.
What other advice do I have?
I think the tool has the best trace aggregation features compared to what I have seen in other products, and I feel Tag Spotlight is a good example of that. A lot of other products support tracing, but when you look at them, you see that they show one trace at a time. I can deep dive into one trace at a time, but what I want to find is commonality across traces. I would give the tool a high grade for all its features. I also rate the tool highly because it offers very good Kubernetes integration: with a lot of data, you can see which Kubernetes host things are running on, switch between them, and see the application metrics and the actual infrastructure metrics. Seeing it all together can be very useful.
I rate the tool a nine out of ten.