Splunk Infrastructure Monitoring helps identify bottlenecks within the network domain, including issues related to server databases, application response times, and code, so our customers can resolve these problems promptly.

Splunk Observability Cloud
Is easy to use, and improves performance, but does not monitor network devices
What is our primary use case?
How has it helped my organization?
It is easy to use. It offers a unique dashboard reporting tool referred to as "Ollie" (o11y), shorthand for observability. It's important to note that this product is agent-based only.
Splunk Infrastructure Monitoring helps improve the efficiency and performance of applications by up to 70 percent.
It has helped reduce our mean time to detect. It has helped to reduce our mean time to resolve by around 50 percent.
Splunk helps us focus on business-critical initiatives.
It integrates well with multiple sets of products.
What is most valuable?
The vibrant dashboards are valuable.
What needs improvement?
The main drawback of Splunk for network monitoring is its limited agent deployment. Splunk excels at collecting data from servers and databases where agents can be installed. However, it cannot directly monitor network devices, unlike Broadcom.
Broadcom offers Spectrum and Performance Management tools that primarily work on SNMP to collect data from network devices. Splunk doesn't have a directly comparable functionality for network devices.
While Splunk offers a wider range of data collection, including metrics, logs, and more, it can be more expensive. Splunk's licensing model is based on data volume (terabytes) rather than the number of devices. This can be costlier compared to Broadcom or similar tools, which often use device-based licensing.
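The cost difference between the two licensing models can be sketched with some arithmetic. The prices and counts below are illustrative assumptions for comparison only, not Splunk's or Broadcom's actual list prices.

```python
# Hypothetical comparison of volume-based vs. device-based licensing.
# All rates here are illustrative assumptions, not real vendor pricing.

def volume_based_cost(daily_ingest_gb: float, price_per_gb_day: float) -> float:
    """Annual cost when licensing is billed on data volume ingested per day."""
    return daily_ingest_gb * price_per_gb_day * 365

def device_based_cost(device_count: int, price_per_device_year: float) -> float:
    """Annual cost when licensing is billed per monitored device."""
    return device_count * price_per_device_year

# Example: 500 GB/day at an assumed $0.50 per GB per day,
# vs. 2,000 devices at an assumed $40 per device per year.
volume = volume_based_cost(500, 0.50)   # 91,250.0
device = device_based_cost(2000, 40)    # 80,000.0
print(f"volume-based: ${volume:,.0f}  device-based: ${device:,.0f}")
```

With these assumed rates, the volume-based model comes out more expensive, which matches the reviewer's experience; the crossover point depends entirely on the daily ingest rate relative to the device count.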
The end-to-end visibility is lacking because Splunk cannot directly monitor network devices.
Broadcom provides a topology-based root cause analysis that is not available with Splunk.
For how long have I used the solution?
I have been using Splunk Infrastructure Monitoring for 10 years.
What do I think about the stability of the solution?
Splunk Infrastructure Monitoring is stable.
How was the initial setup?
Splunk deployment is simplified because it is cloud-based. The deployment takes no more than 15 days to complete.
What's my experience with pricing, setup cost, and licensing?
Splunk's infrastructure monitoring costs can be high because our billing is based on data volume measured in terabytes, rather than the number of devices being monitored.
Replacing legacy systems with Splunk could cost up to $200,000.
What other advice do I have?
I would rate Splunk Infrastructure Monitoring 7 out of 10.
The decision to move from another infrastructure monitoring solution to Splunk should be based on a customer's specific needs. While Splunk offers visually appealing dashboards and access to a wider range of data compared to Broadcom products, pricing can be a significant factor, especially in the Indian market.
Deploying Splunk for a customer can involve higher upfront infrastructure costs. This is because implementing Splunk effectively often requires writing custom queries to filter data and optimize license usage. While this approach minimizes licensing costs, it can be labor-intensive.
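The filtering the reviewer describes can be sketched outside of Splunk's own query language: drop low-value events before they are indexed so they never count against a volume-based license. The severity levels and drop rules below are illustrative assumptions.

```python
# A minimal sketch of pre-index filtering: discard low-value events
# (e.g. debug logs) before they count against a volume-based license.
# The severity taxonomy and drop set are assumptions for illustration.

DROP_SEVERITIES = {"DEBUG", "TRACE"}

def should_index(event: dict) -> bool:
    """Keep only events worth paying to index."""
    return event.get("severity", "INFO") not in DROP_SEVERITIES

def filter_events(events: list[dict]) -> list[dict]:
    return [e for e in events if should_index(e)]

events = [
    {"severity": "ERROR", "msg": "db timeout"},
    {"severity": "DEBUG", "msg": "cache hit"},
    {"severity": "INFO",  "msg": "request served"},
]
kept = filter_events(events)
print(len(kept), "of", len(events), "events indexed")  # 2 of 3
```

In a real Splunk deployment this filtering would typically live in props/transforms configuration or the forwarder pipeline; the point of the sketch is only the licensing trade-off: labor invested in filter rules directly reduces billable ingest.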
Used for troubleshooting purposes and to understand the bottlenecks of applications
What is our primary use case?
We use Splunk APM to understand and know the inner workings of our cloud-based and on-premises applications. We use the solution mainly for troubleshooting purposes and to understand where the bottlenecks and limits are. It's not used for monitoring purposes or sending an alert when the number of calls goes above or below some threshold.
The solution is used more for understanding and knowing where your bottlenecks are. So, it's used more for observability rather than for pure monitoring.
What is most valuable?
The solution's service map feature allows us to have a holistic overview and to see quickly where the issues are. It also allows us to look at every session without considering the sampling policy and see if a transaction contains any errors. It's also been used when we instrument real user monitoring from the front end and then follow the sessions back into the back-end systems.
What needs improvement?
Splunk APM should include better correlation between resources and infrastructure monitoring. The solution should define better service level indicators and service level objectives. It should also let us define workloads, for example dividing an environment into a back-end area and an integration area, so we can see the service impact of a problem.
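The SLI/SLO bookkeeping the reviewer asks for can be sketched in a few lines: compute an availability SLI from request counts and compare it to an SLO target to see how much error budget remains. The request counts and the 99.9% target below are illustrative assumptions.

```python
# A hedged sketch of SLI/SLO accounting; numbers are illustrative.

def availability_sli(total_requests: int, failed_requests: int) -> float:
    """Fraction of requests that succeeded."""
    return (total_requests - failed_requests) / total_requests

def error_budget_remaining(sli: float, slo_target: float) -> float:
    """Share of the error budget still unspent (negative = SLO breached)."""
    allowed_failure = 1.0 - slo_target
    actual_failure = 1.0 - sli
    return (allowed_failure - actual_failure) / allowed_failure

sli = availability_sli(1_000_000, 400)        # 0.9996
budget = error_budget_remaining(sli, 0.999)   # ~0.6 -> 60% of budget left
print(f"SLI={sli:.4%}, error budget remaining={budget:.0%}")
```

With a 99.9% target, 400 failures in a million requests consumes 40% of the error budget; a monitoring platform with first-class SLO support would compute and alert on exactly this kind of burn rate.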
For how long have I used the solution?
I've been using Splunk APM in my current organization for the last 2 years, and I've used it for 4-5 years in total.
What do I think about the stability of the solution?
Splunk APM is a remarkably stable solution. We have only once encountered an ingestion outage, which was very nicely explained and taken care of by the Splunk team.
I rate the solution a 9 out of 10 for stability.
What do I think about the scalability of the solution?
Around 50 to 80 users use the solution in our organization. The solution's scalability fits what we are paying for. At the level we pay for, we have discovered both the soft limit and the hard limit of our environment; we push the system hard in terms of scalability. Considering what we are paying for, we are able to use the landscape very well.
We have plans to increase the usage of Splunk APM.
How are customer service and support?
Splunk support itself leaves room for improvement. We have excellent support from the sales team, the sales engineers, the sales contact person, and our customer success manager. They are our contact when we need to escalate any support tickets. Since Splunk support is bound not to touch the consumer's environment, they cannot fix issues for us. It's pretty straightforward to place a support ticket.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
We have previously used AppDynamics, Dynatrace, and New Relic. We see more and more that Splunk APM is the platform for collaboration. New Relic is more isolated, and each account or team has its own part of New Relic. It's very easy to correlate and find the data within an account. Collaborating across teams, their data, and their different accounts is very troublesome.
With Splunk APM, there is no sensitivity in the data. We can share the data and find a way to agree on how to collaborate. If two environments are named differently, we can still work together without affecting each other's operations.
How was the initial setup?
If you're using the more common languages, the initial deployment of Splunk APM is pretty straightforward.
What about the implementation team?
The solution's deployment time depends on the environment. If the team uses cloud-native techniques with Terraform and Ansible, it's pretty straightforward. The normal engagement is within a couple of weeks. Once you assess the tooling needed and look at the architecture, the deployment time itself is very minimal. Most of the time spent internally is caused by our own overhead.
What's my experience with pricing, setup cost, and licensing?
We have a very good conversation with our vendor for Splunk APM. We have full transparency regarding the different license and cost models. We have found a way to handle both the normal average load and the high peak that some of our tests can cause. Splunk APM is a very cost-efficient solution. We have also changed the license model from a host-based license model to a more granular way to measure it, such as the number of metric time series or the traces analyzed per minute.
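The metric-time-series (MTS) unit mentioned above can be made concrete: an MTS is one unique combination of metric name and dimension values, so license consumption depends on cardinality, not datapoint volume. The datapoints below are illustrative; only the counting rule matters.

```python
# Sketch of MTS counting: one MTS = one unique (metric, dimensions) pair.
# Sample datapoints are illustrative assumptions.

def count_mts(datapoints: list[dict]) -> int:
    """Count unique (metric, dimensions) combinations."""
    series = {
        (dp["metric"], tuple(sorted(dp["dimensions"].items())))
        for dp in datapoints
    }
    return len(series)

datapoints = [
    {"metric": "cpu.utilization", "dimensions": {"host": "web-1"}},
    {"metric": "cpu.utilization", "dimensions": {"host": "web-2"}},
    {"metric": "cpu.utilization", "dimensions": {"host": "web-1"}},  # same MTS
    {"metric": "memory.used",     "dimensions": {"host": "web-1"}},
]
print(count_mts(datapoints))  # 3
```

This is why the reviewer's switch from host-based licensing is "more granular": adding a high-cardinality dimension (say, a per-request ID) multiplies the MTS count even if the host count stays flat.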
We have quite a firm statement that for every cost caused within Splunk, you need to be able to correlate it to an IT project or a team to see who the biggest cost driver is. As per our current model, we are buying a capacity, and we eventually want to have a pay-as-you-go model. We cannot use that currently because we have renewed our license for only one year.
What other advice do I have?
We are using Splunk Observability Cloud as a SaaS solution, but we have implemented Splunk APM on-premises, hybrid, and in the cloud. We are using it for Azure, AWS, and Google. Initially, the solution's implementation took a couple of months. Now, we are engaging more and more internal consumers on a weekly basis.
We implement the code and services and send the data into the Splunk Observability Cloud. This helps us understand who is talking to whom, where you have any latencies, and where you have the most error types of transactions between the services.
Most of the time, we do verification tests in production to see if we can scale up the number of transactions to a system and handle the number of transactions a business wants us to handle at a certain service level. It's both for verification and to understand where the slowness occurs and how it is replicated throughout the different services.
We can have full fidelity and totality of the information in the tool, and we don't need to think about the big variations of values. We can assess and see all the data. Without the solution's trace search and analytics feature, you will be completely blind. It's critical as it is about visibility and understanding your service.
Splunk APM offers end-to-end visibility across our environment because we use it to coexist with both synthetic monitoring and real user monitoring. What we miss today is the correlation to logs. We can connect to Splunk Cloud, but we are missing the role-based access control to the logs so that each user can see their related logs.
Visualizing and troubleshooting our cloud-native environment with Splunk APM is easy. A lot of out-of-the-box knowledge is available that is preset for looking at certain standard data sets. That's not only for APM but also for the available pre-built dashboards.
We are able to use distributed tracing with Splunk APM, and it is for the totality of our landscape. A lot of different teams can coexist and work with the same type of data and easily correlate with other systems' data. So, it's a platform for us to collaborate and explore together.
We use Splunk APM Trace Analyzer to better understand where the errors originate and the root cause of the errors. We use it to understand whether we are looking at the symptom or the real root cause. We identify which services have the problem and understand what is caused by code errors.
The Splunk Observability Cloud as a platform has improved over time. It allows us to use profiling together with Splunk Distribution of OpenTelemetry Collector, which provides a lot of insights into our applications and metadata. The tool is now a part of our natural workbench of different tools, and it's being used within the organization as part of the process. It is the tool that we use to troubleshoot and understand.
Our organization's telemetry data is interesting, not only from an IT operational perspective but also to understand how the tools are being used and how they have been providing value for the business. It is a multifaceted view of the data we have, and it is being generated and collected by the solution.
Splunk APM has helped reduce our mean time to resolve. Something that used to take 2-3 weeks to troubleshoot is now done within hours. Splunk APM has also freed up resources for troubleshooting. Previously, if we spent a lot of time troubleshooting something and couldn't find the problem, we couldn't close the ticket saying there was no resolution. With Splunk APM, we now know for sure where the problem is rather than just ignoring it.
Splunk APM has saved our organization around 25% to 30% time. It's a little bit about moving away from firefighting to be preventive and estimate more for the future. That's why we are using it for performance. The solution allows us to help and support the organization during peak hours and be preventative with the bottlenecks rather than identify them afterward.
Around 5-10 people were involved in the solution's initial deployment. Integrating the solution with our existing DevOps tools is not part of the developer's IDE environment, and it's not tightly connected. We have both subdomains and teams structured. Normally, they also compartmentalize the environment, and we use the solution in different environments.
Splunk APM requires some life cycle management, which is natural. In general, once you have set it up, you don't need to put much effort into it. I would recommend Splunk APM to other users. That is mainly due to how you collaborate with the data and do not isolate it. There is a huge advantage with Splunk. We are currently using Splunk, Sentry, and New Relic, and part of our tool strategy is to move to Splunk.
As a consumer, you need to consider whether you are going to rely on OpenTelemetry as part of your standard observability framework. If that is the case, you should go for Splunk because Splunk is built on OpenTelemetry principles.
Compared to other tools using proprietary agents and proprietary techniques, you may have more insights into some implementations. However, you will have a tighter vendor lock-in, and you won't have the portability of the back end. If you rely on OpenTelemetry, then Splunk is the tool for you.
Overall, I rate the solution a 9 out of 10.
Provides end-to-end visibility, simplifies application performance monitoring, and makes monitoring logs easy
What is our primary use case?
We use Splunk APM for performance testing.
How has it helped my organization?
Splunk offers end-to-end visibility across our environment.
Splunk APM simplifies application performance monitoring. It also provides insights into data quality, including data security, integration, ingestion, and versioning of trace logs. We can directly inject data for monitoring purposes, trace the data flow, and monitor metric values.
Splunk can ingest data in any format, allowing us to easily monitor logs and identify blockages through timestamps, which saves us time.
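The timestamp technique described above can be sketched simply: sort log entries by time and flag unusually long gaps between consecutive events as candidate blockages. The one-minute threshold and sample timestamps are illustrative assumptions.

```python
# A small sketch of finding blockages via timestamp gaps in ingested logs.
# The gap threshold and the sample log times are assumptions.

from datetime import datetime, timedelta

def find_blockages(timestamps: list[str], max_gap: timedelta) -> list[tuple[str, str]]:
    """Return consecutive timestamp pairs whose gap exceeds max_gap."""
    parsed = sorted(datetime.fromisoformat(t) for t in timestamps)
    gaps = []
    for earlier, later in zip(parsed, parsed[1:]):
        if later - earlier > max_gap:
            gaps.append((earlier.isoformat(), later.isoformat()))
    return gaps

logs = [
    "2024-05-01T10:00:00",
    "2024-05-01T10:00:02",
    "2024-05-01T10:07:30",  # ~7-minute stall before this event
    "2024-05-01T10:07:31",
]
print(find_blockages(logs, max_gap=timedelta(minutes=1)))
```

In Splunk itself this analysis is usually done with a search over `_time`; the sketch only shows the underlying idea of what saves the time.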
What is most valuable?
The most valuable feature is dashboard creation. This allows us to easily monitor everything by setting the data we want to see. For example, imagine we're working on a project within the application. There might be different environments, such as development, testing, and production environments. In the production environment, we can use dashboards to monitor customer activity, like account creation or other user data. This gives us a clear view of how transactions are performing and user response times. This dashboard creation feature is one of the most beneficial aspects of Splunk that I've used in a long time. While Splunk offers many features, including integration with various DevOps tools, its core strength lies in data monitoring and collection.
What needs improvement?
Splunk's functionality could be improved by adding database connectors for other platforms like AWS and Azure.
For how long have I used the solution?
I have been using Splunk APM for one year.
Which solution did I use previously and why did I switch?
We previously used a legacy application for monitoring and when it was decommissioned we adopted Splunk APM.
What's my experience with pricing, setup cost, and licensing?
Splunk offers a 14-day free trial and after that, we have to pay but the cost is reasonable.
What other advice do I have?
I would rate Splunk APM eight out of ten.
Splunk APM requires minimal maintenance and can be monitored by a team of three.
Provides great visibility, analysis, and data telemetry
What is our primary use case?
We use Splunk APM to monitor the performance of our applications.
How has it helped my organization?
Splunk APM offers end-to-end visibility across our entire environment. We need to control how many types of metrics are ingested by Splunk APM from all incoming requests. While we allow some metrics to be collected, Splunk APM provides the ability to track each request from its starting point to its endpoint at every stage.
Splunk APM trace analyzer allows us to analyze a request by providing its trace ID. This trace ID gives us a detailed breakdown of how the request entered the system, how many services it interacted with along the way, and its overall path within the system. We can also identify any errors that occurred during the request's processing and track any slowness or latency issues. This information is very helpful for troubleshooting performance problems in our application.
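What a trace-ID breakdown computes can be sketched in miniature: given the spans belonging to one trace, total up the time spent per service and surface the slowest hop. The span schema below is a simplification for illustration, not Splunk APM's actual data format.

```python
# Hedged sketch of a per-service latency breakdown for a single trace.
# The span dicts are a simplified stand-in for real APM span data.

from collections import defaultdict

def service_latency(spans: list[dict]) -> dict[str, float]:
    """Sum span durations (ms) per service for one trace."""
    totals = defaultdict(float)
    for span in spans:
        totals[span["service"]] += span["duration_ms"]
    return dict(totals)

spans = [
    {"service": "gateway",  "duration_ms": 12.0},
    {"service": "checkout", "duration_ms": 180.0},
    {"service": "payments", "duration_ms": 640.0},  # likely bottleneck
    {"service": "checkout", "duration_ms": 20.0},
]
totals = service_latency(spans)
slowest = max(totals, key=totals.get)
print(totals, "slowest:", slowest)  # slowest: payments
```

This is the kernel of the troubleshooting workflow the reviewer describes: the trace ID groups the spans, and the breakdown points at the service where the latency actually accrues.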
Splunk APM telemetry data has been incredibly valuable. While we faced challenges with Splunk Enterprise, such as the lack of a trace analyzer, Splunk APM's user interface is modern and highly flexible. The wide range of data it provides has significantly improved our incident response times, allowing us to quickly create alerts and adhere to the infrastructure as code principle. Splunk APM also proves beneficial during load testing, contributing to a positive impact on our overall infrastructure performance analysis.
Splunk APM helps us reduce our mean time to resolution. With its fast and accurate alerting system, we can quickly identify the exact location of issues. This pinpoint accuracy streamlines the investigation process, leading to faster root-cause analysis.
Splunk APM has helped us save significant time. We're now spending less time resolving production incidents and analyzing performance data. This focus on Splunk APM allows us to dedicate more time to other areas.
What is most valuable?
Detectors are a powerful feature. They generate SignalFlow code, Splunk's analytics language, from the conditions we select. For example, if we select five conditions, the detector can automatically generate the SignalFlow for them. This code can then be directly integrated into our Terraform modules, streamlining the creation of detectors using Terraform. This is particularly helpful because our infrastructure adheres to a well-defined practice, and detectors help automate this process.
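The condition-to-code step can be illustrated with a small generator: turn a list of threshold conditions into a SignalFlow-style detector program that could then be embedded in a Terraform detector resource. The SignalFlow emitted here is a simplified sketch and is not guaranteed to match Splunk's current syntax exactly.

```python
# Illustrative generator: conditions -> SignalFlow-style detector program.
# The emitted syntax is a simplified approximation of real SignalFlow.

def signalflow_for(conditions: list[dict]) -> str:
    lines = []
    for i, c in enumerate(conditions):
        lines.append(f"s{i} = data('{c['metric']}').mean()")
        lines.append(
            f"detect(when(s{i} {c['op']} {c['threshold']}))"
            f".publish('{c['alert']}')"
        )
    return "\n".join(lines)

program = signalflow_for([
    {"metric": "cpu.utilization",    "op": ">", "threshold": 80, "alert": "cpu_high"},
    {"metric": "memory.utilization", "op": ">", "threshold": 90, "alert": "mem_high"},
])
print(program)
```

In the reviewer's workflow, a string like this would become the `program_text` of a Terraform-managed detector, so alert definitions live in version control alongside the rest of the infrastructure code.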
APM dashboards are another valuable tool. They provide more comprehensive information than traditional spotlights. One particularly useful feature is the breakdown of a trace ID. This breakdown allows us to see the entire journey of a request, including where it originated, any slowdowns it encountered, and any issues it faced. This level of detail enables us to track down the root cause of performance problems for every request.
What needs improvement?
We currently lack log analysis capabilities in Splunk APM. Implementing this functionality would be very beneficial. With log analysis, we could eliminate our dependence on Splunk Enterprise and rely solely on APM. The user interface design of APM seems intuitive, which would likely simplify setting up log-level alerts. Currently, all log-level alerting is done through Splunk Enterprise, while infrastructure-level alerting has already transitioned to Splunk APM.
The Splunk APM documentation on the official Splunk website could benefit from additional resources. Specifically, including more examples of adapter creation and management using real-world use cases would be helpful. During our setup process, we found the documentation lacked specific implementation details. While some general information was available on public platforms like Google and YouTube, it wasn't comprehensive. This suggests that others using Splunk APM in the future might face similar challenges due to the limited information available on social media. It's important to remember that many users rely on social media for setup guidance these days.
For how long have I used the solution?
I have been using Splunk APM for 1.5 years.
What do I think about the stability of the solution?
While Splunk APM occasionally experiences slowdowns, it recovers on its own. Fortunately, these haven't resulted in major incidents because most maintenance is scheduled for weekends, with ample notice provided in advance. We have never experienced any data loss during previous slowdowns.
How are customer service and support?
Splunk APM customer support is helpful. They promptly acknowledge requests and provide regular updates. They've been able to fulfill all our information requests so far. However, Splunk APM is a constantly evolving product. This means there are some limitations due to ongoing industry advancements. They are actively working on incorporating customer feedback, such as the CV request. Overall, the customer support is excellent, but the desired features may not all be available yet.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
Previously, we used Grafana, but we faced challenges that led us to switch to Splunk APM. Since then, Splunk has become our primary tool for data analysis. In our experience, Splunk offers several advantages over Grafana. Setting up and using Splunk is significantly easier than Grafana. Splunk provides a user-friendly interface that allows anyone to start working immediately, while Grafana's setup can be more complex. Splunk also boasts superior reliability. Its architecture utilizes a master-slave node structure, with the ability to cluster for redundancy. This ensures that if a node goes down, another available node automatically takes over, minimizing downtime. Ultimately, our decision to switch to Splunk was driven by several factors: user-friendliness, a wider range of features, cost-effectiveness, and its established reputation. Splunk is a globally recognized and widely used tool, which suggests a higher level of trust and support from the industry.
We use Splunk Enterprise and Splunk APM. Splunk APM offers a comprehensive view of various application elements. We primarily migrated to APM to gain application-level metrics. This includes latency issues, which are delays in processing user requests. Splunk APM generates a unique trace ID for each user request. This allows us to track the request from the user to our servers and identify any delays or errors that occur along the way.
Additionally, Splunk APM utilizes detectors to create alerts based on specific metrics. We've implemented alerts for CPU and memory usage, common issues in our Kubernetes infrastructure. We can also track container restarts within the cluster and pinpoint the causes. Another crucial area for us is subscription latency. Splunk APM allows us to monitor this metric and identify any performance bottlenecks. This capability was absent in Splunk Enterprise, necessitating the switch to APM. Furthermore, Splunk APM enables us to track application status codes, such as 404 errors.
Splunk APM facilitates the creation of informative dashboards using collected metrics. Additionally, the Metrics Explorer tool allows us to investigate specific metrics of interest and generate alerts or customized spotlights.
Spotlights are tailored visualizations that track metrics for critical application areas. They can trigger alerts based on unexpected changes, such as a sudden increase in error codes over a set timeframe. This provides a more proactive approach to identifying potential issues compared to traditional detector-based alerts.
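The spotlight-style rule above ("a sudden increase in error codes over a set timeframe") can be sketched as a simple windowed comparison: alert when the newest window's error count jumps well above the average of earlier windows. The window counts and the 3x multiplier are illustrative assumptions.

```python
# Minimal sketch of a sudden-increase rule over windowed error counts.
# Window size and multiplier are illustrative assumptions.

def error_spike(counts_per_window: list[int], multiplier: float = 3.0) -> bool:
    """True if the newest window's errors exceed multiplier x prior average."""
    *history, latest = counts_per_window
    if not history:
        return False
    baseline = sum(history) / len(history)
    return latest > multiplier * baseline

print(error_spike([4, 5, 3, 4, 40]))  # True: 40 >> 3 * 4.0
print(error_spike([4, 5, 3, 4, 6]))   # False
```

Comparing against a rolling baseline rather than a fixed threshold is what makes this proactive: it fires on a change in behavior even when absolute error counts are still low.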
Splunk APM empowers us to effectively monitor various metrics during load testing. This includes analyzing memory usage across ten to eleven metrics, tracking container restarts during flow testing, and verifying the functionality of auto scaling mechanisms. The comprehensive visualization capabilities of Splunk APM surpass those of Splunk Enterprise, making it ideal for analyzing large sets of metrics and graphs.
We're currently exploring the integration of an OpenTelemetry agent with Splunk APM. This will enable us to collect and transmit a wider range of data, including application metrics, latency metrics, and basic infrastructure metrics such as CPU, memory, etc.
How was the initial setup?
During the initial Splunk deployment, I found that most information available on social media platforms catered to enterprise deployments. Fortunately, many of our new hires had prior Splunk experience, which eased the initial learning curve. Splunk's widespread adoption across industries also meant there was a general familiarity with the tool among the team. Additionally, the comprehensive documentation proved helpful. Overall, the initial rollout went smoothly, though there were some challenges that we were able to resolve.
The Splunk deployment was done on multiple environments. We started with development and then deployed to a staging environment, which sits between development and production. As expected, the development deployment took the longest. The total time for the entire deployment, including my cloud setup, was 2 to 3 weeks. It's important to note that this timeframe isn't solely dependent on Splunk implementation. Other factors can influence the timeline, such as network requests, firewall changes, and coordination with IT teams for license purchases. While the development deployment took longer, promoting Splunk to the staging and production environments was significantly faster. It only took 1 week for each environment.
What about the implementation team?
Our cloud deployment didn't require a consultant, but we used one for our on-premise enterprise deployment, which was a bit more complex.
What other advice do I have?
I would rate Splunk APM 9 out of 10.
The maintenance required is minimal because the cluster deployment helps ensure there is always 1 node working.
The dashboards are great, and we get solid visibility across our environment
What is our primary use case?
I have the logs of my applications, and they're usually a bit volatile. The logs don't stay on the application for long, so Splunk retains them; the logs remain available for 15 days to do some kind of research. I'm using Splunk to ingest application logs, create dashboards, and set up alerts.
How has it helped my organization?
The biggest benefit of Splunk is that we can retain logs and correlate the data. Telemetry data has a huge impact because it's much easier to see everything.
Splunk has significantly reduced our mean resolution time. The workflow at my company involves microservices applications running on the cloud. Their logs are highly volatile, retained for only three to five minutes, so we used to have to reproduce an issue to trace why it failed. That meant doing everything again to capture the log at the right moment. Now, we have the last one or two hours of data available to analyze.
What is most valuable?
Splunk's dashboards are great. The solution provides end-to-end visibility across my environment. Visualizing large amounts of data is easier because we can correlate the data from any target source.
What needs improvement?
The licensing model is expensive. We need to monitor the amount of data ingested because the cost is based on the data collected.
For how long have I used the solution?
I have used Splunk APM for three years now.
What do I think about the stability of the solution?
We have instances for production and development. I've never seen the production instance go down. Our development instance has gone down, but that's expected.
Which solution did I use previously and why did I switch?
I used tools like Elasticsearch, which is similar to Splunk. I've also used other observability tools like Grafana and Dynatrace, but they have different features.
What other advice do I have?
I rate Splunk APM 10 out of 10.
Improves operational efficiency and integrates very well
What is our primary use case?
We mostly work with developers. They run some pipelines, and they use Splunk as a platform to identify the errors, instead of themselves debugging the logs and understanding what the issue is. This is one side of the business. On the other side of the business, we use the Splunk database for frozen buckets where we archive the data.
We can easily integrate it with other tools for monitoring our entire IT data infrastructure. I also handle AppDynamics. We have integrated Splunk and AppDynamics. With one click, we can understand what the actual issue is. It brings down the time to resolve. We have had some good experiences.
How has it helped my organization?
It improves our operational efficiency every day. In my previous company, we had integrated it with ServiceNow. For defined alerting conditions, it could directly open up a ticket for the right team. We did not have to look into a thousand cases to understand a problem.
In terms of integrations, most of the plugins are already available. If a plugin is not available, even then it is pretty easy to integrate. There are multiple ways to integrate. You can use the REST API and just forward the data. It can be easily integrated.
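The REST API path the reviewer mentions typically means Splunk's HTTP Event Collector (HEC). The sketch below only builds the request (URL, headers, JSON body) without sending it; the host, token, and index values are placeholders.

```python
# Sketch of forwarding data to Splunk via the HTTP Event Collector (HEC).
# Host, token, and index are placeholder assumptions; no network call is made.

import json

def build_hec_request(host: str, token: str, event: dict,
                      index: str = "main", sourcetype: str = "_json"):
    url = f"https://{host}:8088/services/collector/event"
    headers = {"Authorization": f"Splunk {token}"}
    body = json.dumps({"event": event, "index": index, "sourcetype": sourcetype})
    return url, headers, body

url, headers, body = build_hec_request(
    "splunk.example.com", "HEC-TOKEN-PLACEHOLDER",
    {"app": "appd-bridge", "severity": "ERROR", "msg": "checkout latency"},
)
print(url)
# To actually send it, something like:
#   requests.post(url, headers=headers, data=body, timeout=5)
```

Because the payload is plain JSON over HTTPS, any tool that can make an HTTP call (AppDynamics webhooks, a ServiceNow workflow, a shell script) can forward events this way, which is what makes the integrations "pretty easy."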
It makes it easy to have end-to-end visibility in the cloud environment. There are multiple types of devices in an environment. You might have AWS, Microsoft Azure, or something else. It operates beautifully. It is easy to integrate. This is the best part.
I am in the banking industry. It helps to keep track of how well our application is performing when somebody tries to do a transaction. There are multiple pieces to it, and we keep track of everything. We have our own business dashboard that the top-tier leaders can look into. All the visibility is there because of it.
What is most valuable?
I find the monitoring console very helpful. With one click, I can see how we are performing, and at the same time, I can see what data is flowing.
What needs improvement?
The clustering part of indexes can be more refined.
They could reduce pricing a bit for long-time customers. We recently had a scenario where we were in discussions to see if there was any flexibility from Splunk's side.
For how long have I used the solution?
I have been using this solution for the past two years. I have also used it in my previous company.
What do I think about the scalability of the solution?
It is pretty scalable. I would rate it a nine out of ten for scalability.
Which solution did I use previously and why did I switch?
I have worked with Kibana and Logstash, but they are not comparable to this solution.
What's my experience with pricing, setup cost, and licensing?
It is expensive.
What other advice do I have?
Overall, I would rate it an eight out of ten.
Provides threat intelligence, good visibility, and detects threats faster
What is our primary use case?
Typically, the standard approach for Splunk sizing involves gathering data from the entire IT environment, regardless of whether it's hardware, virtualized, or application-based. This data is then collected and monitored through Splunk as a comprehensive security solution. We also work with Splunk-related platforms like Application Performance Monitoring to provide a holistic view of system performance. Recently, we implemented this solution for a bank in Jetar.

Splunk excels at collecting high-volume data from networks, making it ideal for performance monitoring and scaling. During the sizing process, it's crucial to calculate the daily data ingestion rate, which determines the amount of data Splunk Enterprise needs to process and visualize for security purposes.

Several factors need consideration when sizing Splunk: tier structure (hot and cold buckets), customer use cases for free quota access, and storage choices based on data access frequency. Hot buckets typically utilize all-flash storage for optimal performance and low latency, while less frequently accessed data resides in cold or frozen buckets for archival purposes. In essence, the goal is to tailor the Splunk solution to meet the specific needs and usage patterns of each customer.
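The sizing arithmetic described above can be sketched: turn a daily ingestion rate into on-disk storage per tier, given retention days per bucket and a compression factor. All numbers below are illustrative assumptions, not Splunk's official sizing guidance.

```python
# Illustrative tier sizing from daily ingest. The retention windows and
# ~50% compression factor are assumptions, not official guidance.

def tier_storage_gb(daily_ingest_gb: float, retention_days: dict[str, int],
                    compression: float = 0.5) -> dict[str, float]:
    """Approximate on-disk GB per storage tier."""
    return {
        tier: daily_ingest_gb * days * compression
        for tier, days in retention_days.items()
    }

sizing = tier_storage_gb(
    daily_ingest_gb=1000,                       # 1 TB/day
    retention_days={"hot": 7, "warm": 30, "cold": 90},
)
print(sizing)  # {'hot': 3500.0, 'warm': 15000.0, 'cold': 45000.0}
```

The tier split is what drives the storage choices mentioned above: the small hot tier justifies all-flash, while the much larger cold tier can sit on cheaper archival storage.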
One challenge our customers face is slow data retrieval. Customers may experience delays in retrieving cold data due to complex search queries within Splunk Enterprise Security; these queries can sometimes take up to an hour and a half to execute. Our architecture incorporates optimized query strategies and customization options that significantly reduce data retrieval times, enabling faster access to both hot and cold data.
Another challenge is scalability constraints. Traditional solutions may have limitations in scaling to accommodate increasing data volumes. This can be a significant concern for customers who anticipate future growth. Our certified architecture is designed for easy and flexible scalability. It allows customers to seamlessly scale their infrastructure based on their evolving needs, without encountering the limitations often faced with other vendors' solutions.
The final challenge is complex sizing and management. Traditional solutions often require extensive hardware configuration and sizing expertise, which can be a challenge for many organizations. This reliance on hardware expertise can hinder scalability and adaptability. Our architecture focuses on software and application administration, minimizing the dependence on specific hardware configurations. This simplifies deployment and ongoing management, making it more accessible to organizations with varying levels of technical expertise.
Our architecture leverages Splunk's native deployment features, including:
- Index and bucket configuration: data is categorized into hot, warm, and cold buckets for efficient storage and retrieval.
- Active/passive or active/active clustering: this ensures high availability and redundancy for critical data.
- Resource allocation: data, compute, and memory resources are distributed evenly across clusters for optimal performance.
For high-volume data ingestion exceeding 8 terabytes per day, we recommend deploying critical components on dedicated physical hardware rather than virtual machines. Virtualization can introduce overhead and latency, potentially impacting performance. Utilizing physical hardware for these components can help mitigate these bottlenecks and ensure optimal performance for large data volumes.
How has it helped my organization?
Splunk Enterprise Security provides visibility across multiple environments. IT leaders and management directors often seek a simplified monitoring tool that can handle everything. However, using a third-party monitoring tool across multiple environments comes with certain considerations, such as software version upgrades, connector updates, or API integrations for collecting specific metrics beyond the usual ten. Therefore, the key factors for a customer choosing a monitoring solution are: how easily the tool integrates with existing physical, virtual, microservices, or hyperscaler environments; whether it can provide a centralized view of monitoring data across those environments; and whether it can integrate with existing data analytics tools like Cloudera, Starburst, or Teradata.

Integrating a monitoring solution with data analytics is crucial for a complete picture. While a standalone monitoring solution can help with capacity planning, data analytics provides insights for code analysis and historical data. This allows management to plan budgets, reduce costs, and make informed decisions for the future. Combining a monitoring tool like Splunk Enterprise Security with a data analytics engine like Cloudera or Teradata maximizes the value of data and empowers better decision-making.
Our monitoring tools offer various functionalities, including detection and third-party integration. For example, we have an integration with TigerGuard, a platform for threat detection. We also provide robust auditing capabilities to track changes within the environment, which helps identify potential intrusions and suspicious activity, whether from internal or external actors.

To ensure the security of our monitoring tools, we implement several prevention and protection mechanisms, including continuous monitoring of logs and audits, even in the event of a tool failure. Leading enterprise monitoring solutions often connect to dedicated audit servers via SNMP traps, providing a centralized view of all infrastructure changes. This allows administrators, such as Splunk users, to easily track modifications and identify potential security risks. Furthermore, individual software products within our monitoring suite have their own access control lists and security measures, which may include certificates, user authentication, and security manager integration. Some products also offer optional plugins or add-on licenses to enhance their auditing capabilities and meet specific organizational security requirements.

Security is a complex and multifaceted topic, encompassing data location, user activity monitoring, intrusion prevention, and incident recovery procedures. Addressing these concerns effectively requires a comprehensive security platform assessment that evaluates the entire system, from hardware to applications, ensuring data integrity, encryption, and overall security at every layer.
Threat intelligence management utilizes dedicated tools for both threat and incident management. These tools help organizations define their response plan in case of an event, including how to recover, what the RTO (recovery time objective) and RPO (recovery point objective) are, and how to achieve them. This ensures the organization can recover quickly and efficiently in the event of a failure, unauthorized access, or data deletion. While threat incident management strategies may vary by customer, the banking sector typically undergoes rigorous threat management inspections. While I may not be a threat management expert, there are crucial security measures to consider, encompassing personnel training, hardware security, and application controls. These elements, when orchestrated harmoniously, contribute to a secure environment that minimizes the risk of breaches, facilitates successful audits, and ensures data integrity.
The effectiveness of the threat intelligence management feature depends on how the customer responds to various threats, such as ransomware or network intrusions. While the tool provides recommendations, it requires customization to align with each organization's unique categorization criteria (high, medium, low) and specific security objectives. Ultimately, the goal is to protect data, enhance security, and ensure effective incident response procedures. Deploying the threat intelligence tool necessitates customization for each customer. Default settings may not be optimal, and human intervention might be necessary to address potential software errors or inaccurate recommendations; in such cases, manual intervention might be more effective. Therefore, the tool's usefulness depends on the specific threat, its recommendations, and the organization's response approach.
Splunk Enterprise Security is a powerful tool for analyzing malicious activity and detecting breaches. However, its effectiveness depends heavily on proper configuration and skilled administration. Administrators must be able to connect Splunk with the necessary parameters, collect logs daily, and analyze them effectively. They also need the ability to query Splunk efficiently to gather relevant data, an understanding of use cases and how to integrate with other systems securely, the skills to customize the environment to meet specific needs (including adding connectors and add-ons), and the ability to visualize data in a way that is clear and actionable for both analysts and management. While Splunk is a valuable platform, it requires careful management and expertise to unlock its full potential. Companies deploying Splunk should invest in skilled administrators to ensure its effectiveness in securing their environment.
Splunk helps us detect threats faster. As a Splunk administrator, I can monitor for suspicious activity, such as sudden changes in behavior, high resource utilization on specific file shares, or unusual data transfers. These events trigger questions: Why is the system experiencing high utilization? Is data leaking and being transferred elsewhere? Why is this application consuming excessive resources? Why has data suddenly disappeared from the system? Splunk Enterprise Security provides valuable insights and helps identify potential security issues. Integrating threat intelligence management with Splunk can further automate this process: when suspicious activity is detected, the system can automatically take predefined actions. However, these actions require customization and testing before implementation. This may involve customer review and approval of the automated response, POC testing to validate its effectiveness, regular monitoring of the system's behavior and response to threat intelligence, and fine-tuning of Splunk Enterprise Security settings to optimize threat detection and response.
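The behavioral monitoring described above amounts to flagging samples that deviate sharply from a baseline. The sketch below is a toy illustration of that idea in plain Python; the metric values and z-score cutoff are hypothetical, and in practice Splunk's own alerting and correlation searches would do this work.

```python
# Toy sketch of threshold-based anomaly flagging on utilization samples.
# Metric values and the z-score cutoff are hypothetical; Splunk's own
# alerting/correlation searches would handle this in production.
from statistics import mean, stdev

def flag_anomalies(samples, cutoff=3.0):
    """Return indices of samples more than `cutoff` std devs from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > cutoff]

cpu = [22, 25, 24, 23, 26, 24, 95, 25]  # sudden spike at index 6
print(flag_anomalies(cpu, cutoff=2.0))  # → [6]
```

The flagged index is exactly the kind of "sudden change in behavior" that would prompt the questions above: why did utilization spike, and is data leaving the system?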
Splunk has been beneficial to our organization from a partnership perspective. Even after Cisco's acquisition of Splunk, our strong collaboration and certified solutions continue. This partnership strengthens our position with customers seeking the best solutions on the right platform. For example, if a customer requires a Splunk solution and a competitor lacks certified solutions, it could hinder trust and the purchasing decision. In contrast, our close collaboration with Splunk and certified add-ons for Splunk Enterprise Security add value. We possess expertise in various Splunk architectures; I've worked with over four banks in Saudi Arabia alone that utilize Splunk and Dell hardware. Globally, we cater to diverse Splunk architectures and platforms, ensuring customer satisfaction with our broad technology expertise. While we acknowledge competition, the focus here is on how our partnership with Splunk enhances the integration experience, offering both tightly coupled and loosely coupled architectures.
Since implementing Splunk Enterprise Security, we've observed improvements in both stability and the accuracy of data visualization. The low latency allows us to efficiently query the extensive data it provides. Splunk goes beyond collecting basic metrics like CPU or memory utilization; it comprehensively gathers data from various sources, including networks, applications, and virtualization. This unified platform eliminates the need for siloed solutions and enhances the capabilities of existing engineering software.

While Splunk is a popular choice for data analysis due to its powerful features, its pricing structure based on daily data ingestion can be expensive. This pricing model, however, allows them to accurately charge based on resource usage. It's important to consider your data collection and visualization needs to determine the appropriate licensing tier. While other monitoring tools might share similar pricing models, Splunk distinguishes itself through its data segregation across various components. This simplifies communication between indexes, forwarders, and searches, allowing for efficient data processing within a single platform.

Additionally, Splunk excels in data visualization and analytics, making it a leading choice for security and observability solutions. Their recent top ranking in Gartner's observability category further emphasizes their strengths. This recognition stems from their platform's compatibility with diverse hardware vendors, exceptional data visualization capabilities, and innovative data segregation strategies. Splunk's tiered access control and efficient cold/frozen data storage further enhance its value proposition. Ultimately, Splunk empowers users to interact with their data effectively. This valuable asset, when properly understood and visualized, can provide actionable insights without impacting network or application performance.
Moreover, Splunk's customization and implementation potential extend beyond data analysis, offering recommendations and threat intelligence for proactive security measures. In conclusion, while Splunk's pricing might initially appear expensive, its comprehensive features and capabilities justify its cost for organizations seeking advanced data analysis and security solutions.
Realizing the full benefits of Splunk Enterprise Security takes time. While the software itself can be deployed quickly, it requires historical data to function effectively. This means collecting data for some time before you can rely on it for accurate insights. Several factors contribute to the time it takes to see value. First, there is deployment and customization: setting up Splunk involves hardware, software, and integration work, which can be time-consuming, especially during the first year. Second, there is data collection: building a historical data set takes time, and the initial period may not provide significant value. There is also customization and training: tailoring reports and training users requires additional investment, potentially involving workshops and professional services. To expedite the process, Splunk offers various resources, including a proof of concept that allows testing Splunk with a limited data set for a specific period. Splunk may offer temporary free licenses for small workloads to facilitate initial evaluation, and it provides educational resources to help customers understand and utilize the platform effectively. Additionally, some partners leverage their Splunk expertise to help customers: they can educate and guide customers through the process, streamlining their experience, and assist with customizing reports and training users, accelerating the value realization process. By understanding these factors and leveraging available resources, organizations can optimize their Splunk implementation and achieve its full potential within a reasonable timeframe.
What is most valuable?
Splunk has been recognized by Gartner as a leader in providing visibility for observability and monitoring across various platforms, including physical, virtual, and container environments, for several years. This has made it a popular choice for many organizations, including those in the banking industry. Currently, only one of our banks utilizes QRadar, which may be due to the cost associated with switching to Splunk; that customer might be prioritizing financial considerations over functionality at this time. It's important to note that while Splunk is recognized as a leader in platform capabilities, the decision to use a specific solution should ultimately be based on both functionality and cost. This is why we have established a joint engineering team with Splunk to develop a platform that meets the needs of our customers.
What needs improvement?
I'd like a dashboard that allows me to connect elements through drag-and-drop functionality. Additionally, I want the ability to view the automatically generated queries behind the scenes, including recommendations for optimization. This is just a preliminary idea, but I envision the possibility of using intelligent software to further customize my queries. For example, imagine I could train my queries to be more specific through an AI-powered interface. This would allow me to perform complex searches efficiently. For instance, an initial search might take an hour and a half, but by refining the parameters through drag-and-drop and AI suggestions, I could achieve the same result in just five minutes. Overall, I'm interested in exploring ways to customize queries for faster and more efficient data retrieval. Ideally, the dashboard would provide additional guidance and suggestions to further enhance my workflow through customization and optimization.
For how long have I used the solution?
I have been using Splunk APM for four years.
What do I think about the stability of the solution?
The stability of Splunk Enterprise Security depends on how data is tiered. Splunk recommends different storage options based on data access frequency and volume. Hot and warm data is accessed frequently and requires fast storage like SSD or NVMe. Cold and frozen data is accessed less often and can be stored on cheaper options like nearline SAS, NAS, or object storage.
Splunk prioritizes cost-effectiveness and recommends low-tier storage for cold and frozen data, which typically makes up the majority of customer data. This reduces costs compared to expensive SAN storage. However, the decision ultimately depends on the customer's budget and specific needs. Customers with limited budgets or small use cases might choose to store all data on a single platform initially and expand to dedicated cold and frozen storage later. This approach requires manual configuration changes (e.g., modifying indexes.conf and forwarder configurations) to redirect data to the new tier.
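For illustration, a tier change of the kind described is configured per index in Splunk's indexes.conf. The stanza below is a hedged example: the paths and retention period are hypothetical and would need to match the actual environment.

```ini
# Hypothetical indexes.conf stanza redirecting cold/frozen buckets
# to cheaper storage; paths and retention are illustrative only.
[main]
homePath   = $SPLUNK_DB/main/db            # hot/warm buckets on fast storage
coldPath   = /mnt/nearline/main/colddb     # cold buckets moved to nearline/NAS
thawedPath = $SPLUNK_DB/main/thaweddb
frozenTimePeriodInSecs = 15552000          # roll buckets to frozen after ~180 days
coldToFrozenDir = /mnt/archive/main/frozen # archive frozen data instead of deleting
```

After such a change, the indexers must be restarted, and existing buckets may need to be migrated manually, which is why starting with dedicated tiers is usually simpler than retrofitting them.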
While Splunk recommends optimal tier configurations for hot, warm, cold, and frozen data, the final decision rests with the customer based on their budget and specific requirements.
Changes to storage tiers can be implemented later through configuration adjustments, but this process might be more complex than using dedicated storage from the beginning.
Overall, Splunk guides the best tier options for different data access patterns while acknowledging the customer's autonomy in making the final storage decision.
What do I think about the scalability of the solution?
We have ambitious expansion plans. As our customer base grows, we see significant increases in data ingestion. For example, one of our largest Splunk customers has increased its daily data ingestion from two terabytes to eight terabytes in just three years. This expansion benefits both the customer and Splunk. The customer gains valuable insights from the additional data, while Splunk increases its revenue through additional license sales. However, it's important to note that expanding data ingestion requires careful consideration of hardware limitations. Increasing data volume necessitates adding more forwarders, indexes, and searches, which can impact Splunk licensing requirements. This highlights the crucial need for comprehensive planning and resource allocation during expansion initiatives. Furthermore, we have observed instances where customers unintentionally exceed their licensed data ingestion capacity. For example, one bank was ingesting six terabytes of data per day while only holding a four-terabyte license. This underscores the importance of close monitoring and proactive license management to ensure compliance and avoid potential licensing issues.
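The compliance check implied by that example is simple arithmetic. The helper below is a hypothetical sketch (not a Splunk API; Splunk's license master tracks usage itself), using the figures from the text.

```python
# Hypothetical license-compliance helper; not a Splunk API.
# Splunk's license master tracks this itself, but the arithmetic is simple.

def license_overage_tb(daily_ingest_tb, licensed_tb):
    """Return how far daily ingestion exceeds the licensed quota (0 if within)."""
    return max(0.0, daily_ingest_tb - licensed_tb)

# The bank in the example: 6 TB/day ingested on a 4 TB license
print(license_overage_tb(6.0, 4.0))  # → 2.0
```

Monitoring this figure proactively, rather than discovering it during a license audit, is the point of the example above.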
Which solution did I use previously and why did I switch?
Our expertise extends beyond Splunk. We offer certified architectures for other SIEM solutions like IBM QRadar, catering to diverse customer requirements. However, IBM QRadar does not have as wide of a platform as Splunk Enterprise Security.
What's my experience with pricing, setup cost, and licensing?
Splunk can be expensive, as its licensing is based on the daily data ingestion volume. While we've observed numerous implementations, most are executed remotely by Splunk itself. However, if on-site assistance from a Splunk engineer is desired, it can be costly due to travel expenses from either the Dubai office or Europe. To address this, Splunk is exploring partnerships to offer implementation services at more accessible price points.
For someone evaluating SIEM solutions and prioritizing cost, traditional marketing materials may not be the most effective approach. Customers in Saudi Arabia, like many others, often appreciate tangible demonstrations of value, so consider offering a POC to showcase Splunk's capabilities in their specific environment.

Investing in the customer through various strategies can demonstrate your commitment and build trust. Granting temporary access allows them to experiment with Splunk firsthand. Provide resources and support to help them learn and utilize the platform effectively. Leverage your partner network to offer additional training and expertise. Invite key decision-makers to exclusive events or meetings with Splunk leadership, fostering a deeper connection and understanding.

Remember, success often hinges on addressing specific needs. While POCs and business cases are crucial, consider potential customization requirements and existing workflows. If a customer has used a different tool for years, transitioning may require additional support and training due to established user familiarity. Splunk's investment in the customer journey goes beyond initial acquisition. By offering POCs, temporary licenses, training, and even exclusive experiences, you demonstrate value and commitment, ultimately fostering long-term success.

Demand generation, in essence, boils down to two key aspects: identifying the customer's specific requirements and desired outcomes, and recognizing that different customers have varying budgets, experience levels, learning curves, expectations, and decision-making processes. While some customers may be more challenging to persuade, others readily embrace the extra mile. Enterprise clients often fall into the latter category due to their greater flexibility in resource allocation, dedicated security operations teams, and ability to invest in necessary hardware.
Remember, SIEM solutions often involve hardware considerations beyond just software, so understanding these additional costs is crucial for accurate solution sizing and customer budgeting.
Which other solutions did I evaluate?
Several competitors to Splunk exist in the market, including IBM QRadar, AppDynamics (used by some customers for monitoring and security), and Micro Focus (used for enterprise monitoring, incident reporting, and capacity planning). While Dynatrace is a leader in the field, its presence in the banking sector, particularly in Saudi Arabia, seems limited, perhaps because it has only one certified partner acting as its distributor. In contrast, Splunk boasts a wider network of partners who actively implement and enable customers, leading to its greater market prevalence.
What other advice do I have?
I would rate Splunk APM a nine out of ten.
Monitoring multiple hyperscalers with a single tool can be challenging. While some tools like VMware CloudHealth offer limited cross-platform capabilities, they often focus on specific aspects like virtual instances and storage. For comprehensive cloud monitoring across different hyperscalers like Azure and AWS, third-party solutions are typically necessary. Here at Dell, for example, we focus on monitoring tools for our own workloads and installed base, allowing integration with third-party solutions for cloud environments. This enables customers with workloads across multiple hyperscalers to leverage established enterprise monitoring tools like New Relic, AppDynamics (Cisco), Micro Focus (HP), and Splunk for unified visibility. Ultimately, choosing a solution often involves balancing operational and capital expenditures. By employing third-party tools, organizations can achieve comprehensive monitoring across various cloud environments while potentially reducing overall costs.
We offer various deployment options for Splunk to cater to diverse customer needs and regulations. We can deploy Splunk on various infrastructures, including hyper-converged, bare-metal, two-tier, and three-tier architectures. While cloud deployment is an option, regulations from the Saudi Central Bank restrict customer data storage outside the kingdom, so most of our customers in the financial sector opt for private or local cloud solutions. While a dedicated private cloud experience for Splunk isn't currently available, customers are seeking access to features like SmartStore, a caching tier that, from version 7.x onwards, is bundled with the Enterprise Security license rather than offered separately. The chosen deployment approach depends on factors like budget, customer expectations, performance requirements, and compatibility with Splunk's recommended sizing solutions. We utilize both internal sizing tools and Splunk's official tools to ensure proper resource allocation for indexers, search heads, and forwarders based on specific customer needs. We have deployed our Dell servers, storage, and data protection solutions, and we have implemented a reference architecture. From a hardware perspective, we have everything in place to support Splunk as a reference architecture. This is indisputable, as it reflects our current infrastructure.
I have one customer who uses Splunk on a single site. In contrast, other customers have deployed Splunk in an active-active cluster configuration across two sites, effectively segregating the data across the environments with two-factor authentication. For these other environments, I have observed that each customer has a unique monitoring perspective or performance requirement, reflected in their individual subscriptions.
Splunk is responsible for software maintenance, while we handle the hardware aspects.
Splunk Enterprise Security is one of the most mature security solutions available. While it is expensive, it offers good value by providing the necessary security measurements, monitoring, and auditing capabilities required for running an enterprise environment.
The combined forces of Splunk and Dell create significant resilience for us. Our joint architecture, strong alignment between the Dell account team and Splunk sales and presales, and collaborative efforts have been instrumental in addressing specific customer needs, such as sizing. This collaboration is mutually beneficial: Splunk focuses on selling licenses, while Dell prioritizes hardware sales. Unlike Cloudera, which optimizes licenses for its platform, Splunk bases licensing on the ingestion rate, demonstrating its alignment with our advanced architecture. This creates a win-win situation for both companies.
Which deployment model are you using for this solution?
Is easy to use, provides great visibility, and reduces our resolution time
What is our primary use case?
We use Splunk Infrastructure Monitoring to monitor our hybrid infrastructure.
We implemented Splunk Infrastructure Monitoring to help us monitor our infrastructure as we scale.
How has it helped my organization?
Splunk Infrastructure Monitoring is easy to use. It helps us quickly analyze how our infrastructure is performing across various services.
It helps with proper log management, allowing us to monitor our systems and analyze log data regularly. It also provides security operations capabilities for monitoring system health and ensuring uptime. We noticed these benefits immediately.
Our operational efficiency has increased. It has improved our system health by monitoring the performance of data on servers, virtual machines, and containers, along with overall background processes.
Splunk Infrastructure Monitoring provides end-to-end visibility into our cloud-native environment. This is crucial because any data corruption can impact all the information we've deployed. It also aids in log management, offering parameters that extend its functionality as a comprehensive monitoring tool for CPU, memory usage, and network traffic.
It has helped reduce our mean time to detect by four hours. Our mean time to resolution has been reduced by two hours. By providing access to all our network parameters, it simplifies log ingestion through streamlined calculations.
Splunk Infrastructure Monitoring provides us with faster and more comprehensive insights into our infrastructure, allowing us to focus on critical business initiatives.
We saw the time to value immediately after deploying Splunk Infrastructure Monitoring.
What is most valuable?
The data collection from our VMs, containers, databases, and backend components is valuable.
What needs improvement?
Splunk Infrastructure Monitoring's data analytics can be improved by including suggestions for various types of continuous monitoring.
For how long have I used the solution?
I have been using Splunk Infrastructure Monitoring for three years.
What do I think about the stability of the solution?
The network uptime and monitoring are great.
What do I think about the scalability of the solution?
The scalability of Splunk Infrastructure Monitoring is excellent.
How are customer service and support?
The technical support is good.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
We previously used Datadog, but it doesn't offer network monitoring features like CPU utilization or overall server performance, which Splunk Infrastructure Monitoring does, so we switched.
Splunk Infrastructure Monitoring offers more functionality and visibility, making it a better choice for handling cloud architecture compared to Datadog.
How was the initial setup?
The initial setup was straightforward. One person was required for the deployment.
What other advice do I have?
I would rate Splunk Infrastructure Monitoring 9 out of 10.
Splunk Infrastructure Monitoring offers automated, continuous monitoring and diagnostics, delivering real-time reports for all your data with enhanced functionality compared to other solutions.
We have 200 users of Splunk Infrastructure Monitoring.
Splunk Infrastructure Monitoring is the best solution for monitoring networks, parameters, CPU, memory usage, and network traffic cases.
Which deployment model are you using for this solution?
If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?
Offers end-to-end visibility, real-time monitoring, and distributed tracing, enabling organizations to optimize application performance and troubleshoot issues efficiently
What is our primary use case?
I use it for monitoring and troubleshooting the performance of cloud-native applications.
How has it helped my organization?
Providing comprehensive visibility throughout the environment, it monitors my system, enhances query performance, and offers insights into the user experience.
Troubleshooting and visualizing a cloud-native environment is made easy with Splunk APM. It provides complete visibility into software tools, swiftly monitoring business performance and applications.
It possesses the capability to conduct distributed tracing within our environment. This includes monitoring the speed of tracked access, extending from end users to the Internet, system, and network services, and supporting my software application. Consequently, it offers an end-to-end overview of potential bottlenecks.
Splunk APM has significantly enhanced our organizational efficiency. Initially, my responsibilities included tracking website application performance, managing applications, and handling license releases. Now, it provides real-time user monitoring, transforming the way I handle these tasks.
It significantly impacts our organization's telemetry data, improving operational performance and user experience. The platform provides insights into application performance and effective log management. Ensuring accurate tracking of all performance-related logs contributes to building up the application performance percentage with comprehensive data.
It contributed to a daily reduction of six hours in our mean time to resolve.
What is most valuable?
The most valuable features are troubleshooting and optimizing application performance.
Another value lies in the resilience and quick recovery capabilities offered by the SIEM. It enables thorough monitoring across our landscape, providing insights into the number of running software applications. The tool furnishes comprehensive information across microservices, significantly enhancing our proficiency.
What needs improvement?
Enhancing system availability and optimizing service performance are crucial. The monitoring tool needs to generate analytical reports quickly rather than after prolonged delays.
For how long have I used the solution?
I have been using it for two years.
What do I think about the stability of the solution?
It provides good stability capabilities.
What do I think about the scalability of the solution?
It has the capacity to scale. Approximately two hundred users and one administrator use it.
How are customer service and support?
I would rate its customer service and support eight out of ten.
How would you rate customer service and support?
Positive
How was the initial setup?
The initial setup was straightforward.
What about the implementation team?
The deployment process took six hours. During that time, we established a clear understanding of which applications, whether cloud-based, cloud-native, or otherwise, needed monitoring and improved performance. These categories were identified in-house, with two people overseeing the process.
What was our ROI?
By freeing up their time, it allowed our IT staff to focus on other projects, saving around four hours in total.
Which other solutions did I evaluate?
We evaluated Grafana.
What other advice do I have?
It can serve as an analytical application for enhancing performance, ensuring all dependencies are effectively addressed. Overall, I would rate it eight out of ten.
Enables users to forward logs to a centralized location and intuitive dashboard functionality
What is our primary use case?
I use Splunk primarily from a gateway operations perspective. I work in application support, and as part of that role we regularly monitor the application dashboards built in Splunk from our logs.
How has it helped my organization?
The real problem we were facing was that we couldn't get all of our logs into a single place. We have an on-premises application with multiple servers across different data centers, and we needed to view all of the logs together in order to troubleshoot problems. That's why we started using Splunk to forward all of our logs to a single location.
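As a rough illustration of this setup (the exact stanzas depend on the deployment, and the host, port, index, and paths below are hypothetical placeholders), forwarding logs from each server to a central indexer with the Splunk Universal Forwarder involves configuration along these lines:

```ini
# outputs.conf on each forwarding server
# (splunk-indexer.example.com:9997 is a placeholder for the central indexer)
[tcpout]
defaultGroup = central_indexers

[tcpout:central_indexers]
server = splunk-indexer.example.com:9997

# inputs.conf - watch the application's log directory
# (/var/log/myapp, app_logs, and myapp:log are illustrative names)
[monitor:///var/log/myapp]
index = app_logs
sourcetype = myapp:log
```

With every server shipping to the same indexer, the logs from all data centers become searchable from one place.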
Moreover, Splunk APM gives us end-to-end visibility across our entire on-premises environment.
Another major benefit I've seen is the ability to quickly identify problems using Splunk alerting. We set up alerts against our application metrics, which has helped us resolve major issues much sooner. We can now identify problems as soon as they occur, giving us time to take corrective action before they impact our users.
Splunk has reduced the amount of time our operations team spends investigating problems. This has freed up our engineers to focus on other tasks, such as improving our application performance and adding new features.
What is most valuable?
I like that Splunk APM makes it easy to connect to the application database and run queries against the data, and that it lets me use log forwarders to send logs to a central location, where I can build dashboards to view the data. The dashboards are probably my favorite feature.
What needs improvement?
I've been using the Splunk query language, and setting up the queries I need can be time-consuming. I've had to comb through community forums to find the right filters, and it can still be difficult to extract the details I need.
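For example, a typical search of the kind described, filtering application logs and aggregating errors by host, might look like this in SPL (the index, sourcetype, and field names are hypothetical):

```spl
index=app_logs sourcetype=myapp:log status>=500
| stats count AS error_count BY host
| sort - error_count
```

A search like this can also back a dashboard panel or an alert that fires when error_count crosses a threshold.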
For how long have I used the solution?
I've been using Splunk APM for over a year now. As part of my job in application support, I regularly create and maintain dashboards for our applications and use those dashboards to create alerts based on certain metrics.
Moreover, I'm currently working on a project to create a new dashboard for our customer support application.
What do I think about the stability of the solution?
The stability of the solution is good; I haven't seen any outages so far, and availability has been solid.
How are customer service and support?
I haven't had to contact the support yet. We have a separate team that maintains and builds our relationship with Splunk, so they would be the ones to contact if we had any issues.
What about the implementation team?
The solution doesn't require any maintenance.
Which other solutions did I evaluate?
We used New Relic and AppDynamics before Splunk. AppDynamics was our APM tool, and I still use New Relic for monitoring alongside Splunk. New Relic is great for log monitoring and is our main tool for internal application monitoring.
What other advice do I have?
With Splunk APM as an enterprise solution, various factors come into play, including pricing and how an organization envisions the solution working for them. Some might want it to be cloud-based. It largely depends on the volumes they anticipate. Organizations must decide how much they're willing to invest, especially compared with other investments they've made. With the current economic downturn and organizations looking to cut costs, it's crucial to evaluate the data volumes and the aspects of Splunk that are most relevant to them.
Overall, I would rate the solution an eight out of ten.