
Overview
Cribl Product Overview
How telemetry data was managed over the last 10 years will not work for the next 10. Cribl is purpose-built to meet the unique challenges IT and Security teams face.
Cribl.Cloud is the easiest way to try Cribl products in the cloud through a unified platform. Cribl's suite of products gives flexibility and control back to customers. With routing, shaping, enriching, and search functionality that makes data more manageable, you can easily clean up your data, get it where it needs to be, work more efficiently, and ultimately gain the control and confidence needed to be successful.
The Cribl.Cloud suite of products includes:
Stream: A highly scalable data router for data collection, reduction, enrichment, and routing of observability data.
Edge: An intelligent, scalable edge-based data collection system for logs, metrics, and application data.
Lake: Storage that does not lock data in. Cribl Lake is a turnkey data lake that makes it easy and economical to store, access, replay, and analyze data, with no expertise needed.
Search: A search feature to perform federated search-in-place queries on any data, in any form.
Getting Started
When you purchase your Cribl.Cloud subscription directly from the AWS Marketplace, you can experience a smooth billing process that you're already familiar with, without needing to set up a separate procurement plan to use Cribl products. Track billing and usage directly in Cribl.Cloud.
Enjoy a quick and easy purchasing experience by using your existing spend commitments through the AWS Enterprise Discount Program (EDP) to subscribe to Cribl.Cloud. Get flexible pricing and terms by purchasing through a private offer, and purchase the Cribl.Cloud suite of offerings at a pre-negotiated price. Contact awsmp@cribl.io or a sales representative for flexible pricing for 12-, 24-, or 36-month terms.
We are available in US-West-2 (Oregon), US-East-2 (Ohio), US-East-1 (Virginia), CA-Central-1 (Canada Central), EU-West-2 (London), EU-Central-1 (Frankfurt), and AP-Southeast-2 (Sydney) with more regions coming soon! Regional pricing will apply.
To learn more about pricing and the consumption pricing philosophy, please visit:
- Cribl Pricing: https://cribl.io/cribl-pricing/
- Cribl.Cloud Simplified with Consumption Pricing blog: https://cribl.io/blog/cribl-cloud-consumption-pricing/
Highlights
- Fast and easy onboarding - With zero-touch deployment, you can quickly start using Cribl products without the hassle, burden, and cost of managing infrastructure.
- Instant scalability - The cloud provides flexibility to easily scale up or down to meet changing business needs and dynamic data demands.
- Trusted security - Cribl knows how important protecting data is, and built all Cribl products and services from the ground up with security as the top priority. Cribl.Cloud is SOC 2 compliant, ensuring all your data is protected and secure. Cribl.Cloud is currently In Process for FedRAMP IL4.
Details
Introducing multi-product solutions
You can now purchase comprehensive solutions tailored to use cases and industries.
Features and programs
Security credentials achieved
Pricing
Free trial
| Dimension | Description | Cost/12 months |
|---|---|---|
| Cribl.Cloud Free | Cribl.Cloud Suite Free Tier | $0.00 |
| Cribl.Cloud Enterprise | Cribl.Cloud Suite Enterprise with 1 TB daily ingestion | $142,800.00 |
The following dimensions are not included in the contract terms; they are charged based on your usage.
| Dimension | Cost/unit |
|---|---|
| Overage Fees | $0.01 |
Vendor refund policy
Cribl will refund prior payments attributable to the unused remainder of your purchase.
Custom pricing options
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Software as a Service (SaaS)
SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.
Additional details
Usage instructions
Cribl Cloud Trust IAM Role CloudFormation Template
This CloudFormation template creates an IAM role that allows Cribl Cloud to access specific AWS resources in your account. The role is designed to provide Cribl Cloud with the necessary permissions to interact with S3 buckets and SQS queues.
Template Overview
The template does the following:
- Creates an IAM role named CriblTrustCloud
- Configures a trust relationship with Cribl Cloud's AWS account
- Attaches a policy that grants access to S3 and SQS resources
- Outputs the role name, ARN, and an external ID for authentication
Parameters
- CriblCloudAccountID: The AWS account ID of Cribl Cloud (default: '012345678910')
IAM Role Details
Trust Relationship
The role trusts two specific roles in the Cribl Cloud account:
- arn:aws:iam::{CriblCloudAccountID}:role/search-exec-main
- arn:aws:iam::{CriblCloudAccountID}:role/main-default
These roles can assume the CriblTrustCloud role using the sts:AssumeRole, sts:TagSession, and sts:SetSourceIdentity actions.
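Based on the description above, the trust relationship portion of the role could be sketched like this (an illustrative YAML fragment, not the verbatim Cribl template):

```yaml
# Illustrative sketch of the CriblTrustCloud trust policy described above.
# CriblCloudAccountID refers to the template parameter.
AssumeRolePolicyDocument:
  Version: '2012-10-17'
  Statement:
    - Effect: Allow
      Principal:
        AWS:
          - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/search-exec-main'
          - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/main-default'
      Action:
        - sts:AssumeRole
        - sts:TagSession
        - sts:SetSourceIdentity
      # External ID derived from the stack ID (see Security Feature)
      Condition:
        StringEquals:
          sts:ExternalId: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
```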
Permissions
The role has a policy named CriblCloudS3SQSPolicy that grants the following permissions:
- S3 access:
  - List buckets
  - Get and put objects
  - Get bucket location
- SQS access:
  - Receive and delete messages
  - Change message visibility
  - Get queue attributes and URL
These permissions apply to all S3 buckets and SQS queues in the account.
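As described, the CriblCloudS3SQSPolicy could be sketched as follows (illustrative fragment; the wildcard resources reflect the account-wide scope noted above, which you may want to narrow):

```yaml
Policies:
  - PolicyName: CriblCloudS3SQSPolicy
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow          # S3 access
          Action:
            - s3:ListBucket
            - s3:GetObject
            - s3:PutObject
            - s3:GetBucketLocation
          Resource: '*'          # all S3 buckets in the account, per the description
        - Effect: Allow          # SQS access
          Action:
            - sqs:ReceiveMessage
            - sqs:DeleteMessage
            - sqs:ChangeMessageVisibility
            - sqs:GetQueueAttributes
            - sqs:GetQueueUrl
          Resource: '*'          # all SQS queues in the account
```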
Security Feature
The template includes a security feature that requires an external ID for authentication. This external ID is derived from the CloudFormation stack ID, providing an additional layer of security when assuming the role.
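Concretely, the derivation splits the stack ID ARN. With an illustrative stack ID, the intrinsic functions resolve as follows:

```yaml
# AWS::StackId (example): arn:aws:cloudformation:us-west-2:111122223333:stack/cribl-trust/3a5e1f20-89ab-4cde-9012-3456789abcde
#   !Split ['/', !Ref 'AWS::StackId'] -> ['arn:...:stack', 'cribl-trust', '3a5e1f20-89ab-4cde-9012-3456789abcde']
#   !Select [2, ...]                  -> '3a5e1f20-89ab-4cde-9012-3456789abcde'   (the stack GUID)
#   !Split ['-', ...]                 -> ['3a5e1f20', '89ab', '4cde', '9012', '3456789abcde']
#   !Select [4, ...]                  -> '3456789abcde'                           (the external ID)
ExternalId: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
```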
Outputs
The template provides three outputs:
- RoleName: The name of the created IAM role
- RoleArn: The ARN of the created role
- ExternalId: The external ID required for authentication when assuming the role
Usage
To use this template:
- Deploy it in your AWS account using CloudFormation
- Provide the resulting role ARN and external ID to Cribl Cloud
- Cribl Cloud can then assume this role to access your S3 and SQS resources
Remember to review and adjust the permissions as necessary to align with your security requirements and the specific needs of your Cribl Cloud integration.
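Assuming the template is saved locally as cribl-trust.yaml (a hypothetical filename), the deployment steps above might look like this with the AWS CLI:

```shell
# Deploy the stack; CAPABILITY_NAMED_IAM is required because the template
# creates a named IAM role (CriblTrustCloud).
aws cloudformation deploy \
  --template-file cribl-trust.yaml \
  --stack-name cribl-trust \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides CriblCloudAccountID=<Cribl Cloud trust account ID>

# Read back the RoleArn and ExternalId outputs to provide to Cribl Cloud.
aws cloudformation describe-stacks \
  --stack-name cribl-trust \
  --query 'Stacks[0].Outputs'
```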
Enable CloudTrail and VPC Flow Logging for Cribl Cloud
This document explains the resources that will be created when deploying the provided CloudFormation template. The template is designed to create an IAM role that trusts Cribl Cloud and sets up CloudTrail and VPC Flow logging to an S3 bucket.
Template Overview
The template automates the creation of AWS resources to enable centralized logging, specifically focusing on CloudTrail logs and VPC Flow Logs. It creates S3 buckets for storing these logs, SQS queues for triggering processes upon log arrival, and an IAM role to allow Cribl Cloud to access these logs.
Resources Created
Here's a breakdown of the resources defined in the CloudFormation template:
- CriblCTQueue (AWS::SQS::Queue): Creates an SQS queue named according to the CTSQS parameter (default: cribl-cloudtrail-sqs). This queue is used to trigger actions when new CloudTrail logs are written to the S3 bucket.
  - Properties:
    - QueueName: !Ref CTSQS - Sets the queue name to the value of the CTSQS parameter.
- CriblCTQueuePolicy (AWS::SQS::QueuePolicy): Defines the policy for the CriblCTQueue, allowing s3.amazonaws.com to send messages to the queue. The policy includes a condition that the source account must match the AWS account ID in which the stack is deployed, ensuring only S3 events from the current AWS account can trigger the queue.
  - Properties:
    - PolicyDocument:
      - Statement:
        - Effect: Allow - Allows the actions specified in the policy.
        - Principal: Service: s3.amazonaws.com - Specifies the service that can perform the actions.
        - Action: SQS:SendMessage - Allows sending messages to the queue.
        - Resource: !GetAtt CriblCTQueue.Arn - The ARN of the SQS queue.
        - Condition: StringEquals: 'aws:SourceAccount': !Ref AWS::AccountId - Restricts the source account to the account where the stack is deployed.
    - Queues: !Ref CTSQS - Associates the policy with the SQS queue.
- TrailBucket (AWS::S3::Bucket): Creates an S3 bucket used to store CloudTrail logs. The bucket is configured with a NotificationConfiguration that sends an event to the CriblCTQueue when a new object is created (specifically, a PUT operation), triggering processing when new CloudTrail logs are available.
  - Properties:
    - NotificationConfiguration:
      - QueueConfigurations:
        - Event: s3:ObjectCreated:Put - Triggers the notification when an object is created via a PUT operation.
        - Queue: !GetAtt CriblCTQueue.Arn - The ARN of the SQS queue to send the notification to.
  - DependsOn: CriblCTQueuePolicy - Ensures that the queue policy is created before the bucket.
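Condensed, the S3-to-SQS notification wiring that these three resources implement looks roughly like this (a sketch of the pattern, not the verbatim template):

```yaml
CriblCTQueue:
  Type: AWS::SQS::Queue
  Properties:
    QueueName: !Ref CTSQS

CriblCTQueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    Queues: [!Ref CriblCTQueue]        # !Ref on a queue yields its URL
    PolicyDocument:
      Statement:
        - Effect: Allow
          Principal: { Service: s3.amazonaws.com }
          Action: SQS:SendMessage
          Resource: !GetAtt CriblCTQueue.Arn
          Condition:
            StringEquals: { 'aws:SourceAccount': !Ref 'AWS::AccountId' }

TrailBucket:
  Type: AWS::S3::Bucket
  DependsOn: CriblCTQueuePolicy        # S3 validates queue permissions at bucket creation
  Properties:
    NotificationConfiguration:
      QueueConfigurations:
        - Event: s3:ObjectCreated:Put
          Queue: !GetAtt CriblCTQueue.Arn
```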
- TrailBucketPolicy (AWS::S3::BucketPolicy): Defines the policy for the TrailBucket. This policy:
  - Allows delivery.logs.amazonaws.com (the AWS log delivery service) to write objects to the bucket, requiring the bucket-owner-full-control ACL.
  - Allows cloudtrail.amazonaws.com to get the bucket ACL and put objects into the bucket, also requiring the bucket-owner-full-control ACL.
  - Includes a Deny statement that enforces the use of SSL for all requests to the bucket, enhancing security.
  - Properties:
    - Bucket: !Ref TrailBucket - The name of the S3 bucket.
    - PolicyDocument:
      - Version: 2012-10-17 - The version of the policy document.
      - Statement:
        - Sid: AWSLogDeliveryWrite
          - Effect: Allow
          - Principal: Service: delivery.logs.amazonaws.com - The AWS log delivery service principal.
          - Action: s3:PutObject - Allows putting objects into the bucket.
          - Resource: !Sub '${TrailBucket.Arn}/AWSLogs/' - The bucket prefix the action is allowed on.
          - Condition: StringEquals: 's3:x-amz-acl': bucket-owner-full-control - Requires the bucket-owner-full-control ACL.
        - Sid: AWSCloudTrailAclCheck
          - Effect: Allow
          - Principal: Service: cloudtrail.amazonaws.com
          - Action: s3:GetBucketAcl
          - Resource: !Sub '${TrailBucket.Arn}'
        - Sid: AWSCloudTrailWrite
          - Effect: Allow
          - Principal: Service: cloudtrail.amazonaws.com
          - Action: s3:PutObject
          - Resource: !Sub '${TrailBucket.Arn}/AWSLogs/*/*'
          - Condition: StringEquals: 's3:x-amz-acl': 'bucket-owner-full-control'
        - Sid: AllowSSLRequestsOnly
          - Effect: Deny
          - Principal: '*' - Applies to all principals.
          - Action: s3:* - Denies all S3 actions.
          - Resource: !GetAtt TrailBucket.Arn and !Sub '${TrailBucket.Arn}/*'
          - Condition: Bool: 'aws:SecureTransport': false - Denies requests that do not use SSL.
- ExternalTrail (AWS::CloudTrail::Trail): Creates a CloudTrail trail configured to store logs in the TrailBucket, include global service events, enable logging, create a multi-region trail, and enable log file validation.
  - Properties:
    - S3BucketName: !Ref TrailBucket - The S3 bucket where the logs will be stored.
    - IncludeGlobalServiceEvents: true - Includes global service events.
    - IsLogging: true - Enables logging.
    - IsMultiRegionTrail: true - Creates a multi-region trail.
    - EnableLogFileValidation: true - Enables log file validation.
    - TrailName: !Sub '${TrailBucket}-trail' - Sets the name of the trail.
  - DependsOn: TrailBucket, TrailBucketPolicy
- CriblVPCQueue (AWS::SQS::Queue): Creates an SQS queue named according to the VPCSQS parameter (default: cribl-vpc-sqs). This queue is used to trigger actions when new VPC Flow Logs are written to the S3 bucket.
  - Properties:
    - QueueName: !Ref VPCSQS - Sets the queue name.
- CriblVPCQueuePolicy (AWS::SQS::QueuePolicy): Defines the policy for the CriblVPCQueue, allowing s3.amazonaws.com to send messages to the queue. Like CriblCTQueuePolicy, it restricts access to events originating from the same AWS account.
  - Properties:
    - PolicyDocument:
      - Statement:
        - Effect: Allow
        - Principal: Service: s3.amazonaws.com
        - Action: SQS:SendMessage
        - Resource: !GetAtt CriblVPCQueue.Arn
        - Condition: StringEquals: 'aws:SourceAccount': !Ref 'AWS::AccountId'
    - Queues: !Ref VPCSQS
- LogBucket (AWS::S3::Bucket): Creates an S3 bucket used to store VPC Flow Logs. The bucket is configured with a NotificationConfiguration that sends an event to the CriblVPCQueue when new objects are created.
  - Properties:
    - NotificationConfiguration:
      - QueueConfigurations:
        - Event: s3:ObjectCreated:Put
        - Queue: !GetAtt CriblVPCQueue.Arn
  - DependsOn: CriblVPCQueuePolicy
- LogBucketPolicy (AWS::S3::BucketPolicy): Defines the policy for the LogBucket. This policy:
  - Allows delivery.logs.amazonaws.com to write objects to the bucket, requiring the bucket-owner-full-control ACL.
  - Allows delivery.logs.amazonaws.com to get the bucket ACL.
  - Enforces SSL for all requests to the bucket.
  - Properties:
    - Bucket: !Ref LogBucket
    - PolicyDocument:
      - Version: 2012-10-17
      - Statement:
        - Sid: AWSLogDeliveryWrite
          - Effect: Allow
          - Principal: Service: delivery.logs.amazonaws.com
          - Action: s3:PutObject
          - Resource: !Sub '${LogBucket.Arn}/AWSLogs/${AWS::AccountId}/*'
          - Condition: StringEquals: 's3:x-amz-acl': bucket-owner-full-control
        - Sid: AWSLogDeliveryAclCheck
          - Effect: Allow
          - Principal: Service: delivery.logs.amazonaws.com
          - Action: s3:GetBucketAcl
          - Resource: !GetAtt LogBucket.Arn
        - Sid: AllowSSLRequestsOnly
          - Effect: Deny
          - Principal: '*'
          - Action: s3:*
          - Resource: !GetAtt LogBucket.Arn and !Sub '${LogBucket.Arn}/*'
          - Condition: Bool: 'aws:SecureTransport': false
- FlowLog (AWS::EC2::FlowLog): Creates a VPC Flow Log that captures network traffic information for the VPC specified in the VPCId parameter. The flow logs are stored in the LogBucket; the type of traffic to log is set by the TrafficType parameter (ALL, ACCEPT, or REJECT).
  - Properties:
    - LogDestination: !Sub 'arn:${AWS::Partition}:s3:::${LogBucket}' - The ARN of the S3 bucket where the flow logs will be stored.
    - LogDestinationType: s3 - Specifies that the destination is an S3 bucket.
    - ResourceId: !Ref VPCId - The ID of the VPC to log.
    - ResourceType: VPC - Specifies that the resource is a VPC.
    - TrafficType: !Ref TrafficType - The type of traffic to log.
- CriblTrustCloud (AWS::IAM::Role): Creates an IAM role that allows Cribl Cloud to access AWS resources.
  - Properties:
    - AssumeRolePolicyDocument:
      - Version: 2012-10-17
      - Statement:
        - Effect: Allow
        - Principal: AWS:
          - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/search-exec-main'
          - !Sub 'arn:aws:iam::${CriblCloudAccountID}:role/main-default'
        - Action: sts:AssumeRole, sts:TagSession, sts:SetSourceIdentity
        - Condition: StringEquals: 'sts:ExternalId': !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
    - Description: Role to provide access to AWS resources from Cribl Cloud Trust
    - Policies:
      - PolicyName: SQS
        - PolicyDocument:
          - Version: 2012-10-17
          - Statement:
            - Effect: Allow
            - Action: sqs:ReceiveMessage, sqs:DeleteMessage, sqs:GetQueueAttributes, sqs:GetQueueUrl
            - Resource: !GetAtt CriblCTQueue.Arn and !GetAtt CriblVPCQueue.Arn
      - PolicyName: S3EmbeddedInlinePolicy
        - PolicyDocument:
          - Version: 2012-10-17
          - Statement:
            - Effect: Allow
            - Action: s3:ListBucket, s3:GetObject, s3:PutObject, s3:GetBucketLocation
            - Resource: !Sub '${TrailBucket.Arn}', !Sub '${TrailBucket.Arn}/*', !Sub '${LogBucket.Arn}', !Sub '${LogBucket.Arn}/*'
Parameters
The template uses parameters to allow customization during deployment:
- CriblCloudAccountID: The AWS account ID of the Cribl Cloud instance, required for the IAM role's trust relationship.
  - Description: Cribl Cloud Trust AWS Account ID. In Cribl.Cloud, go to Workspace, click Access, find the Trust, and copy the AWS account ID from the trust ARN.
  - Type: String
  - Default: '012345678910'
- CTSQS: The name of the SQS queue for CloudTrail logs.
  - Description: Name of the SQS queue that CloudTrail triggers for S3 log retrieval.
  - Type: String
  - Default: cribl-cloudtrail-sqs
- TrafficType: The type of traffic to log for VPC Flow Logs.
  - Description: The type of traffic to log.
  - Type: String
  - Default: ALL
  - AllowedValues: ACCEPT, REJECT, ALL
- VPCSQS: The name of the SQS queue for VPC Flow Logs.
  - Description: Name of the SQS queue for VPC Flow Logs.
  - Type: String
  - Default: cribl-vpc-sqs
- VPCId: The ID of the VPC for which to enable flow logging.
  - Description: Select your VPC to enable logging.
  - Type: AWS::EC2::VPC::Id
Outputs
The template defines outputs that provide key information about the created resources:
- CloudTrailS3Bucket: The ARN of the S3 bucket storing CloudTrail logs.
  - Description: Amazon S3 bucket for CloudTrail events
  - Value: !GetAtt TrailBucket.Arn
- VPCFlowLogsS3Bucket: The ARN of the S3 bucket storing VPC Flow Logs.
  - Description: Amazon S3 bucket for VPC Flow Logs
  - Value: !GetAtt LogBucket.Arn
- RoleName: The name of the created IAM role.
  - Description: Name of the created IAM role
  - Value: !Ref CriblTrustCloud
- RoleArn: The ARN of the created IAM role.
  - Description: ARN of the created role
  - Value: !GetAtt CriblTrustCloud.Arn
- ExternalId: The external ID used for authentication when assuming the IAM role.
  - Description: External ID for authentication
  - Value: !Select [4, !Split ['-', !Select [2, !Split ['/', !Ref 'AWS::StackId']]]]
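For illustration, this is how one of the trusted Cribl Cloud principals would assume the role using these outputs (hypothetical ARN and external ID; only the two Cribl Cloud roles named in the trust policy may assume it):

```shell
aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/CriblTrustCloud \
  --role-session-name cribl-cloud-ingest \
  --external-id 3456789abcde
```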
Deployment Considerations
- Cribl Cloud Account ID: Ensure the CriblCloudAccountID parameter is set to the correct AWS account ID for your Cribl Cloud instance. This is crucial for establishing the trust relationship.
- S3 Bucket Names: S3 bucket names must be globally unique. If the template is deployed multiple times in the same region, you may need to adjust the names of the buckets. Consider using a Stack name prefix.
- VPC ID: The VPCId parameter should be set to the ID of the VPC for which you want to enable flow logging.
- Security: Regularly review and update IAM policies to adhere to the principle of least privilege. Consider using more restrictive S3 bucket policies if necessary.
- SQS Queue Configuration: Monitor the SQS queues for backlog and adjust the processing capacity accordingly.
- CloudTrail Configuration: Confirm that CloudTrail is properly configured to deliver logs to the designated S3 bucket.
- VPC Flow Log Configuration: Verify that VPC Flow Logs are correctly capturing network traffic.
- External ID: The External ID is a critical security measure for cross-account access. Make sure it's correctly configured in both AWS and Cribl Cloud.
This detailed explanation provides a comprehensive understanding of the resources created by the CloudFormation template, enabling informed deployment and management. Remember to adapt parameters to your specific environment and security requirements.
Resources
Vendor resources
Support
Vendor support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
FedRAMP
GDPR
HIPAA
ISO/IEC 27001
PCI DSS
SOC 2 Type 2
Standard contract
Customer reviews
Data optimization has transformed log management and supports efficient long-term investigations
What is our primary use case?
What is most valuable?
The flexibility that Cribl provides allows us to manage the data and work with the data effectively.
Implementing Cribl has optimized the infrastructure that we have and is improving the optimization of the services that we are providing.
What needs improvement?
Other than the Cribl module that we are using, Cribl Search has several modules, so there is room to improve that capability in Cribl.
In Cribl Search, the language and the flexibility in querying the data can be improved because it is not as good as other solutions.
Cribl Search does not currently help search data in place for investigative issues or answer questions across our data stores at this moment because we are not using it at that level yet, but hopefully in the future.
I would advise others looking to implement Cribl that if they are evolving Cribl Search, it would be very interesting to see more capability, more flexibility, and more ways to share the data similar to Splunk.
For how long have I used the solution?
I have around three and a half years of experience working with Cribl.
What do I think about the stability of the solution?
Cribl's stability is an eight.
What do I think about the scalability of the solution?
For scalability, I would rate it a ten.
How are customer service and support?
I would rate the technical support as an eight.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
Compared with other solutions and vendors, I would describe Cribl as mature. We have seen similar solutions, but none as mature as Cribl at the moment.
I am talking about the Data Stream Processor from Splunk and also Omnium from Spain.
How was the initial setup?
Cribl is easy to deploy; the team managing the deployment did not report any concerns about the complexity of the deployment of the solution.
The deployment is straightforward; it is just a matter of coordination with other teams, but everything was released in one day.
What other advice do I have?
Regarding the firewall logs with Cribl, the reduction of the data that we are seeing thanks to Cribl is amazing. Although I cannot provide exact numbers, the reduction is significant.
I use Cribl Stream, Cribl Lake, and Cribl Search. My experience with Cribl Search and Cribl Lake is just initial; we are just starting to use them. Cribl Stream is the optimization we are using right now in terms of data collection and data management and is more mature.
Cribl Search has changed my approach to long-term log retention and historical investigation.
I would rate this review an eight overall.
Data pipelines have reduced noisy logs and now support faster, cost-efficient investigations
What is our primary use case?
I have used Cribl for log volume reduction with SIEM tools including Splunk, Sentinel, and Elastic. The raw logs contained a lot of noise, and Cribl helped me filter unnecessary logs, drop low-value fields, reduce repetitive logs, and remove unused attributes. I achieved 40 to 80% reduction in existing volume, which resulted in faster searches and good cost savings.
Cribl helped me route the same log streams to multiple destinations based on conditions I wanted to implement. Firewall logs were sorted with error messages. Whenever I received firewall messages, different types of traffic were allowed or denied, and there were threats from malware, scans, IPS, VPN connections, and authentication failures. I added context to the logs that was useful for SOC teams, including geo-location based on asset owners and application names. Since firewall logs were highly verbose and expensive to ingest into the SIEMs, I used Cribl to parse and transform them into structured fields, enriching the geo and asset context. I also dropped noise from the traffic we received and routed only threat and deny logs to the SIEM while storing the rest in S3 for long-term analysis.
Whenever I received high volume log metrics, Cribl proved to be the best solution. Using Cribl, I processed millions of data per second from various sources including firewalls, Kubernetes clusters, cloud platforms, and Prometheus, which is one of the primary sources from which I receive data. Cribl efficiently handles high-volume logs and metrics through horizontal scaling, easy filtering, smart sampling, metric cardinality reduction, and tiered routing. This ensures performance, cost control, and reliable observability even at massive scale. I primarily worked on the scaling part, including auto-scaling, and I also used load balancers to balance the load between worker nodes and the leader node.
Cribl reduces data complexity by normalizing log formats, handling schemas, flattening nested data, and reducing high cardinality fields. I worked with instances where I had different JSON files and set cardinality fields including request ID, session ID, and pod UID. By applying conditional parsing, flattening JSON nesting files, and removing high cardinality fields, I simplified downstream analytics and reduced ingestion cost by almost 60%. In our projects, each team works on particular domains, and I was specifically working with load balancing, auto-scaling, and routing data to destinations. Cribl is one of the most reliable solutions I have worked with, and it has provided a user-friendly experience. Whenever I wanted to access data from years back to check for seasonality impact, Cribl helped me accomplish this. I believe that if this feature works well, the other features will also work seamlessly.
What is most valuable?
Cribl is one of the best data pipelining platforms, and with all the features that have been upgraded over the past three years, it has been seamless. Although it is on the expensive side compared to competitors such as Edge Delta and many other platforms, Cribl is one of the most secured solutions. When data passes through or when I store any data in hot tier, cold tier, or archive storage, it is very easy to determine which data to keep, and the data routing process is seamless when compared to other platforms.
Regarding the UI, depending on the configuration, the home screen shows me the system's health, including ingestion rates and how many events are processed per second. Throughput charts are available, and errors or warnings also pop up. The UI is well-organized for me. Whenever I log into the Cribl UI, I go directly to the streams to classify the incoming logs and then create a pipeline using the drag-and-drop builder. I do not need to write full code because it has drag-and-drop functions. I choose functions such as Parse, Eval, Drop, and live events preview to test against sample events. Once this is done, I assign routes to destinations. The particular destinations I worked with include Splunk and Stream. Finally, I monitor the throughput, errors, and metrics dashboard and adjust as needed. Cribl follows a very systematic approach in the UI, and it is a hassle-free solution for developers to work on.
I have not worked with Cribl Search very much, but I have worked extensively with Cribl Stream. From my certification, I remember that Cribl Search's Search-in-Place feature allows me to query data where it already lives. Without re-ingesting data into a SIEM, I can search it through Cribl dashboards. For example, I keep data in the SIEM for 7 to 14 days, and for months or years in object storage. Cribl Search allows federated, on-demand search of logs and metrics. When platforms can access data without ingesting it directly into the SIEM, I can directly use the on-demand function, and it is mainly used for cost-effective historical search or investigations from past years. This Cribl Search feature helps me check seasonality impact, such as comparing last year's revenue percentage to this year's revenue. This helps me make better decisions about the market. Since my client is Microsoft and I ingest heavy amounts of data every day, Cribl has been handling this very well.
What needs improvement?
To improve Cribl, I would focus on comparing performance and architecture with other tools. High volume efficiency can be made more seamless, such as improving the identification of noisy sources via metrics and sampling repetitive logs. This feature already exists, but I am talking about how to make it more efficient. I will focus on the high volume data part, reducing data complexity, making performance metrics more visible, and the dashboard can be more interactive. Integration of AI tools can be much more helpful. I am pretty sure that the developers of Cribl have been working on that and an update will come soon with AI integration. However, I need to ensure that data is secured as much as possible because data security is non-negotiable for data engineers.
Cribl is a very interactive application for me and one of my favorite applications to work on. I hope to have more opportunities to work with Cribl. The cost is very high compared to alternatives such as Edge Delta, which offers much cheaper prices. However, price comes with a cost, and speed and security come with a price.
Integrating AI is one of the most valuable improvements. It will most likely be Copilot because I do not think OpenAI will agree to integrate with Cribl, or Cloud may also come in, but I believe Copilot will be first. Integration of Copilot will be a big advantage for everyone. I would not need to run scripts or go back to documentation to check function syntax because there are many functions I need to use in day-to-day life, and it is very hard to remember every function syntax. When I integrate AI, it will directly help me get the functions. I just need to provide the prompt needed, extract the data from the Copilot chat, and use it in my day-to-day life. My overall review rating for Cribl is 9 out of 10.
For how long have I used the solution?
I have been working with Cribl for three years and two months.
What do I think about the stability of the solution?
I have faced only one or two instances with the login part, but it was due to maintenance. The Cribl platform was not accepting my credentials during that time, but it was resolved quickly. I have not come across any customer-facing issues, so I would not be able to provide additional details on that.
What do I think about the scalability of the solution?
Whenever I received high volume log metrics consistently, Cribl proved to have the best capabilities. Using Cribl, I processed millions of data per second from various sources including firewalls, Kubernetes clusters, cloud platforms, and Prometheus, which is one of the primary sources from which I receive data. Cribl efficiently handles high-volume logs and metrics through horizontal scaling, easy filtering, smart sampling, metric cardinality reduction, and tiered routing. This ensures performance, cost control, and reliable observability even at massive scale. The primary thing I worked on is the scaling part, including auto-scaling, and I also used load balancers to balance the load between worker nodes and the leader node. Auto-scaling is available and automatically adjusts the scaling part.
Which solution did I use previously and why did I switch?
I have not worked with other solutions directly, but recently I had an opportunity to speak with the Edge Delta founder who wanted me to review Edge Delta versus Cribl. In that discussion, I remembered some points such as high scalability and auto-scaling being features in Cribl and not in Edge Delta, but Edge Delta may be able to compete on price at some point. When they integrate AI, there may be some additional advantages. Since I work for my organization, the organization bears the whole cost, and I have not directly purchased Cribl software. There are some features that could be included in the basic package, similar to Power App tools in Microsoft. There are many advanced features that require paying additional fees. Some basic features could be added directly to the subscription plan rather than being offered as custom configurations or particular add-ons.
How was the initial setup?
The setup was straightforward with no complexity. Every application nowadays has a seamless experience, and three years ago when I was getting into Cribl, it was already very interactive for me. One additional observation is that there are not many learning videos for Cribl on YouTube or other free learning platforms besides Cribl University. I expect they will slowly expand to other platforms as well, which will make it easier for users to get into the application.
What about the implementation team?
I did not require an implementation team. I created an account by signing up with all the details and filling out the form using Cribl's payment gateway, following the same process as I would for AWS or Azure. I did not buy through the Azure platform; I received the credentials directly and just logged in with them. When I was getting certified, I was redirected to Cribl's website to buy directly, not through any vendor apps.
What was our ROI?
The most talked-about point for Cribl is that it is one of the most seamless applications to work on. The speed at which it processes data and handles high ingestion volumes is why it is one of the most expensive platforms. I have not worked with anything other than Cribl, so I am not able to compare. However, since my client is Microsoft and I ingest heavy amounts of data every day, Cribl has handled this very well.
Which other solutions did I evaluate?
I have not worked with Cribl Search very much, but I worked extensively with Cribl Stream. From my certification, I remember that Cribl Search's search-in-place feature allows me to query data where it already lives. Without re-ingesting data into a SIEM, I can search it through Cribl dashboards. For example, I keep data in the SIEM for 7 to 14 days, and for months or years in object storage. Cribl Search allows federated, on-demand search over logs and metrics. Because it can access data without ingesting it into the SIEM, I can query on demand, which is mainly useful for cost-effective historical searches or investigations into past years. This feature helps me check seasonality impact, such as comparing last year's revenue percentage to this year's. That helps me make better decisions about the market.
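The search-in-place idea described above can be illustrated with a toy sketch: scan archived log files where they live and yield only the matching events, rather than loading everything into the SIEM first. This is not Cribl Search's actual engine; local JSON-lines files stand in for object storage, and the file layout is an assumption for the example.

```python
import json
from pathlib import Path

def search_in_place(archive_dir, predicate):
    """Scan archived JSON-lines logs where they already live and yield
    only matching events. A toy stand-in for federated search over
    object storage: nothing is re-ingested into the SIEM."""
    for path in sorted(Path(archive_dir).glob("*.jsonl")):
        with path.open() as fh:
            for line in fh:
                event = json.loads(line)
                if predicate(event):
                    yield event
```

For instance, a seasonality check could pass a predicate selecting only last year's revenue events and compare the totals against this year's, with neither dataset ever entering the SIEM.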
What other advice do I have?
To improve Cribl, I would focus on comparing performance and architecture with other tools. High-volume efficiency can be made more seamless, such as improving the identification of noisy sources via metrics and the sampling of repetitive logs. This feature already exists, but I am talking about making it more efficient. I would focus on the high-volume data part, reducing data complexity, making performance metrics more visible, and making the dashboard more interactive. Integration of AI tools could be very helpful. I am pretty sure the developers of Cribl have been working on that and an update will come soon with AI integration. However, data needs to be secured as much as possible, because data security is non-negotiable for data engineers.
Cribl is a very interactive application for me and one of my favorite applications to work on. I hope to have more opportunities to work with Cribl. The cost is very high compared to alternatives such as Edge Delta, which offers much cheaper prices. However, speed and security come at a price.
Integrating AI is one of the most valuable improvements. It will most likely be Copilot, because I do not think OpenAI will agree to integrate with Cribl; Claude may also come in, but I believe Copilot will be first. Integration of Copilot will be a big advantage for everyone. I would not need to run scripts or go back to documentation to check function syntax, because there are many functions I use day to day, and it is very hard to remember every function's syntax. With AI integrated, I could just provide the prompt, extract the result from the Copilot chat, and use it in my day-to-day work. My overall review rating for Cribl is 9 out of 10.
Log management has become efficient as data volume reduces and security insights improve
What is our primary use case?
My primary role involves transforming customers' DDI environments, migrating from legacy platforms to newer platforms. A couple of my clients had the challenge of log analysis. DDI (DNS, DHCP, and IPAM) environment logs are quite large, and when they need to be sent to a SIEM, Splunk, or any other log analysis environment, the licensing cost is substantial. The clients were looking for options to reduce log size while maintaining visibility. I came across Cribl, a beautiful product that fascinated me. I was also evaluating a couple of other products including Datadog, but Cribl fascinated me because you can customize it to your requirements: you can channel the logs, make them available as needed, and deduplicate. Many things can be done in the Cribl environment. I worked along with the LogStream team and the clients, and we set up a Cribl environment to pass logs from the DDI environment to Splunk.
In my current field of DDI transformation as an enterprise architect, I have close to 22 years of IT experience working as an enterprise DDI architect.
Cribl handles high volumes of diverse data types such as logs and metrics very efficiently; reducing data and log volume is what Cribl is primarily built for.
What is most valuable?
What I like most about Cribl is basically two things. One is the data reduction. Syslogs are huge, ranging from gigabytes to terabytes in size. When the syslogs need to go to the security operations team or security team for log analysis and event monitoring, it's a nightmare for them to analyze all of them. Cribl intelligently formats them: it extracts the data from the syslogs and reduces their size by almost 30 to 40 percent, which I have seen in practice. It removes any null values that are not required, keeps what is needed, and discards the rest.
Secondly, sometimes the logs contain only sparse information, such as just an IP, a site ID, or what we call the circuit ID. Cribl fetches GeoIP information, or checks the reputation of domains if DNS queries are going to certain domains. Based on RPZ (response policy zone) files, it adds those additional fields to the log so that the logs are enriched. Where the traditional logs don't show meaningful values, this makes them more user-friendly and readable. Those are basically the two things that I appreciate about Cribl: it presents what is required out of a syslog output.
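The reduce-then-enrich flow described in the two points above can be sketched as a single pipeline function. This is an illustration of the idea, not Cribl's built-in functions: the field names and the `geoip_lookup` resolver are hypothetical.

```python
def reduce_and_enrich(event, geoip_lookup):
    """Strip empty fields, then attach GeoIP context. An illustrative
    sketch of the reduce-then-enrich idea, not Cribl's built-in
    functions; `geoip_lookup` is a hypothetical ip -> country resolver."""
    # Reduction: drop null/empty placeholder values before forwarding.
    slim = {k: v for k, v in event.items() if v not in (None, "", "-")}
    # Enrichment: add a derived field so downstream analysts see context,
    # not just a bare IP.
    if "client_ip" in slim:
        slim["geo_country"] = geoip_lookup(slim["client_ip"])
    return slim
```

In a real deployment the resolver would be backed by a GeoIP database or reputation feed; here any callable mapping IP to country works.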
What needs improvement?
What I dislike about Cribl is a direct pain point of mine. I do DDI migration, transforming a legacy architecture to a newer platform, and my expertise is in Infoblox DDI. If a customer environment is running Microsoft or an old BIND-based Linux DNS/DHCP solution, I consult with them, and if they are willing to move to Infoblox DDI, I help them migrate. The issue is that when doing the Cribl integration, Cribl doesn't have any out-of-the-box customization packs for Infoblox. Whatever is available is only in the community. I need to go through the community page, download each customization pack or the many filters, and check whether each filter applies or not. Nothing is out of the box from Cribl. I have sent a couple of requests to Cribl about this. Infoblox is a market leader in the DDI segment, so a native out-of-the-box integration with Infoblox, with filter packs and customization packs, would be great for Cribl LogStream.
Analytics is the area where they need to improve. When passing query logs or DNS logs, if certain malicious query patterns need to be identified, or if fast-flux attacks are happening, Cribl reporting on that would definitely be a plus. Those features may or may not exist, but I could not find those options in Cribl. That is one area where they need improvement. Out-of-the-box integrations with different DDI platforms would also definitely be a plus; I could not explore much in those areas.
I haven't used the new search-in-place feature of Cribl Search yet, because in my recent engagement where I deployed Cribl, the log analysis channel was not in scope. If I get a chance to deploy for another client, I will go through that feature.
Regarding Cribl's user interface for managing log processing tasks, the newer interface looks cool compared to the initially clumsy one. However, some aspects can still be improved. I have seen that when switching between the dark and light themes, some text is not clearly visible in the dark theme, and the graphs are very hard to read. If they could improve that, it would be great.
The initial deployment of Cribl is one area that needs improvement, because it takes some time. Specifically, for complex platforms such as Infoblox DDI, where no out-of-the-box customization packs are available, you need to go through community portals and Cribl community blogs to find scripts and customization packages. It takes some time, but once that is set up, it becomes easy. It's quite easy after that.
For how long have I used the solution?
I have been using the solution for two to three years.
What do I think about the stability of the solution?
We never had any outage or situation where Cribl was not working.
Cribl doesn't require any maintenance on my end, because on the DDI side no maintenance is required. Cribl is passing the logs and storing them. Maintenance is only required if it's hosted on a VM and disk space runs low; then you need to increase the disk space, and that is taken care of by the VM team. In every enterprise, the virtualization or data center team is separate, and they can take care of storage issues. Cribl just passes and stores the logs, so if storage is running low, they need to increase it. That is the kind of maintenance I see, not on the source side.
What do I think about the scalability of the solution?
Cribl is definitely scalable because you get a vendor-agnostic platform. Today a client may be using Infoblox DDI and sending those logs to Cribl; if tomorrow they use a different DDI platform, the log analysis channel, the log plane, is not affected. If you need more processing or analysis, you add more Cribl instances, so it scales horizontally. Vertically, you can also add storage. It is scalable both ways, horizontally and vertically.
How are customer service and support?
I haven't contacted technical support because we never had any outage or situation where it was not working. I only worked in small stints for different clients, so I didn't have reason to contact technical support. The self-help resources and documentation are really good. Cribl has videos available that you can go through to gain knowledge.
How would you rate customer service and support?
Negative
How was the initial setup?
The initial deployment of Cribl is one area that needs improvement, because it takes some time. Specifically, when you have a complex platform such as Infoblox DDI, where no out-of-the-box customization packs are available, you need to go through community portals and Cribl community blogs to find the scripts and customization packages, and that takes time. Once that is set up, it becomes easy. It's quite easy after that.
What about the implementation team?
One or two people can deploy Cribl; it's not a big deal. You don't need a big team to deploy it. At most, I would say two people.
What's my experience with pricing, setup cost, and licensing?
I still have no idea about pricing, because pricing is determined by the customer I work with; it's handled by a separate finance team, and they decide what the price should be. What I have seen in my implementation career with Cribl is that the licensing cost of Splunk is significant because Splunk uses volume-based licensing: the more data you send, the higher the price. Whatever is saved on the Splunk side is ideally adjusted against the Cribl pricing. It's a win-win from both ends: you save on Splunk, use Cribl, and eventually end up with a lower total cost of ownership (TCO).
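The TCO argument above is simple arithmetic, and a back-of-the-envelope sketch makes it concrete. All figures here are made up for illustration; real Splunk and Cribl pricing varies by contract.

```python
def tco_comparison(daily_gb, siem_cost_per_gb, reduction_pct, pipeline_cost):
    """Back-of-the-envelope TCO sketch with hypothetical numbers:
    volume-based SIEM licensing before vs after routing data through
    a reduction layer like Cribl."""
    before = daily_gb * siem_cost_per_gb
    after = daily_gb * (1 - reduction_pct / 100) * siem_cost_per_gb + pipeline_cost
    return before, after

# Hypothetical figures: 1000 GB/day at $5/GB, 40% volume reduction,
# $800/day attributed to the reduction layer.
before, after = tco_comparison(1000, 5, 40, 800)
```

With these made-up inputs, daily spend drops from 5000 to 3800, which is the "savings from Splunk adjusted against Cribl pricing" point in numbers.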
Which other solutions did I evaluate?
When I was looking for these kinds of solutions, I came across Datadog and Kafka. Those are not as readily available or as cross-platform as Cribl, and I couldn't explore those alternatives further. I got a good product and stuck with it; I didn't check others.
What other advice do I have?
Regarding firewall logs, I can't give exact figures because firewalls are not my area of expertise. I have definitely seen the Splunk log volume decrease for a DDI platform with Cribl. If Cribl forwards firewall logs to Splunk, there will definitely be a decrease in firewall log volume as well, but I can't say exactly how much. I have given this product a rating of 9 out of 10.
Data routing has reduced firewall noise and now optimizes log volumes and costs
What is our primary use case?
My use cases for Cribl involved being part of a Splunk team organization, where I was brought in to do a soft confirmation program. I was onboarding more and more logs into Cribl as my license costs kept going up, and we did some filtering using Cribl.
What is most valuable?
What I liked the most about Cribl is the way it handled firewall logs and the way it could handle Microsoft Windows server logs as well.
Cribl's ability to contain data cost and complexity is actually very good. I don't have a problem with Cribl whatsoever. It's not one of those products that says it does something it doesn't. I still think that vendors trying to compete against Cribl are going to lose this one.
Cribl handles high volumes of diverse data types such as logs and metrics very well. I was handling approximately three terabytes of logs a day, and I have had no problems with it at all. I'm sure there are bigger organizations out there, but three terabytes is still substantial. The enterprise organization I worked for had over a hundred thousand employees on a global scale and twenty thousand servers, so it's a big company.
What needs improvement?
One downside of Cribl was the quite long sales cycle, though that was probably partly my fault as well. There weren't really any negatives on the product itself.
Cribl can do better by tightening up their Cribl packs, as I think there were numerous flavors of different configurations that weren't supported. There were a lot of unsupported Cribl packs and they probably need to get that certified or do something about that.
For how long have I used the solution?
I have been using Cribl in my career for about two years in a previous role.
What do I think about the stability of the solution?
Regarding stability, I have not seen any lagging, crashes, or downtime at all with Cribl.
What do I think about the scalability of the solution?
Regarding scalability, we worked for a large enterprise organization and had to build resilience into our solution. Cribl was scalable, so there were no problems with it.
How are customer service and support?
I know we had access to Cribl University. I don't think we actually made any calls to Cribl support.
How would you rate customer service and support?
Neutral
Which solution did I use previously and why did I switch?
I have used alternatives; we evaluated the Splunk offering. I can't remember the name of it now. Splunk had a name for it, but it wasn't as good because it didn't segment the logs into different buckets; I had to ingest the whole bucket, and I didn't want that. We did look at other products on the marketplace, but only ones vendor-specific to Splunk.
How was the initial setup?
The initial deployment was easy. We had a design, went through our own internal processes to get it all done, put some exception criteria in place, built it out in the cloud, and did the connections cloud to cloud. Overall, it was easy.
What about the implementation team?
For the deployment, we had two people: my internal guy and the Cribl presales engineer who helped me out.
What was our ROI?
I have seen a decrease in firewall logs with Cribl of about seventy percent.
What's my experience with pricing, setup cost, and licensing?
Regarding current pricing, it was based on an ingress-based model that we used, and it was favorable. It was cheaper than the Splunk license. We didn't have a problem with the purchase.
What other advice do I have?
It took us only a couple of weeks to fully deploy Cribl. We got it up and running, went through batches of what we were doing, set up Cribl Stream and the heavy forwarders, and got it all working. It wasn't too bad. We looked at some of the Cribl packs, which are predefined configurations. It was easy to get set up; in our case it was cloud to AWS cloud.
Cribl did not require any maintenance on my end. I'm not the technical person; I'm the program manager. I would rate this product an 8 out of 10.
Centralized log routing has simplified multi-destination forwarding and improved data management
What is our primary use case?
We use Cribl for log management.
What is most valuable?
Cribl can send data to different destinations, making it a vendor-agnostic tool. For log management, we can parse values or enrich fields at the Cribl level and then send the data to different destinations such as S3, Splunk, Elastic, or others. This is the feature I love most, because Cribl acts as an intermediate heavy forwarder that can route data to different destinations.
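The intermediate-forwarder pattern described above can be sketched as a tiny rule-based router. This is a minimal illustration of the concept, not Cribl's actual routing engine; the route names and sink callables are assumptions for the example.

```python
def route(event, routes):
    """Deliver one event to every destination whose filter matches.
    A minimal sketch of rule-based routing to multiple sinks (S3,
    Splunk, Elastic, ...), not Cribl's actual routing engine."""
    delivered = []
    for name, (matches, sink) in routes.items():
        if matches(event):
            sink(event)            # hand the event to that destination
            delivered.append(name)
    return delivered

# Hypothetical setup: archive everything to "s3", send only errors to "splunk".
sent = {"s3": [], "splunk": []}
routes = {
    "s3": (lambda e: True, sent["s3"].append),
    "splunk": (lambda e: e.get("severity") == "error", sent["splunk"].append),
}
```

Because the sinks are just callables, swapping Splunk for Elastic means changing one entry in `routes`, which is the vendor-agnostic point the review makes.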
Cribl is intuitive and user-friendly in navigating the UI.
What needs improvement?
Some of the integrations such as SNMP need improvement, and I feel Cribl should improve on SNMP integration and also on the database monitoring space. These two areas need improvement.
For how long have I used the solution?
I have been using it for one and a half to two years.
What do I think about the stability of the solution?
Cribl handles the volume of logs effectively. In case of any issues, Cribl support does its job resolving them. Overall, it handles log volume very effectively.
How are customer service and support?
I rate the technical support for Cribl as nine out of ten.
How would you rate customer service and support?
Positive
Which solution did I use previously and why did I switch?
Cribl is solving these routing issues and bridging the gap. Splunk has an equivalent offering, but Cribl is currently leading in this space. There may be other alternatives, but they are still in an evolving phase; Cribl is a mature product.
How was the initial setup?
Cribl is easy to deploy. Spinning it up does not take much time, just about a week. However, getting the data in and configuring the sources and destinations takes longer.
What was our ROI?
For scalability, I would rate it as nine out of ten.
What's my experience with pricing, setup cost, and licensing?
I am not aware of the data cost. However, Cribl removes the complexity of having different agents installed. If we shifted from Splunk to Elastic, we would have to install a new agent and point our applications to Elastic. With Cribl, we avoid running multiple agents in between: we forward data to Cribl, and Cribl sends it wherever we like. That is the kind of complexity it solves.
Which other solutions did I evaluate?
Big businesses use Cribl.
What other advice do I have?
I assess the stability of Cribl as eight out of ten. I recommend Cribl for others looking to implement this product. I would rate Cribl overall as eight out of ten.
