Reviews from AWS customers

9 AWS reviews

13 external reviews

External reviews are not included in the AWS star rating for the product.


4-star reviews

    reviewer2785254

Fast analytics for dashboard queries has reduced latency and improved customer experience

  • December 08, 2025
  • Review from a verified AWS customer

What is our primary use case?

I generally use ClickHouse for GROUP BY queries; it is really fast at aggregating data and returning sums, metrics, and latencies for our dashboards.
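To illustrate that kind of dashboard aggregation, here is a minimal sketch with hypothetical table and column names (request_metrics, latency_ms, and so on), not the reviewer's actual workload:

    SELECT
        service,
        toDate(event_time) AS day,
        count() AS requests,
        sum(bytes_sent) AS total_bytes,
        quantile(0.95)(latency_ms) AS p95_latency_ms   -- summation and latency metrics per group
    FROM request_metrics
    WHERE event_time >= now() - INTERVAL 7 DAY
    GROUP BY service, day
    ORDER BY day, service;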

What is most valuable?

The performance and scalability of ClickHouse are huge strengths, allowing us to store billions of records and return results very quickly; our latencies are now heavily optimized.

ClickHouse has positively impacted our organization by optimizing costs and significantly reducing latencies, which has improved customer experience greatly.

Regarding specific metrics, latency has dropped from about one minute to about one second, which is a huge improvement.

What needs improvement?

The pain points I have experienced with ClickHouse include the initial learning curve, which was somewhat challenging.

I gave ClickHouse a nine out of ten because, despite its great performance, the initial learning curve and the need for more detailed documentation kept it from being a perfect ten.

For how long have I used the solution?

I have been using ClickHouse for about two years, and our main use case is dashboard analytics queries.

What do I think about the stability of the solution?

ClickHouse is very stable in our experience, and we have also evaluated Apache Druid and other Apache databases.

What do I think about the scalability of the solution?

In terms of scalability, ClickHouse has really good scalable engines and features, showing solid core performance and implementation.

How are customer service and support?

I have not really interacted with ClickHouse support; we generally find solutions on online portals such as Stack Overflow or through AI tools, and I interacted once on ClickHouse's Slack channel.

How would you rate customer service and support?

Neutral

Which solution did I use previously and why did I switch?

We have not used any solution other than ClickHouse; we adopted it directly based on our market research.

What was our ROI?

I have seen a return on investment with ClickHouse, having saved a lot of money and time, and now our database is up and running with few outages or problems.

What's my experience with pricing, setup cost, and licensing?

My experience with pricing, setup cost, and licensing for ClickHouse was good; it was straightforward, and I am satisfied with it.

What other advice do I have?

For others looking into using ClickHouse, I advise you to learn about it very thoroughly, utilizing YouTube videos, documentation, and general guides, as it will be helpful for optimized usage.

In general, I think it was a great experience using ClickHouse, and I have no additional thoughts.

I think ClickHouse is doing great, providing good documentation and tutorials on how to use ClickHouse or how to write queries effectively to improve performance.

Overall, I rate ClickHouse nine out of ten.


    AmitVerma

Real-time IoT analytics have boosted insights while an AI agent now answers device activity queries

  • December 07, 2025
  • Review provided by PeerSpot

What is our primary use case?

ClickHouse has been in use for the last year.

The primary use case involves IoT devices. Software has been developed to onboard IoT devices, which send data at varying frequencies. Analysis must be provided to users based on these different data transmission patterns. A dashboard allows users to onboard their IoT devices and analyze their data. The volume of data is substantial. For example, if a company has one lakh (100,000) IoT devices sending data every 10 minutes, the data generated in one month can reach several gigabytes to terabytes. Real-time analysis is required to determine how many times devices were active or inactive, week-wise device activity, total average voltage for energy meters, and many other analytical insights.

ClickHouse has delivered exceptional performance for this use case. Testing was conducted on over 10 million rows, performing count, sum, average, aggregation by week, aggregation by month, ordering, and sorting operations. ClickHouse provides responses within a few seconds, typically two to three seconds, which is impressive. An AI agent has also been built on top of ClickHouse for user-based queries. When a user asks a question such as how many devices are inactive for more than a month, the system directly contacts OpenAI, generates a ClickHouse query from the response, and submits it to ClickHouse. ClickHouse responds within ten seconds. Testing has been performed on over 10 million rows, and it is working well for the use case.

The two main use cases are analysis and an AI agent built on top of ClickHouse.

What is most valuable?

Speed is the main valuable feature. Setup is straightforward. Several features are utilized, including materialized views, simple views, ReplacingMergeTree, and AggregatingMergeTree. These features are used to aggregate results that remain unchanged. For example, monthly, weekly, and daily summaries are aggregated and remain unchanged because they are historical data. The materialized view is one of the most used and valuable features being leveraged.
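As a minimal sketch of that pre-aggregation pattern (table and column names are hypothetical, not the reviewer's schema), a materialized view can roll raw telemetry up into an unchanging daily summary:

    -- Daily summary target; SummingMergeTree merges partial sums in the background.
    CREATE TABLE device_daily_summary
    (
        device_id     UInt64,
        day           Date,
        readings      UInt64,
        total_voltage Float64
    )
    ENGINE = SummingMergeTree
    ORDER BY (device_id, day);

    -- The materialized view aggregates each inserted block into the summary table.
    CREATE MATERIALIZED VIEW device_daily_mv TO device_daily_summary AS
    SELECT
        device_id,
        toDate(event_time) AS day,
        count()            AS readings,
        sum(voltage)       AS total_voltage
    FROM telemetry_raw
    GROUP BY device_id, day;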

What needs improvement?

Everything appears to function well. From a software engineering perspective, one consideration involves eventual consistency. In the case of ReplacingMergeTree, data duplication is eventually corrected during the merge process: when parts are merged into one, duplicates are removed. If duplication could be removed in real time, that would be better. An info table has been created to provide the latest data per device; however, when the same device data is inserted again, the duplicate persists until the parts merge. Until then, when data is sent to the UI, it must be grouped by device ID and the last created date must be picked to avoid showing duplicate data. This is understood to be a limitation of the append-only nature, but a solution might exist to address this issue.

Clarity on when real-time deduplication occurs would be beneficial: duplicates exist before the merge and are removed only after it. ClickHouse provides ReplacingMergeTree, MergeTree, SummingMergeTree, and AggregatingMergeTree. A table engine family that guarantees no duplicates would be valuable. Information should be provided to customers regarding the performance trade-off, such as an X to Y performance reduction, so they can decide whether it is worth it. In this case, the latest data must be displayed and users must see correct data, so grouping is currently done outside the main query. An engine that guarantees one hundred percent deduplication would help; the FINAL keyword is available today, but it requires developers to remember to add it to queries and to weigh its cost carefully.
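A minimal sketch of the trade-off described above, with hypothetical table and column names: ReplacingMergeTree deduplicates only at merge time, so reads either ask for merged semantics with FINAL or pick the latest row explicitly.

    CREATE TABLE latest_device_info
    (
        device_id  UInt64,
        created_at DateTime,
        status     String
    )
    ENGINE = ReplacingMergeTree(created_at)   -- keeps the row with the highest created_at per key
    ORDER BY device_id;

    -- Option 1: force deduplicated results at read time (slower).
    SELECT * FROM latest_device_info FINAL WHERE device_id = 42;

    -- Option 2: group outside the main query and pick the latest row, as described above.
    SELECT device_id, argMax(status, created_at) AS latest_status
    FROM latest_device_info
    GROUP BY device_id;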

Configuration complexity presents another improvement opportunity; the difficulty is around eight to nine out of ten, and nine is reasonable, as there must remain some room for improvement. There are too many hurdles for beginners setting up ClickHouse. Many parameters must be configured, such as the maximum number of parts, which determines when writes to a table are rejected. Many parameters require careful setup, making it very difficult for beginners. Unlike PostgreSQL or MongoDB, which can be downloaded and run without difficulty, ClickHouse is not easy for beginners to set up. There is scope to enable easier setup; sensible defaults could be provided so that anyone can set up ClickHouse easily.

What do I think about the scalability of the solution?

Scalability was the main concern. Feedback has been very positive regarding ClickHouse's scalability.

How are customer service and support?

ClickHouse customer support has not been contacted because the cloud service is not being used. However, the documentation and blogs have been thoroughly reviewed, and all issues have been resolved using these resources.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

The transition was made from MongoDB to ClickHouse. MongoDB was used when data from IoT devices required bulk writing due to frequent write operations. InfluxDB and Cassandra were considered as alternatives. However, when MongoDB was evaluated for analysis, it did not provide good performance. The decision was made to switch to an analytics-focused database. ClickHouse was discovered and selected, and has been in use since that decision.

How was the initial setup?

Setup is straightforward, as noted earlier; the features described above (materialized views, ReplacingMergeTree, and the aggregation engines) were easy to put in place.

What's my experience with pricing, setup cost, and licensing?

Licensing and cost details are not available as these matters are managed by the DevOps team. An 8-core machine with 32 GB RAM is being used to run ClickHouse.

Which other solutions did I evaluate?

RocksetDB and Google Spanner, along with other open-source solutions, were reviewed. After reading blogs, documentation, and reviews, and after testing some solutions including Spanner, Spanner was found to provide similar performance, but cost is the main concern. ClickHouse was selected because it is open-source and can be run on-premises, which is the primary requirement due to data security considerations.

What other advice do I have?

An exact monetary value cannot be provided, but time savings from query execution can be quantified. Testing was conducted on three to four lakh (300,000 to 400,000) rows using sum aggregation in PostgreSQL, MongoDB, and ClickHouse. PostgreSQL and MongoDB required five to ten seconds on an 8 GB machine with a four-core CPU, while ClickHouse on the same hardware returned results within one second, roughly six to seven times faster query execution on the same data.

The main concern is that too many issues exist for beginners to set up ClickHouse. Many parameters must be configured, which can complicate setup for beginners. This hinders ease of setup compared to databases such as PostgreSQL or MongoDB, which are straightforward to run for beginners.

For initial settings, focus on reading ClickHouse's documentation. Specific default settings may be available to simplify initial setup. After reviewing various blogs, one problem encountered was the 'too many merge parts' error. This error occurred when frequently inserting data from APIs via Kafka. After adjusting the setup, the Kafka consumer had to be reset, and specific flags had to be tuned to prevent the error from recurring.
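The exact flags the team tuned are not specified in the review, so the following is only a hedged sketch (table name hypothetical) of the usual levers for the 'too many parts' situation: batching inserts and adjusting the related MergeTree and asynchronous-insert settings.

    -- Raise the thresholds at which inserts are throttled and then rejected.
    ALTER TABLE events MODIFY SETTING
        parts_to_delay_insert = 300,
        parts_to_throw_insert = 600;

    -- Or let the server batch many small inserts into larger parts itself.
    SET async_insert = 1, wait_for_async_insert = 0;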

When using ReplacingMergeTree, caution must be exercised regarding duplication. Directly displaying data to end users without handling duplicates can lead to discrepancies in unique counts, potentially misleading customers and creating issues for stakeholders. To avoid this, read strictly by deduplicating in the query itself, tailoring the approach to your specific needs.

Overall, the solution is rated nine out of ten.


    reviewer2036529

ClickHouse has transformed streaming detection analytics and now delivers faster aggregated queries

  • December 07, 2025
  • Review from a verified AWS customer

What is our primary use case?

My main use case for ClickHouse at Infoblox involves receiving detection data, which we attempt to perform aggregation on and store in ClickHouse for faster query and access.

For the kind of detection data I work with, I use ClickHouse, specifically utilizing AggregatingMergeTree and deleting tables, along with indexing and sharding techniques for faster access.

In handling detection data with ClickHouse, we write two queries to retrieve data, while for storage we use the Kafka table engine and AggregatingMergeTree to keep our data structured.

What is most valuable?

The best features ClickHouse offers in my experience include its performance and the various table engines it provides, allowing me to avoid writing large queries to access my shaped or fine-tuned data.

I mostly use the Kafka table engine, SummingMergeTree, and AggregatingMergeTree, which enhance performance because we work with streaming data, using Kafka as our input.
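A minimal sketch of that streaming setup, with hypothetical broker, topic, table, and column names: a Kafka engine table consumes the stream, and a materialized view writes pre-aggregated rows into a SummingMergeTree table.

    -- Kafka consumer table: rows appear here as they are read from the topic.
    CREATE TABLE detections_queue
    (
        detected_at DateTime,
        detector    String,
        hits        UInt64
    )
    ENGINE = Kafka
    SETTINGS kafka_broker_list = 'kafka:9092',
             kafka_topic_list  = 'detections',
             kafka_group_name  = 'clickhouse-detections',
             kafka_format      = 'JSONEachRow';

    -- Aggregated storage: SummingMergeTree collapses rows with the same key.
    CREATE TABLE detections_hourly
    (
        hour     DateTime,
        detector String,
        hits     UInt64
    )
    ENGINE = SummingMergeTree
    ORDER BY (detector, hour);

    -- The materialized view moves data from the queue into the aggregated table.
    CREATE MATERIALIZED VIEW detections_mv TO detections_hourly AS
    SELECT toStartOfHour(detected_at) AS hour, detector, sum(hits) AS hits
    FROM detections_queue
    GROUP BY hour, detector;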

The features of ClickHouse that stand out for me are primarily centered around performance.

ClickHouse has positively impacted my organization by replacing PostgreSQL, which required complex foreign tables for queries. With ClickHouse, we now have Cube.js for easier data visualization.

I have seen specific improvements such as faster query times. For instance, queries that took 10 milliseconds on PostgreSQL are now approximately 50% faster due to improved storage and query performance.

What needs improvement?

ClickHouse can be improved, and the main challenge I see is its operational complexity.

Apart from operational complexity, one improvement I think ClickHouse needs is its documentation, although it is already quite good.

For how long have I used the solution?

I started using ClickHouse in my current company about one year ago.

What do I think about the stability of the solution?

In my experience, ClickHouse is stable.

What do I think about the scalability of the solution?

ClickHouse's scalability is good as we manage it through Kubernetes, allowing us easy scaling up and down with ClickHouse operator and installation resources.

How are customer service and support?

I have not interacted with ClickHouse's customer support, as I focus mainly on query work, and any issues go through a separate team that contacts support.

How would you rate customer service and support?

Which solution did I use previously and why did I switch?

Previously, I used Snowflake and BigQuery at a different company, but ClickHouse is what we use now; I am not certain of the reason for the switch.

What was our ROI?

I can vouch for time as a return on investment with ClickHouse, but I am uncertain about the financial aspect.

What's my experience with pricing, setup cost, and licensing?

Regarding pricing, setup cost, and licensing, I have not been involved in the pricing part and am not fully certain about it.

Which other solutions did I evaluate?

Before choosing ClickHouse, I think my organization evaluated other options such as CockroachDB, though I am not entirely certain.

What other advice do I have?

My advice for others considering ClickHouse is to opt for it due to its scalability, performance, and deployment ease, especially with Kubernetes.

I believe ClickHouse is a great product that is maturing well, and although it may have flaws, it will overcome them and continue to serve users worldwide. I would rate this product an 8 out of 10.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?


    reviewer2785134

Analytics have driven product decisions and now provide faster, integrated reporting

  • December 06, 2025
  • Review from a verified AWS customer

What is our primary use case?

My main use case for ClickHouse is that it primarily drives our analytics and reports. I use ClickHouse for product analytics, and that mostly drives product decisions.

What is most valuable?

We moved away from Redshift to ClickHouse because of the integration and flexibility it provides, which best suited our use case. Most teams in my company use it as a central resource, with each team having separate access to the databases it works on within ClickHouse.

The best features ClickHouse offers are seamless integrations, data exports, and data imports, which fit well because we use Postgres as our primary database for our transactional databases. Seamless integrations help our workflow by allowing us to integrate data sources more easily, and the data exports and imports compare favorably to our previous solution.

ClickHouse has positively impacted my organization by driving our products because we use it for our product analytics, and the integrations make it easier to integrate new data sources.

What needs improvement?

ClickHouse could be improved with self-hosting capabilities and better documentation for how to host it at scale. I do not have anything in particular to add about the needed improvements around performance, UI, or anything else.

For how long have I used the solution?

I have been using ClickHouse for about a year.

What do I think about the stability of the solution?

In my experience, ClickHouse is stable.

What do I think about the scalability of the solution?

The scalability of ClickHouse is great; performance and scalability are the aspects of ClickHouse that matter most for our needs.

How are customer service and support?

The customer support for ClickHouse is fine, and I have used it. I would rate the customer support as ten out of ten.

How would you rate customer service and support?

Positive

Which solution did I use previously and why did I switch?

I previously used Redshift and switched because it did not fit our use case.

How was the initial setup?

I purchased ClickHouse through the AWS Marketplace since that was how I could deploy it on AWS.

What was our ROI?

In terms of specific outcomes, I have noticed faster report generation, but I cannot really say the cost has reduced much. I have already mentioned what returns I got in terms of driving our product.

What's my experience with pricing, setup cost, and licensing?

My experience with pricing, setup cost, and licensing was straightforward, as it is open-source.

Which other solutions did I evaluate?

Before choosing ClickHouse, I evaluated all the major cloud database providers that have something analogous to ClickHouse.

What other advice do I have?

I would rate ClickHouse as a nine or an eight on a scale of one to ten. I chose nine because there are certain improvements, as I previously mentioned, that prevent me from giving it a ten. My advice for others looking into using ClickHouse is to understand your use case and choose accordingly, as it is good at many things and may fit well with analytics use cases. I appreciate the team behind ClickHouse; it is a really great product. My overall review rating for ClickHouse is nine.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)


    Raviteja Narayandasbangaru

Real-time telemetry analytics have transformed how our team monitors devices and trains models

  • December 06, 2025
  • Review from a verified AWS customer

What is our primary use case?

ClickHouse serves as our main OLAP system. We have more than three lakh (300,000) IoT devices whose raw feed arrives through pipelines into ClickHouse, and from there we customize the data per site so that each individual site can be monitored. We display analytics by site, by dispenser, or by a particular device. Each site owner and their staff have permissions so they can only access their own site's behavior regarding transactions or alerts. This entire telemetry stream is pushed to ClickHouse using Kafka pipelines and Mej AI pipelines.

From there, we build advanced analytics. Given ClickHouse's columnar storage, we retrieve data very fast, so we also feed the data directly into our models. Recently, we used LightGBM and LSTM deep learning models, where the input data was fetched from ClickHouse at the site level, dispenser level, and other device levels.

We use a lot of indexes and deduplication because the raw data sometimes lags in the pipeline or arrives broken, resulting in duplicate rows entering ClickHouse. ClickHouse provides important functions such as `argMax`, which lets us pick the record with the latest timestamp. We also use ReplacingMergeTree in one of our tables for generating alerts: whenever an alert is closed, ReplacingMergeTree keeps the latest version and we back up old data into another table. We employ multiple ClickHouse functions, which have been very helpful for our product.
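A minimal sketch of that alert pattern (table and column names are hypothetical, not the actual schema): each status change is inserted as a new row, and ReplacingMergeTree keeps only the newest version per alert once parts merge.

    CREATE TABLE alerts
    (
        alert_id   UInt64,
        site_id    UInt32,
        status     LowCardinality(String),   -- for example 'open' or 'closed'
        updated_at DateTime
    )
    ENGINE = ReplacingMergeTree(updated_at)
    ORDER BY alert_id;

    -- Closing an alert is just another insert; background merges discard the older row.
    INSERT INTO alerts VALUES (1001, 7, 'closed', now());

    -- Reading the current state with argMax, as described above.
    SELECT alert_id, argMax(status, updated_at) AS current_status
    FROM alerts
    GROUP BY alert_id;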

In addition to the use case already mentioned, we use Materialized Views mainly for pre-aggregation, which minimizes latency. We also have summary tables—metadata per device, for instance—and lookup tables for converting various strings to integer formats for quick lookups. We utilize pre-aggregated materialized views or temporary tables to minimize the database impact on storage. We use ReplacingMergeTree, Materialized Views, and Kafka streaming to ingest data into ClickHouse, along with Mej AI preprocessing.

What is most valuable?

The best feature ClickHouse offers us is its performance on large datasets. It is consistently fast for analytical queries involving billions of rows. Time-range aggregations, GROUP BY operations, and joins are simplified, making our work more efficient. We utilize the MergeTree engine for partitioning and indexing, and I appreciate the simplicity of the automatic primary index that is created from the ORDER BY clause whenever we create a table. We have different engines such as MergeTree and ReplacingMergeTree, plus pre-aggregated daily rollups for time series. These options are easy and flexible, even for our junior engineers. The materialized views and rollups are particularly impressive, as we continuously build pre-aggregated tables at daily and hourly levels. The documentation provided by ClickHouse is also quite user-friendly and easy to navigate. Additionally, the columnar architecture allows for strong compression, enabling us to store a significant amount of history without excessive storage costs.

ClickHouse has positively impacted our organization. The low cardinality feature stands out as incredibly useful because it provides a smaller storage footprint, faster joins, and quicker GROUP BY queries. TTLs support lifecycle management, allowing us to drop older raw data as discussed; we move data from our main primary table to a cheaper disk after 14 days. Projections, similar to lightweight materialized indexes, are also helpful for specific query patterns. Vectorized execution makes extracting complex aggregates over billions of rows efficient within a short timeframe. ClickHouse supports JSON and semi-structured data excellently, making it easier to handle complex IoT telemetry. The documentation, alongside its integrations with Python, Pandas, and AWS SageMaker, facilitates model training.

We collect IoT telemetry data from more than three lakh (300,000) devices, encompassing over 30,000 sites and about one and a half to two lakh dispensers. Our daily data load is initially 100 GB, which compresses down to approximately 10 GB in ClickHouse. We retain raw data for 14 days, and thanks to ClickHouse's efficient partitioning we see latency of less than one second on our UI for machine learning API inferences. We also provide daily and hourly summary tables for our site teams, aiding our aggregations.

The cost reductions due to compression benefit us significantly. We use the TTL function and carry out our data backups efficiently. Integrating with MLflow has simplified data retrieval for model training, as ClickHouse provides the data quickly without I/O failures in our pipelines.
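A hedged sketch of the storage-lifecycle pieces mentioned above; the table, column, and storage-policy names are illustrative, and the TTL clause assumes a server-side storage policy that defines a cheaper 'cold' volume.

    CREATE TABLE telemetry_raw
    (
        site_id    UInt32,
        device_id  UInt64,
        event_time DateTime,
        status     LowCardinality(String),   -- low cardinality: smaller footprint, faster GROUP BY
        payload    String CODEC(ZSTD(3))     -- stronger compression for bulky columns
    )
    ENGINE = MergeTree
    PARTITION BY toYYYYMM(event_time)
    ORDER BY (site_id, device_id, event_time)
    TTL event_time + INTERVAL 14 DAY TO VOLUME 'cold'   -- move raw data to cheaper storage after 14 days
    SETTINGS storage_policy = 'hot_and_cold';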

What needs improvement?

ClickHouse could be improved further in several areas. I believe the areas where ClickHouse could improve involve more intuitive error messages. Additionally, it would be beneficial to have short instructional videos for users new to ClickHouse, which could help them understand its functionalities better. The documentation is already quite comprehensive, but a summary of key points differentiating ClickHouse from others would be beneficial.

In terms of managing multiple Materialized Views, tracking their lag can be tricky, and additional features for tracking query lineage or transformations could enhance the user experience.

What do I think about the stability of the solution?

ClickHouse has proven to be stable in my experience. We have not encountered any serious bugs; it has been running in production for over five years (my company started in 2018, and it is now 2025), though my own hands-on experience with it is one to two years. I can confidently say that it is very consistent and stable even when handling high-volume loads and real-time streaming analytics across financial and operational domains. It remains consistent as data grows, and ingestion has also been stable. Materialized views, rollups, and performance have proven reliable, with no major outages experienced.

What do I think about the scalability of the solution?

ClickHouse demonstrates impressive scalability. Currently, I'm working with billions of rows, and I can confidently say that ClickHouse is very scalable. The vertical scalability is impressive, with high insert throughput, allowing millions of rows per second with low latency. Our data growth is significant, and our horizontal scalability is effective since we distribute it across clusters. We also take advantage of parallel processing with our multiple CPU cores to aid scalability, making ClickHouse easy to scale.

How are customer service and support?

The customer support for ClickHouse has been excellent. We interacted through email and utilized a chatbot, where the responses were typically prompt and constructive. When we faced any challenges, the ClickHouse support team provided helpful resolutions.

How would you rate customer service and support?

Negative

Which solution did I use previously and why did I switch?

Before ClickHouse, we did not use any other solutions at this company. Initially, we deployed ClickHouse on our EC2 clusters and instances but later moved to ClickHouse Cloud. However, we do use MySQL for transactional database needs, as both analytics and transactional requirements complement one another, allowing us to have efficient operations.

What was our ROI?

ClickHouse Cloud saves a lot of time for our DevOps engineers since there isn't much deployment involved and everything is served directly, leading to increased efficiency and productivity. Our storage efficiency significantly reduces costs. While I cannot provide exact numbers, I can affirm that the time I save as a data scientist and ML engineer is considerable. I don't wait two or three hours for data to be read from CSV files or to utilize Spark anymore. I receive data instantly in minutes even for large aggregations. Therefore, analysts and engineers like myself do not spend time waiting on heavy SQL queries. The developers instantly access the data they need, improving overall productivity. I estimate we save four to five hours per person per week due to this efficiency, translating to around 20 to 25 hours saved monthly for each individual. Although I cannot quantify the monetary savings precisely, our 10x data compression has made a substantial difference in operational efficiency. The onboarding experience for junior engineers is also pleasant, given the straightforward design of ClickHouse.

What's my experience with pricing, setup cost, and licensing?

I did not deal much with pricing issues. I discussed the pricing with our CTO and founding members, and I found ClickHouse's pricing to be efficient in comparison to other services such as Redshift. The setup costs are flexible for ClickHouse hosting, and the privacy aspects are simple to navigate. Overall, my experience has been quite good.

Which other solutions did I evaluate?

Prior to choosing ClickHouse, we evaluated several options including Redshift, BigQuery, and Snowflake. We did not select them because they are optimized more for batch reporting rather than real-time analytics. Their higher storage and compute costs for managing billions of events of telemetry data were additional deterrents. Even our tests with Redshift, PostgreSQL, and TimescaleDB showed slower aggregations and performance limitations at high scales, prompting us to choose ClickHouse, which provides fast analytical performance, excellent compression, and beneficial SQL flexibility.

What other advice do I have?

For those considering ClickHouse, I suggest it as an excellent choice if your use case revolves around analysis and real-time streaming. The customer support, reliability, and easy-to-understand documentation combined with SQL familiarity make ClickHouse a preferable product versus others. The competitive customer support ensures we can get assistance quickly. ClickHouse's data compression is vital for minimizing storage costs, while functions such as low cardinality, `argMax`, `ReplacingMergeTree`, and different MergeTree engines prove beneficial for various smaller use cases within your product. I would rate this product an 8 overall.

Which deployment model are you using for this solution?

Private Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)


    Aditya Aditya

Centralized log analytics has boosted dashboard performance and reduced infrastructure costs

  • December 06, 2025
  • Review from a verified AWS customer

What is our primary use case?

My main use case for ClickHouse is for logging, where our whole application, consisting of microservices, sends the logs to ClickHouse, and the second major use case is for the dashboard, as we rely heavily on ClickHouse for our major dashboards.

We have multiple scenarios in which we have 62 to 63 microservices running, and we use ClickHouse for parallel log sending, which works fine. ClickHouse is a very performant database in storing logs, and fetching logs from 10 to 60 days back happens in no time, which is the best thing about ClickHouse. In terms of dashboards, we use it for querying and displaying data in Grafana, and we appreciate the support it offers for direct communication with Grafana to visualize our data further.
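As an illustration of that log-retrieval pattern, a minimal sketch with hypothetical table and column names (app_logs, service, level):

    SELECT timestamp, service, level, message
    FROM app_logs
    WHERE service = 'payments-service'
      AND level = 'ERROR'
      AND timestamp >= now() - INTERVAL 30 DAY   -- reaching 10 to 60 days back stays cheap on sorted, compressed parts
    ORDER BY timestamp DESC
    LIMIT 100;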

What is most valuable?

ClickHouse positively impacted our organization by absorbing the whole logging system without hassle, storing logs for six months efficiently.

Regarding performance, we tried multiple solutions when Kibana was failing, including PostgreSQL, MySQL, and even MongoDB for log ingestion of huge volumes, but ClickHouse outperformed all databases we tested, leading us to choose it for further use cases.

What needs improvement?

We would like to have fuzzy search capabilities in ClickHouse like we had with Kibana. There are scenarios where we cannot search keywords fuzzily in ClickHouse, whereas Elasticsearch allows that, and in such cases Elasticsearch outperforms ClickHouse.

We would love to have a fuzzy search feature in ClickHouse to allow searching across rows without hassle, similar to Kibana, making querying individual or multiple rows a very easy process in ClickHouse. Additionally, the UI of ClickHouse is pretty basic, and I would love to have a more efficient UI along with improved security features.
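ClickHouse does not offer Kibana-style fuzzy search, but as a partial workaround its built-in string matching can cover simple cases; a sketch using the same hypothetical log table as above:

    SELECT timestamp, message
    FROM app_logs
    WHERE message ILIKE '%timeout%'          -- case-insensitive substring match
       OR hasToken(message, 'OOMKilled')     -- exact token match
    ORDER BY timestamp DESC
    LIMIT 50;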

For how long have I used the solution?

I used ClickHouse extensively for various purposes at my previous company, so I have been using it for around two years.

What do I think about the stability of the solution?

ClickHouse is stable in my experience.

What do I think about the scalability of the solution?

ClickHouse is quite scalable, especially when we are using an AWS instance.

How are customer service and support?

I didn't get a chance to interact with ClickHouse's customer support.

How would you rate customer service and support?

Negative

Which solution did I use previously and why did I switch?

Earlier, we were using Elasticsearch and Kibana, but we switched to ClickHouse and found it excellent to work with. In scenarios where Kibana and Elasticsearch could not handle writing logs to the database in parallel, I carried out a performance test under heavy load and verified that ClickHouse supports parallel log writes and performed really well, which is the core feature we love about it.

How was the initial setup?

The integration and documentation for ClickHouse are flawless; we didn't need to add anything because every feature is explained so well that we did not have concerns about how everything is going to work.

What about the implementation team?

We set up ClickHouse manually using the AWS instance and the open-source version.

What was our ROI?

We have saved a lot with ClickHouse through parallel execution, efficiently storing the logs we ingest, around 10 to 20 million rows per day, all written in parallel. Previously, we had to add servers and expand disks for Kibana, but with ClickHouse we did not need to spend much on resources, cutting costs by around 25 to 30%.

What's my experience with pricing, setup cost, and licensing?

We didn't have any issues regarding pricing, setup cost, or licensing.

Which other solutions did I evaluate?

We tried various options, but ClickHouse outperformed every solution, proving to be the best choice for us.

What other advice do I have?

ClickHouse is really easy to use, and I don't have anything further to add because it integrated directly into our application without any hassle.

ClickHouse is a go-to database for performance. It natively supports features such as compression, making it a great choice for logging, so it is highly recommended for anyone needing raw performance out of the box.

It is just a typical customer relationship with the vendor. I received a gift card.

I would rate ClickHouse around 8.5 to 9 on a scale of one to ten because there is some scope for improvement, as I mentioned earlier, and I chose a rating of nine out of ten for this review.

Which deployment model are you using for this solution?

Hybrid Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)


    reviewer2784981

High-speed IOC ingestion has improved threat detection and now supports rapid analytical queries

  • December 05, 2025
  • Review from a verified AWS customer

What is our primary use case?

My main use case for ClickHouse is data ingestion, along with its OLAP properties. We had use cases where database locks were slowing us down, and because ClickHouse does not have that problem, we chose to use it.

A quick, specific example of how I use ClickHouse for data ingestion is parsing IOCs (indicators of compromise), where the lack of database locks helped us: a lot of that data has to be processed very quickly and ingested into the database for further processing to identify which IOCs are compromised.

ClickHouse helped us solve the problem that we were having and it's one of the two databases we used at Cyware—one was Postgres, the other was ClickHouse.

What is most valuable?

The best features ClickHouse offers are its OLAP properties: the absence of database locks and its eventual consistency are what solved our problems.

The lack of database locks and the eventual consistency specifically benefit my team in terms of speed and reliability. Once data is ingested, we have to process it quickly and show the output to the user. Say there are ten indicators of compromise: we have our own database where we tally whether the IPs or IOCs being scanned have been marked or red-flagged before, so we have to scan them quickly, process them, and return an output. That helped us with reliability and speed, while eventual consistency is used on a different side of the product.
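A hedged sketch of the tally described above, with hypothetical table and column names: freshly ingested indicators are checked against previously flagged entries.

    -- Bulk ingestion proceeds without lock contention.
    INSERT INTO incoming_iocs (ioc, ioc_type, seen_at)
    VALUES ('198.51.100.7', 'ip', now());

    -- Tally recent indicators against the internal red-flag list.
    SELECT
        i.ioc,
        if(f.ioc != '', 'flagged', 'clean') AS verdict
    FROM incoming_iocs AS i
    LEFT JOIN flagged_iocs AS f ON i.ioc = f.ioc
    WHERE i.seen_at >= now() - INTERVAL 1 HOUR;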

ClickHouse has positively impacted my organization, as there was an entire exercise done on which database we were supposed to use for solving our problems, and we found ClickHouse was the one performing the best, which is when we adopted it.

What needs improvement?

ClickHouse can be improved on the documentation side. There is also one small constraint mentioned in the ClickHouse documentation, a partition limit of ten thousand, which we hit; if that could be increased, or workarounds documented, that would be great.

I chose nine out of ten because, as mentioned, the ten-thousand-partition limit created issues that we hit quite frequently. With some schema manipulation we did manage to find a workaround, but that took some bandwidth and could have been avoided had the documentation better explained how to approach the problem differently.
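The actual schema change is not described in the review, but as a hedged sketch, the usual way to stay under a partition limit is to partition on a coarse time bucket rather than a high-cardinality key (names are illustrative):

    -- Partitioning by month keeps the partition count bounded; per-tenant selectivity
    -- comes from the ORDER BY (primary index) instead of from partitions.
    CREATE TABLE ioc_events
    (
        tenant_id   UInt32,
        ioc         String,
        ingested_at DateTime
    )
    ENGINE = MergeTree
    PARTITION BY toYYYYMM(ingested_at)
    ORDER BY (tenant_id, ioc, ingested_at);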

I do not have any other improvements I think ClickHouse needs, besides the documentation and partition limit.

For how long have I used the solution?

I have been using ClickHouse for about a year, maybe slightly more than that.

What do I think about the stability of the solution?

ClickHouse is stable; we did not encounter stability issues in production. In the dev environment, one of the seniors did flag some inconsistencies, but I believe a workaround was found, and it has been stable for us.

What do I think about the scalability of the solution?

ClickHouse's scalability is good.

Which solution did I use previously and why did I switch?

We previously used Postgres and encountered issues with it, which, as I mentioned, is why we did the study on switching.

What was our ROI?

I have seen a return on investment on the engineering side through improvements in database performance, but I do not have figures for metrics such as time saved or fewer employees needed.

What's my experience with pricing, setup cost, and licensing?

The setup cost was just my own bandwidth, while licensing and pricing were handled by other members of the team, so those details were abstracted away from me and I am not aware of them.

Which other solutions did I evaluate?

Before choosing ClickHouse, we evaluated other options such as Apache Druid and Apache Pinot, and then carried out a study.

What other advice do I have?

My advice for others looking into ClickHouse is that, on the engineering side, it is a good fit for OLAP use cases, for anywhere data needs to be ingested at very high rates, or where eventual consistency is acceptable. I gave this review a rating of nine out of ten.

Which deployment model are you using for this solution?

Hybrid Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?


    reviewer2784735

Real‑time blockchain analytics have transformed how our team delivers client insights

  • December 04, 2025
  • Review from a verified AWS customer

What is our primary use case?

My main use case for ClickHouse is that I used it at my previous companies for blockchain transactional data, so we could provide analytical data to our clients.

A specific example of how I used ClickHouse for blockchain transactional data is that our clients would not directly interact with ClickHouse database, but we had our entire SaaS application that used to do that. We had a lot of ETLs that wrote into ClickHouse database that we built, with ClickHouse being one of the primary databases we used for real-time calculations of different account balances or tracking entities through multiple channels.

The unique aspect of my main use case with ClickHouse is that we split it into multiple shards with multiple replicating instances, using ReplicatedMergeTree engines and materialized views. However, the most painful part was the cold start of new chains, as materialized views had to be filled, deleted, and rebuilt because they only track inserts, not deletions, which wasted a significant amount of my time.
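A minimal sketch of the rebuild described above, with hypothetical table names and assuming the target table is a SummingMergeTree so partial sums merge correctly: because a materialized view only processes new inserts, a cold start or deletion means truncating the target and backfilling by hand.

    DROP TABLE IF EXISTS balances_mv;       -- remove the stale view
    TRUNCATE TABLE account_balances;        -- clear the target it was feeding

    -- Manual backfill of historical data the view will never see.
    INSERT INTO account_balances
    SELECT account, sum(amount) AS balance
    FROM transactions
    GROUP BY account;

    -- Recreate the view; from here on, only newly inserted rows are tracked.
    CREATE MATERIALIZED VIEW balances_mv TO account_balances AS
    SELECT account, sum(amount) AS balance
    FROM transactions
    GROUP BY account;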

What is most valuable?

The best feature ClickHouse offers is speed, no question.

I would add that the connectors and ClickPipes are really cool tools, and the ETL capabilities around ClickHouse are also improving, with good Kafka integrations, making ClickHouse excellent, particularly with many connectors to ClickHouse and CDC systems.

ClickHouse positively impacts my organization by potentially saving costs if you are migrating from another OLAP or an OLTP database to it, though cost ultimately depends on how you set it up and how many replicas you need, since bare-metal costs scale with that. Among OLAP databases, I think there is no better option than ClickHouse, so it is best to use it in that regard.

What needs improvement?

ClickHouse can be improved primarily in terms of materialized views, which do not follow changes to the base table beyond new inserts; having to keep rebuilding them because of that architecture is painful.

I would also add that we could benefit from more connection options, although the existing connectors are good. The documentation is solid but could use more graphics to help explain things, though I have not looked at it in the last year and a half. Support-wise, the team is really solid; I have chatted with them a few times and they are good product folks.

My experience with ClickHouse's documentation is that it needs improvement; I think it can be made more beginner-friendly, while the community support is really good.

For how long have I used the solution?

I have been using ClickHouse for three years.

What do I think about the stability of the solution?

ClickHouse is stable.

What do I think about the scalability of the solution?

ClickHouse's scalability is a ten.

How are customer service and support?

I did not use customer support that much.

My experience with ClickHouse's security features is limited, as I have not used many of them.

I am satisfied with the monitoring and observability features in ClickHouse, rating them as good, not great.

How would you rate customer service and support?

Negative

How was the initial setup?

The ease of setting up and configuring ClickHouse for my needs is a nine.

The setup and configuration process is a nine for me because I think the onboarding could be better.

What was our ROI?

In terms of return on investment, my focus was more on developer productivity than on money saved, and compared to other databases we used the cost was in a fairly similar range. The number of employees needed did not change initially, but we could reduce it when we migrated to ClickHouse Cloud.

What's my experience with pricing, setup cost, and licensing?

My experience with pricing, setup cost, and licensing indicates that it is very expensive—ClickHouse is the most expensive option.

What other advice do I have?

After switching to ClickHouse, I noticed specific outcomes such as cost savings if we put the right engines in place, and our metrics indicated at least 500 requests per second at peak times, with more typical requests being around 20 or 30 per second, handled adequately by our systems. We started with about 200 GB of data during testing, which grew to about five or six TB in production, indicating our capacity to handle a huge dataset.

My advice for others looking into using ClickHouse is that it is a good system that remains stable. I would rate this review as a nine.

Which deployment model are you using for this solution?

Public Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?

Amazon Web Services (AWS)


    Hitaish Kumar

Centralized metrics have reduced storage costs and now power faster real-time workflow monitoring

  • December 04, 2025
  • Review from a verified AWS customer

What is our primary use case?

ClickHouse is used for an OLAP use case where we store a ton of metrics and logs from our communication platform.

The workflow operates as follows: when communication starts being sent out, it goes through multiple microservices and each microservice appends a log internally to ClickHouse. We then use those logs to define metrics and a dashboard to monitor the efficiency and execution of the service.

ClickHouse serves as the core of our operations. Additionally, we use it for batch insertion of data or for moving data from one place to another, with ClickHouse acting as a stage in between.

What is most valuable?

ClickHouse offers several features that help with these use cases: data compression, which helps us maintain low storage costs, and fast query times. The MergeTree engine is quite fast compared to other relational databases.

When we want to let users know how many transactions have been staged from one workflow to another and keep them updated, we can query the staging table very quickly for the count and the metadata of all the records, which is how it helps.
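As an illustration, a minimal sketch of that staging check with hypothetical table and column names:

    SELECT
        workflow_id,
        count()         AS staged_records,
        max(staged_at)  AS last_staged_at,
        anyLast(status) AS latest_status
    FROM staging_transactions
    GROUP BY workflow_id;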

ClickHouse has reduced our storage cost and improved our 99th percentile latency by 40%.

What needs improvement?

A query client specifically for ClickHouse, a web query client, would be very helpful.

ClickHouse should be able to import data from other types of sources like Parquet and Iceberg tables and all the new upcoming data formats. It would be beneficial for ClickHouse to be updated on that.

For how long have I used the solution?

ClickHouse has been used for almost two years.

What do I think about the stability of the solution?

ClickHouse is stable based on our experience.

What do I think about the scalability of the solution?

The open-source version of ClickHouse is not very scalable.

How are customer service and support?

We do not have any experience with customer support.

How would you rate customer service and support?

Which solution did I use previously and why did I switch?

We were using Postgres and switched from Postgres to ClickHouse for better compression, smaller data size, and faster queries.

How was the initial setup?

What about the implementation team?

The open-source version of ClickHouse is being used.

What was our ROI?

We have seen roughly a 40% reduction in cost and time.

What's my experience with pricing, setup cost, and licensing?

There was no experience with pricing, setup cost, or licensing because we were using the open-source version.

Which other solutions did I evaluate?

Some extensions of Postgres such as PG-columnar were evaluated.

What other advice do I have?

Evaluate ClickHouse really well with your actual production use case before implementing it. This review gives ClickHouse a rating of 9 out of 10.

Which deployment model are you using for this solution?

Private Cloud

If public cloud, private cloud, or hybrid cloud, which cloud provider do you use?


    Aswini Atibudhi

Provides real-time data insights with high flexibility and responsive support

  • May 08, 2025
  • Review provided by PeerSpot

What is our primary use case?

I have experience with ClickHouse, and we also use Apache Druid, which has corporate support behind it, along with data products in Hadoop. We are currently exploring many platforms such as GMI, TKI, and Vertex.

I use ClickHouse for a merchant-side portal. We started exploring how to use data coming from multiple sources, such as logs, mainframe, Teradata, and many file systems, which lands in the data lake. The real-time challenge was joining that data and providing analytical queries for our merchants, who work throughout the year to improve GMB and sales and to ensure the right quantity of items is ordered at the right time. That is the challenge for the merchants, and we need fast analytical queries on large databases, which is why we selected ClickHouse as our columnar OLAP database supporting real-time analytics with its own SQL interface.

We have installed both local and Docker versions, which are quite scalable, and usually connect them with BI tools such as Grafana, Superset, and Tableau while utilizing materialized views, DDL-defined partitions, and many connectors and drivers for Python. It is exciting to see how ClickHouse has evolved, and we are evaluating ClickHouse Cloud while also running the on-premises version.

We are already a customer of ClickHouse, with Sam's Club utilizing it on the merchant side while also exploring ClickHouse for consumers, primarily for user analytics, metrics, and streaming data analysis in ad tech. Additionally, we use custom analysis and metrics for fraud detection in payments and ad campaign metrics, with various teams utilizing it for ad campaign management and user behavior analytics, particularly on e-commerce sites focusing on customer behavior. It's extensively used due to its low latency, fast aggregations, and excellent OLAP columnar storage, featuring quick joins and real-time data visibility, making ClickHouse very appealing to us.

What is most valuable?

ClickHouse is very easy to use. One of the good features is that it has joins, which were not present in Druid, and Druid was quite expensive; our applications at Sam's Club adopted ClickHouse very quickly.

ClickHouse deserves a rating of 9 when compared to competitors, particularly Druid, which is stable but comes with higher costs and subpar support. ClickHouse proves to be more lightweight, offering low latency and high throughput, along with joins, making it especially good for log and metrics handling.
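To illustrate the join support mentioned above, a minimal sketch with hypothetical fact and dimension tables (sales, merchants), enriching sales facts directly in ClickHouse:

    SELECT
        m.merchant_name,
        toStartOfWeek(s.sold_at) AS week,
        sum(s.quantity)          AS units,
        sum(s.amount)            AS sales
    FROM sales AS s
    INNER JOIN merchants AS m ON s.merchant_id = m.merchant_id
    GROUP BY m.merchant_name, week
    ORDER BY week, sales DESC;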

What needs improvement?

The basic challenge for ClickHouse is the documentation, which isn't ideal, but the product itself is mature and stable, with columnar storage, compression, and parallel processing that make it the best for OLAP. In terms of improvements, it is not designed for very frequent small writes, making it less suitable for write-intensive workloads, and it does not flourish in transactional use cases or when ingesting streaming data without batching or buffering, which is something ClickHouse can improve.

What do I think about the stability of the solution?

ClickHouse is quite stable, and it deserves a rating of 9.

What do I think about the scalability of the solution?

ClickHouse deserves a scalability rating of 8 since it's quite scalable but has some room for improvement regarding scaling challenges.

How are customer service and support?

The support team has its own community support on platforms such as Stack Overflow and ClickHouse Slack. Commercially, the company provides enterprise support, especially for Sam's Club through ClickHouse Cloud. We utilize AVN ClickHouse, which is effectively managed by AVN, providing bug fixes, new functionality, and architecture reviews. I appreciate their 24/7 support, which is beneficial, although those using open source might face some challenges. Overall, the enterprise support is quite good.

How would you rate customer service and support?

Positive

How was the initial setup?

The initial setup for ClickHouse is relatively easy compared to Flink; however, for newcomers it is quite challenging. A single-node setup through yum or apt takes only a couple of minutes to install, while planning cluster setups is a bit complex and primarily an admin task. Managing ClickHouse Cloud is extremely easy. Creating clusters can vary from moderate to difficult based on the scale, typically 5 to 10 nodes, depending on the use case.

What other advice do I have?

I would recommend this solution. Overall rating: 9 out of 10.