Databricks Data Intelligence Platform
Databricks, Inc. External reviews
750 reviews
External reviews are not included in the AWS star rating for the product.
Databricks Unifies Data, Analytics, and ML for Scalable Lakehouse Workflows
What do you like best about the product?
Databricks is especially helpful because it brings data engineering, analytics, and machine learning together in a single unified platform, which reduces the need to manage multiple separate tools. Built on Apache Spark, it can process massive datasets quickly and scale smoothly as workloads grow, making it a strong fit for big data use cases. It also supports collaborative notebooks where teams can work together in languages like Python and SQL, which makes it easier for data scientists and engineers to collaborate effectively.
With its lakehouse architecture powered by Delta Lake, Databricks combines the flexibility of data lakes with the reliability of data warehouses, helping ensure better data consistency and performance. In addition, it integrates with tools like MLflow to streamline the machine learning lifecycle end to end, from experimentation through deployment. Overall, Databricks simplifies complex data workflows, improves performance, and helps organizations build scalable data and AI solutions more efficiently.
What do you dislike about the product?
Databricks does have some limitations, although many of them feel more like trade-offs than outright negatives. A frequently cited drawback is cost: while the platform is flexible and scalable, expenses can rise quickly if clusters aren’t managed carefully. At the same time, that cost often reflects its ability to handle very large workloads efficiently when it’s properly optimized.
Another consideration is the learning curve, especially for beginners who aren’t familiar with Apache Spark or distributed systems. That complexity can be challenging at first, but it also comes with the benefit of powerful capabilities once you get comfortable with it. Some users also find that debugging and performance tuning are less straightforward than with simpler tools; however, Databricks offers detailed monitoring and optimization features that can make these tasks easier over time.
Finally, because it’s a managed platform, there can be a sense of reduced control compared with fully self-managed systems. In return, it removes much of the operational burden that comes with infrastructure management. Overall, while these areas may be seen as the “least helpful” aspects, they’re often balanced by the platform’s scalability, integration, and productivity gains.
What problems is the product solving and how is that benefiting you?
Databricks helps solve the challenge of fragmented data and disconnected workflows across multiple business verticals by providing a unified lakehouse platform. In my role as a data engineer, this allows me to consolidate data from different sources into a single, reliable system using Apache Spark for scalable processing and Delta Lake for ensuring data quality and consistency. This significantly reduces pipeline complexity, improves reliability, and enables faster delivery of clean, governed data to downstream teams. As a result, I’m able to support analytics and machine learning use cases more efficiently while minimizing operational overhead and improving overall productivity across the organization.
Intuitive Analytics with AI Genie, Needs Performance Tweaks
What do you like best about the product?
I really like the inclusion of the AI Genie in Databricks. It makes data analytics easier and more intuitive for me: I can query my datasets in natural language, and Genie translates it into SQL queries and generates visualizations and insights.
What do you dislike about the product?
I would like to have some flexibility around operational complexity and performance tuning.
What problems is the product solving and how is that benefiting you?
Databricks helps me fix data duplication and inconsistency with idempotent pipelines. I also use AI Genie to query datasets in natural language; it translates my questions into SQL queries and generates visualizations, making analytics easier.
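The idempotent-pipeline idea mentioned above can be sketched in a few lines: an upsert keyed by record id is safe to re-run, so a retried batch never creates duplicates. This is a plain-Python stand-in for a Delta MERGE, with illustrative names, not a Databricks API:

```python
# Hypothetical sketch: an idempotent "upsert" keyed by record id.
# Re-running the same batch leaves the table unchanged, which is what
# prevents duplicates when a pipeline is retried.

def upsert(table: dict, batch: list, key: str = "id") -> dict:
    """Insert or overwrite rows by key; safe to run repeatedly."""
    for row in batch:
        table[row[key]] = row  # same key -> overwrite, never duplicate
    return table

events = {}
batch = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
upsert(events, batch)
upsert(events, batch)  # retry after a failure: same result, no duplicates
```

The property to notice is that applying the same batch twice gives the same state as applying it once, which is exactly what makes retries safe.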
Versatile Platform with Robust Data Governance
What do you like best about the product?
I personally like the Databricks UI, especially the dark mode. Technically, I find Unity Catalog's built-in lineage and governance very valuable. Auto-loader's incremental file processing with exactly-once guarantees and Delta Lake's ACID reliability are my personal favorites. Delta Lake's ACID transactions ensure our data pipelines either fully succeed or fully roll back, which prevents partial writes from corrupting tables. Time travel in Delta Lake allows us to query previous versions of our table for audits without needing separate snapshots. Unity Catalog's capability to auto-track lineage across our entire pipeline is critical for regulatory audits, and its role-based access control and column masking ensure data access is properly managed across teams. The workspace and notebook setup were straightforward, making the initial setup relatively easy.
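The time-travel feature praised above boils down to keeping every committed version of a table addressable by version number. A minimal plain-Python sketch of the concept (not Delta Lake's implementation, which uses a transaction log):

```python
# Illustrative sketch of "time travel": each commit produces a new
# table version, and old versions stay queryable for audits.

class VersionedTable:
    def __init__(self):
        self._versions = [[]]  # version 0 is the empty table

    def commit(self, rows: list) -> int:
        new = self._versions[-1] + rows   # append-only commit
        self._versions.append(new)
        return len(self._versions) - 1    # new version number

    def as_of(self, version: int) -> list:
        return list(self._versions[version])  # query an old snapshot

t = VersionedTable()
v1 = t.commit([{"id": 1}])
v2 = t.commit([{"id": 2}])
```

In Delta Lake SQL the equivalent query is `SELECT * FROM t VERSION AS OF 1`, which is what lets you audit earlier states without maintaining separate snapshots.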
What do you dislike about the product?
Migrating from hive_metastore to Unity Catalog is painful with limited tooling - UCX helps but it's still a heavy lift. Databricks-to-dbt Cloud orchestration lacks a clean native handoff, forcing custom API polling code that's fragile and hard to debug. Cost visibility for Serverless SQL warehouses could be more granular - it's hard to attribute DBU spend to specific pipelines or dbt models without digging into system tables manually.
What problems is the product solving and how is that benefiting you?
Databricks replaced our fragmented data stack with one platform for ingestion, ETL, analytics, and governance. Unity Catalog handles regulatory lineage needs by auto-tracking data provenance.
Great Infrastructure for Reliable Data Management
What do you like best about the product?
They have really great and intuitive infrastructure that gives us everything we need for our data management.
What do you dislike about the product?
For any technical issues we run into, we consult their support team and get a solution.
What problems is the product solving and how is that benefiting you?
Overall, the reliability and features that Databricks provides are the most helpful and valuable for us, making our work easier and hassle free.
Fast, Seamless Databricks for Big Data Pipelines, and Analytics in One Place
What do you like best about the product?
What I love most about Databricks is how fast and connected everything is.
Compared to other platforms, it handles heavy big data pipelines without breaking a sweat. But the best part is how easy it is to use that data once it's processed.
Whether I need to build a quick analytics dashboard or train custom machine learning models specific to our data, it all connects seamlessly. It just takes the headache out of moving data around and lets you do everything in one place.
What do you dislike about the product?
If I had to choose what I dislike, it mainly comes down to the cost and how complex it can be.
First, it can get expensive very quickly. If you’re not careful about managing your computing clusters and shutting them down when you’re done, the bills can creep up on you.
Second, it can sometimes feel like overkill for simpler tasks. Since it’s built for massive data, having to dig through complicated error logs when something breaks can be a real headache compared to using lighter tools.
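The cluster-management point above is usually addressed with guardrails baked into the cluster definition. As a sketch, here is what such a spec might look like for the Databricks Clusters API; `autotermination_minutes` and `autoscale` are real fields, while the cluster name, node type, and sizes are illustrative assumptions:

```python
# Hypothetical cluster spec showing the two cost guardrails reviewers
# mention: autoscaling within bounds, and auto-termination when idle.

cluster_spec = {
    "cluster_name": "etl-nightly",          # illustrative name
    "node_type_id": "i3.xlarge",            # example AWS node type
    "autoscale": {"min_workers": 2, "max_workers": 8},
    "autotermination_minutes": 30,          # shut down idle clusters
}
```

With a spec like this, a forgotten interactive cluster stops billing after 30 idle minutes instead of running overnight.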
What problems is the product solving and how is that benefiting you?
The main problem Databricks helps me solve in my business is performance. We used to wait for hours for pipelines to run in ADF, and now we can get them done in minutes.
Streamlines Data Engineering with Minor Delays
What do you like best about the product?
I like using Databricks because it helps me create fast ETL pipelines and solve orchestration and storage issues. I appreciate Genie because it helps me gain fast insights from the data. It reduces my discovery and processing times from days to hours and makes my quotes competitive for clients. The onboarding was smooth with intuitive features, which makes my job easier.
What do you dislike about the product?
I find Databricks clusters slow to spin up, with long wait times even for small tasks.
What problems is the product solving and how is that benefiting you?
Databricks lets me create fast processing ETL pipelines, solving orchestration and storage issues. Plus, with Unity Catalog, I manage governance smoothly without worrying about background complexities.
Databricks Streamlines End-to-End ETL with Unity Catalog and AI-Powered Debugging
What do you like best about the product?
What stands out to me is how Databricks simplifies the end-to-end ETL lifecycle. The platform’s steady integration of new features has noticeably reduced the friction of ingesting data from a wide range of source systems.
Unity Catalog (UC) has also been a game-changer for data administration. It offers a centralized, robust governance layer that makes managing complex environments feel much more intuitive and easier to control.
I’m especially impressed by the recent AI-driven updates. Genie Code has become an essential part of my workflow; it has dramatically improved my debugging speed and is already proving to be a valuable asset in my current UC migration project. Overall, the way Databricks blends traditional data engineering with assisted intelligence feels genuinely forward-thinking.
What do you dislike about the product?
While Auto Loader is powerful, there are still notable gaps in the Lakehouse Data Pipeline (LDP) around schema inference. Right now, when inferSchema is enabled, the inferred schema only applies to the first level of the hierarchy. In complex datasets with multi-nested fields, the lack of deep schema inference creates manual overhead and makes streaming CDC pipelines harder to build and maintain.
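To make the gap concrete, "deep" schema inference would walk nested records recursively rather than stopping at the first level. A hypothetical plain-Python sketch of that behavior (not Auto Loader's `inferSchema`):

```python
# Illustrative deep schema inference: recurse into nested dicts and
# lists instead of stopping at the top level of the record.

def infer_schema(value):
    if isinstance(value, dict):
        return {k: infer_schema(v) for k, v in value.items()}
    if isinstance(value, list) and value:
        return [infer_schema(value[0])]  # assume homogeneous lists
    return type(value).__name__

record = {"id": 1, "user": {"name": "a", "tags": ["x"]}}
schema = infer_schema(record)
```

Without this kind of recursion, the nested `user` structure has to be declared by hand, which is the manual overhead the review describes for streaming CDC pipelines.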
Lakeflow Connect feels like a step in the right direction, but the library of native connectors still seems incomplete compared to some competitors. And while the AI features (like Genie) are promising and genuinely interesting, they still come across as being in a “developing” stage—sometimes lacking the consistency you need for high-stakes production environments. I’d like to see these capabilities evolve from “innovative extras” into hardened, production-ready tools.
What problems is the product solving and how is that benefiting you?
The Problem: Data Silos & Inefficient Support Operations
In many organizations, critical institutional knowledge ends up scattered across disconnected systems such as MySQL (structured), Jira (transactional), and Confluence (unstructured). When information is fragmented this way, support teams struggle to find fast, accurate answers for incoming tickets. The result is higher MTTR (Mean Time to Resolution) and a lot of repetitive, manual effort.
The Solution: A Unified “Intelligence Platform”
Databricks addresses this by serving as a single fabric that connects these silos. In my work, I focus on using the Lakehouse Data Pipeline (LDP) to ingest and unify these different sources into one governed environment.
How this benefits my project:
I use Databricks for seamless ingestion, centralizing data from MySQL, Jira, and Confluence to build a comprehensive “Knowledge Base” without having to manage multiple, disparate ETL tools.
I also rely on native AI integration. With Mosaic AI Vector Search, I can convert the unified data into embeddings directly within the platform, which lets me build an AI Automation Agent for our ticketing system.
Finally, it supports automated solutioning. The agent can run vector matching on newly created tickets against the full historical knowledge base and then propose accurate, context-aware solutions to engineers right away.
The Impact
The biggest benefit for us is operational velocity. Databricks has shifted our data from a passive archive into an active, “intelligent” engine. It reduces time spent on manual research and helps us automate the first line of support, improving the accuracy of ticket resolutions while lowering the burden on our technical teams.
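The vector-matching step described above can be sketched with toy embeddings: score a new ticket against the historical knowledge base by cosine similarity and surface the closest match. This is plain Python for illustration; Mosaic AI Vector Search does the equivalent at scale over a managed index, and the ticket ids and vectors here are made up:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 2-d embeddings standing in for real ticket embeddings.
history = {"TICKET-101": [1.0, 0.0], "TICKET-102": [0.0, 1.0]}
new_ticket = [0.9, 0.1]

# Propose the historical ticket most similar to the new one.
best = max(history, key=lambda t: cosine(new_ticket, history[t]))
```

The agent then attaches the resolution from the best-matching historical ticket as a suggested answer, which is what cuts the manual research time.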
Unified ML Platform That Removes Infrastructure Friction
What do you like best about the product?
The unified platform experience is genuinely hard to beat — having MLflow for experiment tracking, Unity Catalog for governance, vector search, and serverless endpoints all in one place removes so much infrastructure friction. Feature engineering pipelines and model deployment feel cohesive rather than stitched together. The SQL warehouse + notebook hybrid workflow also makes it easy to hand off between data engineering and ML work without context switching tools.
What do you dislike about the product?
Serverless endpoints have some sharp edges — Spark context initialization behaves differently than in interactive clusters, which can cause silent failures if you're not careful about where you initialize things. Cold start latency on serverless is also noticeable for low-traffic production endpoints. Documentation around some of the newer features (like vector search index configs) tends to lag behind the actual product behavior, so you end up doing a lot of trial and error.
What problems is the product solving and how is that benefiting you?
We use Databricks to consolidate ML model development, feature engineering, and deployment for a cards and payments platform — work that previously required juggling separate tools for data processing, training, and serving. The unified environment means our ML engineers can go from raw transaction data to a deployed churn prediction model without leaving the platform. MLflow tracking keeps experiments reproducible, and Unity Catalog gives us the data governance story our banking client needs. It's cut down a significant amount of the coordination overhead that comes with multi-tool ML pipelines.
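The reproducibility benefit mentioned above comes from a simple pattern: every training run logs its parameters and metrics so runs can be compared and the best one identified later. A plain-Python stand-in for what MLflow tracking provides (not the `mlflow` API itself):

```python
# Minimal sketch of experiment tracking: each run records params and
# metrics, so model selection is reproducible instead of ad hoc.

runs = []

def log_run(params: dict, metrics: dict) -> None:
    runs.append({"params": params, "metrics": metrics})

# Two hypothetical training runs for a churn model.
log_run({"max_depth": 4}, {"auc": 0.81})
log_run({"max_depth": 8}, {"auc": 0.85})

# Pick the best run by its logged metric.
best = max(runs, key=lambda r: r["metrics"]["auc"])
```

Because every run's configuration is stored alongside its result, anyone can rerun the winning configuration later, which is the governance story a banking client typically asks for.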
From 1 Hour to 10 Minutes: How Databricks Modernized Our Workflow
What do you like best about the product?
We used to use ADF to get data from SQL Server and then work on it in Databricks before putting it into Salesforce. The whole process took more than an hour because ADF added extra steps.
Now everything happens inside Databricks. We transform the raw data in Databricks and load it into Salesforce all in one place. This has made the whole process much faster; it now takes 10 minutes, a big improvement over what we had with ADF.
Delta Lake has also been really useful. It helps us keep track of changes and go back if something goes wrong. We can see what happened before and fix mistakes easily.
Delta Lake also makes sure the data is good before it goes into the pipeline. It stops bad data from getting in and causing problems later on in Salesforce. This makes the whole process more reliable and easier to maintain.
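The "stop bad data before it goes in" step can be sketched as a validation gate: check rows against simple rules and quarantine the failures instead of letting them corrupt the target. This is plain Python for illustration, not Delta Lake's CHECK constraints, and the field names are hypothetical:

```python
# Illustrative pre-load validation: rows missing required fields are
# quarantined rather than loaded into the downstream system.

def validate(rows, required=("Id", "Email")):
    good, bad = [], []
    for row in rows:
        if all(row.get(f) for f in required):
            good.append(row)
        else:
            bad.append(row)   # quarantine for inspection
    return good, bad

rows = [{"Id": "001", "Email": "a@x.com"}, {"Id": "002", "Email": ""}]
good, bad = validate(rows)
```

Only the `good` rows continue to the load step, which is what keeps malformed records from surfacing as problems in Salesforce later.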
What do you dislike about the product?
Databricks is really good at what it does, but sometimes it takes a while to get the cluster up and running, and the user interface is slow at times. This can be annoying when we are in a hurry to get things done for Salesforce. The Salesforce connectors in Databricks can be a bit tricky to work with: they need to be set up just right and do not always work as we expect, which means extra work when we are troubleshooting or monitoring the Salesforce pipelines in Databricks.
What problems is the product solving and how is that benefiting you?
It is solving our performance and reliability issues by allowing us to extract, transform, and load the data into Salesforce all in one place without ADF. This unified workflow has reduced our runtime from 1 hour to 10 minutes, giving us faster job completion and on-time Salesforce data updates. With Delta Lake features like ACID transactions and time travel, our data is more accurate and easier to recover when something goes wrong.
Centralized Dashboard with Smooth, Cost-Saving Autoscaling
What do you like best about the product?
Everything is centralized in a single dashboard: Spark jobs, notebooks, and data pipelines. Autoscaling and auto-termination genuinely help keep costs under control, and it was a pleasant surprise that both run smoothly without any noticeable lag. Sharing notebooks with the team is straightforward and cuts down on a lot of back and forth.
What do you dislike about the product?
Finding older queries is really painful. Anything beyond a few weeks becomes hard to track down, which makes it difficult to keep my day-to-day work flowing smoothly and to continue working without constant interruptions.
What problems is the product solving and how is that benefiting you?
We run ETL and ML workloads without having to worry too much about the underlying infrastructure. I can also manage inventory information, at least to some extent, without opening a bunch of different tabs. I spend less time troubleshooting clusters and more time actually working with the data.