
dbt Platform

dbt Labs

Reviews from AWS customers

7 AWS reviews

External reviews

207 reviews

External reviews are not included in the AWS star rating for the product.


    Alexander C.

Streamlines Data Transformation with Best Practices

  • May 05, 2026
  • Review provided by G2

What do you like best about the product?
I like the amount of variety that dbt offers because of the Jinja code and the built-in functions. The incremental models are very well built, offering lots of capabilities at a layer beyond what's in the data warehouse, like Redshift. The initial setup of dbt is very straightforward, which I find really helpful.
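A minimal sketch of the incremental pattern the reviewer praises, written as a dbt SQL model; the model, source, and column names here are illustrative, not taken from the review:

```sql
-- models/orders_incremental.sql  (hypothetical model name)
{{ config(materialized='incremental', unique_key='order_id') }}

select
    order_id,
    customer_id,
    amount,
    updated_at
from {{ source('shop', 'raw_orders') }}

{% if is_incremental() %}
  -- on incremental runs, only pick up rows newer than what is
  -- already in the target table ({{ this }} refers to that table)
  where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```

On the first run dbt builds the full table; afterwards it processes only the delta, which is the capability the reviewer is describing.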
What do you dislike about the product?
I guess the development cycle is slower as a result: writing YAML descriptions and the actual code for every single model takes time.
What problems is the product solving and how is that benefiting you?
dbt allows us to handle data with software engineering best practices, letting us write tests, store code in a git repository, and manage it like a software project.


    Dylan C.

Easy-to-Use DBT for Version-Controlled SQL Models

  • May 05, 2026
  • Review provided by G2

What do you like best about the product?
I use dbt to version control a lot of our SQL models. It's easy to use and helpful.
What do you dislike about the product?
So far there really hasn't been anything wrong with the platform, though you do need admins who know what they're doing.
What problems is the product solving and how is that benefiting you?
Data model version control and merging updates into Snowflake tables on schedules.


    Surjit-Choudhury

Data transformations have streamlined complex reporting and support reusable macros for multiple clients

  • April 30, 2026
  • Review from a verified AWS customer

What is our primary use case?

In Power BI, I am currently creating solutions for this particular organization or team. I work end-to-end, providing the complete solution by understanding business requirements and KPIs and building dashboards. This includes working with Fivetran, dbt, Python scripts, and other tools. I have been working with dbt for three years.

What is most valuable?

dbt is generally used with Jinja technology, and Jinja format is what I utilize. The structure of the scripts is different from other tools, and it is quite versatile, allowing me to use Python, SQL, or any other language. dbt mainly handles semi-structured data quite effectively, supporting major business transformations. dbt is used for transformation purposes, and I provide the business logic in the dbt scripts which run under the Git pipeline. Currently, due to cost cutting, we revised our technology strategy and created the pipeline with dbt for budget purposes. The database is loaded and business transformations are done through dbt, and it has a separate pipeline which loads the data into the database. We use Git or Bitbucket for versioning, and the code is stored there, with all business logic incorporation done within dbt.

dbt has reusable macros that can be created and used in multiple models, which I find very valuable. When I create the final tables, they are in the model folder, and under the model, these final tables are created. It also has a structured way of handling data, allowing me to shape it effectively. A main feature is the ability to manage different pipelines, especially since I utilize Bitbucket for pipelining processes. In this, I can handle the scripts, determining which job should run based on various dependencies. I write loading processes in the .yml file, which I implement inside Bitbucket and in dbt scripts. The macros allow me to write multiple utilities or scripts that can be reused in different models effectively.
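A reusable macro of the kind described might look like this; the macro and column names are illustrative, not from the review:

```sql
-- macros/cents_to_dollars.sql  (illustrative utility macro)
{% macro cents_to_dollars(column_name, precision=2) %}
    round({{ column_name }} / 100.0, {{ precision }})
{% endmacro %}
```

Any model can then reuse it, which is how one utility serves many models:

```sql
-- models/stg_payments.sql  (hypothetical model reusing the macro)
select
    order_id,
    {{ cents_to_dollars('amount_cents') }} as amount_dollars
from {{ ref('raw_payments') }}
```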

The way dbt handles semi-structured data is by allowing me to easily manage any requirements or KPIs that come with it. I can handle it in the dbt scripts.

What needs improvement?

The initial setup of dbt is somewhat complex. Writing the scripts requires understanding Jinja technology, as the code writing structure is different compared to other tools, which can be challenging for developers unfamiliar with it. However, once I learned the structure, it became a robust tool for handling data, including semi-structured data like JSON.

dbt itself is quite extensive, and while many features are available, I often focus on common features. For unusual activities, I may not have enough experience to determine necessary changes or new features. Currently, I cannot suggest any changes or additions, primarily because I am working with structured data and not encountering many challenges with the dbt scripts. It successfully achieves our requirements.

What do I think about the stability of the solution?

The reliability of data in dbt is strong. When I conduct dbt tests, the data processed in the data warehouse performs exactly as expected. There are no interruptions during processing, ensuring consistency.

What do I think about the scalability of the solution?

dbt is quite scalable since it has its own feature set for incorporating business logic, while the data storage occurs in Snowflake, allowing me to handle complex scenarios as needed effectively.

How are customer service and support?

I have not had to communicate with dbt's technical support or customer service thus far, as my internal organization typically handles complex scenarios. If my DevOps teams are unable to resolve issues, I would consider reaching out, but that scenario has not arisen to date.

Which solution did I use previously and why did I switch?

Regarding pricing, I am not deeply involved in that aspect. However, due to pricing increases, we have transitioned to using dbt pipelines for running our jobs. Fivetran was previously our tool, but after they raised their prices, we started using dbt at the beginning of 2025. Overall, I find dbt to be optimized compared to other tools.

In evaluating other solutions, we have our own data warehouse in Snowflake, where we can explore features such as Snowflake pipes for structured data. Additionally, I have worked with Teradata and manipulated data using temporary tables. Snowflake offers more features than Teradata, allowing coding in both Python and SQL, making it a versatile option alongside dbt for loading, storing, and processing data.


What other advice do I have?

dbt SQL model is what I create, and I develop different macros and utilities that handle various client databases effectively, as each client has a separate database but maintains the same table structure. For instance, I have created J&J_DWH for Johnson & Johnson, with most clients holding the same data structure, but there can be exceptions where certain clients might have fields missing. To manage this, I write checks in the dbt scripts so that if a specific column is not present for a client, the code does not stop. It takes measurable steps and creates a column with the same name but with null values. This utility is essential for handling complex scenarios across multiple clients.
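A guard of the kind described, emitting a null column when a client database lacks a field, could be sketched with dbt's adapter API. The macro and column names are hypothetical; `adapter.get_columns_in_relation` is a real dbt context method:

```sql
-- macros/safe_column.sql  (hypothetical macro: emit the column if it
-- exists in the relation, otherwise a null column with the same name)
{% macro safe_column(relation, column_name) %}
    {% set existing = adapter.get_columns_in_relation(relation)
                      | map(attribute='name') | map('lower') | list %}
    {% if column_name | lower in existing %}
        {{ column_name }}
    {% else %}
        cast(null as varchar) as {{ column_name }}
    {% endif %}
{% endmacro %}
```

With a check like this, a model shared across client databases keeps running even when one client is missing a field, which is the behavior the reviewer describes.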

The testing framework in dbt is useful, as I run dbt tests based on the number of clients, specifically running tests for a few clients based on their names. It generally runs unit test cases in the testing environment. The scripts I created validate successfully, and I can trace errors in the logs to identify any issues. In the .yml file, I document relationships, uniqueness tests, and other necessary details. Before final data loads, I run dbt tests to confirm that the data is accurately loaded into the table.
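The relationship and uniqueness tests described are typically declared in a dbt schema file; the model and column names below are illustrative:

```yaml
# models/schema.yml  (illustrative uniqueness and relationship tests)
version: 2

models:
  - name: dim_client
    columns:
      - name: client_id
        tests:
          - unique
          - not_null
  - name: fct_invoices
    columns:
      - name: client_id
        tests:
          # every invoice must reference an existing client
          - relationships:
              to: ref('dim_client')
              field: client_id
```

Running `dbt test` before the final load then fails fast on duplicates, nulls, or orphaned keys.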

Regarding dbt's documentation site generator, it is extremely helpful for project transparency, particularly in complex scenarios. Organizations provide good documentation, and I refer to dbt.org to resolve issues or clarify doubts on activities I have not previously handled. This aids in ensuring data transparency and assurance during review processes with clients, enabling me to justify my methods based on the documentation and organizational standards. I would rate this review nine out of ten.


    Hithesh P.

dbt Streamlines Data Pipelines with Powerful Incremental and SCD2 Features

  • April 29, 2026
  • Review provided by G2

What do you like best about the product?
dbt simplifies the process of building a solid data pipeline by offering a lot of features that would be difficult to implement from scratch. In particular, the SCD2 and incremental functionality helps remove a lot of overhead for developers and makes ongoing maintenance easier. There are also many other features that are great and contribute to a smoother overall workflow.
What do you dislike about the product?
There’s nothing I dislike about it, but I do have one suggestion: adding a feature for backfilling data (historical loads) would help a lot. Right now this can be done using a macro, but having a built-in option similar to incremental would make it much easier.
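The macro-based backfill workaround the reviewer mentions is often approximated by combining an incremental model with a command-line variable; a sketch under that assumption (the `backfill_start` variable and column names are made up):

```sql
-- models/events_incremental.sql  (hypothetical backfill-aware model)
{{ config(materialized='incremental') }}

select * from {{ source('src', 'events') }}

{% if var('backfill_start', none) %}
  -- historical reload of a chosen window, triggered with e.g.
  --   dbt run --vars '{backfill_start: "2025-01-01"}'
  where event_date >= '{{ var("backfill_start") }}'
{% elif is_incremental() %}
  -- normal runs only process new records
  where event_date > (select max(event_date) from {{ this }})
{% endif %}
```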
What problems is the product solving and how is that benefiting you?
dbt is like a framework for data engineering. Building ETL using a traditional approach is very time-consuming and error-prone: things like dependency issues, documentation, and testing all require extra precautions and a lot of manual effort.

That’s where dbt comes in as a lifesaver. It helps us build pipelines by providing features like lineage, auto-generated documentation, testing, macros, integrated Jinja, and more, which makes the overall process much easier to manage.


    Olena C.

Fast, High-Performance Cloud Modeling

  • April 24, 2026
  • Review provided by G2

What do you like best about the product?
Speed of building models, performance, and being cloud-based.
What do you dislike about the product?
a steep learning curve for non-technical users, high resource consumption with large datasets, and lack of built-in scheduling.
What problems is the product solving and how is that benefiting you?
dbt (data build tool) solves the critical problem of inefficient, siloed, and unverified data transformation by allowing data analysts and engineers to transform data within their warehouse using SQL


    nayan S.

Best-in-Class SQL Transformations in dbt

  • April 08, 2026
  • Review provided by G2

What do you like best about the product?
SQL transformation is the best thing dbt has.
What do you dislike about the product?
Nothing as of now, but pricing seems high per model run.
What problems is the product solving and how is that benefiting you?
The ETL and data engineering portion. We develop metrics and fact tables in dbt transformations.


    Harshwardhan Gullapalli

Data pipelines have improved financial accuracy and now build transparent audit-ready reports

  • April 07, 2026
  • Review provided by PeerSpot

What is our primary use case?

My main use case for dbt was transforming messy, extracted financial data into clean, production-ready datasets. Let me give you a concrete example. We would extract trial balance data from PDFs using OCR and then feed it through our GenFin mapping workflow. Before that data reached our GPT-4 model for classification, we used dbt to normalize account codes, handle currency conversions, work with UAE and multi-currency scenarios for UAE compliance, and create aggregations by cost center and account type. So dbt's job was essentially to take a raw extraction output, validate it against our schema, handle any missing values or duplicate entries, and then materialize clean tables that our LLMs could reliably work with.

There was one more important place beyond just transformation. dbt gave us visibility into data quality throughout the pipeline. We built tests into our models, including checking for null values in critical fields such as account numbers, ensuring amounts were numeric and within expected range, and validating that our transformed data matched the source record counts. This was critical because if there was a gap between extraction and transformation, we would catch it before it hit the LLM. The dependency graph in dbt was invaluable when we had issues downstream in our mapping or disclosure notes workflows, as we could trace back through the DAG.

The most concrete outcome was a significant reduction in data errors reaching our downstream AI models. Before dbt, we were catching bad extractions only after the LLM had already processed them, which meant manual rework. After implementing dbt's testing layer, we caught roughly 70% of those issues at the transformation stage itself, before they ever touched the model. Processing speed also improved because dbt's incremental models only processed the delta into new records. Our nightly reconciliation runs for UAE corporate tax workflows dropped from around 40 minutes to under 10 minutes. The less obvious win was team confidence. Our chartered accountant clients started trusting the outputs more because we could show them the full transformation chain.

The shift in client trust was really tangible. Before dbt, when we delivered a mapped disclosure notes document, clients would ask us to walk them through how we arrived at the specific numbers. We would have to manually trace back through extraction logs, which was time-consuming and sometimes incomplete. After dbt, we could literally show them the DAG, the dependency graph, and say, here is your source PDF, here is where we extracted the account code, here is the normalization rule we applied, and here is the aggregation, and here is your final mapped output. It is all documented and testable. That transparency eliminated a huge amount of back-and-forth. One specific example involved a client who was questioning why a particular cost center total did not match their internal records. With dbt's lineage, we traced it in minutes, found a rounding rule that needed adjustment in one model, fixed it, and reran the pipeline. The client saw the complete audit trail. That kind of visibility is what builds trust in a financial team.

One thing I would add positively was how well dbt's incremental models handled our late-arriving data scenarios. In financial processing, you often get corrections or amendments to documents days later, and incremental models let us efficiently merge those without full refreshes. That was really valuable for our UAE compliance workflows where reconciliation amendments came in batches.

What is most valuable?

dbt's incremental model let us handle late-arriving data efficiently without reprocessing everything. That clean output then fed directly into our GPT-4 structured output parser for financial mapping. The workflow fit really well because dbt gave us clear data lineage. We could trace any number in our final mapping output back to the source extract. That is critical in financial reporting, where auditors need to see the full transformation chain. For us, the standout features were the testing framework and the dependency graph. The tests, especially custom tests for financial data like validating that debits equal credits, caught a lot of our data quality issues early.

dbt's incremental model was key here. Rather than reprocessing the entire dataset every run, dbt only touched the new or changed records, which kept our pipeline efficient as document volume grew. That said, dbt's scalability is fundamentally tied to your underlying database. In our setup, we managed performance by being deliberate about materialization strategies, using views for lightweight transformations and tables or incremental models for heavy aggregations.
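The materialization strategy described (views for light transformations, tables or incremental models for heavy ones) is usually set per folder in `dbt_project.yml`; a sketch with made-up project and folder names:

```yaml
# dbt_project.yml  (illustrative per-folder materialization defaults)
models:
  my_project:                    # hypothetical project name
    staging:
      +materialized: view        # lightweight transformations stay as views
    marts:
      +materialized: table       # heavy aggregations are persisted
    incremental_loads:
      +materialized: incremental # late-arriving data merged on unique_key
```

Individual models can still override these defaults in their own `config()` block.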

What needs improvement?

As for something I wish we had, dbt's native support for Python transformations came later, and we did some complex financial classification calculations that felt clunky in pure SQL. We ended up writing Python in our n8n workflows and then fed the results back into dbt, which created a bit of a split-brain situation. If we had had dbt Python models earlier, we could have kept that logic unified.

Managing multiple reporting standards was our biggest operational pain point with dbt. We were running UAE corporate tax compliance and IFRS disclosure workflows simultaneously for different clients, and dbt does not have a native concept of multi-tenant or multi-standard project organization. Everything lives in one flat structure, so we had to build our own conventions: separate schema folders for IFRS models versus UAE corporate tax models, custom macros to tag models by compliance regime, and environment variables to control which set of transformations runs for which client.
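The folder-and-tag conventions the reviewer built could be approximated like this; the project name, schemas, and tags are hypothetical:

```yaml
# dbt_project.yml  (hypothetical multi-standard layout)
models:
  genfin:                 # hypothetical project name
    ifrs:
      +schema: ifrs       # IFRS disclosure models land in their own schema
      +tags: ['ifrs']
    uae_ct:
      +schema: uae_ct     # UAE corporate tax models kept separate
      +tags: ['uae_ct']
```

A run for one regime then selects by tag, e.g. `dbt run --select tag:ifrs`, which stands in for the native multi-standard support the reviewer wishes dbt had.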

For how long have I used the solution?

I have been working with dbt for approximately one and a half years.

What do I think about the stability of the solution?

dbt has been very stable.

What do I think about the scalability of the solution?

For our use case, dbt scaled well. We were processing large volumes of financial documents, hundreds of trial balances, balance sheets, and invoice sets, and dbt handled the transformation layer without issues.

How are customer service and support?

dbt's customer support varies depending on which tier you are using. We ran dbt Core, which is open source, so there is no direct vendor support. The community support is actually quite good, and the dbt Slack community is also very supportive. We found answers to most of our questions through the docs and community. If we had been on dbt Cloud, there would be dedicated support, but the pricing did not work for our team size. For a bootstrapped FinTech team in India working on UAE compliance workflows, the cost-benefit was not there.

Which solution did I use previously and why did I switch?

Before dbt, we were handling data transformation in a more ad-hoc way. We had Python scripts and custom SQL queries scattered across our n8n workflows, our automation platform. When we extracted financial data from PDFs using Tesseract OCR and LLM extraction, we would clean and transform it using Python code nodes within n8n. There was no clear lineage. If the number was wrong downstream, tracing back through n8n workflow logic was painful.

Which other solutions did I evaluate?

We did evaluate alternatives before committing to dbt. We looked at a few options. Apache Airflow was one. It is powerful for orchestration but felt like overkill for our use case and would have required significant DevOps overhead to manage. We also considered Talend and Informatica, but those were enterprise tools with enterprise pricing, and we were a lean team at Radiant Services.

What other advice do I have?

I would rate this product an 8 out of 10.


    JohamAlvarez-Montoya

Centralized data transformations have improved workflows but integrations still need expansion

  • April 06, 2026
  • Review from a verified AWS customer

What is our primary use case?

I am currently working with dbt and use dbt's modular SQL models.

What is most valuable?

It is very convenient because, at the end, I have the opportunity to orchestrate all my transformations in just one single place, rather than having them spread out. I can use SQL, which is very convenient for the sizes of data that I usually use day-to-day. Of course, some other deployments might require Spark, in which case SQL plus dbt is not a good idea, but for most cases it is very convenient. Because it is easy to set up, and also due to the cost, I have all the transformations in one single place with a very convenient tool.

It is very convenient; I can set up the expectations really easily, all integrated in the tool that I usually use for my data transformation, so it is very convenient.

What needs improvement?

With AI, everything is advancing so fast, so I would say that the most important thing is to try to integrate with more platforms. As of now, dbt has a strong integration with AWS and with Snowflake, but I have not seen other integrations. Having more partners and having more visibility on the things that can be done is important, because I see that competitors are doing great in that aspect. For example, Databricks and Snowflake itself are doing that, so more visibility, more partnerships, and more integrations would be helpful.

For how long have I used the solution?

I have been using dbt for five years as of now.

How are customer service and support?

I may not have enough information to respond about the technical support of dbt because I have not reached out to dbt support at any time, probably because I have never needed it. In the three or four organizations that I have been with using dbt, I have not contacted the support service directly, so it may be a good sign.

How was the initial setup?

I would say the deployment for dbt is very straightforward once you have the experience. For example, you can set up Docker or Docker Compose and run it, or you can use the cloud version directly.

I would say that deploying dbt requires about a day; with a day, you can set it up and deploy it.
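A container-based setup like the one described can be quite small; a sketch assuming dbt Labs' published container image and a local project directory (the image tag and mount paths are assumptions):

```shell
# Run dbt Core from a container, mounting a local project and the
# ~/.dbt directory that holds profiles.yml with warehouse credentials.
docker run --rm \
  -v "$(pwd)":/usr/app \
  -v "$HOME/.dbt":/root/.dbt \
  ghcr.io/dbt-labs/dbt-snowflake:latest \
  run --project-dir /usr/app
```

With credentials already prepared, a setup along these lines is consistent with the one-day, one-person deployment the reviewer estimates.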

What about the implementation team?

I find that with one person, it is enough to complete the deployment; of course, it can vary depending on the complexity of the project, but I would say that one person working one day is enough.

What's my experience with pricing, setup cost, and licensing?

I mentioned the cost as one of the advantages, specifically the license cost.

Which other solutions did I evaluate?

I think the pricing is very convenient. One of the barriers is that, for example, at some of the companies I have been with, dbt is neutral in terms of cost; it is neither cheaper nor more expensive. The problem is that companies are really locked in with existing vendors, and those vendors offer alternatives that might be less expensive given that they have other products. dbt only offers data transformation services, making it hard to compete with vendors that have all the packages included, such as cloud and processing services. If you compare the cost of those packages with dbt alone, it is more expensive to use dbt alone.

What other advice do I have?

I am using a private cloud. I have used both on-prem and cloud versions of the product, but mainly the managed version, the on-cloud version. That is very convenient; of course with AI, that is being commoditized a little bit. But I like it; I used it more before. Now with AI, it is even easier to do documentation, but before AI, it was really convenient to generate documentation with that tool. My overall review rating for dbt is 7 out of 10.


    Information Technology and Services

Versatile Data Transformation Tool

  • March 27, 2026
  • Review provided by G2

What do you like best about the product?
I primarily value how dbt shifts data transformation into a software engineering workflow. By materializing code into tables and views automatically, it lets our team focus on the transformation logic rather than DDL boilerplate. The model selection syntax is incredibly efficient for running specific segments of our DAG without wasting warehouse resources.
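The model selection syntax mentioned above lets a run touch just a slice of the DAG; a few common forms (the model and tag names are placeholders):

```shell
# run one model plus everything downstream of it
dbt run --select my_model+

# run a model and all of its upstream dependencies
dbt run --select +my_model

# run everything in a folder, excluding models with a given tag
dbt run --select staging --exclude tag:deprecated
```

Because only the selected nodes execute, warehouse resources are spent only on the segment being developed or rebuilt.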

Macros and Jinja integration have also been game-changers for modularizing our logic and reducing code repetition. I find the YAML-based unit testing to be a robust way to ensure data integrity before it reaches our BI layer. Between the two, I prefer dbt Cloud over Core because the IDE provides immediate visibility into query results and schema changes, which speeds up our development cycle.

I use it every workday. Customer support was quick and responsive when I ran into issues during the initial setup of dbt with Snowflake authentication. Implementation was also straightforward when connecting it to Snowflake, and once the connection was established, I didn’t have any ongoing issues aside from needing to reauthenticate every day.
What do you dislike about the product?
The most significant friction point is the authentication lifecycle with Snowflake. The session tokens expire frequently (often every few hours), forcing a manual re-authentication process that disrupts the flow of development.

Additionally, there is a noticeable feature gap between the versions. dbt Core lacks the native, instant result-set previewing that makes dbt Cloud so productive. Bringing a similar "live preview" or integrated results pane to the Core CLI experience would make it a much more viable option for local development.
What problems is the product solving and how is that benefiting you?
We use dbt to manage the entire T (Transform) layer of our ELT process within Snowflake. Before dbt, our transformations were often opaque and difficult to test. Now, we have:

DRYer Code: Macros help us maintain a "Don't Repeat Yourself" philosophy across hundreds of models.

Improved Data Quality: By implementing automated tests on primary keys and relationship constraints, we catch upstream data issues before they impact stakeholders.

Faster Onboarding: The combination of dbt Cloud’s UI and the structured project documentation makes it much easier for new analysts to understand our data lineage.


    Ahmed Shaaban

Data teams have streamlined code-driven pipelines and now collaborate faster on shared models

  • March 02, 2026
  • Review provided by PeerSpot

What is our primary use case?

I am working with one of our enterprise customers, managing their newly established cloud warehouse. They are using Snowflake and we are using dbt to manage all the transformation and views and tables in Snowflake. I am not currently working with Cribl, but I used to work with it for almost three years. Currently, I am working with dbt and Snowflake stack.

What is most valuable?

dbt is a tool that is basically SQL and a little bit of Python, which is somewhat low entry-level, so many of the engineers can use it as well as the analysts. Multiple teams from the business side can use it as well if we allow them. Performance-wise, it mainly depends on the platform that hosts it, whether it is Snowflake or Databricks or BigQuery. There is not much complication. Of course, there are the benefits of having code, so you have a software development lifecycle; you can use version control, testing, and documentation.

I would say the best feature or the most desirable feature of dbt is the ability to write everything in code. It treats data pipelines the same way that Ansible and Terraform treated infrastructure as code. Now you can code the pipeline instead of using SSIS, Apache NiFi, or even Informatica PowerCenter. All of these are GUI-based tools; they have a low entry barrier, but you cannot really integrate them into a CI/CD pipeline. With dbt, we can. More recently, with the advances in AI, LLMs, and code assistant agents, we can hugely leverage those in dbt, because I can simply ask the agent to write the code or the model. You cannot really ask them to draw an SSIS package or an Apache NiFi flow.

I think that dbt helps us quite a bit because it exposes a little bit of the functionality of Snowflake directly to us. We can use it with ease because we have some experience with Snowflake and we know what controls to adjust. Because we are a team of multiple individuals, we need to collaborate. Without version control, you have to manage the whole codebase one feature at a time, but what we do is we can use branching and different feature branches. Each one of us is working on their own feature branch. We collaborate, we merge our changes, and we can roll back in case we introduced some bugs. I would say the version control feature is a huge bonus or a huge plus.

What needs improvement?

We are still experimenting with testing, but not that much. We are not using some features yet. We are trying to introduce them because we are coming from a background of SSIS. The team used to work with SSIS, Microsoft SQL Server Integration Services. We are still adapting one feature at a time. Currently, we are working with the SQL modules and with the Jinja templating. We are experimenting with testing, but I would say towards the end of this year, we are planning to explore more of the documentation and the data lineage options as well.

I would say the benefits, compared to GUI-based tools like SSIS, are that we have more control over the codebase. We can create something of a system where we use macros and templating, speeding up the development cycle. We are now trying to introduce a little testing, and we are also using some sort of CI/CD cycle, continuous integration and continuous deployment. I do not believe these kinds of features are that common as a whole package. dbt excels in that area.

I used to have a couple of notes about the performance, but lately I have discovered something called dbt Fusion, which, according to dbt Labs, they proclaim is much faster during the parsing of dbt models. However, I would love to see even more of an out-of-the-box solution regarding the testing. They are treating the testing in a good way so far, but I would love to see even more improvement because the whole data testing field is not very mature. It is not the same as software testing; for example, you have test suites, test tools, and profilers, but for data testing, it is not yet that advanced. I would love for dbt to take the lead on that.

For how long have I used the solution?

I have been using dbt since September 2024, so almost a year and a half.

What do I think about the stability of the solution?

I think that one of the issues with dbt is upgrading to later versions because we have some functionalities that we have designed that overcome default behaviors for dbt. Every upgrade is a little bit of a risk for us because we do not know if the workarounds that we developed will be available for the next version. However, in terms of stability, we have had no issue.

What do I think about the scalability of the solution?

I would say we have not experienced scalability issues so far. I am not aware of its limits, but we are managing it at a very large scale. The bottlenecks that we have are not coming from dbt; they are coming from Snowflake. Once we scale up Snowflake, dbt has no issue whatsoever.

How are customer service and support?

Besides the issue with the upgrades and the default behaviors for the macros that we overwrote, we did not really need to reach out to dbt support.

Which solution did I use previously and why did I switch?

The team used to work with SSIS before I joined. They used to work with it, I believe, two or three years ago, but since I joined a year and a half ago, they switched to dbt. They switched to dbt just before I joined. I have also worked for some time with Apache NiFi as well.

How was the initial setup?

I am not aware of the initial setup because they set up everything before I joined, but they are using dbt Cloud. I do not think there are many difficulties or any hurdles to overcome during setup. You simply link your dbt Cloud account with the Snowflake account and that is it.

What other advice do I have?

In terms of metrics, I do not have exact metrics, but I get a sense of the speed of opening and closing data requests. I am not that familiar with the Scrum Master of our squad, but I believe our burn chart or something like that, which is an agile metric that measures the finished user stories, is the only sense or only kind of metrics that we have at the moment. However, you do get a sense of accomplishment and the speed of delivering value.

I would say testing is something to focus on. dbt Fusion is something I am not completely aware of, but I need to try it because I think it is a great feature, especially because we are dealing with multiple models. For our use case, we are dealing with 50-plus, almost 100 models, and many run at the same time. If you add up all the compile and parsing time, it can add up to quite a bit. dbt Fusion promises that parsing is much faster, taking one-tenth of the time, I believe.

I would say you really need to take care of your model and your data model because dbt gives you some freedom. If you do not really know what you are going to do, you can really mess things up. So you need to take care of the model, design your layers, define the responsibilities of each layer, define the criteria of each data layer, define the tests, and that is it.

I would rate this product an eight out of ten.