My main use cases for Dataiku are general: I build ETL pipelines and automate them end to end, along with ML modeling. On a daily basis, we also use Dataiku for ad hoc analysis to follow the product lifecycle.
In one of our use cases, the source data resides in Snowflake and the requirement was to orchestrate and automate a complete CI/CD pipeline, with the final data landing on S3. In between, we had to build in multiple pieces of transformation logic, along with data quality (DQ) checks, a data quality framework, and data governance. We automated everything in Dataiku, the project is now live, and overall efficiency is very good.
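To illustrate the pattern rather than our exact project code, here is a minimal sketch of the middle step as a Dataiku Python recipe: it reads a Snowflake-backed input dataset, applies a transformation, and writes to an S3-backed output. The dataset names (orders_snowflake, orders_clean_s3) and the filtering logic are hypothetical.

    # Minimal Dataiku Python recipe sketch; dataset names are hypothetical.
    import dataiku

    # Input dataset: stored in Snowflake, declared as a recipe input in the Flow.
    orders = dataiku.Dataset("orders_snowflake")
    df = orders.get_dataframe()

    # Example transformation logic: drop cancelled orders, normalize amounts.
    df = df[df["status"] != "CANCELLED"].copy()
    df["amount_usd"] = df["amount"].round(2)

    # Output dataset: its connection points to S3, configured in the Flow.
    output = dataiku.Dataset("orders_clean_s3")
    output.write_with_schema(df)

In the actual project, steps like this are chained in the Flow and triggered by scenarios, which is what gives us the automation.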
Automating that workflow with Dataiku increased the team's overall productivity compared to the other ETL tools we used earlier. Data visualization became easy, as did the checkpoints Dataiku provides, such as analyzing the data and finding outliers, and sharing datasets. Now, with the visual recipes, even people who can't code can do the transformations, so overall it is a good tool.
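As a rough illustration of the kind of outlier checkpoint we rely on (Dataiku surfaces this visually; the sketch below is a hypothetical hand-rolled equivalent, not Dataiku's implementation), you can flag values more than three standard deviations from the column mean:

    import pandas as pd

    def flag_outliers(df: pd.DataFrame, column: str, z_threshold: float = 3.0) -> pd.DataFrame:
        # Mark rows whose value lies more than z_threshold standard deviations
        # from the column mean -- the same kind of check the Analyze view
        # gives non-coders without writing any of this.
        mean, std = df[column].mean(), df[column].std()
        df = df.copy()
        df["is_outlier"] = (df[column] - mean).abs() > z_threshold * std
        return df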
We have also created a few recipes that are not limited to a single project; we packaged them so that other teams don't have to code that piece of logic again and again. Providing them as reusable recipes has been a good thing.
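One way such shared logic can be packaged is as a function in a Dataiku project library; the module name shared_transforms and the cleanup rules below are hypothetical, but the idea is that any recipe imports the function instead of re-implementing it:

    # lib/python/shared_transforms.py -- hypothetical shared library module.
    import pandas as pd

    def standardize_columns(df: pd.DataFrame) -> pd.DataFrame:
        # Common cleanup needed in many pipelines: lowercase snake_case
        # headers and trimmed string values, written once and reused.
        df = df.copy()
        df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
        for col in df.select_dtypes(include="object"):
            df[col] = df[col].str.strip()
        return df

    # In any recipe: from shared_transforms import standardize_columns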