Denodo is designed for a distributed data fabric. Instead of a centralized data lake, we operate a distributed data mesh. Whenever a team wants to publish a data product, they publish it through Denodo, regardless of where the data actually lives; it might be in Azure SQL, in an Azure Databricks Delta table, or in ADLS.
Whenever a user is looking for data, instead of connecting to each individual data source, they pull it through Denodo views, which form a virtualization layer. Role-based access control can be implemented on top of that layer: not everybody can pull data. We grant approval based on who is asking and what data they are asking for, and they can only pull what they have been approved to see.
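As a minimal sketch of what consuming through the virtualization layer looks like, the snippet below queries a Denodo view over ODBC from Python. The DSN name, credentials, and the view name `customer_orders` are placeholders, not names from our environment; the point is that the client talks only to Denodo, and the server enforces role-based permissions on the connecting account.

```python
import pyodbc

# Connect to the Denodo Virtual DataPort server through an ODBC DSN.
# DSN, user, and password are hypothetical; role-based access control
# is enforced server-side based on the roles granted to this account.
conn = pyodbc.connect(
    "DSN=DenodoVDP;UID=analyst_user;PWD=secret",
    autocommit=True,
)
cursor = conn.cursor()

# Query a virtual view; the caller never connects to the underlying
# Azure SQL, Databricks, or ADLS sources directly.
cursor.execute(
    "SELECT order_id, customer_id, order_total "
    "FROM customer_orders "
    "WHERE order_date >= '2024-01-01'"
)

for row in cursor.fetchall():
    print(row.order_id, row.customer_id, row.order_total)

cursor.close()
conn.close()
```

If a user lacks the role that covers `customer_orders`, Denodo rejects the query at this layer, so no per-source credentials ever need to be handed out.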
For real-time analytics, we have an Azure Kafka endpoint. In several use cases we have streamed data through that endpoint, transformed it in Databricks, and stored it in a persistent layer such as a Databricks Delta table or another data store.
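A hedged sketch of that streaming path in PySpark is below: it reads from a Kafka topic, parses the JSON payload, and appends the result to a Delta table. The broker address, topic name, schema, and storage paths are all assumptions for illustration, and it presumes a Databricks or Spark runtime with the Kafka and Delta connectors available (an Azure Event Hubs Kafka endpoint would additionally need SASL_SSL authentication options).

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Assumed shape of the JSON events arriving on the topic.
event_schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("ts", StringType()),
])

# Read the stream; broker and topic are placeholders for the
# Azure Kafka endpoint mentioned in the text.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka-broker:9092")
    .option("subscribe", "telemetry")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes; cast to string and parse the JSON.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Persist the transformed stream to a Delta table (paths are placeholders).
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/telemetry")
    .outputMode("append")
    .start("/mnt/delta/telemetry")
)
```

The checkpoint location is what lets the stream recover after a restart without duplicating writes into the Delta table, which is why the persistent layer stays consistent.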