AWS Storage Blog

Optimizing enterprise MLOps in the cloud with Domino Data Lab and Amazon Elastic File System

Domino Data Lab is an AWS Partner Network (APN) partner that provides a central system of record for data science activity across an organization. The Domino solution delivers orchestration for all data science artifacts, including AWS infrastructure, data, and services.

As part of the solution, Domino’s platform leverages the scale, security, reliability, and cost-effectiveness of AWS cloud computing coupled with Amazon Elastic File System (Amazon EFS). Data science teams benefit from a flexible, collaborative research environment with automated workflows that track model development dependencies for full reproducibility, along with enterprise-grade governance, risk management, and granular cost controls.

In this post, we interview David Schulman, Director of Partner Marketing at Domino Data Lab, and explore the Domino Data Lab Enterprise AI Platform to consider why centralizing data, AI, and machine learning operations (MLOps) initiatives into a single system of record across teams can help enterprises work faster, deploy results sooner, scale rapidly, and reduce regulatory and operational risk. In 2023, Domino surveyed artificial intelligence (AI) professionals in its REVelate “State of Generative AI” survey. The respondents included AI professionals leading, developing, and operating AI across Fortune 500 companies. The survey reports that 49% plan to develop generative AI in-house, while 42% plan to fine-tune commercial models. Top concerns center on security, reliability, cost, and IP protection: 69% worry about data leakage, with both top leadership (82%) and IT (81%) being especially concerned.

Interview with David Schulman, Director of Partner Marketing, Domino Data Lab

What are the challenges of operating AI at scale? How can users balance innovation, governance, and costs?

According to Ventana Research, “Through 2026, nearly all multinational organizations will invest in local data processing and infrastructure and services to mitigate against the risks associated with data transfer.” Hybrid cloud environments complicate operationalizing AI/ML at scale by creating silos across data, infrastructure, and tooling. Data science teams are prevented from collaborating by siloed data, processes, and tools. Non-standardized, non-repeatable, ad hoc workflows result in sprawl across individuals’ computers and systems. Data and compute resources are distributed across cloud and on-premises data centers, producing disconnected environments and silos. There are also hidden costs: data scientists spend time on DevOps and infrastructure management tasks, and infrastructure sits underutilized due to idle, always-on, and over-provisioned resources.

How does Domino break down data silos and deliver unified, containerized, end-to-end ML operations (MLOps)?

Over years of partnership, Domino and AWS have worked together to help organizations such as Johnson & Johnson (JnJ) reduce analysis time for data scientists by 35%[1]. This involves integrating Domino’s data science platform with essential AWS services, such as Amazon EFS, which provides analytics storage with shared file access for data scientists. Applications include open-source genomics tools, Shiny, and Domino Data Lab, all run on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Amazon EFS provides access to a fully managed, petabyte-scale file system supporting 500 TB of genomics sequence data. More recently, JnJ has further scaled data science across its hybrid and multicloud environment, adopting AI infrastructure strategies that straddle on-premises data centers and the cloud to address concerns over cost, security, and regulatory compliance. Lilly also centralizes data science to drive value across the healthcare value chain, as discussed last spring on a panel at NVIDIA’s GTC AI developer conference.

How is the Domino Data Enterprise AI Platform integrated with AWS?

Domino’s Enterprise AI Platform – integrated with key AWS services – provides a unified, collaborative, governed, and end-to-end MLOps platform. The solution orchestrates the complete ML lifecycle, providing easy access to data, preferred tools, and infrastructure in any environment. By sharing knowledge, automating workflows across teams, and tracking all changes and dependencies, Domino ensures complete reproducibility while fostering collaboration. It also helps maintain peak model performance in production while ensuring enterprise-grade model governance and cost savings. Domino can be deployed into your VPC or consumed as a SaaS offering on AWS Marketplace. Attributes include:

Domino is Kubernetes-native and can be deployed on Amazon EKS for ease of management across hybrid environments. This enables cloud-native scalability, multi-cloud portability, reduced costs through elastic workloads paired with underlying hardware resources, and simplified administration by integration with existing DevOps stacks.

    • A dedicated Auto Scaling Group (ASG) of Amazon EKS workers hosts the Domino platform, while additional ASGs of Amazon EKS workers provide elastic compute for Domino executions.
    • Amazon Simple Storage Service (Amazon S3) stores all project files, such as user data, an internal Docker registry, backups, and logs, while Amazon EFS stores Domino Datasets.
    • AWS Identity and Access Management (IAM) is used for identity/security management, Amazon Elastic Block Store (Amazon EBS) for block-level storage that can be attached to compute instances, and Amazon Virtual Private Cloud (Amazon VPC) for a logically isolated virtual network.
    • Amazon EFS supports up to 250,000 read IOPS and up to 50,000 write IOPS per file system, making it easier to power IOPS-intensive file workloads on AWS. With these enhancements, you can easily run more IOPS-demanding workloads on Amazon EFS, such as big data processing with Domino Data Lab.
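The attributes above can be sketched in code. Below is a minimal, hypothetical boto3 sketch of provisioning an Amazon EFS file system for Domino Datasets; the file system name and tags are illustrative assumptions, not Domino defaults. Elastic throughput mode (which requires the general-purpose performance mode) is what enables the high read/write IOPS limits mentioned above.

```python
# Hypothetical sketch: provisioning an EFS file system for Domino Datasets.
# The name "domino-datasets" is an assumption for illustration.

def efs_create_request(name: str) -> dict:
    """Build CreateFileSystem parameters for an elastic-throughput file system."""
    return {
        "PerformanceMode": "generalPurpose",  # required for elastic throughput
        "ThroughputMode": "elastic",          # IOPS/throughput scale automatically
        "Encrypted": True,                    # encrypt data at rest
        "Tags": [{"Key": "Name", "Value": name}],
    }

def create_datasets_file_system():
    """Apply the request with boto3 (needs AWS credentials; not run here)."""
    import boto3

    efs = boto3.client("efs")
    return efs.create_file_system(**efs_create_request("domino-datasets"))
```

Keeping the request payload in a pure function separates the (testable) configuration from the AWS API call itself.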

What is required to run a Domino cluster?

Domino can run on a Kubernetes cluster provided by Amazon Elastic Kubernetes Service (Amazon EKS). When running on Amazon EKS, the Domino architecture uses AWS resources to fulfill the Domino cluster requirements, as shown in Figure 1.

Figure 1: View Domino Documentation for Domino on Amazon EKS

Seamless collaboration and knowledge sharing is a requirement for data science teams. First, Domino Datasets, integrated with Amazon EFS provide high-performance, versioned, and structured filesystem storage in Domino so that data scientists can build curated pipelines of data in one project and share them with colleagues for collaboration. Amazon EFS enables the sharing of data pools among multiple instances that were previously isolated from one another. This increases data science team productivity because Domino not only tracks snapshots of data used to build models, but all of the underlying code, packages, environments, and all supporting artifacts for full reproducibility – providing rich file difference information between revisions. Additionally, customers such as JnJ value the Amazon EFS storage class feature which enables them to automatically move data from Amazon EFS Standard to Amazon EFS Infrequent Access. By automating the process of moving data to long-term, cost-effective storage, the customer successfully reduced their storage cost.
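The Standard-to-Infrequent-Access transition described above is configured through EFS lifecycle management. Here is a hedged boto3 sketch; the 30-day threshold and the file system ID are assumptions for illustration, not values from the JnJ deployment.

```python
# Hedged sketch: moving cold data from EFS Standard to Infrequent Access
# via EFS lifecycle management. Threshold and file system ID are assumed.

def ia_lifecycle_policies(days: int = 30) -> list:
    """Build the LifecyclePolicies payload for PutLifecycleConfiguration."""
    valid = {1, 7, 14, 30, 60, 90, 180, 270, 365}  # thresholds EFS accepts
    if days not in valid:
        raise ValueError(f"EFS only supports transitions after {sorted(valid)} days")
    return [{"TransitionToIA": f"AFTER_{days}_DAYS"}]

def apply_ia_policy(file_system_id: str, days: int = 30):
    """Apply the policy with boto3 (needs AWS credentials; not run here)."""
    import boto3

    boto3.client("efs").put_lifecycle_configuration(
        FileSystemId=file_system_id,  # e.g. a hypothetical "fs-0123..." ID
        LifecyclePolicies=ia_lifecycle_policies(days),
    )
```

Once set, the transition is fully automatic: files untouched for the configured number of days move to the lower-cost storage class without manual intervention.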

With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files to dynamically provide storage capacity to your applications as needed. With elastic capacity, provisioning is automatic, and you’re billed only for what you use. Amazon EFS is designed to be highly scalable both in storage capacity and throughput performance, growing to petabyte scale and allowing massively parallel access from compute instances to your data. This makes it the perfect data science platform foundation for organizations such as JnJ to reduce analysis time by 35%.

Why does this matter? With Amazon EFS, data science teams are empowered to:

  • Easily troubleshoot issues and ensure reproducibility for audit purposes.
  • Find and reuse past work, streamlining iterative model development and new-hire onboarding.
  • Define standard data assets with a feature store and file-based Domino Datasets for consistency across the organization.
  • Leverage Amazon EFS storage classes to lower costs.

Second, Domino Data Sources act as a structured mechanism to create and manage connection properties for external sources such as Amazon S3, Amazon Redshift, and a variety of other sources. This reduces DevOps work for data scientists, giving them desktop-like data store connectivity without writing any connection code.

  1. Self-service, governed infrastructure keeps AI teams productive while minimizing costs, abstracting away DevOps work from the data scientists’ perspective, while centralizing governance through pre-configured compute environments from IT’s perspective. Domino administrators can assign hardware tiers to specific teams, specifying CPU, memory, and GPU allocation. Domino automatically spins Amazon EC2 instances up and down to meet demand, letting data scientists run more experiments in parallel (with auto-scaling across popular compute cluster types such as Spark, Ray, and Dask), while IT can monitor and control usage costs.
  2. Robust model governance and risk management means organizations can operationalize AI at scale responsibly. Domino’s unified platform enables organizations to monitor and manage any ML model, wherever it runs, while mitigating compliance risks.

Flexible model deployment options support diverse business and operational requirements. Models developed in Domino can be exported for deployment in Amazon SageMaker for scalable, low-latency hosting, while models from SageMaker and SageMaker Autopilot can be accessed and monitored inside Domino for drift and prediction performance. Models can be deployed to the cloud, in-database (deploy to Snowflake or Databricks for predictive analytics), or to the edge (Domino supports NVIDIA Fleet Command). Models can be deployed for both batch and real-time predictions at scale, while Domino Model Sentry controls model validation, review, and approval processes for an additional governance layer.
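To illustrate the SageMaker side of this export flow, here is a minimal boto3 sketch of hosting a model behind a real-time endpoint. The endpoint, config, and model names and the instance type are placeholder assumptions, not values Domino produces; consult Domino’s export documentation for the actual artifact format.

```python
# Illustrative sketch: real-time hosting of an exported model on SageMaker.
# All names below are hypothetical placeholders.

def endpoint_config_request(config_name: str, model_name: str,
                            instance_type: str = "ml.m5.large") -> dict:
    """Build CreateEndpointConfig parameters for low-latency hosting."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "primary",        # single serving variant
            "ModelName": model_name,         # a model already registered in SageMaker
            "InitialInstanceCount": 1,
            "InstanceType": instance_type,
        }],
    }

def deploy(endpoint_name: str, config_name: str, model_name: str):
    """Create the config and endpoint with boto3 (needs credentials; not run here)."""
    import boto3

    sm = boto3.client("sagemaker")
    sm.create_endpoint_config(**endpoint_config_request(config_name, model_name))
    sm.create_endpoint(EndpointName=endpoint_name,
                       EndpointConfigName=config_name)
```

Batch predictions would instead use a SageMaker transform job against the same registered model, leaving the real-time endpoint configuration untouched.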

Hybrid cloud support is a necessity for enterprise data science teams, and Domino Nexus acts as a single pane of glass for all data science workloads across hybrid and multi-cloud environments.

Figure 2: Domino/AWS architecture

A Domino Nexus deployment consists of a “control plane,” a Kubernetes cluster hosting Domino’s core platform services (deployed on Amazon EKS as in the preceding Figure 2) and one or multiple “data planes”: distinct Kubernetes clusters that run a small set of Domino services that can execute workloads. These can be deployed in any cloud region, across multiple clouds, or in on-premises data centers.

Figure 3: Nexus Hybrid Architecture | AWS Cloud Control Plane (US East Region), AWS Cloud Data Plane (EU Central Region)

As shown in Figure 3, users connect to the Domino control plane through a browser, and connect directly to the data plane where they do their model development work in a Domino Workspace. Elastic Load Balancing (ELB) allows ingress to Domino control plane services from the data planes.

This architecture (Figure 4) eliminates the possibility of inadvertently transferring region-locked data. It also allows data scientists to seamlessly “burst” to the cloud if they run out of on-premises compute capacity.

Figure 4: Nexus hybrid architecture


What are the benefits of the Domino Enterprise AI Platform?

Domino’s Enterprise AI Platform is proven to deliver an average of 722% ROI over three years (average NPV and ROI based on a study of Domino customers, using the Domino Business Value Assessment). This is achieved through:

  • 2x more models delivered with the same resources, in the same amount of time.
  • A 40% reduction in data scientist time wasted waiting for IT support, doing DevOps, or duplicating work.
  • A 40% reduction in IT and cloud infrastructure costs over three years.
  • Reduced risk of revenue loss from regulatory violations or reputational issues.

Want to go deeper into these metrics? Learn more about Domino and cost-effective AI. Read the following case studies from enterprises with Domino’s platform deployed on AWS:

  • “By testing 5x as many potential new seeds without additional costs, and iterating on models 4x as often through the agricultural process, Bayer has generated more than $100 million in NPV in 3 years [by using Domino].” – Data Science Leader, Bayer.
  • “Domino centralizes assets required to build data models, helping drive 10x – 100x efficiency gains and reducing the costs of ML and AI projects by $20 million a year.” – CDAO of Enterprise Operations, Lockheed Martin.
  • Moody’s Analytics experienced an 80% reduction in model deployment time.

What would you like users to take away?

Although generative AI gets all of the attention, large language models (LLMs) are in fact just models. And although there are many intricacies in operating generative AI at scale, such as prompt engineering, model fine-tuning, and inference/hosting (we’ll save that all for another post), the following best practices of “scalable, enterprise AI” remain the same:

  • Centralize data science and AI/ML initiatives into a single system of record across teams.
  • Data scientists’ time is expensive, and governed, self-service infrastructure, tooling, and data access is essential. Focus them on model innovation, not DevOps.
  • Collaboration is key: shared data access, tools, compute, models, and projects enable full reproducibility and quick time-to-value.
  • Robust model governance is essential, and full traceability and auditability make this possible.

Flexibility is required by modern enterprises, who need to build, deploy, and operate AI at scale across a variety of complex architectures. In addition, storage plays an important role, and Amazon EFS provides a cost-effective, elastic, and highly performant solution for your ML inferencing workloads. You only pay for each inference that you run and the storage consumed by your model on the file system. Amazon EFS provides petabyte-scale elastic storage so your architecture can automatically scale up and down based on the needs of your workload without requiring manual provisioning or intervention.

Learn More

Want to learn more about how Domino on AWS can accelerate responsible AI initiatives? Download the “Strategies and Practices for Responsible AI” TDWI playbook for insight on a proactive approach to identifying and mitigating business, legal, and ethical risks to create trust and deliver tangible business value.

Visit Domino on AWS Marketplace.

[1] AWS Storage blog – Johnson & Johnson reduces analysis time by 35% with their data science platform using Amazon EFS

David Schulman

David Schulman

David Schulman is a data and analytics ecosystem enthusiast in Seattle, WA. As Director of Global Partner Marketing at Domino Data Lab, he works closely with other industry-leading ecosystem partners on joint solution development and go-to-market efforts. Prior to Domino, David led Technology Partner marketing at Tableau, and spent years as a consultant defining partner program strategy and execution for clients around the world.

Rumi Olsen

Rumi Olsen

Rumi Olsen is a Solutions Architect and also leads an AI/ML Solutions Architect team in the AWS Partner Program. Rumi collaborates closely with top machine learning ISV partners, leveraging AWS ML services to elevate their products and spearheading strategic engagements.