Overview

ML Works is a Machine Learning observability and monitoring accelerator built on open-source tools. It automates your monitoring workflows and provides curated insights into the performance, drift, and data quality of your ML models in production.

ML Works provides features across four dimensions:

1) Performance Monitoring: Automated pipelines built with Argo on Kubernetes monitor the performance and errors of your production models across dimensions such as prediction cohorts, feature cohorts, and performance trends over time. Standard metrics such as accuracy, precision, recall, RMSE, MAPE, and WAPE are included out of the box, and custom metrics can be added (see the sketch below).
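
The snippet below is a minimal sketch, not the ML Works implementation, of the kinds of metrics such a monitoring pipeline reports; it uses scikit-learn and NumPy, and the function names are illustrative.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, mean_squared_error

def classification_metrics(y_true, y_pred):
    """Standard classification metrics for one prediction cohort (binary labels assumed)."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
    }

def regression_metrics(y_true, y_pred):
    """RMSE, MAPE, and WAPE for a regression model."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100          # assumes no zero actuals
    wape = np.abs(y_true - y_pred).sum() / np.abs(y_true).sum() * 100
    return {"rmse": rmse, "mape": mape, "wape": wape}
```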

2) Drift Detection: ML Works also provides automated drift detection that tracks data drift across the features used in the model and model drift in your predictions over time, with a quick view of how the most important features are drifting so you can make informed retraining decisions (an illustrative drift metric is sketched below).
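
As an illustration only, one common way to quantify feature drift between a training baseline and production data is the Population Stability Index (PSI); the drift metrics ML Works computes internally may differ.

```python
import numpy as np

def psi(baseline, current, bins=10):
    """PSI between two samples of one feature; values above ~0.2 are often treated as drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # clip to avoid division by zero and log(0) for empty bins
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))
```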

3) Model Interpretability: Understand how your models arrive at a prediction at multiple levels: global (model-level) feature significance, cohort-level significance, and row-level significance. Use these views to validate the training hypothesis, confirm that the model behaves as expected at each level, and course-correct it in subsections of your data where it does not (see the sketch below).
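
The sketch below uses SHAP as one open-source way to produce the global, cohort-level, and row-level feature significance described above; SHAP is an assumption here, not necessarily what ML Works uses internally, and the function is illustrative.

```python
import numpy as np
import shap  # pip install shap

def feature_significance(model, X, cohort_mask=None):
    """Mean |SHAP value| per feature, optionally restricted to a cohort of rows."""
    explainer = shap.TreeExplainer(model)      # tree-based regression model assumed
    values = explainer.shap_values(X)          # shape: (n_rows, n_features)
    if cohort_mask is not None:
        values = values[cohort_mask]           # cohort-level significance
    return np.abs(values).mean(axis=0)         # one importance score per feature

# Row-level significance is simply the SHAP values of a single row:
# explainer.shap_values(X[[i]])
```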

4) Data Quality: A model's performance is only as good as the quality of the data fed into it. With ML Works Data Quality Monitoring, monitor your model data and all dependent upstream data sources and tables to assess data quality across dimensions such as completeness, uniqueness, validity, and timeliness. If these dimensions are not enough, add your own custom rules to monitor specific business validations across all of these sources and receive custom monitoring alerts (a minimal example follows).
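
Below is a minimal pandas-based sketch of the kinds of checks described above; the column names, thresholds, and business rule are illustrative, not part of the product.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, key_column: str) -> dict:
    """Completeness, uniqueness, and an example validity rule for one table."""
    return {
        "completeness": 1 - df.isna().mean().mean(),                 # share of non-null cells
        "uniqueness": df[key_column].is_unique,                      # no duplicate keys
        "validity": bool((df.select_dtypes("number") >= 0).all().all()),  # example rule: no negatives
    }

# A custom business rule can be expressed the same way, e.g.:
# assert (df["order_amount"] <= df["credit_limit"]).all()
```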

The following AWS resources are used in ML Works deployment:

**Data Storage**

  • Amazon S3 - Stores the training and test data, model files, and datasets.
  • MongoDB Atlas - Stores the onboarding and insights data for the UI.

Architecture

  • Amazon EKS - Hosts the ML Works front end, back end, and orchestration tooling.
  • Amazon ECR - Stores the Docker images for the UI, back end, and built-in libraries.

Orchestration

  • AWS Lambda, Step Functions, Glue (optional) - ML Works uses Argo as its orchestration tool; AWS Lambda, Step Functions, and Glue can also be used to orchestrate the end-to-end flow (a Step Functions example is sketched below).
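
If Step Functions is used alongside or instead of Argo, a monitoring run could be started as in the hedged sketch below; the state machine ARN and input payload are placeholders, not part of the ML Works product.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:mlworks-monitoring",  # placeholder
    input=json.dumps({"model_id": "demo-model", "run_date": "2024-01-01"}),
)
print(response["executionArn"])
```
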
Sold by: Tredence Inc
Fulfillment method: Professional Services

Pricing Information

This service is priced based on the scope of your request. Please contact the seller for pricing details.

Support

For assistance, please contact the Tredence Alliances Team at Alliances@Tredence.com.