Overview
ML Works is a Machine Learning observability and monitoring accelerator built on open-source tools. It automates your monitoring workflows and provides curated insights into the performance, drift, and data quality of your ML models in production.
ML Works provides features across four dimensions:
1) Performance Monitoring: Automated pipelines built with Argo and Kubernetes monitor the performance and errors of your production models across dimensions such as prediction cohorts, feature cohorts, and performance trends over time. Standard metrics such as accuracy, precision, recall, RMSE, MAPE, and WAPE are included out of the box, and custom metrics can be added.
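As an illustration of the out-of-the-box regression metrics named above (a minimal sketch using NumPy, not ML Works' actual implementation, which is not shown in this listing):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, as a fraction (assumes no zero actuals)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

def wape(y_true, y_pred):
    """Weighted absolute percentage error: total absolute error / total actuals."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sum(np.abs(y_true - y_pred)) / np.sum(np.abs(y_true)))
```

Unlike MAPE, WAPE weights each error by the size of the actuals, so it is not dominated by small-denominator rows.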
2) Drift Detection: ML Works also provides automated data drift detection to track drift across the features used in the model, model drift to track your predictions drifting over time, and a quick view of how important features are drifting so that you can make informed retraining decisions.
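The listing does not specify which drift measure ML Works uses; one common way to quantify drift between a training baseline and a production sample is the Population Stability Index (PSI), sketched below purely as an illustration:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (e.g. training) sample and a production sample.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the bin shares to avoid division by zero and log(0) in empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

Computed per feature, a rising PSI over time is a typical trigger for the retraining decisions mentioned above.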
3) Model Interpretability: Understand how your models arrive at a prediction at multiple levels: global (model-level) feature significance, cohort-level significance, and row-level significance. Use these views to validate the training hypothesis, confirm that the model behaves as expected at each level, and course-correct it in subsections of your data where it does not.
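As one illustration of global (model-level) feature significance, the sketch below implements simple permutation importance: the increase in error when a single feature's values are shuffled. This is a generic technique, not necessarily the interpretability method ML Works uses, which the listing does not state.

```python
import numpy as np

def permutation_importance(model_predict, X, y, metric, n_repeats=5, seed=0):
    """Global feature significance: mean increase in error metric when one
    feature column is shuffled, breaking its relationship with the target."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model_predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # shuffle column j in place
            scores.append(metric(y, model_predict(Xp)))
        importances[j] = np.mean(scores) - baseline
    return importances
```

A feature the model never uses scores exactly zero, since shuffling it leaves the predictions unchanged.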
4) Data Quality: A model's performance is only as good as the quality of the data fed into it. With ML Works Data Quality Monitoring, monitor your model data and all dependent upstream data sources and tables to validate and assess data quality across dimensions such as completeness, uniqueness, validity, and timeliness of updates. If these dimensions are not enough, add your own custom rules to monitor specific business validations across all these sources and receive custom monitoring alerts.
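The completeness, uniqueness, and validity dimensions above can be sketched in a few lines of pandas. This is a generic illustration, not ML Works' rule engine; the column names and valid ranges in the usage below are hypothetical.

```python
import pandas as pd

def data_quality_report(df, key_cols, valid_ranges=None):
    """Score a table on three common data-quality dimensions.
    valid_ranges maps column name -> (low, high) inclusive bounds."""
    report = {
        # Completeness: share of non-null values per column
        "completeness": (1 - df.isna().mean()).to_dict(),
        # Uniqueness: share of rows whose key columns are not duplicated
        "uniqueness": float(1 - df.duplicated(subset=key_cols).mean()),
    }
    if valid_ranges:
        # Validity: share of values inside the allowed range (nulls count as invalid)
        report["validity"] = {
            col: float(df[col].between(lo, hi).mean())
            for col, (lo, hi) in valid_ranges.items()
        }
    return report
```

Custom business validations of the kind the paragraph mentions would simply be further entries computed against the same DataFrame.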
The following AWS resources are used in an ML Works deployment:
**Data Storage**
- AWS S3 - Stores training and test data, model files, and datasets.
- MongoDB Atlas (running on AWS) - Stores the onboarding and insights data for the UI.
**Architecture**
- AWS EKS - Hosts the front end, back end, and orchestration tooling for ML Works.
- AWS ECR - Stores the Docker images for the UI, back end, and built-in libraries.
**Orchestration**
- AWS Lambda, Step Functions, Glue (optional) - ML Works uses Argo as its orchestration tool; AWS Lambda, Step Functions, and Glue can also be used to orchestrate the entire flow.
Highlights
- Ease of Integration: ML Works integrates with your existing ML models registered in SageMaker and lets you quickly onboard them on the ML Works UI with a few configurations, automating end-to-end observability for your production models. By fitting into your existing environment, it reduces monitoring effort by 30%.
- Highly Customizable with a Productized Approach: Don't see the metrics your business needs? Follow our library approach in the codebase to add metrics that can be reused across multiple solutions in your organization. ML Works has brought DS, DE, and business teams onto the same platform to monitor the metrics that matter.
- Centralized Dashboard for Monitoring: Monitor all your production models in one place and get immediate insights into your ML investments instead of waiting days for individual reports.
Support
For assistance, please contact the Tredence Alliances Team at Alliances@Tredence.com.