Overview
WhyLabs is the essential AI Observability Platform for model and data health. It is the only machine learning monitoring and observability platform that doesn't operate on raw data, which enables a no-configuration solution, privacy preservation, and massive scale.
Machine learning engineers and data scientists rely on the platform to monitor ML applications and data pipelines by surfacing and resolving data quality issues, data bias, and concept drift. These capabilities help AI builders reduce model failures, avoid downtime, and ensure customers are getting the best user experience. With out-of-the-box anomaly detection and purpose-built visualizations, WhyLabs eliminates the need for manual troubleshooting and reduces operational costs.
The platform can monitor tabular, image, and text data. It integrates with many popular ML and data tools including Pandas, Apache Spark, AWS Sagemaker, MLflow, Flask, Ray, RAPIDS, Apache Kafka, and more. To learn more about what data types WhyLabs can work with and which tools we integrate with, check out the whylogs GitHub page: https://github.com/whylabs/whylogs
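The privacy-preserving approach mentioned above rests on statistical profiling: only lightweight summaries of each column leave the environment, never the raw records. The snippet below is a minimal plain-Python sketch of that idea (it is illustrative only and does not use the actual whylogs API; see the GitHub page above for real integration code):

```python
import statistics

def profile_column(name, values):
    """Summarize a column into lightweight statistics; raw values are discarded."""
    present = [v for v in values if v is not None]
    return {
        "column": name,
        "count": len(values),
        "null_count": len(values) - len(present),
        "min": min(present),
        "max": max(present),
        "mean": statistics.fmean(present),
        "stddev": statistics.stdev(present) if len(present) > 1 else 0.0,
    }

# Only this summary -- never the raw data -- would be sent to the platform.
profile = profile_column("latency_ms", [12.0, 15.5, None, 14.2, 13.1])
```

Monitoring such profiles over time is what enables drift and data-quality detection without access to the underlying data.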
WhyLabs was created at the Allen Institute for Artificial Intelligence (AI2) by Amazon Machine Learning alums and is backed by Andrew Ng's AI Fund.
For custom pricing, EULA, or a private contract, please contact AWSMarketplace@whylabs.ai for a private offer.
Highlights
- Enable data and model monitoring quickly and securely: Automated monitoring and alerting across dozens of "data vitals" with out-of-the-box configurations and lightweight integrations. Cloud-agnostic, built with AWS-grade privacy and security. Integration takes less than an hour.
- Deliver the impact models were designed for: Improve model performance, resilience, and auditability with alerting and reporting tools. Monitor model inputs, outputs, and performance, as well as upstream data quality, in one platform.
- Achieve AI Governance across the organization: Track all relevant metrics associated with the data that flows through AI applications. Enabling observability in AI applications is key for achieving AI Governance best practices.
Details
Features and programs
Financing for AWS Marketplace purchases
Pricing
| Dimension | Description | Cost/month |
|---|---|---|
| 1 Model (free tier) | Monitoring for one model | $0.00 |
| 2 Models | Monitoring for two models | $100.00 |
| 3 Models | Monitoring for three models | $200.00 |
| 4 Models | Monitoring for four models | $300.00 |
| 5 Models | Monitoring for five models | $400.00 |
| WhyLabs Enterprise | Enterprise contract with model monitoring at scale | $8,333.33 |
Vendor refund policy
No refunds
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Software as a Service (SaaS)
SaaS delivers cloud-based software applications directly to customers over the internet. You can access these applications through a subscription model. You will pay recurring monthly usage fees through your AWS bill, while AWS handles deployment and infrastructure management, ensuring scalability, reliability, and seamless integration with other AWS services.
Resources
Vendor resources
Support
Vendor support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
Customer reviews
Excellent tool for ML Monitoring with many out-of-the box solutions
Developed efficient solutions for optimizing ERP workflows through data analysis
Reliable AI Monitoring with Some Complexity
Self-Serve Observability Platform
As a consultant whose clients did not want to fully release their data, whylogs seemed like the perfect choice; I found that it captures only the profile and statistics information rather than the raw data.
Recently, I started testing out the LLM security features with LangKit, and I cannot believe how quick it is to use. I followed a workshop a few months ago that showed me how to detect jailbreak attempts and toxicity in LLM inputs and outputs using LangKit. I took that learning and, on a client's project, we have tested logging the telemetry data from the evaluation to WhyLabs. It looks good so far, so once I upgrade the pricing tier for this client, we plan to scale our usage. Excited about this one.