Machine learning (ML) is quickly becoming integrated into many production environments, both physical and virtual. Managing these ML production systems with best practices, proper architecture, redundancy, and scalable design is a necessary step toward hardening production. Implementing proper DevOps standards increases reliability, ease of operation, and maintainability, while adding new capabilities to ML environments accelerates ML workflows and improves insights.

AWS Services

Purpose-built cloud products

AWS Solutions

Ready-to-deploy solutions assembling AWS Services, code, and configurations

  • MLOps Workload Orchestrator

    The MLOps Workload Orchestrator solution helps you streamline and enforce architecture best practices for machine learning (ML) model productionization.


Partner Solutions

Software, SaaS, or managed services from AWS Partners

  • Databricks Lakehouse Platform

    A simple, collaborative, and open platform: this unified platform simplifies your data architecture by eliminating the data silos that traditionally separate analytics, data science, and machine learning. It's built on open source and open standards to maximize flexibility.
  • Hugging Face Platform

    The Hugging Face Platform enables premium features for your organization on the Hugging Face Hub, including Inference Endpoints, Spaces Hardware Upgrades, and AutoTrain.

    With Inference Endpoints, you can securely deploy models from the Hugging Face Hub and custom containers on managed autoscaling infrastructure:
    - Optimized for LLMs: high throughput and low latency, powered by Text Generation Inference.
    - Deploy models as production-ready APIs with just a few clicks. No MLOps, no infrastructure to manage.
    - Automatic scale-to-zero capability for maximum cost efficiency.
    - Security first: direct connections to your private VPC are supported, with SOC 2 Type 2 certification and GDPR and BAA data processing agreements.
    - Out-of-the-box support for Hugging Face Transformers, Sentence-Transformers, Diffusers, and easy customization. Run inference at scale with any machine learning task and library.

    With Spaces, you can easily create and host any machine learning application, GPUs and batteries included:
    - Build ML apps and host them on Hugging Face.
    - Showcase projects, create an ML portfolio, and collaborate with others in your organization.
    - Wide range of supported frameworks: Gradio, Streamlit, HTML + JS, and many more with Docker.
    - Upgrade to GPU and accelerated hardware in just a few clicks.

    With AutoTrain, you can train state-of-the-art models with just a few clicks:
    - No-code tool to train state-of-the-art NLP, CV, Speech, and Tabular models without machine learning expertise.
    - Train custom models on your datasets without worrying about the technical details of model training.

    All Hugging Face services use usage-based, pay-as-you-go pricing:
    - Pricing overview: https://huggingface.co/pricing
    - Inference Endpoints: https://huggingface.co/pricing#endpoints
    - Spaces: https://huggingface.co/pricing#spaces
    - AutoTrain: https://huggingface.co/pricing#autotrain
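    Deploying a model as a production-ready API, as described above, typically means clients call the endpoint over HTTPS with a bearer token and a JSON payload. The sketch below illustrates that pattern with Python's standard library; the endpoint URL and token are placeholders (assumptions), not real values.

    ```python
    # Hedged sketch: building an authenticated request to a model-serving
    # HTTPS endpoint, such as a Hugging Face Inference Endpoint.
    # The URL and token used below are placeholders, not working credentials.
    import json
    import urllib.request


    def build_request(endpoint_url: str, token: str, inputs: str) -> urllib.request.Request:
        """Build an authenticated POST request carrying a JSON payload."""
        payload = json.dumps({"inputs": inputs}).encode("utf-8")
        return urllib.request.Request(
            endpoint_url,
            data=payload,
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
            method="POST",
        )


    # Usage with placeholder values; actually sending the request
    # would require a live endpoint and a valid token:
    req = build_request(
        "https://example.endpoints.huggingface.cloud", "hf_xxx", "Hello!"
    )
    # urllib.request.urlopen(req)  # would return the model's JSON prediction
    ```

    Keeping request construction separate from sending it makes the call easy to inspect and test without network access.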

Guidance

Prescriptive architectural diagrams, sample code, and technical content
