Machine Learning (ML) is quickly becoming integrated into many production environments, both physical and virtual. Managing these ML production systems with best practices, proper architecture, redundancy, and scalable systems is a necessary step to harden production. Reliability, ease of operation, and maintainability all increase when the proper DevOps standards are implemented. Adding new capabilities to ML environments accelerates ML workflows and improves insights.
AWS Services
Purpose-built cloud products
AWS Solutions
Ready-to-deploy solutions assembling AWS Services, code, and configurations
Partner Solutions
Software, SaaS, or managed services from AWS Partners
Hugging Face Platform
The Hugging Face Platform enables premium features for your organization on the Hugging Face Hub, including Inference Endpoints, Spaces Hardware Upgrades, and AutoTrain.

With Inference Endpoints, you can securely deploy models from the Hugging Face Hub and custom containers on managed autoscaling infrastructure:

- Optimized for LLMs: high throughput and low latency, powered by Text Generation Inference.
- Deploy models as production-ready APIs with just a few clicks. No MLOps, no infrastructure to manage.
- Automatic scale-to-zero capability for maximum cost efficiency.
- Security first: we support direct connections to your private VPC. We hold SOC 2 Type 2 certification and offer GDPR and BAA data processing agreements.
- Out-of-the-box support for Hugging Face Transformers, Sentence-Transformers, and Diffusers, with easy customization. Run inference at scale with any machine learning task and library.

With Spaces, you can easily create and host any machine learning application, GPUs and batteries included:

- Build ML apps and host them on Hugging Face.
- Showcase projects, create an ML portfolio, and collaborate with others in your organization.
- Wide range of supported frameworks: Gradio, Streamlit, HTML + JS, and many more with Docker.
- Upgrade to GPU and accelerated hardware in just a few clicks.

With AutoTrain, you can train state-of-the-art models with just a few clicks:

- No-code tool to train state-of-the-art NLP, CV, Speech, and Tabular models without machine learning expertise.
- Train custom models on your datasets without worrying about the technical details of model training.

All Hugging Face services use usage-based, pay-as-you-go pricing:

- Pricing overview: https://huggingface.co/pricing
- Inference Endpoints: https://huggingface.co/pricing#endpoints
- Spaces: https://huggingface.co/pricing#spaces
- AutoTrain: https://huggingface.co/pricing#autotrain
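Once a model is deployed as an Inference Endpoint, it is reached as a plain HTTPS API. A minimal sketch of querying a Text Generation Inference endpoint from the standard library follows; the endpoint URL and token are placeholders you would copy from the Inference Endpoints dashboard, and `build_payload` is an illustrative helper, not part of any Hugging Face SDK:

```python
import json
import urllib.request

# Placeholders -- replace with the URL and token shown for your endpoint.
ENDPOINT_URL = "https://YOUR-ENDPOINT.us-east-1.aws.endpoints.huggingface.cloud"
HF_TOKEN = "hf_xxx"

def build_payload(prompt: str, max_new_tokens: int = 128) -> dict:
    """Build the JSON body a Text Generation Inference endpoint expects."""
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}

def query(prompt: str) -> str:
    """POST a prompt to the endpoint and return the generated text."""
    req = urllib.request.Request(
        ENDPOINT_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # TGI responds with a list like [{"generated_text": "..."}]
        return json.loads(resp.read())[0]["generated_text"]
```

Because the endpoint autoscales (including scale to zero), the first request after an idle period may take longer while capacity spins back up.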
Guidance
Prescriptive architectural diagrams, sample code, and technical content
Multi-Tenant, Generative AI Gateway with Cost and Usage…
This Guidance demonstrates how to build an internal Software-as-a-Service (SaaS) platform that provides access to foundation models, like those available through Amazon Bedrock, to different business units or teams within your organization.
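The core of cost and usage tracking in such a gateway is attributing token consumption to the tenant that made each request. A minimal sketch of that bookkeeping, assuming illustrative per-1K-token prices (the class, tenant names, and prices below are hypothetical, not from the Guidance):

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class UsageTracker:
    """Tally per-tenant token usage so model costs can be attributed."""
    input_tokens: defaultdict = field(default_factory=lambda: defaultdict(int))
    output_tokens: defaultdict = field(default_factory=lambda: defaultdict(int))

    def record(self, tenant_id: str, n_in: int, n_out: int) -> None:
        """Called by the gateway after each model invocation."""
        self.input_tokens[tenant_id] += n_in
        self.output_tokens[tenant_id] += n_out

    def cost(self, tenant_id: str, in_price: float, out_price: float) -> float:
        """Estimated spend in USD given per-1K-token prices."""
        return (self.input_tokens[tenant_id] / 1000 * in_price
                + self.output_tokens[tenant_id] / 1000 * out_price)

tracker = UsageTracker()
tracker.record("team-a", 1000, 500)   # one request: 1000 in, 500 out
estimate = tracker.cost("team-a", in_price=0.003, out_price=0.015)
```

In a real deployment the tenant ID would come from the caller's authentication context and the tallies would be persisted, but the attribution logic is the same.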
Generative AI Deployments using Amazon SageMaker JumpStart
This Guidance demonstrates how to deploy a generative artificial intelligence (AI) model provided by Amazon SageMaker JumpStart to create an asynchronous SageMaker endpoint with the ease of the AWS Cloud Development Kit (AWS CDK).
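An asynchronous SageMaker endpoint is invoked by pointing it at a payload already staged in S3; SageMaker writes the result to another S3 location rather than returning it inline. A rough sketch of the client side using `boto3` (the endpoint name and URIs are placeholders; `split_s3_uri` is an illustrative helper for later polling of the output object, not an AWS API):

```python
from urllib.parse import urlparse

def invoke_async(endpoint_name: str, input_s3_uri: str) -> str:
    """Submit an async inference request; the payload must already be in S3.

    Returns the S3 URI where SageMaker will write the result."""
    import boto3  # imported here; requires AWS credentials at call time
    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint_async(
        EndpointName=endpoint_name,
        InputLocation=input_s3_uri,
        ContentType="application/json",
    )
    return resp["OutputLocation"]

def split_s3_uri(uri: str) -> tuple[str, str]:
    """Split an s3://bucket/key URI into (bucket, key) for polling the result."""
    parsed = urlparse(uri)
    return parsed.netloc, parsed.path.lstrip("/")
```

Asynchronous endpoints suit generative models with large payloads or long inference times, since the caller is not held open waiting for a response.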
Processing Overhead Imagery on AWS
This Guidance demonstrates how to process remote sensing imagery using machine learning models that automatically detect and identify objects collected from satellites, unmanned aerial vehicles, and other remote sensing devices.
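Overhead scenes are usually far larger than a detection model's input size, so a common preprocessing step is to split each scene into overlapping chips; the overlap keeps objects that straddle a chip boundary fully visible in at least one chip. A minimal sketch of that tiling step (the tile size and overlap values are illustrative, not taken from the Guidance):

```python
def tile_grid(width: int, height: int, tile: int, overlap: int):
    """Yield (x, y, w, h) windows covering a width x height scene.

    Adjacent windows share `overlap` pixels; edge windows are clipped
    to the scene bounds."""
    step = tile - overlap
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            w = min(tile, width - x)
            h = min(tile, height - y)
            yield (x, y, w, h)

# Example: chip a 1024x1024 scene into 512px tiles with 64px overlap.
chips = list(tile_grid(1024, 1024, tile=512, overlap=64))
```

Detections from each chip are then mapped back into scene coordinates by adding the chip's (x, y) offset, typically followed by non-maximum suppression to merge duplicates from the overlap regions.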