
Hugging Face Inc.
The AI community building the future
AWS Partner Highlights
2 AWS Competencies
50+ AWS Customer Launches
At Hugging Face, our mission is to democratize good, state-of-the-art machine learning. We do this through our open source, our open science, and our products and services. Our open-source machine learning platform currently hosts over 20,000 state-of-the-art Transformer models and over 1,600 free, openly available datasets.
AWS Partner Website
Headquarters
Paris
9 rue des Colonnes, 75002 Paris, France
AWS Partner descriptions are provided by the AWS Partner and are not verified by AWS.
AWS Validated Qualifications
AWS Competencies
  • Generative AI Services Competency
  • Machine Learning ISV Competency
Solutions (4)


Foundational

Hugging Face Model Inference on Amazon SageMaker
With the Hugging Face Inference DLCs and the Inference Toolkit for Amazon SageMaker, you can use the pipelines from the Transformers library for zero-code model deployments, creating production-ready endpoints that scale easily within your AWS environment with built-in monitoring.
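As a hedged sketch of what this zero-code deployment flow typically looks like with the SageMaker Python SDK (the model ID, framework versions, and instance type below are illustrative assumptions, not taken from this page):

```python
import json

# Illustrative deployment sketch (requires AWS credentials and a SageMaker
# execution role, so it is shown here as a commented outline):
#
#   from sagemaker.huggingface import HuggingFaceModel
#
#   model = HuggingFaceModel(
#       env={
#           "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
#           "HF_TASK": "text-classification",
#       },
#       role=role,  # your SageMaker execution role
#       transformers_version="4.26",
#       pytorch_version="1.13",
#       py_version="py39",
#   )
#   predictor = model.deploy(initial_instance_count=1,
#                            instance_type="ml.m5.xlarge")
#
# The resulting endpoint accepts the Transformers pipeline request format,
# which we can build locally without any AWS dependency:
def build_inference_request(text: str) -> str:
    """Serialize a text input into the JSON body the endpoint expects."""
    return json.dumps({"inputs": text})

body = build_inference_request("Hugging Face on SageMaker is easy to use.")
```

The `{"inputs": ...}` payload shape mirrors the Transformers pipeline call, which is what makes the deployment "zero-code": no custom inference script is needed for standard tasks.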
Advanced

Machine Learning ISV Competency

Hugging Face Model Training on Amazon SageMaker
With Hugging Face on Amazon SageMaker, you can fine-tune any of the 10,000+ pre-trained Transformers models from Hugging Face, an open-source provider of NLP, speech, and computer vision models, reducing the time it takes to train these state-of-the-art models from weeks to minutes.
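A minimal sketch of the fine-tuning workflow with the SageMaker HuggingFace estimator; the script name, framework versions, instance type, and S3 path below are hypothetical examples:

```python
# Illustrative training sketch (requires AWS credentials and a training
# script, so the estimator calls are shown as a commented outline):
#
#   from sagemaker.huggingface import HuggingFace
#
#   estimator = HuggingFace(
#       entry_point="train.py",          # your Transformers training script
#       instance_type="ml.p3.2xlarge",
#       instance_count=1,
#       role=role,
#       transformers_version="4.26",
#       pytorch_version="1.13",
#       py_version="py39",
#       hyperparameters=hyperparameters,
#   )
#   estimator.fit({"train": "s3://my-bucket/train"})
#
# Hyperparameters are plain key-value pairs that SageMaker forwards to the
# training script as command-line arguments:
hyperparameters = {
    "epochs": 3,
    "train_batch_size": 32,
    "model_name_or_path": "distilbert-base-uncased",  # hypothetical choice
}
cli_args = [f"--{key} {value}" for key, value in hyperparameters.items()]
```

Because the hyperparameters arrive as ordinary CLI arguments, the same `train.py` script can be developed and debugged locally before being handed to SageMaker.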
Foundational

Hugging Face Neuron Deep Learning AMI
The Hugging Face Neuron Deep Learning AMI (DLAMI) makes it easy to use Amazon EC2 Inferentia and Trainium instances for efficient training and inference of Hugging Face Transformers and Diffusers models. With the Hugging Face Neuron DLAMI, you can scale your Transformers and Diffusion workloads quickly on Amazon EC2 while reducing costs, with up to 50% cost-to-train savings over comparable GPU-based DLAMIs. This DLAMI is the officially supported and recommended solution from Hugging Face for running training and inference on Trainium and Inferentia EC2 instances, and supports most Hugging Face use cases, including:
  • Fine-tuning and pre-training Transformers models like BERT, GPT, or T5
  • Running inference with Transformers models like BERT, GPT, or T5
  • Fine-tuning and deploying Diffusers models like Stable Diffusion
Foundational

Hugging Face Platform
The Hugging Face Platform enables premium features for your organization on the Hugging Face Hub, including Inference Endpoints, Spaces Hardware Upgrades, and AutoTrain.

With Inference Endpoints, you can securely deploy models from the Hugging Face Hub and custom containers on managed autoscaling infrastructure:
  • Optimized for LLMs: high throughput and low latency, powered by Text Generation Inference.
  • Deploy models as production-ready APIs with just a few clicks. No MLOps, no infrastructure to manage.
  • Automatic scale-to-zero capability for maximum cost efficiency.
  • Security first: we support direct connections to your private VPC. We have SOC 2 Type 2 certification and offer GDPR and BAA data processing agreements.
  • Out-of-the-box support for Hugging Face Transformers, Sentence-Transformers, and Diffusers, with easy customization. Run inference at scale with any machine learning task and library.

With Spaces, you can easily create and host any machine learning application, GPUs and batteries included:
  • Build ML apps and host them on Hugging Face.
  • Showcase projects, create an ML portfolio, and collaborate with others in your organization.
  • Wide range of frameworks supported: Gradio, Streamlit, HTML + JS, and many more with Docker.
  • Upgrade to GPU and accelerated hardware in just a few clicks.

With AutoTrain, you can train state-of-the-art models with just a few clicks:
  • No-code tool to train state-of-the-art NLP, CV, speech, and tabular models without machine learning expertise.
  • Train custom models on your datasets without worrying about the technical details of model training.

All Hugging Face services use usage-based, pay-as-you-go pricing:
  • Pricing overview: https://huggingface.co/pricing
  • Inference Endpoints: https://huggingface.co/pricing#endpoints
  • Spaces: https://huggingface.co/pricing#spaces
  • AutoTrain: https://huggingface.co/pricing#autotrain
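Since Inference Endpoints expose deployed models as HTTPS APIs, calling one is an ordinary JSON POST request. A minimal sketch using only the Python standard library; the endpoint URL and token are placeholders (a real endpoint provides its own dedicated URL after deployment):

```python
import json
import urllib.request

# Placeholder values — substitute your own endpoint URL and access token.
ENDPOINT_URL = "https://my-endpoint.example.endpoints.huggingface.cloud"

request = urllib.request.Request(
    ENDPOINT_URL,
    data=json.dumps({"inputs": "The movie was great!"}).encode("utf-8"),
    headers={
        "Authorization": "Bearer <HF_TOKEN>",  # placeholder token
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(request)  # not executed here: needs a live endpoint
```

The same request shape works for any task the endpoint serves; only the contents of `"inputs"` change.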
Practices (1)


Advanced

Generative AI Services Competency

Expert Acceleration Program
Get guidance and support from our machine learning experts. We’ve put together a world-class team to help customers build better GenAI solutions, faster. Select any of the 470,000+ models publicly available on the Hugging Face Hub. With the Hugging Face containers, you can train models easily, skipping the complicated process of building and optimizing your training environments from scratch. Training, fine-tuning, and deploying with Amazon SageMaker yields production-ready endpoints that scale seamlessly, with built-in monitoring and enterprise-grade security.
Case Studies (3)


Consulting Services | Generative AI
A Song is more than just the Lyrics
Musixmatch is a platform for users to search and share song lyrics with translations. Hugging Face helped Musixmatch perform all the typical NLP tasks on lyrics, from named entity recognition to part-of-speech tagging and custom classification.
Consulting Services | Generative AI
Powering Healthcare Navigation with Hugging Face & AWS
Quantum Health runs all agent calls through Amazon Transcribe for speech-to-text. Using the transcribed text, they trained a Hugging Face summarization model on SageMaker, then deployed it to a SageMaker endpoint for batch inference on new calls to create agent “notes.”
Machine Learning | Machine Learning Operations | Platform Solutions | SaaS and API Solutions
Using Hugging Face Models on SageMaker
Prophia was able to deploy a pre-trained RoBERTa model to perform question answering as well as a T5 model for extractive summarization in less than 5 minutes.
Hugging Face Inc. Customer References (3)
Locations (1)

Headquarters

Paris

9 rue des Colonnes, 75002 Paris, France