AWS HPC Blog

Tag: ML

Large scale training with NVIDIA NeMo Megatron on AWS ParallelCluster using P5 instances

Launching distributed GPT training? See how AWS ParallelCluster sets up a fast shared filesystem, SSH keys, host files, and more between nodes. Our guide has the details for creating a Slurm-managed cluster to train NeMo Megatron at scale.

Accelerate drug discovery with NVIDIA BioNeMo Framework on Amazon EKS

This post was contributed by Doruk Ozturk and Ankur Srivastava at AWS, and Neel Patel at NVIDIA. Drug discovery is a long and expensive process. Pharmaceutical companies must sift through thousands of compound possibilities to find potential new drugs to treat diseases. This process takes multiple years and costs billions of dollars, with the […]

Optimizing MPI application performance on hpc7a by effectively using both EFA devices

Get the inside scoop on optimizing your MPI applications and configuration for AWS's powerful new hpc7a instances. Dual-rail EFA gives these instances huge networking potential – up to 300 Gb/s – if properly used. This post provides benchmarks, sample configs, and real speedup numbers to help you maximize network performance. Whether you run weather simulations, CFD, or other HPC workloads, you'll find practical tips for your codes.

Protein language model training with NVIDIA BioNeMo framework on AWS ParallelCluster

In this new post, we discuss pre-training ESM-1nv for protein language modeling with NVIDIA BioNeMo on AWS. Learn how you can efficiently deploy and customize generative models like ESM-1nv on GPU clusters with ParallelCluster. Whether you’re studying protein sequences, predicting properties, or discovering new therapeutics, this post has tips to accelerate your protein AI workloads on the cloud.

Enhancing ML workflows with AWS ParallelCluster and Amazon EC2 Capacity Blocks for ML

No more guessing whether GPU capacity will be available when you launch ML jobs! EC2 Capacity Blocks for ML let you lock in GPU reservations so you can start tasks on time. Learn how to integrate Capacity Blocks into AWS ParallelCluster to optimize your workflow in our latest technical blog post.