AWS HPC Blog

Tag: AWS ParallelCluster

Large-scale training with NVIDIA NeMo Megatron on AWS ParallelCluster using P5 instances

Launching distributed GPT training? See how AWS ParallelCluster sets up a fast shared filesystem, SSH keys, host files, and more between nodes. Our guide has the details for creating a Slurm-managed cluster to train NeMo Megatron at scale.
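
For a flavor of what the post covers, here is a minimal sketch of a ParallelCluster v3 configuration for a Slurm-managed P5 queue with an FSx for Lustre shared filesystem, created with the real `pcluster create-cluster` command. The subnet IDs, key name, and capacity values are placeholders, and the post's exact configuration may differ.

```bash
# Sketch: create a Slurm cluster with a P5 compute queue and FSx for Lustre
# shared storage. Subnet IDs, key name, and sizes are placeholders.
cat > cluster-config.yaml << 'EOF'
Region: us-east-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.4xlarge
  Networking:
    SubnetId: subnet-0123456789abcdef0   # placeholder
  Ssh:
    KeyName: my-keypair                  # placeholder
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: gpu
      ComputeResources:
        - Name: p5
          InstanceType: p5.48xlarge
          MinCount: 0
          MaxCount: 4
          Efa:
            Enabled: true
      Networking:
        SubnetId: subnet-0123456789abcdef0   # placeholder
        PlacementGroup:
          Enabled: true
SharedStorage:
  - MountDir: /fsx
    Name: fsxshared
    StorageType: FsxLustre
    FsxLustreSettings:
      StorageCapacity: 1200
EOF

# Create the cluster with the ParallelCluster CLI
pcluster create-cluster \
  --cluster-name nemo-p5 \
  --cluster-configuration cluster-config.yaml
```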

Best practices for running molecular dynamics simulations on AWS Graviton3E

If you run molecular dynamics simulations, you need to read this. We walk through benchmarks of popular apps like GROMACS and LAMMPS on the new Hpc7g instances powered by Graviton3E processors. The results: up to 35% better vector performance than Graviton3! Learn how to optimize your own workflows.
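
As a hedged sketch of the kind of SVE-aware build those benchmarks depend on, here is a GROMACS build with its ARM SVE SIMD kernels enabled for Graviton3E. The GROMACS version and compiler flags are illustrative assumptions, not the exact build from the post.

```bash
# Sketch: build GROMACS with ARM SVE SIMD kernels for Graviton3E
# (Neoverse V1). Version and flags are illustrative assumptions.
wget https://ftp.gromacs.org/gromacs/gromacs-2023.tar.gz
tar xf gromacs-2023.tar.gz
cd gromacs-2023 && mkdir build && cd build

# GMX_SIMD=ARM_SVE selects the SVE kernels (instead of the NEON/ASIMD
# default); -mcpu=neoverse-v1 targets the Graviton3/3E core.
cmake .. \
  -DGMX_SIMD=ARM_SVE \
  -DCMAKE_C_FLAGS="-mcpu=neoverse-v1" \
  -DCMAKE_CXX_FLAGS="-mcpu=neoverse-v1" \
  -DGMX_BUILD_OWN_FFTW=ON
make -j "$(nproc)"
sudo make install
```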

Optimizing MPI application performance on hpc7a by effectively using both EFA devices

Get the inside scoop on optimizing your MPI apps and configuration for AWS’s powerful new Hpc7a instances. Their dual-rail EFA design gives these instances huge networking potential at 300 Gb/s, if used properly. This post provides benchmarks, sample configs, and real speedup numbers to help you maximize network performance. Whether you run weather simulations, CFD, or other HPC workloads, you’ll find practical tips for your codes.
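
For a flavor of what’s involved, here is a minimal launch sketch assuming Open MPI 4 over libfabric’s EFA provider: mapping ranks by NUMA domain lets libfabric pair each rank with the EFA device closest to it, spreading traffic across both rails. The hostfile, rank count, and solver binary are hypothetical; the post itself has the tested configurations and speedup numbers.

```bash
# Sketch: launch an MPI job across hpc7a nodes so both EFA devices carry
# traffic. Hostfile, rank count, and application binary are placeholders.

# Confirm libfabric sees both EFA devices on each node:
fi_info -p efa | grep domain

# Run over the libfabric EFA provider via Open MPI's cm PML / ofi MTL,
# mapping ranks by NUMA domain so traffic spreads across both rails.
export FI_PROVIDER=efa
mpirun --hostfile hosts -np 384 \
  --mca pml cm --mca mtl ofi \
  --map-by numa --bind-to core \
  ./my_solver input.dat
```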