AWS HPC Blog
Tag: AWS ParallelCluster
Launch self-supervised training jobs in the cloud with AWS ParallelCluster
In this post, we describe the process of launching large, self-supervised training jobs using AWS ParallelCluster and Facebook's Vision Self-Supervised Learning (VISSL) library.
Support for Instance Allocation Flexibility in AWS ParallelCluster 3.3
AWS ParallelCluster 3.3.0 now lets you define a list of Amazon EC2 instance types for resourcing a compute queue. This gives you more flexibility to optimize the cost and total time to solution of your HPC jobs, especially when capacity is limited or you’re using Spot Instances.
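As a rough sketch of what this looks like in practice, a ParallelCluster 3.3 cluster configuration can list several instance types under a single compute resource. The queue name, compute resource name, instance types, counts, and subnet ID below are illustrative placeholders rather than values from the post:

```yaml
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: flex-queue
      CapacityType: SPOT                      # use Spot Instances for this queue
      AllocationStrategy: capacity-optimized  # optional: how to choose among the listed types
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0          # placeholder subnet
      ComputeResources:
        - Name: flex-nodes
          MinCount: 0
          MaxCount: 16
          Instances:                          # new in 3.3.0: a list instead of a single InstanceType
            - InstanceType: c5.9xlarge
            - InstanceType: c5a.8xlarge
            - InstanceType: c5n.9xlarge
```

Listing instance types with similar core counts and memory keeps job performance predictable while widening the capacity pools the scheduler can draw from.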
Easing your migration from SGE to Slurm in AWS ParallelCluster 3
This post will help you understand the tools available to ease the stress of migrating your cluster (and your users) from SGE to Slurm, a move that's necessary because the HPC community no longer maintains SGE's open-source codebase.
Expanded filesystems support in AWS ParallelCluster 3.2
AWS ParallelCluster version 3.2 introduces support for two new Amazon FSx file system types: Amazon FSx for NetApp ONTAP and Amazon FSx for OpenZFS. It also lifts the limit on the number of file system mounts you can have on your cluster. We'll show you how it works and walk you through the details so you can get going right away.
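As a hedged sketch (the share names, mount directories, and volume IDs below are placeholders), the new file system types are attached through the SharedStorage section of the cluster configuration, referencing existing FSx volumes:

```yaml
SharedStorage:
  - Name: ontap-share
    StorageType: FsxOntap                 # Amazon FSx for NetApp ONTAP
    MountDir: /ontap
    FsxOntapSettings:
      VolumeId: fsvol-0123456789abcdef0   # placeholder: an existing ONTAP volume
  - Name: openzfs-share
    StorageType: FsxOpenZfs               # Amazon FSx for OpenZFS
    MountDir: /openzfs
    FsxOpenZfsSettings:
      VolumeId: fsvol-0fedcba9876543210   # placeholder: an existing OpenZFS volume
```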
Slurm-based memory-aware scheduling in AWS ParallelCluster 3.2
AWS ParallelCluster version 3.2 now supports memory-aware scheduling in Slurm to give you control over the placement of jobs with specific memory requirements. In this blog post, we’ll show you how it works, and explain why this will be really useful to people with memory-hungry workloads.
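As an illustrative sketch (the queue, compute resource, instance type, and subnet are placeholders), memory-aware scheduling is switched on with a single setting under SlurmSettings in the cluster configuration:

```yaml
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    EnableMemoryBasedScheduling: true   # lets Slurm place jobs based on requested memory
  SlurmQueues:
    - Name: highmem
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0    # placeholder subnet
      ComputeResources:
        - Name: r5-nodes
          InstanceType: r5.4xlarge      # 128 GiB of memory per node
          MinCount: 0
          MaxCount: 8
```

With this enabled, a submission such as sbatch --mem=100G job.sh only lands on nodes with enough memory to satisfy the request.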
How Thermo Fisher Scientific Accelerated Cryo-EM using AWS ParallelCluster
In this blog post, we’ll walk you through the process of building a successful Cryo-EM benchmarking pilot using AWS ParallelCluster, Amazon FSx for Lustre, and cryoSPARC (from Structura Biotechnology) and explain some of our design decisions along the way.
Building highly-available HPC infrastructure on AWS
In this blog post, we will explain how to launch highly available HPC clusters across an AWS Region. The solution is deployed using the AWS Cloud Development Kit (AWS CDK), a software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation, which hides the complexity of integrating the components.
Numerical weather prediction on AWS Graviton2
The Weather Research and Forecasting (WRF) model is a numerical weather prediction (NWP) system designed to serve both atmospheric research and operational forecasting needs. With the release of Amazon Elastic Compute Cloud (Amazon EC2) instances powered by Arm-based AWS Graviton2 processors, a common question has been how these instances perform on large-scale NWP workloads. In this blog, we present results from a standard WRF benchmark simulation and compare them across three different instance types.
GROMACS price-performance optimizations on AWS
Molecular dynamics (MD) is a simulation method for analyzing the movement of atoms and molecules, tracing their trajectories as the dynamics of a system evolve over time. MD simulations are used across domains such as materials science, biochemistry, and biophysics, and are typically applied in two broad ways to study a system. The importance of […]
Running finite element analysis using Simcenter Nastran on AWS
This post was written by Dnyanesh Digraskar, Sr. Partner Solutions Architect for HPC at AWS, and co-authored by Wei Zhang and Ravi Gupta, Sr. Software Engineers for Simcenter Nastran at Siemens. In this blog, we demonstrate the deployment, performance, and price comparisons of Simcenter Nastran for three finite element analysis (FEA) use cases […]