AWS ParallelCluster 3.7 now supports adding login nodes to your cluster out of the box. Here, we’ll show you how to set this up and highlight some important tunable options for tweaking the experience.
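To give you a flavor of the setup, here’s a minimal sketch of the relevant config. Everything outside the `LoginNodes` section is ordinary ParallelCluster boilerplate, and the subnet IDs, key name, and instance types are placeholders you’d swap for your own:

```yaml
Region: us-east-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.xlarge              # placeholder
  Networking:
    SubnetId: subnet-0123456789abcdef0 # placeholder
  Ssh:
    KeyName: my-keypair                # placeholder
# New in 3.7: a pool of identically-configured login nodes.
LoginNodes:
  Pools:
    - Name: login
      InstanceType: c5.xlarge          # placeholder
      Count: 2                         # how many login nodes to run in the pool
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0   # placeholder
      Ssh:
        KeyName: my-keypair            # placeholder
      GracetimePeriod: 10              # minutes of warning users get before a node is taken out of service
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: queue1
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0   # placeholder
      ComputeResources:
        - Name: cr1
          InstanceType: c5.2xlarge     # placeholder
          MinCount: 0
          MaxCount: 10
```

Users then connect to the pool’s single address rather than to the head node itself, which keeps interactive sessions off the scheduler host.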
In this post, we’ll walk you through how banks and other financial services firms migrate or burst their grid workloads onto AWS using AWS ParallelCluster and the Slurm scheduler.
Today we’re showing you our community library of HPC Recipes for AWS. It’s a public repo on GitHub that will help you achieve feature-rich, reliable HPC deployments ready to run your workloads, no matter where you’re starting from.
With AWS ParallelCluster 3.6, you can directly specify Slurm settings in the cluster config file – improving reproducibility and taking another step towards self-documenting HPC infrastructure.
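As a sketch of what this looks like, `CustomSlurmSettings` can carry raw Slurm parameters at the cluster level (as a list of key/value pairs) or at the queue level (as a single map). The specific parameters and values below are purely illustrative:

```yaml
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    # Cluster-wide slurm.conf parameters, passed through verbatim.
    CustomSlurmSettings:
      - MaxJobCount: 50000                               # illustrative value
      - SchedulerParameters: defer,batch_sched_delay=10  # illustrative value
  SlurmQueues:
    - Name: queue1
      # Partition-level Slurm settings can be set per queue, too.
      CustomSlurmSettings:
        MaxTime: "48:00:00"           # illustrative partition time limit
      ComputeResources:
        - Name: cr1
          InstanceType: c5.2xlarge    # placeholder
          MinCount: 0
          MaxCount: 8
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0  # placeholder
```

Because the settings live in the same file as the rest of the cluster definition, they’re versioned and reviewed alongside it instead of drifting in a hand-edited slurm.conf.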
In AWS ParallelCluster 3.4, you can now build HPC clusters that span multiple Amazon EC2 Availability Zones. In this post, we describe how the new feature works, how to use it, and some of the cluster-design implications it raises.
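Mechanically, this boils down to listing more than one subnet (each in a different Availability Zone) on a queue. A minimal sketch, with placeholder subnet IDs:

```yaml
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: multi-az
      Networking:
        SubnetIds:
          # One subnet per Availability Zone; IDs and AZs are placeholders.
          - subnet-0aaaaaaaaaaaaaaaa   # e.g. us-east-1a
          - subnet-0bbbbbbbbbbbbbbbb   # e.g. us-east-1b
      ComputeResources:
        - Name: cr1
          InstanceType: c5.2xlarge     # placeholder
          MinCount: 0
          MaxCount: 20
```

One design implication worth flagging up front: placement groups can’t span Availability Zones, so tightly-coupled MPI jobs generally want to stay in a single AZ, while multi-AZ queues are a better fit for throughput-oriented workloads that tolerate cross-AZ latency.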
Slurm accounting adds flexibility, transparency, and control to operating an HPC cluster. AWS ParallelCluster 3.3.0 can now automatically configure Slurm accounting, whether you’re using your own database or Amazon Aurora.
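Configuration-wise, this is a `Database` section under `SlurmSettings` that points at a MySQL-compatible endpoint (Aurora or otherwise). A sketch, with a placeholder endpoint, username, and secret ARN:

```yaml
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    Database:
      # Placeholder endpoint for a MySQL-compatible database such as Amazon Aurora.
      Uri: my-slurm-db.cluster-abc123.us-east-1.rds.amazonaws.com:3306
      UserName: slurm_admin   # placeholder
      # The password is fetched from AWS Secrets Manager, never stored in the config.
      PasswordSecretArn: arn:aws:secretsmanager:us-east-1:111122223333:secret:slurm-db-passwd   # placeholder
```

With that in place, ParallelCluster sets up the Slurm accounting daemon on the head node for you, so tools like `sacct` and `sreport` work without further hand-configuration.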
AWS ParallelCluster 3.3.0 now lets you define a list of Amazon EC2 instance types for provisioning a compute queue. This gives you more flexibility to optimize the cost and total time to solution of your HPC jobs, especially when capacity is limited or you’re using Spot Instances.
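The new `Instances` list on a compute resource looks like the sketch below. The instance types are illustrative; in practice you’d pick types with matching vCPU and memory counts so Slurm’s view of the nodes stays consistent:

```yaml
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: flexible
      CapacityType: SPOT              # On-Demand works here too
      ComputeResources:
        - Name: cr-4xlarge
          # EC2 chooses from this list based on available capacity
          # (and price, when you're running on Spot).
          Instances:
            - InstanceType: c5.4xlarge
            - InstanceType: c5a.4xlarge
            - InstanceType: c6i.4xlarge
          MinCount: 0
          MaxCount: 50
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0  # placeholder
```

Widening the pool of acceptable instance types this way raises the odds of getting capacity quickly, which is usually what dominates total time to solution when a single type is scarce.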
AWS ParallelCluster version 3.2 now supports memory-aware scheduling in Slurm to give you control over the placement of jobs with specific memory requirements. In this blog post, we’ll show you how it works, and explain why this will be really useful to people with memory-hungry workloads.
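Turning it on is essentially a one-line change in the cluster config; jobs then declare their needs with Slurm’s usual memory flags. A sketch, with placeholder instance type and an illustrative memory cap:

```yaml
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    # New in 3.2: treat memory as a consumable resource in Slurm.
    EnableMemoryBasedScheduling: true
  SlurmQueues:
    - Name: bigmem
      ComputeResources:
        - Name: cr-r5
          InstanceType: r5.2xlarge    # placeholder; 64 GiB of memory
          MinCount: 0
          MaxCount: 10
          # Optional: cap what Slurm may hand out, leaving headroom for the OS.
          SchedulableMemory: 60000    # MiB; illustrative value
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0  # placeholder
```

A job submitted with `sbatch --mem-per-cpu=7G job.sh` (or `--mem=120G` for a whole-job requirement) will then only be placed on nodes that actually have that memory free, instead of landing wherever a CPU happens to be idle.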