In many HPC workloads, achieving the best end-to-end performance of an application or workflow depends on choosing the right technology to host your files during processing, and on configuring your network stack to perform optimally for MPI or other communication protocols. This module covers the options AWS offers in these areas and guides you through the price/performance considerations that can help you choose the right solution for each workload.

Topics covered:

  • Storage on AWS for HPC
  • Network scalability for HPC workloads

Overview of storage options on AWS: AWS provides many options for storage, ranging from high-performance object storage to several types of file systems that can be attached to an EC2 instance. Beyond raw performance, these storage types differ along multiple dimensions, including cost and scalability. The following table gives you some orientation for finding the right storage for each type of HPC data:

[Image: HPC_storage2 – comparison of storage options for HPC on AWS]
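For example, a common pattern is to keep datasets long term in Amazon S3 and stage them onto a file system only for the duration of a run. A minimal sketch with the AWS CLI (the bucket name and paths below are hypothetical):

# Stage input data from S3 to the shared file system before the run
aws s3 sync s3://my-hpc-bucket/case-001/input /shared/case-001/input

# ... run the job ...

# Copy results back to S3 so the file system can be freed or resized
aws s3 sync /shared/case-001/output s3://my-hpc-bucket/case-001/output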

Shared File Systems for HPC: Shared storage can be achieved in many ways, for example with a simple NFS mount of an EBS volume, with Intel Lustre assembled from EBS volumes, or with the managed AWS service Amazon EFS. As with instance types, it is easy to test storage options to find the most performant file system for your workload.
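As an illustration, an EFS file system can be mounted on a node with a standard NFSv4.1 mount; the file system ID and region below are hypothetical, and the mount options are the ones recommended in the EFS documentation:

# Mount an EFS file system under /efs using NFS v4.1
sudo mkdir -p /efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /efs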

Instance-attached storage: EBS volumes also come in a variety of options, ranging from Provisioned IOPS (high-IOPS) volumes to General Purpose and magnetic volumes. Many HPC applications run very well on the less expensive General Purpose and magnetic EBS volume types. As with instance selection, EBS volume selection is easy to test, allowing for an optimized solution.
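If you want to experiment with a different volume type for the shared EBS volume, this can be changed in the AWS ParallelCluster configuration. A minimal sketch, assuming ParallelCluster 2.x configuration syntax and showing only the sections relevant to the volume (section names and values are illustrative):

[cluster default]
ebs_settings = shared_io1

[ebs shared_io1]
# Provisioned IOPS SSD instead of the default gp2 volume
shared_dir = /shared
volume_type = io1
volume_size = 200
volume_iops = 3000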

Lab Storage configuration: The storage configuration used by the default EnginFrame automation is described as follows:

  • The integration scripts mount an EFS file system under /efs on the master and compute nodes – this file system contains a directory for applications and a spooler directory that, by default, hosts a separate submission directory for each of your jobs
  • AWS ParallelCluster also provides an EBS gp2 volume, which is attached to the master node and NFS mounted on the compute nodes as /shared
  • The /home directory from the master instance is also NFS mounted on the compute nodes. Because it resides on the same file system as the operating system, it is not recommended for persistent storage (a quick way to verify these mounts is sketched after this list)
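A quick way to check which file systems back these mount points on a node, for example from an interactive session on the master node (output will vary with your cluster configuration):

# Show which file systems back /efs, /shared and /home
df -hT /efs /shared /home

# Or list all NFS mounts on the node
mount | grep nfs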

The performance of these shared file systems can vary substantially from one workload to another. To understand which one works best for you, the best approach is to benchmark the same case on both /efs (the default location configured in EnginFrame) and /shared.
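Your real application run on each location is the most meaningful benchmark; a very rough synthetic comparison can be done with dd (block size, count and file names below are illustrative):

# Write a 1 GiB test file to each shared file system and compare the reported throughput
dd if=/dev/zero of=/efs/ddtest.bin bs=1M count=1024 conv=fsync
dd if=/dev/zero of=/shared/ddtest.bin bs=1M count=1024 conv=fsync

# Clean up the test files afterwards
rm -f /efs/ddtest.bin /shared/ddtest.bin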


Current AWS Networking: AWS currently supports Enhanced Networking capabilities using SR-IOV (Single Root I/O Virtualization). SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization compared to traditional implementations. For supported Amazon EC2 instances, this feature provides higher packet-per-second (PPS) performance, lower inter-instance latencies, and very low network jitter. It has been tested to perform well both for High Throughput Computing (HTC), or "embarrassingly parallel", applications and for tightly coupled HPC applications based on MPI and OpenMP.
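You can verify that enhanced networking is active on an instance by checking which network driver is in use; on supported instances it is typically ena or ixgbevf. A minimal sketch (the interface name eth0 and the instance ID are assumptions and will differ in your environment):

# On the instance: show the driver behind the primary network interface
ethtool -i eth0

# From a machine with the AWS CLI: check whether ENA support is enabled for the instance
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query "Reservations[].Instances[].EnaSupport"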

[Image: HPC_LearningPath-Networking]

Network speed depends on the instance type and size; for example, r4.16xlarge provides 20 Gigabit connectivity between instances when using the same placement group (a logical grouping of instances) and enhanced networking.
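If you want to measure the achievable bandwidth between two instances in the same placement group, a simple point-to-point test with iperf3 is one option; this assumes iperf3 is installed on both instances, and the private IP address below is hypothetical:

# On the first instance: start an iperf3 server
iperf3 -s

# On the second instance: run a multi-stream test against the first one for 30 seconds
iperf3 -c 10.0.0.10 -P 8 -t 30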

Lab networking configuration: By default, the lab creates a new placement group and requires that all the compute nodes of the cluster be launched in it. This gives you the lowest latency and highest bandwidth between your nodes, which is particularly relevant if you run MPI applications. If you have an HTC problem that scales horizontally to tens of thousands of cores or more (beyond the scope of this lab), you should consider running it across multiple placement groups to give EC2 more flexibility in allocating such a large number of nodes. You can disable the use of a fixed placement group by setting the following parameter in the AWS ParallelCluster configuration:

placement_group = NONE
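For reference, this parameter lives in the [cluster] section of the AWS ParallelCluster configuration. A minimal sketch, assuming ParallelCluster 2.x syntax; the exact values used by the lab automation may differ:

[cluster default]
# DYNAMIC lets ParallelCluster create a new placement group for the cluster,
# NONE disables it, or you can reference an existing placement group by name
placement_group = DYNAMIC
# "compute" places only the compute nodes in the placement group
placement = compute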

Tip: If you need to scale your cluster to a very large number of nodes, or you have high-performance storage requirements, it is a good idea to talk to your Technical Account Manager or to an HPC Solutions Architect, who can review your target architecture, help identify potential bottlenecks, and choose the right technologies for your specific goals.