AWS HPC Blog

Category: AWS ParallelCluster

Simulating 44-qubit quantum circuits using AWS ParallelCluster

A key part of developing quantum hardware and quantum algorithms is simulation on existing classical architectures using HPC techniques. In this blog post, we describe how to perform large-scale quantum circuit simulations using AWS ParallelCluster with QuEST, the Quantum Exact Simulation Toolkit. We demonstrate a simple and rapid deployment of compute resources, allocating as many as 4,096 c5.18xlarge EC2 instances to simulate a non-trivial 44-qubit random quantum circuit in under 3.5 hours.
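For scale context: a 44-qubit state vector holds 2^44 complex amplitudes, roughly 256 TiB at double precision, which is why the state must be distributed across thousands of instances. QuEST itself is driven from a small C program. Below is a minimal, illustrative sketch using the public QuEST API (not the circuit from the post); compiled against QuEST's MPI backend, the same code transparently distributes the state vector across ranks.

```c
/* Minimal QuEST sketch (illustrative circuit, not the one from the post).
 * Built against QuEST's MPI backend, this same program partitions the
 * 2^N-amplitude state vector across ranks. */
#include <stdio.h>
#include "QuEST.h"

int main(void) {
    QuESTEnv env = createQuESTEnv();   /* initializes MPI/OpenMP if enabled */
    int numQubits = 20;                /* 44 in the post, on 4,096 instances */
    Qureg qubits = createQureg(numQubits, env);
    initZeroState(qubits);

    /* a toy layer of gates */
    for (int q = 0; q < numQubits; q++)
        hadamard(qubits, q);
    for (int q = 0; q < numQubits - 1; q++)
        controlledNot(qubits, q, q + 1);

    qreal prob = calcProbOfOutcome(qubits, 0, 0);  /* P(qubit 0 measures 0) */
    printf("P(q0 = 0) = %f\n", (double) prob);

    destroyQureg(qubits, env);
    destroyQuESTEnv(env);
    return 0;
}
```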

Running large-scale CFD fire simulations on AWS for Amazon.com

In this blog post, we discuss the AWS solution that Amazon’s construction division used to conduct large-scale CFD fire simulations as part of their Fire Strategy solutions to demonstrate safety and fire mitigation strategies. We outline the five key steps that delivered simulation times 15-20x faster than the previous on-premises architecture, reducing the time to complete a simulation from up to twenty-one days to less than one day.

Expanded filesystems support in AWS ParallelCluster 3.2

AWS ParallelCluster version 3.2 introduces support for two new Amazon FSx filesystem types (NetApp ONTAP and OpenZFS). It also lifts the limit on the number of filesystem mounts you can have on your cluster. We’ll show you how it works and walk you through the details so you can get going right away.
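In ParallelCluster 3.2, the new filesystem types are attached by referencing an existing FSx volume from the SharedStorage section of the cluster configuration. A minimal sketch follows; the volume IDs and mount directories are placeholders you'd replace with your own.

```yaml
# Sketch of a ParallelCluster 3.2 SharedStorage section mounting the two
# new FSx types; VolumeId values are placeholders.
SharedStorage:
  - MountDir: /ontap
    Name: fsx-ontap-storage
    StorageType: FsxOntap
    FsxOntapSettings:
      VolumeId: fsvol-0123456789abcdef0   # existing FSx for NetApp ONTAP volume
  - MountDir: /openzfs
    Name: fsx-openzfs-storage
    StorageType: FsxOpenZfs
    FsxOpenZfsSettings:
      VolumeId: fsvol-0123456789abcdef1   # existing FSx for OpenZFS volume
```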

Running cost-effective GROMACS simulations using Amazon EC2 Spot Instances with AWS ParallelCluster

In this blog post, we cover how to run GROMACS – a popular open-source package designed for simulations of proteins, lipids, and nucleic acids – cost-effectively by leveraging EC2 Spot Instances within AWS ParallelCluster. We also show how to checkpoint GROMACS so it recovers gracefully from possible Spot Instance interruptions.
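The heart of the pattern is GROMACS' own restart machinery: gmx mdrun can write a checkpoint file at a regular interval and resume from it after an interruption. A minimal sketch with placeholder file names; in practice you'd pair this with Slurm job requeueing so interrupted jobs relaunch automatically.

```bash
# First launch: -cpt sets the checkpoint interval in minutes
gmx mdrun -s topol.tpr -deffnm md_run -cpt 5

# After a Spot interruption: -cpi resumes from the last checkpoint,
# -append continues writing to the existing output files
gmx mdrun -s topol.tpr -deffnm md_run -cpi md_run.cpt -append
```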

Introducing the Spack Rolling Binary Cache hosted on AWS

Today we’re excited to announce the availability of a new public Spack Binary Cache. Through a collaboration between AWS, E4S, Kitware, and Lawrence Livermore National Laboratory (LLNL), Spack users now have access to a public build cache hosted on Amazon S3. Using this binary cache can make installs of common Spack packages up to 20x faster.
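Using the cache from an existing Spack installation takes two commands: register the mirror, then trust its signing keys. A sketch follows; the mirror URL uses the binaries.spack.io host from the announcement, and the release path may need adjusting to match your Spack version.

```bash
# Register the public binary cache as a mirror (adjust the release
# path to match your Spack version)
spack mirror add binary_mirror https://binaries.spack.io/releases/v0.18

# Install and trust the keys the cached binaries are signed with
spack buildcache keys --install --trust

# Subsequent installs pull prebuilt binaries when a match exists
spack install gromacs
```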

Migrating to AWS ParallelCluster v3 – Updated CLI interactions

The AWS ParallelCluster version 3 CLI differs significantly from the ParallelCluster version 2 CLI. This post maps commands between the two versions to help you migrate to ParallelCluster 3. We also summarize the new CLI features in ParallelCluster 3 that unlock things you just couldn’t do previously.
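To give a flavor of the mapping: version 3 moves from version 2's terse subcommands and INI configuration to explicit noun-verb commands and a YAML configuration file. A few common equivalents, with placeholder cluster and file names:

```bash
# Create a cluster
pcluster create mycluster -c config.ini                # v2
pcluster create-cluster --cluster-name mycluster \
    --cluster-configuration config.yaml                # v3

# Check cluster status
pcluster status mycluster                              # v2
pcluster describe-cluster --cluster-name mycluster     # v3

# Delete a cluster
pcluster delete mycluster                              # v2
pcluster delete-cluster --cluster-name mycluster       # v3
```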

Choosing between AWS Batch or AWS ParallelCluster for your HPC Workloads

To say that AWS has a lot of services (more than 200 at the time of this post!) is an understatement. We’re usually the first to point out that there’s more than one way to solve a problem. HPC is no different in this regard, because we offer a choice: customers can run their HPC workloads using AWS […]