AWS HPC Blog

Category: High Performance Computing

Cost-optimization on Spot Instances using checkpoint for Ansys LS-DYNA

A major portion of the costs incurred for running Finite Element Analysis (FEA) workloads on AWS comes from the usage of Amazon EC2 instances. Amazon EC2 Spot Instances offer a cost-effective architectural choice, allowing you to take advantage of unused EC2 capacity at up to a 90% discount compared to On-Demand Instance prices. In this post, we describe how you can run fault-tolerant FEA workloads on Spot Instances using Ansys LS-DYNA’s checkpointing and auto-restart utility.
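The checkpoint-and-resume pattern the post describes can be sketched generically: poll the EC2 instance metadata service for the two-minute Spot interruption notice, and when one appears, ask the solver to write a restart dump. The snippet below is a minimal illustration, not Ansys’s actual utility; the d3kil path is a placeholder, and it assumes LS-DYNA’s sense-switch convention, in which writing “sw1.” to the d3kil file requests a restart dump and a clean stop.

```python
import time
import urllib.error
import urllib.request
from pathlib import Path

# This IMDS endpoint returns 200 only once a Spot interruption is
# scheduled (about two minutes before reclamation); 404 means none is pending.
INSTANCE_ACTION_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

# Placeholder path to the running LS-DYNA job's sense-switch file.
D3KIL = Path("/fsx/job/d3kil")

def interruption_pending() -> bool:
    try:
        with urllib.request.urlopen(INSTANCE_ACTION_URL, timeout=2):
            return True              # 200: reclamation notice issued
    except urllib.error.HTTPError:
        return False                 # 404: nothing scheduled yet
    except urllib.error.URLError:
        return False                 # metadata service unreachable

while not interruption_pending():
    time.sleep(5)                    # poll well inside the two-minute warning

# "sw1." asks LS-DYNA to write a restart dump and stop cleanly, so the
# job can be resumed from the checkpoint on a replacement instance.
D3KIL.write_text("sw1.\n")
```

On a replacement instance, the job would then be restarted from the dump file using LS-DYNA’s restart capability, which is what the auto-restart utility automates.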

Quantum Chemistry Calculation with FHI-aims code on AWS

This article was contributed by Dr. Fabio Baruffa, Sr. HPC and QC Solutions Architect at AWS, and Dr. Jesús Pérez Ríos, Group Leader at the Fritz Haber Institute, Max-Planck Society. Quantum chemistry – the study of the inherently quantum interactions between atoms forming part of molecules – is a cornerstone of modern chemistry. […]

Virtual Screening of Novel Active Drug Compounds on AWS with Orion®

Computer-aided drug discovery (CADD) has been a key player in lowering the cost and speeding up the timeline for drug development. CADD uses high performance computing (HPC) resources to virtually screen databases with billions of molecules. It can speed up the search for potential drug molecules and filter out compounds that are unsuitable. OpenEye Scientific developed Orion®, a cloud-based molecular design platform for CADD. Orion provides computational chemists with virtually unlimited HPC resources, along with data visualization, collaboration, and workflow management tools that help them perform calculations more efficiently. In this post, we describe the Orion architecture on AWS and its capabilities to address the challenges in drug development.
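Orion itself is proprietary, but the kind of filtering step a virtual screen applies is easy to illustrate. The sketch below uses the open-source RDKit (not Orion’s API) to discard molecules that fail Lipinski’s rule of five, a common cheap first pass before more expensive docking calculations.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(smiles: str) -> bool:
    """Rule-of-five filter: a cheap first pass to discard molecules
    unlikely to be orally bioavailable."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                  # unparsable SMILES string
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

candidates = [
    "CC(=O)Oc1ccccc1C(=O)O",        # aspirin: passes
    "CCCCCCCCCCCCCCCCCCCCCCCC",     # waxy alkane: fails on logP
]
hits = [s for s in candidates if passes_lipinski(s)]
print(hits)
```

At billion-molecule scale, the point of a platform like Orion is to fan this kind of per-molecule test out across elastic cloud compute rather than run it serially.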

New: Introducing AWS ParallelCluster 3

Running HPC workloads, like computational fluid dynamics (CFD), molecular dynamics, or weather forecasting, typically involves a lot of moving parts. You need hundreds or thousands of compute cores, a job scheduler to keep them fed, a shared file system that’s tuned for throughput or IOPS (or both), loads of libraries, a fast network, and […]

Supporting climate model simulations to accelerate climate science

Through the Amazon Sustainability Data Initiative (ASDI), AWS is donating cloud resources, technical support, and access to scalable infrastructure and fast networking to provide high performance computing solutions supporting simulations of near-term climate using the National Center for Atmospheric Research (NCAR) Community Earth System Model Version 2 (CESM2) and its Whole Atmosphere Community Climate Model (WACCM). In collaboration with ASDI, AWS, and SilverLining, a nonprofit dedicated to ensuring a safe climate, NCAR will run an ensemble of 30 climate-model simulations on AWS. The runs will simulate the Earth system over the years 2022-2070 under a median warming scenario, and the results will be made available through the AWS Open Data Program. The simulation work will demonstrate the ability of cloud infrastructure to advance climate models in support of robust scientific studies by researchers around the world, and aims to accelerate and democratize climate science.

High Burst CPU Compute for Monte Carlo Simulations on AWS

Playtech mathematicians and game designers need accurate, detailed game play simulation results to create fun experiences for players. While software developers have been able to iterate on code in an agile manner for many years, for non-analytical solutions mathematicians have had to rely on slow, CPU-bound Monte Carlo simulations, waiting, as software engineers once did, many hours or overnight for the results of their latest changes. These statistics are also required as evidence of game fairness in the highly regulated online gaming business. Playtech has developed a serverless solution based on AWS Lambda that provides massive burst compute capacity, allowing game simulations to finish in minutes rather than hours. This post goes into the details of the architecture, as well as some examples of using the system in our development and operations.
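A minimal sketch of the fan-out pattern such a solution relies on, using boto3 (the function name and payload shape are hypothetical, not Playtech’s actual code): each asynchronous Lambda invocation simulates an equal slice of the total spins, so wall-clock time collapses to roughly one worker’s share.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical Lambda function that runs `spins` game rounds with the
# given RNG seed and records its tally for later aggregation.
FUNCTION_NAME = "game-simulation-worker"

TOTAL_SPINS = 100_000_000
WORKERS = 1_000

# Fan out: fire one async invocation per worker, each with its own
# seed so the Monte Carlo streams are independent.
for seed in range(WORKERS):
    lambda_client.invoke(
        FunctionName=FUNCTION_NAME,
        InvocationType="Event",      # asynchronous, fire-and-forget
        Payload=json.dumps({"seed": seed,
                            "spins": TOTAL_SPINS // WORKERS}),
    )
```

A production system would add result aggregation plus retry and throttling handling around this loop; the sketch shows only the burst-compute fan-out itself.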

Stion – a Software as a Service for Cryo-EM data processing on AWS

This post was written by Swapnil Bhatkar, Cloud Engineer, NREL, in collaboration with Edward Eng, Ph.D., and Micah Rapp, Ph.D., both SEMC/NYSBC, and Evan Bollig, Ph.D., and Aniket Deshpande, both AWS. Cryo-electron microscopy (Cryo-EM) technology allows biomedical researchers to image frozen biological molecules, such as proteins, viruses and nucleic acids, and obtain structures of […]

Price-Performance Analysis of Amazon EC2 GPU Instance Types using NVIDIA’s GPU optimized seismic code

Seismic imaging is the process of positioning the Earth’s subsurface reflectors. It transforms seismic data recorded in time at the Earth’s surface into an image of the Earth’s subsurface by back-propagating the data from time to space in a given velocity model. Kirchhoff depth migration is a well-known technique used in geophysics for seismic imaging. Kirchhoff time and depth migration produce a high-resolution image of the subsurface, even for a subset class of the data, providing valuable information about the petrophysical properties of the rocks and helping to determine how accurate the velocity model is. This blog post looks at the price-performance characteristics of computing Kirchhoff migration on GPUs using NVIDIA’s GPU-optimized code.
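The heart of Kirchhoff migration is a diffraction summation: each image point accumulates the recorded amplitude found at its computed traveltime on every trace. The toy NumPy sketch below (constant velocity, zero offset, nothing like NVIDIA’s optimized kernels) makes that summation concrete; the GPU code parallelizes this loop over image points.

```python
import numpy as np

def kirchhoff_migrate(data, rec_x, dt, v, img_x, img_z):
    """Toy zero-offset, constant-velocity Kirchhoff migration.
    data:  (ntraces, nt) recorded amplitudes
    rec_x: (ntraces,) receiver x-positions
    """
    image = np.zeros((len(img_z), len(img_x)))
    nt = data.shape[1]
    for ix, x in enumerate(img_x):
        for iz, z in enumerate(img_z):
            # two-way traveltime from each receiver down to the image point
            t = 2.0 * np.hypot(rec_x - x, z) / v
            it = np.round(t / dt).astype(int)
            ok = it < nt             # ignore arrivals beyond the record
            image[iz, ix] = data[ok, it[ok]].sum()
    return image

# e.g. migrate a record sampled at 2 ms with v = 2000 m/s:
# image = kirchhoff_migrate(data, rec_x, dt=0.002, v=2000.0,
#                           img_x=np.linspace(0, 1000, 101),
#                           img_z=np.linspace(10, 500, 50))
```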

Bare metal performance with the AWS Nitro System

High Performance Computing (HPC) is known as a domain where applications are well-optimized to get the highest performance possible on a platform. Unsurprisingly, a common question when moving a workload to AWS is what performance difference there may be from an existing on-premises “bare metal” platform. This post shows that the performance differential between “bare metal” instances and instances that use the AWS Nitro hypervisor is negligible for the evaluated HPC workloads.