AWS HPC Blog

Category: Technical How-to

Scalable and Cost-Effective Batch Processing for ML Workloads with AWS Batch and Amazon FSx

Batch processing is a common need across varied machine learning use cases such as video production, financial modeling, drug discovery, and genomic research. The elasticity of the cloud provides efficient ways to scale and simplify batch processing workloads while cutting costs. In this post, you’ll learn a scalable and cost-effective approach to configuring AWS Batch array jobs to process datasets stored in Amazon S3 and presented to compute instances with Amazon FSx for Lustre.
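To make the pattern concrete, here is a minimal sketch of submitting such an array job with boto3. The job queue, job definition, shard layout, and the /fsx mount path are illustrative assumptions, not values from the post; the FSx for Lustre file system (linked to the S3 bucket) is assumed to already be mounted on the compute environment's instances, for example via a launch template.

```python
import boto3

batch = boto3.client("batch")

# Submit an array job: AWS Batch fans this out into 100 child jobs.
response = batch.submit_job(
    jobName="process-dataset",
    jobQueue="ml-batch-queue",        # hypothetical job queue
    jobDefinition="shard-processor",  # hypothetical job definition
    arrayProperties={"size": 100},    # one child job per data shard
    containerOverrides={
        # Each child reads AWS_BATCH_JOB_ARRAY_INDEX (0-99) at runtime
        # and processes the matching shard under the FSx for Lustre
        # mount, e.g. /fsx/shards/<index>.
        "command": ["python", "process.py", "--shard-dir", "/fsx/shards"],
    },
)
print("Submitted array job:", response["jobId"])
```

Each child job receives its position through the AWS_BATCH_JOB_ARRAY_INDEX environment variable, which is what lets a single job definition be reused across every shard of the dataset.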

A VDI solution with EnginFrame and NICE DCV Session Manager built with AWS CDK

This post was written by Dario La Porta, AWS Professional Services Senior Consultant for HPC. Customers across a wide range of industries such as energy, life sciences, and design and engineering are facing challenges in managing and analyzing their data. Not only are the amount and velocity of data increasing, but so is the complexity and […]

Introducing support for per-job Amazon EFS volumes in AWS Batch

Large-scale data analysis usually involves a multi-step process in which the output of one job becomes the input to subsequent jobs. Customers using AWS Batch for data analysis want a simple, performant storage solution for sharing data with, and between, jobs. We are excited to announce that customers can now use Amazon Elastic File System (Amazon […]
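As a rough illustration of what the feature looks like in practice, the sketch below registers a job definition whose container mounts an EFS file system; the file system ID, container image, and paths are placeholders, not values from the post.

```python
import boto3

batch = boto3.client("batch")

# Register a job definition whose container mounts an Amazon EFS volume,
# so successive jobs can read each other's output from shared storage.
batch.register_job_definition(
    jobDefinitionName="efs-shared-job",  # hypothetical name
    type="container",
    containerProperties={
        "image": "public.ecr.aws/amazonlinux/amazonlinux:2",
        "resourceRequirements": [
            {"type": "VCPU", "value": "1"},
            {"type": "MEMORY", "value": "2048"},
        ],
        "command": ["ls", "/mnt/efs"],
        "volumes": [
            {
                "name": "efs-volume",
                "efsVolumeConfiguration": {
                    "fileSystemId": "fs-0123456789abcdef0",  # placeholder
                    "transitEncryption": "ENABLED",
                },
            }
        ],
        "mountPoints": [
            {
                "sourceVolume": "efs-volume",
                "containerPath": "/mnt/efs",
                "readOnly": False,
            }
        ],
    },
)
```

Because the volume is declared in the job definition, different jobs in a pipeline can mount the same file system and hand data from one step to the next without copying it.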

Simplify HPC cluster usage with AWS Cloud9 and AWS ParallelCluster

This post was written by Benjamin Meyer, AWS Solutions Architect. When companies and labs start their high performance computing (HPC) journey in the AWS Cloud, it’s not only because they’re in search of elasticity and scale; they’re also in search of new tools and environments. Initially, this can appear challenging, as there are many […]