Tag: AWS Batch
Since launch, EFA has seen continuous improvements in performance. In this post, we talk about our second generation of EFA, which takes another step toward improving machine learning and high performance computing in the cloud.
Pay-as-you-go resources are compelling, but budget-limited researchers performing HPC workloads need help working within the bounds of their grants. In this post, we show how to build a real-time cost guardian for AWS Batch to help enforce those limits.
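To make the idea concrete, here is a minimal sketch of what such a guardian could look like using boto3; the budget value and compute environment name are hypothetical, and the post's actual implementation may differ:

```python
# Hypothetical sketch (not the post's exact implementation): check month-to-date
# spend with Cost Explorer and, if it exceeds the grant budget, disable the
# Batch compute environment so no new jobs are scheduled.
from datetime import date

import boto3

BUDGET_USD = 500.0                    # assumed monthly grant limit
COMPUTE_ENV = "my-batch-compute-env"  # hypothetical compute environment name

ce = boto3.client("ce")
batch = boto3.client("batch")

def month_to_date_spend() -> float:
    """Return unblended month-to-date cost in USD from Cost Explorer."""
    today = date.today()
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": today.replace(day=1).isoformat(),
                    "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    return float(resp["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])

if month_to_date_spend() > BUDGET_USD:
    # Running jobs finish, but the environment stops launching new instances.
    batch.update_compute_environment(computeEnvironment=COMPUTE_ENV, state="DISABLED")
```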
Today, we discuss AWS Batch on Amazon EKS: the initial motivation and design choices the team made when developing the service, and some of the challenges we had to overcome.
Today we are excited to announce that all 9,000+ applications provided by the BioContainers community are available in the Amazon ECR Public Gallery! You don’t need an AWS account to access these images, but having one allows for many more pulls to the internet, and unmetered usage from within AWS. If you perform any sort of bioinformatics analysis on AWS, you should check it out!
In this blog post, we demonstrate how to use the AWS Genomics CLI and Amazon SageMaker to analyze large-scale exome sequences and derive meaningful insights. We use the bioinformatics workflow manager Nextflow, its open-source library of pipelines, nf-core, and AWS Batch.
In this blog post, we’ll show how you can run NVIDIA Parabricks on AWS Batch using AWS CloudFormation templates. Parabricks is a GPU-accelerated tool for secondary genomic analysis. It reduces the runtime of variant calling on a 30x human genome from 30 hours to just 30 minutes, and leverages AWS Batch to provide an interface that scales compute jobs across multiple instances in the cloud.
Batch processing is a common need across varied machine learning use cases such as video production, financial modeling, drug discovery, or genomic research. The elasticity of the cloud provides efficient ways to scale and simplify batch processing workloads while cutting costs. In this post, you’ll learn a scalable and cost-effective approach to configure AWS Batch Array jobs to process datasets that are stored on Amazon S3 and presented to compute instances with Amazon FSx for Lustre.
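As a rough illustration of the pattern (the queue and job definition names below are placeholders, not from the post), an array job fans a single submission out into many child jobs, each of which selects its own shard of the dataset from the shared file system:

```python
# A minimal sketch of submitting an AWS Batch array job where each child job
# processes one shard of an S3-backed dataset exposed to the compute instances
# through an FSx for Lustre mount.
import boto3

batch = boto3.client("batch")

NUM_SHARDS = 100  # one child job per shard/prefix of the dataset

resp = batch.submit_job(
    jobName="process-dataset",
    jobQueue="my-job-queue",               # hypothetical job queue
    jobDefinition="my-job-definition",     # hypothetical job definition
    arrayProperties={"size": NUM_SHARDS},  # fan out into NUM_SHARDS child jobs
)
print("Submitted array job:", resp["jobId"])

# Inside the container, each child job reads AWS_BATCH_JOB_ARRAY_INDEX
# (0..size-1) to pick its shard, e.g. /fsx/input/shard-<index>/ on the mount.
```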
A customer asked us what the difference is between the CancelJob and TerminateJob API calls in AWS Batch. This post provides an overview of AWS Batch job states, and how these two API calls affect the job requests that you have submitted.
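For reference, here is a quick boto3 sketch of the two calls (the job IDs are placeholders): CancelJob only removes jobs that haven't started running, while TerminateJob also stops jobs that are already running:

```python
# CancelJob cancels jobs that are still in the SUBMITTED, PENDING, or RUNNABLE
# state; jobs that have progressed to STARTING or RUNNING are left alone.
# TerminateJob stops STARTING or RUNNING jobs, and cancels queued ones.
import boto3

batch = boto3.client("batch")

# Cancels the job only if it is still waiting in the queue.
batch.cancel_job(jobId="11111111-2222-3333-4444-555555555555",
                 reason="No longer needed")

# Cancels the job if queued, or stops it if it is already running.
batch.terminate_job(jobId="66666666-7777-8888-9999-000000000000",
                    reason="Stopping a runaway job")
```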
Large-scale data analysis usually involves a multi-step process where the output of one job acts as the input of subsequent jobs. Customers using AWS Batch for data analysis want a simple and performant storage solution for sharing data with and between jobs. We are excited to announce that customers can now use Amazon Elastic File System (Amazon […]
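As a hedged sketch of how this fits together (all names and IDs below are placeholders), a Batch job definition can mount an EFS file system so that jobs share data through a common container path:

```python
# A minimal sketch of registering a job definition whose container mounts an
# Amazon EFS file system, letting multiple Batch jobs read and write shared
# data under the same mount path.
import boto3

batch = boto3.client("batch")

batch.register_job_definition(
    jobDefinitionName="shared-efs-job",
    type="container",
    containerProperties={
        "image": "public.ecr.aws/amazonlinux/amazonlinux:latest",
        "vcpus": 1,
        "memory": 2048,
        "command": ["ls", "/mnt/efs"],
        "volumes": [{
            "name": "shared-data",
            "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},
        }],
        "mountPoints": [{
            "sourceVolume": "shared-data",
            "containerPath": "/mnt/efs",
            "readOnly": False,
        }],
    },
)
```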
This post is written by Deepak Singh, Vice President of Compute Services. At AWS, we love working with customers to solve their toughest challenges. High performance computing (HPC) is one of those challenges that pushes against the boundaries of AWS performance at scale. HPC is also a personal interest of mine, as I came to […]