AWS Batch now supports GPU scheduling for accelerating batch jobs

Posted on: Apr 4, 2019

AWS customers can now seamlessly accelerate their High Performance Computing (HPC), machine learning, and other batch jobs through AWS Batch by specifying the number of GPUs each job requires. Starting today, you can use AWS Batch to specify the number and type of accelerators your jobs require as job definition input variables, alongside the current options of vCPU and memory. AWS Batch will scale up instances appropriate for your jobs based on the required number of GPUs and isolate the accelerators according to each job’s needs, so only the appropriate containers can access them.
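
As an illustration, a GPU requirement is expressed through the resourceRequirements field of a job definition's container properties. The sketch below registers such a definition with boto3; the job definition name, container image, command, and resource sizes are illustrative assumptions, not values from this announcement.

```python
import boto3

batch = boto3.client("batch")

# Register a job definition that requests one GPU alongside vCPU and memory.
# The name, image, command, and sizes below are placeholder values.
response = batch.register_job_definition(
    jobDefinitionName="gpu-training-job",  # hypothetical name
    type="container",
    containerProperties={
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/train:latest",  # placeholder image
        "vcpus": 8,
        "memory": 32000,  # MiB
        "command": ["python", "train.py"],
        "resourceRequirements": [
            {"type": "GPU", "value": "1"}  # number of GPUs the job requires
        ],
    },
)
print(response["jobDefinitionArn"])
```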

Hardware-based compute accelerators such as Graphics Processing Units (GPUs) enable users to increase application throughput and decrease latency with purpose-built hardware. Until now, AWS Batch users who wanted to take advantage of accelerators had to build a custom AMI, install the appropriate drivers, and have AWS Batch scale GPU-accelerated EC2 P-type instances based on their vCPU and memory characteristics alone. Now, customers can simply specify the desired number and type of GPUs, just as they specify vCPU and memory, and Batch will launch the EC2 P-type instances needed to run the jobs. Additionally, Batch isolates the GPUs to the container, so each container gets only the resources it needs.
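
Jobs are then submitted against that definition in the usual way, and the GPU count can be adjusted per job through container overrides. The snippet below is a minimal sketch that assumes a job queue named "gpu-queue", backed by a compute environment able to launch GPU-capable P-type instances, already exists; the job and queue names are placeholders.

```python
import boto3

batch = boto3.client("batch")

# Submit a job against the GPU job definition registered above.
# The queue name is a placeholder; it must map to a compute environment
# that can launch GPU-capable (e.g., P-type) instances.
job = batch.submit_job(
    jobName="train-run-001",      # hypothetical job name
    jobQueue="gpu-queue",         # placeholder queue
    jobDefinition="gpu-training-job",
    containerOverrides={
        # Optionally override the GPU count requested in the job definition.
        "resourceRequirements": [{"type": "GPU", "value": "2"}]
    },
)
print(job["jobId"])
```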

Learn more about GPU support on AWS Batch here.