Posted On: Apr 13, 2021
AWS Batch has improved a variety of performance characteristics, including job scheduling speed for EC2 Compute Environments, EC2 instance scaling reactivity, and the rate limits of most AWS Batch APIs.
AWS Batch is a cloud-native batch scheduler that enables anyone - from enterprises to scientists and developers - to easily and efficiently run batch jobs on AWS. Whether you have a few jobs or hundreds of thousands, AWS Batch dynamically provisions the optimal quantity and type of compute resources based on the volume and specific resource requirements of the work you submit. With AWS Batch, there is no need to install and manage batch computing software or the server clusters used to run your jobs, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across AWS compute services and features, such as AWS Fargate, Amazon EC2, and Spot Instances.
AWS Batch is now up to 5x faster when scaling a managed EC2 Compute Environment. This means that when Batch needs to either scale up EC2 instances in response to new jobs, or scale down instances once jobs are complete, Batch makes the decision much more quickly. Faster decisions mean increased throughput and reactivity on scale-up, and reduced costs on scale-down. Additionally, AWS Batch is up to 2x faster at dispatching jobs from an AWS Batch Job Queue to an EC2 or EC2 Spot Compute Environment, and limits on a variety of AWS Batch APIs, including SubmitJob and TerminateJob, are up to 1.7x higher. With these improvements, customers can submit more work to Batch, which will handle jobs with speed, scale, and responsiveness.
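To illustrate the SubmitJob API whose limits were raised, here is a minimal sketch of submitting a batch of jobs with boto3. The queue and job definition names are hypothetical placeholders; the `client` parameter is an assumption added for testability and is not part of the AWS API.

```python
def submit_jobs(count, queue="my-job-queue", definition="my-job-def:1", client=None):
    """Submit `count` jobs to an AWS Batch job queue and return their job IDs.

    Each call maps to one SubmitJob API request; higher API rate limits mean
    loops like this can push more jobs per second before being throttled.
    """
    if client is None:
        # Lazy import so the function can also be exercised with a stub client.
        import boto3
        client = boto3.client("batch")
    job_ids = []
    for i in range(count):
        resp = client.submit_job(
            jobName=f"analysis-{i}",       # hypothetical job name
            jobQueue=queue,                # must reference an existing job queue
            jobDefinition=definition,      # name:revision of a job definition
        )
        job_ids.append(resp["jobId"])
    return job_ids
```

In practice you would also pass `containerOverrides` or array-job parameters to `submit_job` as needed; this sketch shows only the required arguments.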