AWS Batch Documentation
With AWS Batch, you can package the code for your batch jobs, specify their dependencies, and submit your batch jobs using the AWS Management Console, CLIs, or SDKs. AWS Batch enables you to specify execution parameters and job dependencies and is designed to integrate with a range of batch computing workflow engines and languages.
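As a minimal sketch of submitting a job with an SDK, the dict below mirrors the parameters of the AWS Batch SubmitJob API; the job, queue, and job definition names are hypothetical examples, and with boto3 you would pass the dict to `batch_client.submit_job(**submit_params)`.

```python
# Sketch: building a SubmitJob request payload for AWS Batch.
# All names below are hypothetical; with boto3 you would call
# batch_client.submit_job(**submit_params).

def build_submit_job_params(job_name, job_queue, job_definition, command=None):
    """Build a SubmitJob request payload for a containerized batch job."""
    params = {
        "jobName": job_name,             # identifies this run of the job
        "jobQueue": job_queue,           # queue the scheduler pulls from
        "jobDefinition": job_definition, # name or ARN of a registered job definition
    }
    if command:
        # containerOverrides lets a submission override the registered command
        params["containerOverrides"] = {"command": command}
    return params

submit_params = build_submit_job_params(
    "nightly-report",       # hypothetical job name
    "high-priority-queue",  # hypothetical queue name
    "report-job-def:3",     # hypothetical job definition revision
    command=["python", "report.py"],
)
```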
Multi-container jobs
The multi-container jobs feature lets you run multiple containers in a single job, which helps you run simulations when testing complex systems.
Multi-node parallel jobs
AWS Batch also supports multi-node parallel jobs, so you can run single jobs that span multiple EC2 instances.
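As a sketch, a multi-node parallel job is described by node properties in its job definition; the names, node counts, and container image below are hypothetical, and with boto3 the dict would be passed to `batch_client.register_job_definition(**mnp_job_definition)`.

```python
# Sketch: a multi-node parallel (MNP) job definition payload for AWS Batch.
# Names, counts, and the image are hypothetical examples; with boto3 you would
# pass this dict to batch_client.register_job_definition().

mnp_job_definition = {
    "jobDefinitionName": "mpi-simulation",  # hypothetical name
    "type": "multinode",                    # marks this as a multi-node parallel job
    "nodeProperties": {
        "numNodes": 4,   # total EC2 instances the single job spans
        "mainNode": 0,   # index of the node that coordinates the job
        "nodeRangeProperties": [
            {
                # "0:" targets every node from index 0 onward
                "targetNodes": "0:",
                "container": {
                    "image": "example.com/mpi-sim:latest",  # hypothetical image
                    "resourceRequirements": [
                        {"type": "VCPU", "value": "8"},
                        {"type": "MEMORY", "value": "16384"},  # MiB
                    ],
                },
            }
        ],
    },
}
```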
Job definitions and job dependency modeling
With AWS Batch, you can specify resource requirements that define how jobs are run. AWS Batch is designed to run your jobs as containerized applications. You can also define dependencies between different jobs.
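As a sketch of both ideas, the payloads below register a containerized job definition with resource requirements and then submit a job that depends on another; the names and the job ID are hypothetical, and with boto3 they would go to `register_job_definition()` and `submit_job()` respectively.

```python
# Sketch: a containerized job definition with resource requirements, plus a
# job submission that declares a dependency. Names and the job ID below are
# hypothetical examples.

job_definition = {
    "jobDefinitionName": "transform-data",  # hypothetical
    "type": "container",
    "containerProperties": {
        "image": "public.ecr.aws/docker/library/python:3.11",
        "command": ["python", "transform.py"],
        # Resource requirements define how the job is run.
        "resourceRequirements": [
            {"type": "VCPU", "value": "2"},
            {"type": "MEMORY", "value": "4096"},  # MiB
        ],
    },
}

# A job that waits for an earlier job to finish before it starts.
dependent_job = {
    "jobName": "load-results",    # hypothetical
    "jobQueue": "default-queue",  # hypothetical
    "jobDefinition": "transform-data",
    # dependsOn lists the job IDs this job waits for (hypothetical ID).
    "dependsOn": [{"jobId": "11111111-2222-3333-4444-555555555555"}],
}
```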
Integrations
Support for workflow engines
AWS Batch is designed to be integrated with commercial and open-source workflow engines.
Integrated monitoring and logging
AWS Batch is designed to display operational metrics for your batch jobs in the AWS Management Console. You can view metrics related to compute capacity, as well as metrics for running, pending, and completed jobs. Logs for your jobs are designed to be available in the console.
Scheduling
Job scheduling
AWS Batch is designed to let you set up multiple queues with different priority levels. Jobs are stored in a queue until compute resources are available to execute them. The AWS Batch scheduler is designed to evaluate when, where, and how to run jobs that have been submitted to a queue based on the resource requirements of each job. The scheduler evaluates the priority of each queue and runs jobs in priority order on optimal compute resources (for example, memory-optimized versus CPU-optimized instances), as long as those jobs have no outstanding dependencies.
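As a sketch of priority-based queues, the payloads below define two queues with different priorities attached to the same compute environment; the queue names and compute environment ARN are hypothetical, and with boto3 each dict would be passed to `batch_client.create_job_queue()`.

```python
# Sketch: two AWS Batch job queues with different priority levels sharing one
# compute environment. The names and ARN below are hypothetical examples.

COMPUTE_ENV_ARN = (
    "arn:aws:batch:us-east-1:123456789012:compute-environment/default-ce"  # hypothetical
)

def build_queue(name, priority):
    """Higher `priority` values are scheduled ahead of lower ones."""
    return {
        "jobQueueName": name,
        "state": "ENABLED",
        "priority": priority,
        # The scheduler tries the attached compute environments in `order`.
        "computeEnvironmentOrder": [
            {"order": 1, "computeEnvironment": COMPUTE_ENV_ARN},
        ],
    }

high_priority = build_queue("high-priority-queue", 10)
low_priority = build_queue("low-priority-queue", 1)
```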
Support for GPU scheduling
GPU scheduling is designed to allow you to specify the number and type of accelerators your jobs require as job definition input variables in AWS Batch. AWS Batch is designed to help scale up instances appropriate for your jobs based on the required number of GPUs and isolate the accelerators according to each job’s needs.
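As a sketch, the number of accelerators a job requires is expressed as a GPU entry in the job definition's resource requirements; the name, image, and counts below are hypothetical examples.

```python
# Sketch: declaring GPU requirements in an AWS Batch job definition. The GPU
# resourceRequirements entry tells AWS Batch how many accelerators the job
# needs. The name, image, and values below are hypothetical examples.

gpu_job_definition = {
    "jobDefinitionName": "train-model",  # hypothetical
    "type": "container",
    "containerProperties": {
        "image": "example.com/training:latest",  # hypothetical training image
        "command": ["python", "train.py"],
        "resourceRequirements": [
            {"type": "GPU", "value": "2"},         # number of accelerators required
            {"type": "VCPU", "value": "8"},
            {"type": "MEMORY", "value": "32768"},  # MiB
        ],
    },
}

# Convenience lookup of the declared GPU count.
gpu_count = next(
    int(r["value"])
    for r in gpu_job_definition["containerProperties"]["resourceRequirements"]
    if r["type"] == "GPU"
)
```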
Allocation strategies
AWS Batch enables customers to choose how to allocate compute resources.
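As a sketch, the allocation strategy is chosen on a managed compute environment; the subnet, role, and environment name below are hypothetical placeholders, and with boto3 the dict would be passed to `batch_client.create_compute_environment()`.

```python
# Sketch: selecting an allocation strategy on a managed Spot compute
# environment. Strategies include BEST_FIT, BEST_FIT_PROGRESSIVE,
# SPOT_CAPACITY_OPTIMIZED, and SPOT_PRICE_CAPACITY_OPTIMIZED. The name,
# subnet ID, and instance role below are hypothetical examples.

spot_compute_environment = {
    "computeEnvironmentName": "spot-ce",  # hypothetical
    "type": "MANAGED",
    "computeResources": {
        "type": "SPOT",
        # Prefer Spot capacity pools that are least likely to be interrupted.
        "allocationStrategy": "SPOT_CAPACITY_OPTIMIZED",
        "minvCpus": 0,
        "maxvCpus": 256,
        "instanceTypes": ["optimal"],       # let AWS Batch pick instance families
        "subnets": ["subnet-aaaa1111"],     # hypothetical subnet ID
        "instanceRole": "ecsInstanceRole",  # hypothetical instance profile
    },
}
```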
Additional Information
For additional information about service controls, security features and functionalities, including, as applicable, information about storing, retrieving, modifying, restricting, and deleting data, please see https://docs.aws.amazon.com/index.html. This additional information does not form part of the Documentation for purposes of the AWS Customer Agreement available at http://aws.amazon.com/agreement, or other agreement between you and AWS governing your use of AWS’s services.