Posted On: Feb 4, 2019
Today Amazon Elastic Container Service (ECS) announced enhanced support for running machine learning and high performance computing applications on EC2 GPU instances. ECS task definitions now allow you to designate a number of GPUs to assign to particular containers, which ECS will pin accordingly for workload isolation and optimal performance.
Previously, to leverage GPUs on ECS, you had to bring your own custom-configured AMI and use custom vCPU placement logic as a proxy for assigning physical GPUs to specific containers, with no pinning or isolation. Now, you can use the ECS GPU-optimized AMI on p2 and p3 instances, which comes with pre-configured NVIDIA kernel drivers, a Docker GPU runtime, and a default version of CUDA. Task definitions let you designate the number of GPUs to assign to a specific container, which ECS uses as a scheduling mechanism. As your containers are placed on these instances, ECS pins physical GPUs to the desired containers for workload isolation and optimal performance.
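For example, a task definition that reserves one GPU for a container can be registered with the AWS SDK for Python (Boto3) along these lines. This is a minimal sketch: the task family, container name, image URI, and CPU/memory sizes are placeholders, not values from this announcement.

```python
import boto3

# Sketch: register an ECS task definition that requests one GPU for a container.
# Family, container name, image, and resource sizes below are illustrative only.
ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.register_task_definition(
    family="gpu-training-task",           # hypothetical task family
    requiresCompatibilities=["EC2"],      # GPU support applies to the EC2 launch type
    containerDefinitions=[
        {
            "name": "trainer",            # hypothetical container name
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/trainer:latest",
            "cpu": 2048,
            "memory": 8192,
            "resourceRequirements": [
                # Ask ECS to reserve and pin one physical GPU to this container.
                {"type": "GPU", "value": "1"}
            ],
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```

When a task from this definition is placed on a p2 or p3 container instance running the ECS GPU-optimized AMI, the scheduler only selects instances with enough unreserved GPUs and pins the requested GPU to the container.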
To learn more, read our blog or check out our documentation. To see where ECS is available, please check the region table.