Join AWS at NVIDIA GTC 21, April 12–16
Starting Monday, April 12, 2021, the NVIDIA GPU Technology Conference (GTC) is offering online sessions where you can learn AWS best practices to accomplish your machine learning (ML), virtual workstation, high performance computing (HPC), and Internet of Things (IoT) goals faster and more easily.
Amazon Elastic Compute Cloud (Amazon EC2) instances powered by NVIDIA GPUs deliver the scalable performance needed for fast ML training, cost-effective ML inference, flexible remote virtual workstations, and demanding HPC workloads. At the edge, you can use AWS IoT Greengrass and Amazon SageMaker Neo to extend a wide range of AWS Cloud services and ML inference to NVIDIA-based edge devices so the devices can act locally on the data they generate.
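As a rough illustration of that edge workflow, the sketch below compiles a trained model with SageMaker Neo for an NVIDIA Jetson target using the boto3 SageMaker API. It assumes a PyTorch model artifact already sits in Amazon S3; the bucket names, model path, IAM role ARN, and job name are placeholders, not values from this post.

```python
# Minimal sketch: start a SageMaker Neo compilation job that targets an
# NVIDIA Jetson AGX Xavier device. All S3 paths and the IAM role ARN are
# placeholders for illustration only.
import boto3

sm = boto3.client("sagemaker", region_name="us-west-2")

sm.create_compilation_job(
    CompilationJobName="resnet50-jetson-xavier-demo",          # placeholder job name
    RoleArn="arn:aws:iam::123456789012:role/SageMakerNeoRole",  # placeholder role
    InputConfig={
        "S3Uri": "s3://my-example-bucket/models/resnet50/model.tar.gz",  # placeholder artifact
        "DataInputConfig": '{"input0": [1, 3, 224, 224]}',  # input tensor name and shape
        "Framework": "PYTORCH",
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-example-bucket/neo-compiled/",  # placeholder output path
        "TargetDevice": "jetson_xavier",  # NVIDIA Jetson AGX Xavier
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)
```

The compiled artifact can then be pulled down to the device and served locally, for example as part of an AWS IoT Greengrass deployment, so inference runs on the Jetson hardware rather than in the cloud.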
AWS is a Global Diamond Sponsor of the conference.
Available sessions
ML infrastructure:
- A Developer’s Guide to Choosing the Right GPUs for Deep Learning (Presented by Amazon Web Services, Inc.) [SS33025]
- A Developer’s Guide to Improving GPU Utilization and Reducing Deep Learning Costs (Presented by Amazon Web Services, Inc.) [SS33093]
- Analyzing Traffic Video Streams at Scale Using NVIDIA AI Software and NVIDIA A100-Powered AWS Instances [S32002]
- Unlocking the Power of AI in Latin America through Developer Communities [S32508]
ML with Amazon SageMaker:
- Model and Data Parallelism at Scale to Train Models with Billions of Parameters on Amazon SageMaker with NVIDIA GPUs [S31655]
- Achieve Best Inference Performance on NVIDIA GPUs by Combining TensorRT with TVM Compilation Using SageMaker Neo [S32214]
- 12x Reduction in Deep Learning Training Cost at Deepset by Using Accelerated Tensor Core-Powered GPU Instances on Amazon SageMaker [S31541]
- RAPIDS on AWS SageMaker: Scaling End-to-End Explainable Machine Learning Workflows [S31486]
ML deep dive:
- Advancing the State of the Art in AutoML, Now 10x Faster with NVIDIA RAPIDS [S31521]
- Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks [S31413]
- Automatically Build Machine Learning Models for Vision and Text with AutoGluon [S31667]
- Dive into Deep Learning: Code Side-by-Side with MXNet, PyTorch, and TensorFlow [S31692]
- Accelerate the Bridging of ML and DL with NVIDIA-Accelerated Apache MXNet 2.0 [S31746]
- Standardizing on an Array API for Python Across Deep Learning Frameworks [S31798]
- DGL-KE: Training Knowledge Graph Embeddings at Scale [S31490]
- Accelerate Drug Discovery with Multitask Graph Neural Networks [S31477]
- Deep Learning in Scala on Spark 3.0 with GPU on AWS [S32285]
High performance computing:
Internet of Things:
- Building Image and Video Inference Edge Applications with AWS Greengrass V2 on Jetson Devices [S31855]
- Video Analytics Pipeline Development from Edge to Cloud [S32143]
Edge computing with AWS Wavelength:
- Accelerating VR Adoption Using 5G Edge Computing [S31606]
- XR Streaming from 5G Mobile Edge Using AWS Wavelength and NVIDIA CloudXR SDK [S32031]
- Securing the Integrity of CV2X Messages Using Mobile Edge Compute (Presented by Amazon Web Services, Inc.) [SS33228]
Automotive:
- Accelerating AV Development – Cloud-Based Innovation, Economics, and Efficiencies [S31722]
- How Renault Challenges Physical Mockups by Distributing Rendering on 4,000 GPUs [E31274]
Computer vision with AWS Panorama:
- Computer vision at the edge, with AWS Panorama (Presented by Amazon Web Services, Inc.) [SS33117]
- Lenovo’s ThinkEdge Portfolio Expansion Powered by NVIDIA Jetson (Presented by Lenovo) [SS33267]
Game tech:
- Next-Gen Game Development and Collaboration in the Cloud [S31650]
- 4K 60fps Cloud Gaming and Digital Content Creation Interactive Streaming with NICE DCV and Amazon EC2 G4dn Instances (Presented by Amazon Web Services, Inc.) [SS33013]
Visit AWS at NVIDIA GTC 21 for more details, and register for free to access this content during the week of April 12, 2021. See you there!
About the Author
Geoff Murase is a Senior Product Marketing Manager for AWS EC2 accelerated computing instances, helping customers meet their compute needs by providing access to hardware-based compute accelerators such as Graphics Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs). In his spare time, he enjoys playing basketball and biking with his family.