AWS for M&E Blog
Virtual prototyping with Autodesk VRED on AWS
Figure 1: Autodesk VRED provides high-end rendering and streaming of complex digital assets
The VRED set of software tools from Autodesk lets designers create and present high-quality product renderings of complex digital assets, such as automotive vehicles and other engineering artifacts. VRED has traditionally relied on an array of hardware to increase productivity, including multiple GPU and CPU configurations both locally and remotely, making low-latency, high-bandwidth networks and file systems necessary. This has typically required a substantial capital investment in hardware, leading to renewal cycles far longer than the pace of advances in GPU and CPU architectures.
Leveraging a wide range of Amazon Elastic Compute Cloud (Amazon EC2) instance types and related services from Amazon Web Services (AWS), studios can now create workstations and cluster render nodes, with underlying performant networking, for a range of design and visualization workloads. This allows studios to capitalize on additional benefits:
- Costly up-front capital investment in hardware can be replaced with an OpEx model, using the latest hardware at a moment’s notice, on demand, from AWS.
- Infrastructure such as workstations and access to compute clusters for rendering can dynamically scale in line with design team sizes, enabling resources for teams to achieve their deadlines.
- Teams can adopt remote workflows with relative ease.
Underlying technology
The fifth-generation graphics instance, Amazon EC2 G5, boasts an impressive arsenal of hardware features for GPU shading-unit based parallel processing, NVIDIA OptiX workloads, and machine learning – all of which are used by VRED to create fast, accurate, ray-traced imagery.
| GPU Feature | Per-GPU Specification |
| --- | --- |
| CUDA Cores / Shading Units | 10,240 |
| Ray Tracing Cores | 80 |
| Tensor Cores | 320 |
| Memory | 24 GiB |
The G5 instance comes in a variety of GPU, CPU, and memory sizes. These can be used to provide single-, quad-, or eight-GPU workstations running either VRED Pro, or a larger pool of machines hosting VRED Core, allowing designers to increase their render and streaming capabilities via clustering. With Amazon EC2, instances can be stopped, resized to a different instance type, and restarted, allowing the underlying hardware to be tailored to the task immediately at hand.
| Instance Size | GPU Count | vCPUs | Memory (GiB) | Network Bandwidth (Gbps) |
| --- | --- | --- | --- | --- |
| g5.4xlarge | 1 | 16 | 64 | Up to 25 |
| g5.8xlarge | 1 | 32 | 128 | 25 |
| g5.12xlarge | 4 | 48 | 192 | 40 |
| g5.24xlarge | 4 | 96 | 384 | 50 |
| g5.48xlarge | 8 | 192 | 768 | 100 |
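The stop-and-reshape workflow mentioned above can be sketched with boto3, the AWS SDK for Python. This is a minimal sketch, not a definitive implementation; the instance ID and target size shown are placeholders to substitute with your own.

```python
# Sketch: stop a G5 workstation, change its instance type, and start it again.
# The instance ID and target size used here are illustrative placeholders.

def resize_instance(ec2, instance_id, new_type):
    """Stop an EC2 instance, change its instance type, and start it again.

    `ec2` is a boto3 EC2 client (or any object exposing the same methods),
    passed in as a parameter so the workflow can be exercised without AWS access.
    """
    ec2.stop_instances(InstanceIds=[instance_id])
    # The instance must be fully stopped before its type can be changed.
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id, InstanceType={"Value": new_type}
    )
    ec2.start_instances(InstanceIds=[instance_id])

# Example usage (requires configured AWS credentials):
#   import boto3
#   resize_instance(boto3.client("ec2"), "i-0123456789abcdef0", "g5.12xlarge")
```

Passing the client in as a parameter also makes it easy to target a specific Region by constructing the client with `region_name`.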
As an alternative, CPU-based cluster fleets can run VRED Render on an array of cost-effective Amazon EC2 compute instance classes and sizes, such as the M6a, which uses 3rd-generation AMD EPYC processors and scales up to 192 vCPUs.
Architecture
For single-node installations, Amazon EC2 G5 instances can run VRED Pro on Windows Server to function as artist workstations delivering high-quality graphics and rendering. Performant remote display protocols such as NICE DCV from AWS or HP Anyware (formerly Teradici PCoIP) can stream applications to any device, over varying network conditions, in a secure manner. With these protocols, customers can access graphics-intensive applications such as VRED Pro remotely from simple client machines, such as standard laptops or small-form-factor thin clients like Intel NUCs, eliminating the need for expensive dedicated workstations. This also offers the ability to work wherever a suitable network permits (we recommend a 20 Mb/s internet connection for dual 4K monitors).
To quickly load and transport VRED scene files and supporting data, performant file systems such as Amazon FSx for Windows File Server, with fast underlying SSD storage, can be mounted on workstation instances. This allows artists to easily share projects and render using offline machines for seamless collaboration. To achieve a rich design experience with input devices such as Wacom tablets in a lag-free manner, we recommend a latency of 25 ms or less between the artist workstation instance and the end-user client; with AWS you can accomplish this by creating your instances in whichever of the many global Regions or Local Zones is closest to your designers.
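As a sketch, an Amazon FSx for Windows File Server share is mapped on a workstation like any SMB share; the file system DNS name and share name below are illustrative placeholders for your own environment.

```shell
REM Map the FSx for Windows file share as drive V: from the workstation.
REM The DNS name and share name are placeholders -- use the values shown
REM for your file system in the Amazon FSx console.
net use V: \\amznfsxabcdef01.corp.example.com\share /persistent:yes
```

The same share can be mapped on every workstation and render node so that scene files resolve to identical paths across the fleet.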
The following diagram depicts a simple VRED Pro workstation setup on AWS:
Figure 2: Architectural diagram showing multiple workstations running VRED Pro on AWS
Cluster workflows
Additional CPU or GPU-based instances can be formed into clusters within VRED environments, to allow the distribution of render tasks away from a single machine. This can be used to accelerate the rendering of images, or to increase the performance of a real-time streaming session. A VRED cluster consists of a main node (e.g. VRED Pro or Core) and multiple cluster render nodes, which are connected using a low latency network. Cluster nodes can be used elastically to augment a workstation as and when needed to bolster performance.
In the following diagram, the artist workstations (main nodes) are deployed on G5 instances as previously discussed, running VRED Pro on a Windows Server operating system. The VRED render node cluster is built from G5 instances running a Linux operating system with VRED Core installed, allowing both CPU and GPU rendering. The same render node cluster can be shared among multiple artists, provided it has enough resources to support the aggregate workload across the main instances; alternatively, multiple cluster node fleets can be created to scale to requirements. For workflows that require the lowest latency and highest bandwidth, the cluster components can be placed within an AWS cluster placement group. This increased proximity enables higher per-flow throughput, for both scene and pixel data streaming, increasing frames per second.
Figure 3: Architectural diagram incorporating additional clusters of VRED Core
Configuring the cluster
For the render nodes to successfully connect and communicate with the main node, their IP addresses must be added to the cluster settings within the VRED Pro UI. This is easily achieved programmatically with the following steps:
- Add the AWS SDK for Python module (boto3) to the relevant VRED Pro install, so that VRED can interact with AWS services such as EC2. This can be done from a Windows Command Prompt as Administrator:
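A minimal sketch of this step follows. The VRED install path shown is an example only; substitute the Python interpreter bundled with your VRED version.

```shell
REM Install boto3 into VRED's bundled Python environment.
REM The install path below is illustrative -- adjust it to match your
REM VRED Pro version and install location.
"C:\Program Files\Autodesk\VREDPro-15.0\lib\python\python.exe" -m pip install boto3
```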
Note: ensure that you add the boto3 module to the appropriate VRED install location.
We use the ‘requests’ HTTP library to query the Instance Metadata Service and automatically determine which Region the cluster machines are running in; this also needs to be installed from the same console:
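As before, this is a sketch with an illustrative install path; point pip at the same VRED Python environment used for boto3.

```shell
REM Install the requests library into the same VRED Python environment.
REM Adjust the path to your VRED Pro version and install location.
"C:\Program Files\Autodesk\VREDPro-15.0\lib\python\python.exe" -m pip install requests
```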
- Configure credentials for your AWS account – this is done by following the Boto3 credentials configuration documentation.
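One common option, sketched here, is to let the AWS CLI write the shared credentials file that boto3 reads by default:

```shell
REM Interactively writes access key, secret key, and default Region to
REM %USERPROFILE%\.aws\credentials and %USERPROFILE%\.aws\config,
REM which boto3 picks up automatically.
aws configure
```

On EC2 instances, attaching an IAM instance profile with EC2 read permissions avoids storing long-lived keys on the machine at all.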
- From within the Script Editor inside VRED Pro, use this simple script to query running Amazon EC2 Instances serving as cluster machines, and add their private IP addresses automatically to the VRED Cluster.
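A minimal sketch of such a script is shown below. It assumes the cluster nodes carry a `Name` tag with the value `vred-cluster-node` (an assumption to adapt to your own tagging scheme), and that the script runs on an EC2 instance so the Instance Metadata Service (IMDS) is reachable. The final VRED call is also an assumption: the exact cluster API varies by VRED version, so consult the VRED Python documentation for the call your release provides.

```python
# Sketch: discover running cluster nodes via the EC2 API and collect their
# private IP addresses. Assumptions: nodes are tagged Name=vred-cluster-node,
# and the script runs on EC2 (IMDS reachable at 169.254.169.254).

def extract_private_ips(response):
    """Pull private IP addresses out of a describe_instances response."""
    ips = []
    for reservation in response.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            ip = instance.get("PrivateIpAddress")
            if ip:
                ips.append(ip)
    return ips

def describe_cluster_instances(tag_value="vred-cluster-node"):
    """Query EC2 for running instances tagged as VRED cluster nodes."""
    import boto3     # AWS SDK for Python (installed in the step above)
    import requests  # used to query IMDSv2 for the current Region
    token = requests.put(
        "http://169.254.169.254/latest/api/token",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    ).text
    region = requests.get(
        "http://169.254.169.254/latest/meta-data/placement/region",
        headers={"X-aws-ec2-metadata-token": token},
    ).text
    ec2 = boto3.client("ec2", region_name=region)
    return ec2.describe_instances(
        Filters=[
            {"Name": "tag:Name", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )

# In the VRED Pro Script Editor, join the IPs and hand them to VRED's
# cluster settings. The exact API call depends on your VRED version --
# the call below is illustrative, not a confirmed VRED function:
# ip_list = " ".join(extract_private_ips(describe_cluster_instances()))
# setClusterConfig(ip_list)  # hypothetical VRED call; check your version's API
```

Keeping the response parsing in a separate pure function (`extract_private_ips`) makes the script easy to test without AWS access.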