AWS Open Source Blog

Remote visualization in HPC using NICE DCV with ParallelCluster

NICE DCV is an AWS-owned, high-performance remote display protocol that specializes in streaming 2D/3D interactive applications over the internet or a local network (e.g., WiFi). With NICE DCV we can seamlessly connect from a local laptop to a remote session running in the cloud or in a data center, run demanding visualization or post-processing workloads, and interact with them as if we were sitting at the machine itself. NICE DCV streams the visualization to remote clients, eliminating the need for expensive dedicated workstations, and it streams pixels rather than data, which helps protect customers' data privacy. There is no additional charge to use NICE DCV on Amazon Elastic Compute Cloud (Amazon EC2); you pay only for the EC2 resources you use. For more details on NICE DCV, visit the NICE DCV website.

In this post, we explain how to run remote visualization in high performance computing (HPC) workloads using NICE DCV. We walk through how to set up NICE DCV with AWS ParallelCluster and run graphics-intensive applications and post-processing remotely on Amazon EC2 instances. AWS ParallelCluster is an AWS-supported open source cluster management tool that makes deploying and managing HPC clusters on AWS easy. This blog post provides a quick view into different HPC use cases across various industry verticals, such as oil and gas, manufacturing, life sciences, and media and entertainment.

Below are example HPC use cases that can take advantage of remote visualization using NICE DCV and can be configured using the method described later in the post.

Oil and gas

The following demo shows full-volume seismic data from the Parihaka survey, located toward the northern part of the Taranaki Basin on the west coast of New Zealand. The seismic data is rendered using the NVIDIA IndeX HTML5-based viewer, launched in a remote NICE DCV session. This remote NICE DCV session is hosted on the head node of the cluster, which can be, for example, an Amazon EC2 G4 instance as shown in the example AWS ParallelCluster configuration file later in this post. We connect to this remote session from a local laptop, and NICE DCV streams the visualization from the head node.

In this demo we see real-time, responsive interaction with the 3D seismic model via NICE DCV. With the help of NVIDIA IndeX we can dissect the volume along the x, y, and z directions to get cross-sectional views that aid analysis (for example, detecting fault lines).

Molecular dynamics

Next is a molecular dynamics (MD) use case. Molecular dynamics simulations of a membrane protein were run on a cluster on AWS. The head node of the cluster is an Amazon EC2 G4 instance, similar to the AWS ParallelCluster configuration described later in this post. This head node hosts a NICE DCV remote desktop session, to which we connect via a browser from a local laptop. Here we use the Visual Molecular Dynamics (VMD) software to visualize the results, which NICE DCV streams from the head node of the cluster.

One of the goals of the simulation was to introduce the disorder of a fluid-like bilayer into the protein patch. The visualization below shows how, as the frames are loaded, the patch goes from ordered to somewhat disordered. Notice that we can interact with the model while the frames load, without any lag, and that the resolution stays sharp as we zoom in to the model.

Computational Fluid Dynamics (CFD)

This is an example of how NICE DCV can be utilized in a CFD use case. What we see below is the visualization of the solution to the popular motorbike geometry, the 4M-cell test case that is part of the OpenFOAM tutorials. The test was run on an HPC cluster configured using AWS ParallelCluster, and post-processing files were created.

As in the two use cases above, a NICE DCV remote desktop session is hosted on the head node of the cluster, and we connect to it via a browser from a local laptop. The results are visualized using the ParaView application. The demo shows interaction with the model and selection of the velocity variable to display the stream-wise velocity of the air flowing over the motorbike. Blue represents slower flow, whereas red represents faster flow. The interaction via NICE DCV is smooth, with no noticeable latency.

Media and entertainment

This example shows how NICE DCV can be used in media and entertainment, demonstrating a video game streaming experience. The UNIGINE Heaven benchmark is running on an Amazon EC2 G4 instance that hosts the remote NICE DCV session, and, as in the previous use cases, we connect to it via a browser from a local laptop.

The colors are vibrant and the experience is real-time while playing the game. The interaction is fluid; it does not feel like the game is running on a remote instance and being streamed by NICE DCV over the network to a local machine.

Getting started with NICE DCV

Now that we have seen how NICE DCV can be used in different HPC use cases, let's get started with NICE DCV. There are several ways to set up NICE DCV on Amazon EC2 instances: using the NICE DCV public AMI, using a predefined AWS CloudFormation template, or manually installing the NICE DCV server and client. Here we show how to get started with NICE DCV in HPC environments using AWS ParallelCluster.

AWS ParallelCluster is an AWS-supported open source cluster management tool that helps to deploy and manage HPC clusters in the AWS Cloud. With AWS ParallelCluster, the NICE DCV software is automatically installed on the head node. (For supported operating systems, visit the AWS ParallelCluster website.)

Following is an example AWS ParallelCluster configuration file for building an HPC cluster on AWS; the dcv_settings line in the [cluster hpc] section and the [dcv dcv] section at the end show how to modify the configuration to enable NICE DCV on the head node of the cluster. Note that the example head node in this configuration is a GPU-based Amazon EC2 G4 instance. G4 instances provide a cost-effective platform for building and running graphics-intensive applications, such as remote graphics workstations, video transcoding, photo-realistic design, and game streaming in the cloud. The visualization examples shown previously were run on this configuration. For more details on Amazon EC2 instance types, visit the website.

[global]
cluster_template = hpc
update_check = true
sanity_check = true

[aws]
aws_region_name = us-east-1

[aliases]
ssh = ssh {CFN_USER}@{MASTER_IP} {ARGS}

[vpc public-private]
vpc_id = [vpc-xxxx]
master_subnet_id = [subnet-xxxx]
compute_subnet_id = [subnet-xxxx]

[cluster hpc]
key_name = [your-keyname]
base_os = ubuntu1804
scheduler = slurm
master_instance_type = g4dn.xlarge
vpc_settings = public-private
fsx_settings = fsx-scratch2
dcv_settings = dcv
queue_settings = compute

[queue compute]
enable_efa = true
placement_group = DYNAMIC
compute_resource_settings = default

[compute_resource default]
instance_type = c5n.18xlarge
max_count = 64

[fsx fsx-scratch2]
shared_dir = /lustre
fsx_fs_id = fs-xxxxxxx

[dcv dcv]
enable = master
port = 8443
access_from =
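As a quick sanity check before creating the cluster, the DCV-related settings can be parsed with Python's standard configparser module, since the ParallelCluster configuration uses INI syntax. This is a minimal illustrative sketch, not part of ParallelCluster itself; the snippet embeds only the relevant portions of the example configuration so it is self-contained.

```python
import configparser

# Sanity-check sketch (not a ParallelCluster tool): parse the DCV-related
# settings and confirm DCV is enabled on the head node. The config text is
# embedded here to keep the example self-contained; in practice you would
# read your actual configuration file (e.g., ~/.parallelcluster/config).
EXAMPLE_CONFIG = """
[cluster hpc]
base_os = ubuntu1804
scheduler = slurm
master_instance_type = g4dn.xlarge
dcv_settings = dcv

[dcv dcv]
enable = master
port = 8443
"""

parser = configparser.ConfigParser(interpolation=None)
parser.read_string(EXAMPLE_CONFIG)

# The dcv_settings key names the [dcv <name>] section holding the details.
dcv_ref = parser.get("cluster hpc", "dcv_settings")
dcv_section = f"dcv {dcv_ref}"

assert parser.get(dcv_section, "enable") == "master"
print(f"DCV enabled on head node, port {parser.get(dcv_section, 'port')}")
```

Running this prints the DCV port from the configuration, confirming that the [dcv] section is wired to the cluster definition as intended.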

Once we have the configuration file with custom settings as per our workload requirements, we can run the command pcluster create clustername -c config_file to create the cluster. For more details on using AWS ParallelCluster, configuration settings, and other options to customize your cluster, refer to the AWS ParallelCluster documentation.

To interact with the NICE DCV server running on the head node, we can use the pcluster dcv command. For example, running pcluster dcv connect clustername from our local laptop opens the default browser and connects to the NICE DCV session running on the head node. This way we can remotely connect to the head node via NICE DCV to launch jobs, as well as to visualize or post-process the results.
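A hypothetical terminal session for the connect step might look as follows. The cluster name hpc and the key path are placeholders (substitute your own), and the -k option reflects the ParallelCluster v2 CLI used in this post; check pcluster dcv connect --help for the options available in your installed version.

```shell
# Hypothetical session: connect to the DCV session on the head node of a
# cluster created from the example configuration. "hpc" and the key path
# below are placeholders, not values from this post.
CLUSTER_NAME="hpc"

if command -v pcluster >/dev/null 2>&1; then
    # Opens the default browser against the head node's DCV session
    # (port 8443, as set in the [dcv dcv] section).
    pcluster dcv connect "$CLUSTER_NAME" -k ~/.ssh/your-keyname.pem
else
    echo "pcluster CLI not found; install it with: pip3 install 'aws-parallelcluster<3'"
fi
```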


In this post, we have shown how you can deploy NICE DCV along with AWS ParallelCluster to run interactive HPC workloads. For support-related questions or pricing on NICE DCV, visit the NICE Support portal.

Jyothi Venkatesh

Jyothi Venkatesh is an HPC Solutions Architect at AWS focused on building optimized solutions for HPC customers in different industry verticals, including healthcare and life sciences and oil and gas. Prior to joining AWS, she spent close to 10 years in HPC, both as a software engineer working on parallel I/O and contributing to OpenMPI, and as a systems engineer at Dell leading the engineering development of the Lustre storage solution for the HPC storage portfolio. She holds an M.S. in Computer Science from the University of Houston.