AWS HPC Blog

Pushing pixels with NICE DCV

NICE DCV, our high-performance, low-latency remote-display protocol, was originally created for scientists and engineers who ran large workloads on far-away supercomputers, but needed to visualize data without moving it. When NICE DCV was born in 2007, gigabit Ethernet was what the cool kids had running between buildings (really cool kids had gigabit to the desktop). Off campus, pricey long-haul commercial links formed the connective tissue of the internet, but were still measured in tens of megabits per second. Domestic broadband connections in large cities were around 300 kilobits/s, and Netflix delivered movies on DVD through the US Postal Service.

This made visualization at a distance hard. Yet we pursued it, because around half of the human brain is dedicated to interpreting visual information, making your eyeballs easily the highest-bandwidth, lowest-latency input device your brain has. What you do with the information after that is for the psychologists (and poets) to ponder, but the existence of microchips, spacecraft and vaccines is evidence enough that the pursuit was worth it. And from necessity came invention.

DCV was able to make very frugal use of very scarce bandwidth because it was super lean, used data-compression techniques, and quickly adopted cutting-edge technologies of the time, like GPUs (this is HPC, after all; we left nothing on the table when it came to exploiting new gadgets). This allowed the team to create a super lightweight visualization package that could stream pixels over almost any network. So lean, in fact, that with reasonable bandwidth most users couldn’t tell that the data and the supercomputer were hundreds, or sometimes thousands, of miles away. Nonetheless, physics limits how far apart the two can be before the speed of light becomes a factor.

Fast forward to the 2020s, and a generation of gamers, artists, and film-makers all want to do the same thing – only this time there are way more pixels, because we now have HD and 4K displays (and some people have several), and for most of them, it’s 60 frames per second or it’s not worth having. Today we have around 12x the number of pixels, and around 3x the frame rate, compared to TV of circa 2007. Fortunately, networking improved a lot in that time: a high-end user’s broadband connection grew around 60x in bandwidth, but the 120x growth in computing power really tipped the balance in favor of bringing remote streaming to the masses. Still, physics remains, meaning the latency forced on us by the curvature of the earth and the speed of light is still a challenge.
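To put a rough number on why all those pixels matter, here’s a back-of-the-envelope calculation. The display size, color depth, and frame rate below are illustrative assumptions, not measurements of any particular setup:

```python
# Back-of-the-envelope numbers (illustrative assumptions, not measurements):
# the raw, uncompressed pixel data for a 4K display refreshing 60 times a second.
width, height = 3840, 2160      # 4K UHD resolution
bytes_per_pixel = 3             # 24-bit RGB color
fps = 60

raw_bytes_per_second = width * height * bytes_per_pixel * fps
raw_gbit_per_second = raw_bytes_per_second * 8 / 1e9

print(f"Uncompressed stream: {raw_gbit_per_second:.1f} Gbit/s")   # ~11.9 Gbit/s
```

Even a fast 1 Gbit/s home connection can’t carry that raw stream, which is why compression – and only sending the parts of the screen that actually changed – does most of the heavy lifting in a protocol like DCV.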

The final element for this also started to fall into place in 2007: the new Amazon Web Services offered a compute service (Amazon EC2) and internet-based storage (Amazon S3). They were new and still finding their way, and today have over two hundred siblings in the AWS services catalog, spanning the globe in 80 Availability Zones (with even more to come). We still haven’t beaten physics, but we’re making up for it by building our own global fiber network and adding more machinery (including in Local Zones and Wavelength Zones) to get closer to more customers as soon as we can.

The age of artists

Supercomputing regularly pushes the limits of what’s possible in computing. We love working in this community because the pressure drives us to keep finding new solutions to hard problems – benefiting everyone in the products and services that follow. That same Netflix that sent gigabyte-sized movies to subscribers on DVDs through the mail is now completely cloud-based, and today ships pixels to artists and film-makers on their staff using DCV. Their creatives are the high-performance users of the 2020s, doing post-production work (like editing, animation, and special effects) on powerful GPU-based workstations in the cloud, from thin clients in remote working locations all over the globe. In a world driven indoors by Covid-19, Netflix maintained a pace of content production necessary to entertain hundreds of millions of viewers by extending the perimeter of their post-production suites using DCV.

At the core of DCV is a bandwidth-adaptive streaming protocol that allows it to deliver near real-time responsiveness to users without compromising on the accuracy of the image (or the multi-channel audio). In the latest release (DCV 2021.1) we tuned the code again, this time to drive our audio/video synchronization tight enough that an artist or editor can’t detect a gap (and they’re a tough crowd).
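We’re not publishing DCV’s actual rate-control logic here, but the general shape of a bandwidth-adaptive loop is simple enough to sketch. The function below is a minimal, illustrative stand-in – the names, thresholds, and step sizes are invented for the example – which watches measured throughput and packet loss, then nudges the encoder’s target bitrate up or down:

```python
# Minimal sketch of a bandwidth-adaptive encoder control loop.
# This is a generic illustration of the idea, not DCV's algorithm:
# the thresholds and step sizes are invented for the example.

def next_target_bitrate(current_kbps, measured_kbps, loss_rate,
                        floor_kbps=500, ceiling_kbps=50_000):
    """Return the bitrate the encoder should aim for over the next interval."""
    if loss_rate > 0.02 or measured_kbps < 0.8 * current_kbps:
        # Signs of congestion: back off quickly to protect interactivity.
        return max(floor_kbps, int(current_kbps * 0.7))
    # Headroom available: probe upward gently.
    return min(ceiling_kbps, int(current_kbps * 1.05))

# A clean network lets the stream creep up; a lossy one backs off fast.
print(next_target_bitrate(8_000, measured_kbps=9_500, loss_rate=0.0))   # 8400
print(next_target_bitrate(8_000, measured_kbps=5_000, loss_rate=0.05))  # 5600
```

Backing off quickly and probing upward gently is what keeps the stream interactive, rather than letting a queue of stale frames build up behind a connection that can’t drain it.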

Netflix was one of the first users of the new DCV Web Client SDK, which lets their workflow portal conjure up, inside a web browser, completely remote virtual applications running on high-performance graphics instances in the cloud. They use the AWS global network infrastructure to push packages of content and editing requests to the AWS Regions and Availability Zones closest to where their post-production teams live. They even automatically scale up fleets of these instances when they see more creatives coming online, so no one needs to wait for one of those high-powered graphics workstations.
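Netflix’s portal is their own system, but the scale-up idea is easy to picture. As a purely hypothetical sketch – the Auto Scaling group name and the one-artist-per-workstation sizing rule are made up for illustration – bumping the desired capacity of a fleet of GPU instances as artists sign in could look like this:

```python
import boto3

# Hypothetical sketch of scaling a fleet of GPU streaming workstations as
# more artists come online. The Auto Scaling group name and the sizing rule
# are invented for illustration; Netflix's actual portal logic is their own.

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

def scale_for_artists(active_artists, artists_per_instance=1,
                      group_name="post-production-gpu-fleet"):
    desired = -(-active_artists // artists_per_instance)   # ceiling division
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=group_name,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )
    return desired

# e.g. 37 artists signed in means 37 one-artist workstations ready to stream
# scale_for_artists(37)
```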

Michelle Brenner from Netflix explains how they’re using DCV as part of a custom-built service that lets their creatives work from home with some serious resources

The age of gamers

Streaming games, on the other hand, requires a different set of skills, because it’s way more likely we’re streaming to a consumer device, in a domestic setting where the quality of the internet connection is highly variable. This forces some different trade-offs for dealing with congestion or lost packets, both of which are extremely normal on the internet.

Buffering is pretty awful for gamers, and so is pixelating the imagery. A little more acceptable is dropping a frame (or part of a frame) here and there, since with 60 of these flicking by every second, the human eye has a hard time noticing when one is missing. All motion pictures, from the first flip books you made as a kid through to full-length blockbusters, rely on the human eye not being able to keep up with the rate at which a series of motionless images bounce photons into your retina. Watching your favorite show for an hour at 60 frames per second means more than two hundred thousand such images wash past your eyeballs (60 frames x 3,600 seconds is 216,000 of them). It’s stop-motion, at high speed.

Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds

TCP, however, insists on reliable, in-order delivery of its payload. This is a lot like the US Postal Service motto “neither snow nor rain … stays these couriers from the swift completion of their appointed rounds”. It’s a good thing in most cases, or you’d be reading fragments of email and left guessing what was said in the gaps where packets were silently dropped. To make that happen, TCP holds up sending a new group of packets (a ‘window’) until the group before it has been acknowledged as delivered by the receiver, so a loss at the front of the queue stalls everything behind it (this is ‘head-of-line blocking’). To mask the latency of the internet, which would be dominant if it were just single-packet send-receive ping-pongs, TCP adjusts the size of this window depending on the reliability it’s experiencing over time. It’ll expand the window when conditions are good, and contract it when it must retransmit too many lost packets. This mechanism (the ‘TCP sliding window’) has underpinned the internet since the very beginning, and works extremely well.
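Real TCP stacks use carefully engineered congestion-control algorithms (Reno, CUBIC, BBR, and friends), so the toy loop below is only meant to illustrate that grow-on-success, shrink-on-loss behavior – every number in it is made up for intuition:

```python
# Toy illustration of a TCP-style congestion window: grow while things go
# well, shrink sharply on loss. Real TCP stacks are far more sophisticated;
# the numbers here are only for building intuition.

def simulate_window(round_trips, window=10):
    history = []
    for lost in round_trips:            # one entry per round trip: any loss?
        if lost:
            window = max(1, window // 2)    # multiplicative decrease
        else:
            window += 1                     # additive increase
        history.append(window)
    return history

# A single loss halves the window; climbing back takes many clean round
# trips, and each one costs a full network round-trip time.
print(simulate_window([False, False, True, False, False, False, False]))
# -> [11, 12, 6, 7, 8, 9, 10]
```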

However, for users with latency sensitive applications (like a twitchy gamer playing an MMORPG like New World), a single packet being dropped can really kill the moment (or, in fact, really kill their character in the game). TCP will stubbornly account for that lost packet, and resend it, but that will likely take hundreds (or even thousands) of milliseconds, by which time some serious game playing action has been missed. And a few seconds of congestion (or your mom starting a video conference with her team at the office) might cause the TCP window to contract and then it’ll take many, many more seconds before it expands again. Small glitches like this can lead to unstable connections, which means unpleasant game play.

This led the DCV developers to adopt QUIC, which was (at the time) an experimental transport-layer protocol sitting at the same level of the stack as TCP. QUIC behaves like a hybrid of TCP and UDP: to the devices at each end of the connection, it looks and behaves much like a conventional connection, but underneath it’s spraying the packets that need to be delivered across multiple simultaneous, connectionless, UDP-like streams.

Since those streams act independently of each other, the chance of one lost packet causing all the others to stall behind it is radically reduced. And because they’re connectionless streams, a lost packet only shows up as a missing piece of the picture at the receiving end (the end user’s display). DCV’s engineers work around this too, using as many parts of a frame as make it through, and ignoring the parts that don’t. Your retinas smooth over the gaps. This pretty much eliminates the ‘buffering’ experience caused by head-of-line blocking. You can see this in action with three cuts of a game being streamed over three different transports: two QUIC-based transports (to highlight improvements between DCV releases) compared against a baseline experience with TCP. The difference is astonishing – see for yourself:

This video shows three different scenarios of protocol-choice and packet loss side-by-side to show how these factors can lessen the impact of internet traffic conditions.
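If you want a feel for why independent streams make such a difference, here’s a toy model – emphatically not DCV’s implementation – comparing one ordered, TCP-like stream against several independent, QUIC-like streams when a single packet goes missing:

```python
# Toy model of head-of-line blocking, not DCV's implementation.
# Ten packets are sent; packet 3 is lost in transit.
packets = list(range(10))
lost = {3}

# One ordered stream (TCP-like): nothing behind the lost packet reaches the
# application until a retransmission arrives.
single_stream = []
for p in packets:
    if p in lost:
        break              # everything after this point stalls
    single_stream.append(p)

# Four independent streams (QUIC-like): the loss only stalls the one stream
# the packet belonged to; the other streams keep delivering.
streams = {s: [p for p in packets if p % 4 == s] for s in range(4)}
independent = []
for stream_packets in streams.values():
    for p in stream_packets:
        if p in lost:
            break          # only this one stream stalls
        independent.append(p)

print("single ordered stream:", sorted(single_stream))   # [0, 1, 2]
print("independent streams  :", sorted(independent))     # [0, 1, 2, 4, 5, 6, 8, 9]
```

In the single-stream case, everything behind the lost packet waits for a retransmission; with independent streams, only the stream that suffered the loss has a gap, and the rest of the frame keeps flowing.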

Happily, in May (just a few weeks ago) the IETF[1] published QUIC as a standard, RFC 9000, making it official. QUIC also has some other great features, like built-in Transport Layer Security (TLS), which makes encryption pervasive without adding extra set-up/tear-down round trips when establishing connections. It’s likely you’re reading this article in a browser that already supports QUIC. You can expect to see it in more places.

The next best thing to being there

There’s nothing like having the entire planet start working from home to remind you that there’s still more to be done. DCV’s use of clever protocol techniques and graphics acceleration make it possible to stream an application from a remote server to a user’s desktop without feeling the distance. But if that application can’t print a document to the printer next to your desk, the mirage vanishes.

For that reason, Covid-19 pushed us to close several last-mile feature gaps like these, so we could support the vast numbers of customers turning to DCV for their work-from-home needs. You can now plug in a thumb drive, authenticate with a company’s smart card, and print to your local printer. If you’re an artist, you can use a stylus-based tablet.

And if you don’t want to assemble a solution yourself, Amazon AppStream 2.0 is a fully managed desktop and application virtualization service that lets your users securely access the data, applications, and resources they need, anytime, from almost anywhere. It’s built on DCV, too.

The default application streaming protocol for the cloud

DCV has come a long way from its roots in supercomputing. But we’re only just scratching the surface of its next phase of life, involving new extreme cases … to add to the suite of extreme cases it’s already built to handle. Spraying packets across multiple paths to reduce head-of-line blocking isn’t new to us in HPC (you can see how we leverage that idea in our Elastic Fabric Adapter for scaling HPC codes), and you can bet on DCV just getting better as we invent our way around the obstacles placed in front of us – often by the laws of physics. DCV is still widely deployed (and widely available) in the non-cloud, on-premises world. We’ll keep taking what we learn from there and iterating on these ideas to make sure DCV is the default, obvious solution for enabling remote access to visual resources, especially in the cloud.

If you want to get started with DCV, it’s as easy as launching an AMI from AWS Marketplace, or downloading the installable packages from our software download site. On premises it requires a license, but includes an automatic 30-day trial license that you can use to get going immediately. In the cloud on AWS there’s no charge for its use – you just configure an IAM role so your instance can access the cloud license (there’s a sketch of that below). Let us know what you invent.
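As a rough sketch of that licensing step: the instance role needs read access to the Regional DCV license bucket. The role name below is hypothetical, and the bucket naming pattern is an assumption – check the DCV Administrator Guide for the exact bucket name in your Region:

```python
import json
import boto3

# Sketch of granting an EC2 instance role access to DCV's cloud license.
# The role name is hypothetical and the license-bucket naming pattern is an
# assumption -- check the DCV Administrator Guide for your Region.
region = "us-east-1"
license_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::dcv-license.{region}/*",
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="my-dcv-instance-role",           # hypothetical role name
    PolicyName="dcv-cloud-license-access",
    PolicyDocument=json.dumps(license_policy),
)
```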

[1] The Internet Engineering Task Force – the internet’s standards body.

Brendan Bouffler

Brendan Bouffler is the head of Developer Relations in HPC Engineering at AWS. He’s been responsible for designing and building hundreds of HPC systems in all kinds of environments, and joined AWS when it became clear to him that the cloud would become the exceptional tool the global research & engineering community needed to bring about the discoveries that would change the world for us all. He holds a degree in Physics and an interest in testing several of its laws as they apply to bicycles. This has frequently resulted in hospitalization.