AWS HPC Blog

Tag: NICE DCV

Putting bitrates into perspective

Recently, we talked about the advances NICE DCV has made to push pixels from cloud-hosted desktops or applications over the internet even more efficiently than before. Since we published that post on this blog channel, several customers have asked us whether all this efficient pixel-pushing could push up the outbound data charges on their AWS bill. We decided to test it on your behalf, and share the details with you in this post. The bottom line? The charges are unlikely to be significant unless you’re doing intensive streaming (such as gaming), and other cost optimizations (like AWS Instance Savings Plans) will likely have more impact on your bill.
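To make “unlikely to be significant” concrete, here’s a back-of-the-envelope sketch. The average bitrate, usage pattern, and the $0.09/GB egress rate are all assumptions we’ve picked for illustration (actual DCV bitrates depend heavily on screen content, and egress pricing varies by Region and volume tier), so treat this as a way to reason about the numbers rather than a cost guarantee.

```python
# Back-of-the-envelope estimate of remote-desktop streaming egress charges.
# Every input here is an illustrative assumption, not a measured value.

avg_bitrate_mbps = 1.0      # assumed average for office-style work; gaming streams run far higher
hours_per_month = 8 * 22    # assumed one remote desktop, 8 hours/day, 22 working days
price_per_gb_usd = 0.09     # assumed internet egress rate (USD/GB); varies by Region and tier

gb_streamed = avg_bitrate_mbps / 8 * 3600 * hours_per_month / 1000   # Mbit/s -> MB/s -> GB
monthly_cost = gb_streamed * price_per_gb_usd

print(f"~{gb_streamed:.0f} GB streamed, ~${monthly_cost:.2f} per month in egress charges")
```

Even at several times this assumed bitrate, the egress line item stays small next to the instance-hours that accompany it, which is consistent with the bottom line above.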

Pushing pixels with NICE DCV

NICE DCV, our high-performance, low-latency remote-display protocol, was originally created for scientists and engineers who ran large workloads on far-away supercomputers, but needed to visualize data without moving it. Pushing pixels over limited bandwidth across the globe has been the goal of the DCV team since 2007. DCV made very frugal use of very scarce bandwidth because it was super lean, used data-compression techniques, and quickly adopted the cutting-edge GPU technologies of the time (this is HPC, after all; we left nothing on the table when it came to exploiting new gadgets). This allowed the team to create a super light-weight visualization package that could stream pixels over almost any network.

Fast forward to the 2020s, and a generation of gamers, artists, and film-makers all want to do the same thing as HPC researchers, only this time there are way more pixels, because we now have HD and 4K displays (and some people have several), and for most of them it’s 60 frames per second or it’s not worth having. Today we have around 12x the number of pixels, and around 3x the frame rate, compared to TV of circa 2007. Fortunately, networking improved a lot in that time: a high-end user’s broadband connection grew around 60x in bandwidth, but the 120x growth in computing power really tipped the balance in favor of bringing remote streaming to the masses.

Still, physics remains: the latency forced on us by the curvature of the earth and the speed of light is still a challenge. We haven’t beaten physics yet, but we’re making up for it by building our own global fiber network and adding more machinery (including in Local Zones and Wavelength Zones) to get closer to more customers as soon as we can.
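To sanity-check those ratios, here’s one combination of assumptions that roughly reproduces them: a dual-1080p, 60 fps desktop measured against a standard-definition NTSC broadcast. The display setup and baseline are our own illustrative choices, not figures from the DCV team, and the final line shows why raw pixels can’t simply be shipped over a home connection without compression.

```python
# Illustrative arithmetic only -- our assumptions, not the DCV team's figures.

sd_w, sd_h, sd_fps = 720, 480, 30            # circa-2007 NTSC broadcast, ~30 full frames/s
new_w, new_h, new_fps = 2 * 1920, 1080, 60   # assumed dual 1080p desktop at 60 frames/s

pixel_ratio = (new_w * new_h) / (sd_w * sd_h)    # ~12x more pixels
frame_ratio = new_fps / sd_fps                   # 2-3x the frame rate
raw_gbps = new_w * new_h * new_fps * 24 / 1e9    # 24-bit RGB, no compression

print(f"{pixel_ratio:.0f}x the pixels, {frame_ratio:.0f}x the frame rate")
print(f"raw stream: {raw_gbps:.1f} Gbit/s uncompressed -- the codecs do the heavy lifting")
```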

Training forecasters to warn of severe hazardous weather on AWS

Figure 1. A map of the USA showing the location of RITC participants and instructors for the Spring 2021 RITC Workshops.

Training users on high performance computing resources, and on the data that comes out of those analyses, is an essential function of most research organizations. Having a robust, scalable, and easy-to-use platform for on-site and remote training is becoming a requirement for building a community around your research mission. A great example comes from the NOAA National Weather Service Warning Decision Training Division (WDTD), which develops and delivers training on the integrated elements of the hazardous weather warning process within a National Weather Service (NWS) forecast office. In collaboration with the University of Oklahoma’s Cooperative Institute for Mesoscale Meteorological Studies (OU/CIMMS), WDTD conducts its flagship course, the Radar and Applications Course (RAC), for forecasters issuing warnings for flash floods, severe thunderstorms, and tornadoes. Trainees learn the warning process, the science and application of conceptual models, and the technical aspects of analyzing radar and other weather data in the Advanced Weather Interactive Processing System (AWIPS).