Improving Video Quality

Why Video Quality Matters to Viewers & Video Providers

Today’s audiences want more content on more devices, with high-quality video and low, broadcast-like latency that allows them to enjoy content without delay. OTT video viewers also demand features such as enhanced graphics and real-time stats, particularly for sports, that go beyond traditional offerings and capabilities.

Likewise, for many video providers, achieving and maintaining superior video quality for streaming content is a top priority. However, the path to better images and improved playback can often lead to unsustainable increases in processing and distribution overhead.

Current compute, storage, and bandwidth logic maintains that the better video quality is, the more costly it is to process, store, and deliver. But new video quality-focused technologies are changing this long-held industry tenet.

Defining Video Quality Basics: Viewer Expectations & Provider Considerations

Delivering a consistent, high-quality video viewing experience comes with a distinct set of challenges video providers must plan and design for. Here’s a short summary of technology functions and components that have an impact on video quality:

  • Field networks and cameras
  • Infrastructure disruptions affecting the video encoder
  • Network connectivity for video contribution
  • Infrastructure disruptions affecting the video packager
  • Fluctuating player bandwidth

Beyond the technical considerations, video providers understand that delivering an excellent viewing experience is one of the most important criteria for making live video on the internet feel like a can’t-miss event. In the eyes of viewers, video content availability and video quality are equally important.

Let’s explore what this means.

Audiences expect video to be available to view on any device they want to use—that it’s not restricted to a particular type of mobile device, computer monitor, or TV screen, or a certain set of functionality requirements.

Audiences also want to be able to watch content anywhere, whenever they want, which means video providers need to deliver the highest video quality possible at whatever bitrates are available to their customers, no matter where they are.

To enable this, four conditions must be met:

  1. Providers need to be able to deliver the highest video quality to every customer’s device, regardless of what a customer’s bandwidth is at any given moment. And because bandwidth fluctuations create a challenging environment for video quality consistency, providers need to optimize for that condition too.
  2. As providers maximize video quality, they must minimize buffering; nothing ruins a customer’s streaming experience more than having to endure video buffering.
  3. Regardless of what might fail upstream, providers need to make sure their customers receive an uninterrupted viewing experience.
  4. Particularly for providers operating in the live sports realm, video latency is a major influence on video quality and the overall viewing experience. All of the information transmitted must be in front of the viewers’ eyes as quickly as possible, without noticeable delays. (For more detail, see What is Latency in Live Video Streaming?)
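The first two conditions describe the core of adaptive bitrate (ABR) streaming: the player continually picks the highest rendition its measured bandwidth can sustain. The sketch below illustrates that selection logic with a hypothetical bitrate ladder and a simple safety margin; real players use more sophisticated heuristics and buffer-aware algorithms.

```python
def select_rendition(ladder_kbps, measured_kbps, safety_factor=0.8):
    """Pick the highest rendition that fits within a safety margin of the
    measured bandwidth, falling back to the lowest one if nothing fits."""
    affordable = [r for r in ladder_kbps if r <= measured_kbps * safety_factor]
    return max(affordable) if affordable else min(ladder_kbps)

# Illustrative ABR ladder (kbps) -- not an AWS-recommended ladder.
ladder = [400, 1200, 2500, 5000, 8000]
print(select_rendition(ladder, measured_kbps=6000))  # -> 2500
print(select_rendition(ladder, measured_kbps=300))   # -> 400 (lowest fallback)
```

As bandwidth fluctuates, repeated calls to a function like this are what cause the visible quality shifts viewers experience on a congested network.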

Designing for Scale & Video Quality Reliability: Differing Definitions + Suboptimal Situations

The ability to scale operations as needed is important to the viewing experience too, but when video providers refer to designing for scale, it can mean different things.

  • For some providers, scale means a high-profile live event with millions of viewers, all watching simultaneously for a set period of time.
  • For other providers, scale means having thousands of concurrent running channels with many choices available to serve a more distributed viewership.
  • And to another group of providers, scale means serving high-profile live events with large audiences while concurrently running hundreds or thousands of channels.

In addition, today’s video providers need to consider that their customers are likely to walk around with their devices, into elevators and stairwells, encountering Wi-Fi or cellular signal drops at any given moment.

Video providers also need to make sure that when hardware failures happen, they are completely imperceptible to viewers. And providers need assurance that the field networks providing their live feeds are reliably and consistently getting data into the cloud.

Techniques for Dealing with Video Quality Challenges

Let’s look at some methods video providers can use to manage challenging conditions and deliver a consistent, high-quality video viewing experience. To illustrate, we show an end-to-end workflow composed of AWS Media Solutions.

In a typical live streaming workflow, cameras are in the field, moving data to a network that sends data to AWS, where video transcoding occurs, generating different types of outputs and bitrates that conform to the various formats and bandwidths available to end viewers.

The next destination is the packaging and origination service, where digital rights management, encryption, and packaging for HLS, DASH, Microsoft Smooth Streaming (MSS), and other formats occur.

Following that, it’s off to one or multiple CDNs.

Waiting at the end of the bit journey are the OTT devices and their end users—the viewers.

In this bit journey, video providers can take advantage of a few choice tools designed to maintain video quality consistency. One of the most powerful video quality tools available is Quality-Defined Variable Bitrate, or QVBR.

How Does QVBR Improve Video Quality? A Technical Talk-Through

Quality-Defined Variable Bitrate (QVBR) performs two critical actions: it minimizes buffering on the player side, and it makes the highest bitrates, and therefore higher-quality video, available to a much broader population of viewers.

Here’s how QVBR does it.

QVBR allows an operator to specify a parameter expressing how aggressively the encoder should take advantage of scene simplicity over time. The operator defines the target quality, and based on that tuning, the encoder can exploit the bitrate “dips” that simple scenes allow as they come in.
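The idea can be shown with a toy rate-control model: a capped constant bitrate spends the maximum on every scene, while a quality-defined encode spends only what each scene needs to hit the quality target, up to the cap. The scene complexities and the quality-to-bits mapping below are illustrative assumptions, not MediaLive’s actual rate-control algorithm.

```python
def qvbr_bits(scene_complexities, quality_level, max_bitrate):
    """Spend only the bits each scene needs to hit the quality target,
    never exceeding the configured maximum bitrate."""
    return [min(c * quality_level, max_bitrate) for c in scene_complexities]

scenes = [1000, 200, 900, 150, 1000]   # per-scene complexity (arbitrary units)
qvbr = qvbr_bits(scenes, quality_level=5, max_bitrate=5000)
cbr = [5000] * len(scenes)             # capped CBR spends the max on every scene
savings = 1 - sum(qvbr) / sum(cbr)

print(qvbr)                            # -> [5000, 1000, 4500, 750, 5000]
print(f"{savings:.0%} fewer bits than capped CBR")
```

The simple scenes (complexity 200 and 150) are the “dips”: the encoder spends a fraction of the capped bitrate on them without any visible quality loss.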

This particular capability has allowed AWS Media Services customers to save up to 35% of their CDN costs at the highest bitrates.

In addition to the CDN savings, the storage savings for VOD, and the ability to scale to more viewers because the operator is using a much lower bitrate overall, QVBR also minimizes buffering on the end player, which is a critical function. If a video player’s most important job is playing, its second-most important job is not buffering; the player needs to do whatever it can to avoid running out of its front buffer.

But every once in a while, bandwidth jitter occurs, causing a six-second buffer to drop to four or two seconds. When this happens, if the capped bitrate consumes all the bandwidth the player has access to, the player will never be able to regain the ground lost in that front buffer.

Using QVBR, when dips come into play, the player gets windows of opportunity to extend its front buffer during simple scenes, giving it breathing room and maintaining a high video quality experience for viewers.
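The buffer dynamic works out like this under a simplified download model: fetching a d-second segment encoded at a given bitrate over the available bandwidth takes d × bitrate ÷ bandwidth seconds of wall-clock time, so the buffer only grows when the segment bitrate is below the bandwidth. The numbers are illustrative.

```python
def buffer_after(segment_bitrates, bandwidth_kbps, start_buffer_s, segment_s=2):
    """Track the front-buffer level as the player downloads each segment."""
    buffer_s = start_buffer_s
    for bitrate in segment_bitrates:
        # Each d-second segment takes d * bitrate / bandwidth seconds to fetch.
        buffer_s += segment_s * (1 - bitrate / bandwidth_kbps)
    return buffer_s

# Capped bitrate exactly matches bandwidth: the buffer can never recover.
print(buffer_after([5000] * 5, bandwidth_kbps=5000, start_buffer_s=2))  # -> 2.0

# QVBR: simple scenes encoded at lower bitrates let the buffer grow back.
print(buffer_after([5000, 1000, 4500, 750, 5000], 5000, 2))  # -> 5.5
```

In the capped case the player treads water at two seconds of buffer; with QVBR, the two simple scenes refill it well past its starting level.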

QVBR comes with several AWS Elemental live and file-based products and services at no additional cost or licensing fee.

For more technical detail, see How to Use QVBR for Streaming Live Events.

AV1, Accelerated Transcoding, Statmux + More Techniques for Improving Video & Audio Quality

How one defines video quality changes as quickly as technological discovery and invention allows. Higher resolutions provide more detail, High Dynamic Range (HDR) options create an immersive viewing experience, object-based audio formats continue to evolve, and new codecs make the aspiration to provide the best quality video a moving target.

Let’s focus on a few advancements that keep video providers evolving at the pace of audio and video technology innovations.

MediaLive supports 4K/UHD, HEVC streams with HDR. Encoding with HEVC (H.265) offers a number of advantages. While UHD video requires an advanced codec beyond AVC (H.264), high frame rate (HFR) or HDR content in HD also benefits from HEVC’s advances in compression efficiency. In addition, benefits can be achieved with HD and SD content even if HDR and HFR are not needed. HEVC is 30-50% more efficient than AVC, resulting in significantly lower storage and Content Delivery Network (CDN) costs.
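A back-of-the-envelope calculation shows what that 30-50% efficiency range means for delivery cost; the AVC bitrate below is a hypothetical ladder top, not an AWS recommendation.

```python
avc_kbps = 8000                      # hypothetical AVC top-rendition bitrate
for gain in (0.30, 0.50):            # HEVC efficiency range cited above
    hevc_kbps = avc_kbps * (1 - gain)
    gb_per_hour = hevc_kbps * 3600 / 8 / 1e6   # kilobits -> gigabytes per viewing hour
    print(f"{gain:.0%} gain: {hevc_kbps:.0f} kbps, {gb_per_hour:.2f} GB per viewer-hour")
# -> 30% gain: 5600 kbps, 2.52 GB per viewer-hour
# -> 50% gain: 4000 kbps, 1.80 GB per viewer-hour
```

Against the 3.6 GB per viewer-hour that 8,000 kbps AVC would consume, those reductions compound directly into CDN egress and storage savings at scale.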

This means video providers can improve video quality while reducing distribution costs, and be prepared to make the leap to next-generation UHD and HDR content.

Using MediaConvert, video providers can encode video using the AV1 (AOMedia Video 1) codec, which offers higher compression efficiency than the AVC and HEVC codecs. With AV1, delivering high-quality SD and HD video to mobile and other devices over congested or bandwidth-constrained networks is achievable at bitrates that traditional codecs can't attain.

Because AV1 is a compute-intensive codec, providers using MediaConvert also receive the benefits of Accelerated Transcoding, which leverages the power of parallel processing in the cloud to convert video files for on-demand viewing in a fraction of the time previously required. When used for AV1 jobs, Accelerated Transcoding processing speeds reach real-time, allowing providers to meet the most demanding turnaround times for the most advanced codec.
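The principle behind that speedup can be sketched in a few lines: split the source into segments, encode them concurrently, then reassemble the results in order. This is a conceptual illustration of parallel transcoding, not MediaConvert’s implementation; `encode_segment` is a stand-in for real codec work.

```python
from concurrent.futures import ThreadPoolExecutor

def encode_segment(segment):
    index, frames = segment
    return index, [f"enc({f})" for f in frames]   # placeholder for codec work

def transcode(frames, segment_len=4, workers=8):
    """Encode fixed-length segments in parallel, then stitch them back
    together in their original order."""
    segments = [(i, frames[i:i + segment_len])
                for i in range(0, len(frames), segment_len)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        encoded = dict(pool.map(encode_segment, segments))
    return [f for i in sorted(encoded) for f in encoded[i]]

out = transcode([f"frame{i}" for i in range(10)])
print(len(out))  # -> 10, in original frame order
```

Because the segments are independent, wall-clock time shrinks roughly with the number of workers, which is how a compute-heavy codec like AV1 can still hit real-time throughput in the cloud.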

On the audio side, MediaConvert supports processing for up to 64 audio tracks and encoding into the immersive Dolby Atmos format. On top of existing support for 4K/UHD resolution, HDR10, and HLG HDR, video providers can now encode content at up to 8K/UHD resolution and process video with Dolby Vision for HDR.

And when delivery of video requires primary distribution of live video content over fixed-bandwidth pipes, video providers can use MediaLive to create a managed, highly-available multi-program transport stream (MPTS). This MPTS is available for broadcast distribution over satellite, cable, or terrestrial networks using Statistical Multiplexing (Statmux) outputs.
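The core idea of statistical multiplexing is that the fixed pipe is shared dynamically: at any instant, complex programs borrow bits from simple ones. The sketch below allocates a pipe in proportion to instantaneous complexity; a real statmux controller also enforces per-program minimums and maximums, which this toy model omits.

```python
def statmux_allocate(complexities, pipe_kbps):
    """Split a fixed MPTS pipe among programs in proportion to each
    program's instantaneous complexity."""
    total = sum(complexities)
    return [round(pipe_kbps * c / total) for c in complexities]

# Three programs sharing a 15 Mbps pipe: a busy sports feed gets far
# more bits than two quiet talking-head channels at this instant.
print(statmux_allocate([600, 150, 250], pipe_kbps=15000))  # -> [9000, 2250, 3750]
```

Re-running the allocation every rate-control interval is what lets every program hold its quality target while the aggregate never exceeds the fixed-bandwidth pipe.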

For detailed benefits and to learn how Statmux works, see Statmux for AWS Elemental MediaLive.