AWS for M&E Blog
Building ATSC 3.0 workflows on AWS
Since 2015, television broadcasters have been migrating their channel origination workflows to the cloud. For those new to the concept, channel origination is the process by which prerecorded show segments, live content, advertisements, and graphics are combined into what we see at home as an assembled linear channel. But assembling the channel isn't enough: broadcasters must then deliver that channel to viewers. Channel origination thus extends naturally into channel distribution, and our customers employ many methods to accomplish this, including internet, satellite, cable, and over-the-air (OTA) delivery. This blog post is about the latter, specifically how our customers and Amazon Web Services (AWS) Partners can distribute content from AWS using the ATSC 3.0 suite of standards. We have tested this successfully with several AWS Partners to date and are excited about the offering.
But first, here’s an overview of the Advanced Television Systems Committee (ATSC). According to its website, the ATSC “is an international, non-profit organization developing voluntary standards and recommended practices for digital terrestrial broadcasting.” And according to Wikipedia, the “ATSC was initially formed in 1983 to develop a first-generation digital television standard that could replace existing analog transmission systems.” The new digital system became known as “ATSC 1.0.” ATSC 1.0 is in use in the United States, Canada, Mexico, South Korea, Honduras, and the Dominican Republic.
The ATSC then developed a next-generation digital television standard known as “ATSC 3.0.” ATSC 3.0 was commercially deployed in South Korea in May 2017 and was approved for voluntary use in the United States in November 2017.
The ATSC 3.0 standard is a collection of methods that describe the encoding, packaging, and delivery of real-time video plus non-real-time (NRT) elements. Most notably, the ATSC 3.0 standard is based on Internet Protocol (IP), unlocking many new possibilities for broadcasters by adding the ability to distribute any data in real time, not just video. Several improvements on ATSC 1.0 are worth mentioning.
First, ATSC 3.0 facilitates simultaneous distribution over the internet and over the air (OTA). Origins may serve segments both for OTA delivery and through a content delivery network (CDN). This way, viewers with internet connections can receive personalized programming and advertisements while others receive the OTA signal. The standard lets televisions switch seamlessly between the two transport mechanisms; a short sketch of that selection logic follows this overview.
Second, ATSC 3.0 provides the ability to transmit high-efficiency video coding (HEVC)–encoded video, which brings with it gains in raster size (1080p and 4K), as well as an improvement in quality and a reduction in bits per channel.
Finally, ATSC 3.0 enables the transmission of NRT elements such as out-of-band ads and video on demand (VOD), metadata, or even data that has nothing to do with video (referred to as datacasting).
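To make the hybrid-delivery idea from the first point concrete, here is a minimal sketch in Python. Everything in it (the HybridService type, select_source, the URL, and the service ID) is illustrative rather than part of any ATSC 3.0 API; real receivers implement this selection internally.

```python
# Minimal sketch of hybrid (broadcast + broadband) source selection.
# All names are illustrative, not part of any ATSC 3.0 API.
from dataclasses import dataclass


@dataclass
class HybridService:
    """One service carried both over the air and through a CDN."""
    cdn_mpd_url: str     # DASH manifest reachable over broadband
    ota_service_id: int  # the same service in the broadcast signal


def select_source(service: HybridService, broadband_up: bool) -> str:
    """Prefer broadband (personalized programming and ads); fall back to OTA."""
    if broadband_up:
        return service.cdn_mpd_url
    return f"ota://service/{service.ota_service_id}"


channel = HybridService("https://cdn.example.com/wxyz/channel.mpd", 7)
print(select_source(channel, broadband_up=False))  # -> ota://service/7
```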
So how does this work in the cloud?
All the components in an ATSC 3.0 system, except for the exciter, can run in software. The exciter is typically co-located with the antenna in a particular target market. Its job is to modulate the data to radio frequency (RF) and pass off the signal to the amplifier and antenna. An ideal ATSC 3.0 system is therefore one that can run its software elements close to the channel origination while having a minimal footprint on premises. Our customers have asked us to explore what this would look like on AWS. Consider diagram 1:
As you can see, several elements make up the system. Let's walk through them, beginning with the linear video. On the left, you see the station playout instance. This is the channel origination. Not shown are its inputs, which are typically a combination of network programming from the affiliated network plus, for non-network time, assets stored in Amazon Simple Storage Service (Amazon S3), an object storage service that offers industry-leading scalability, data availability, security, and performance. The output of the station playout instance is either a mezzanine stream or an uncompressed stream. This enters a distribution encoder, which compresses the stream to its user-defined bitrate.
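As a sketch of how those non-network assets might be staged for playout, the following snippet uses standard boto3 calls to pull interstitials from Amazon S3. The bucket name, prefix, and local cache path are hypothetical.

```python
# Sketch: stage non-network assets from Amazon S3 for the playout instance.
# The bucket, prefix, and cache path are hypothetical; the boto3 calls are standard.
import boto3

s3 = boto3.client("s3")

# List the interstitials staged for today's non-network time.
resp = s3.list_objects_v2(Bucket="station-playout-assets", Prefix="interstitials/")
for obj in resp.get("Contents", []):
    key = obj["Key"]
    # Pull each asset to local storage ahead of its scheduled slot.
    filename = key.rsplit("/", 1)[-1]
    s3.download_file("station-playout-assets", key, f"/media/cache/{filename}")
```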
Because this is ATSC 3.0, we get to use HEVC to make the most of our OTA bandwidth. You’ll notice the encoder also says multiplex—we’ll get to that in a minute.
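To make the encode step concrete, here is a minimal sketch that drives ffmpeg (assumed to be built with libx265) from Python. The input name, bitrate, and raster are illustrative; a production distribution encoder exposes far more controls than shown here.

```python
# Sketch: compress a mezzanine input to a user-defined HEVC bitrate with ffmpeg.
# Assumes ffmpeg with libx265 is installed; filenames and numbers are illustrative.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "mezzanine_input.ts",   # mezzanine or uncompressed playout output
        "-c:v", "libx265",            # HEVC video codec
        "-b:v", "3000k",              # user-defined distribution bitrate
        "-vf", "scale=1920:1080",     # 1080p raster
        "-c:a", "aac",                # audio codec (illustrative)
        "hevc_1080p.mp4",
    ],
    check=True,
)
```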
This encoder has two outputs. One is a Dynamic Adaptive Streaming over HTTP (DASH) adaptive bitrate (ABR) stack destined for the internet/CDN. This DASH ABR stack makes use of AWS Elemental MediaTailor, a channel assembly and personalized ad insertion service for video providers to create linear over-the-top (OTT) channels and to monetize those channels or other live and VOD content. The other output is also DASH, but with a single bitrate. This signal is input into an ATSC 3.0 Real-time Object delivery over Unidirectional Transport (ROUTE) server, which aggregates session information from the linear path as well as from the NRT paths at the top of the diagram.
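For the broadband output, registering the DASH origin with MediaTailor might look like the following sketch. The configuration name and URLs are placeholders for your origin and ad decision server; put_playback_configuration is the standard boto3 call for this.

```python
# Sketch: register the DASH ABR stack with AWS Elemental MediaTailor for
# personalized ad insertion. The name and URLs are placeholders.
import boto3

mediatailor = boto3.client("mediatailor")

mediatailor.put_playback_configuration(
    Name="wxyz-atsc3-broadband",
    # Template URL of the ad decision server that returns ad responses.
    AdDecisionServerUrl="https://ads.example.com/vast?channel=wxyz",
    # Origin hosting the DASH ABR stack produced by the distribution encoder.
    VideoContentSourceUrl="https://origin.example.com/wxyz/dash",
)
```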
Although we call the elements at the top of the diagram NRT, transmitting them still requires that the data not overflow the available bandwidth of the overall ATSC 3.0 transmission. Therefore, a data-scheduling process lets the user schedule when they would like NRT data to be transmitted. The scheduler signals to the ROUTE server which files should be delivered and when. The files can be VOD assets that can be stored on end users' devices, or they can be assets that have nothing to do with the video but need to be carried on the ATSC 3.0 signal. The specifications for the output of the ROUTE server are defined by ATSC RP A/351. If you'd like to learn more, the entire recommended practice is available here.
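Here is a minimal sketch of that scheduling constraint: NRT files are admitted into a delivery window only when the real-time services leave enough headroom. All capacities, file names, and sizes are illustrative, and a real scheduler works against the ROUTE server's session signaling rather than returning a list.

```python
# Sketch: admit NRT files into delivery windows without overflowing the
# total ATSC 3.0 payload. All capacities and file sizes are illustrative.

TOTAL_BPS = 25_000_000     # total payload bandwidth of the transmission
REALTIME_BPS = 19_200_000  # reserved for the linear (real-time) services
NRT_BUDGET_BPS = TOTAL_BPS - REALTIME_BPS


def schedule_nrt(files_bits: dict[str, int], window_seconds: int) -> list[str]:
    """Pick which NRT files fit in this window's leftover bandwidth."""
    budget = NRT_BUDGET_BPS * window_seconds
    admitted = []
    for name, size_bits in sorted(files_bits.items(), key=lambda kv: kv[1]):
        if size_bits <= budget:
            admitted.append(name)  # signal this file to the ROUTE server
            budget -= size_bits
    return admitted


# e.g. a VOD asset and a datacasting payload competing for a 60-second window
files = {"vod_promo.mp4": 2_400_000_000, "firmware.bin": 120_000_000}
print(schedule_nrt(files, window_seconds=60))  # -> ['firmware.bin']
```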
The last stop before we leave the cloud is what's known as the ATSC Gateway. This software takes the ROUTE stream and converts it to a studio-to-transmitter link (STL) at a fixed bitrate (although the elements within it are all variable bitrate). The STL is the stream format expected by the exciter at the antenna. Before we transmit the STL, it's helpful to add some stream protection, just as we would for any reliable video stream. Recently, AWS Partners have tested and proven that the STL can be wrapped in the Secure Reliable Transport (SRT) protocol. For those unfamiliar with SRT, it provides encryption and retransmission mechanisms to protect video streams in flight. You can learn more here.
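As a sketch of the SRT wrap, the snippet below launches the srt-live-transmit tool from the open source SRT project to forward the STL (arriving as UDP from the ATSC Gateway) to the transmitter site as an encrypted SRT stream. The port, hostname, and passphrase are placeholders.

```python
# Sketch: wrap the fixed-bitrate STL in SRT for protected transport to the
# transmitter site. Assumes srt-live-transmit from the SRT project is
# installed; addresses and passphrase are placeholders.
import subprocess

subprocess.run(
    [
        "srt-live-transmit",
        "udp://:30000",                               # STL from the ATSC Gateway
        "srt://transmitter.example.com:9000"
        "?passphrase=CHANGE_ME_16CHARS&pbkeylen=16",  # encrypted SRT caller
    ],
    check=True,
)
```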
Finally, recall we promised to get back to the word multiplex. Multiplexing video traditionally involves taking separate elementary streams and combining them into a multiprogram transport stream (MPTS). A popular variant is the statistical multiplex (statmux), which combines multiple channels into a fixed aggregate bitrate while permitting the individual channels within the MPTS to be encoded based on their relative complexity. For example, suppose you wanted to multiplex five channels into a fixed 19.2 Mbps MPTS. An equal split without statmux gives each channel 3.84 Mbps (19.2/5). However, some channels may be more important than others or may have higher visual complexity (such as high motion). A statmux scores each channel's complexity and combines that score with a user-defined importance weighting. Each encoder participating in the statmux is then allocated a proportional share of bits to encode its frames. The sum of the allocations equals the user-defined total bitrate, which depends on the modulation technique used at the exciter. Now consider diagram 2:
Here we see multiple station playouts feeding into the encoder and multiplex. This would typically employ a statmux so the user can improve the quality and weighting of each channel. One thing ATSC 3.0 introduces that traditional statmux systems did not is the ability to carry the statistically rate-controlled streams inside DASH instead of an MPTS. In this respect, the term statmux is probably inaccurate in ATSC 3.0: the "stat" part is still active, but there is no multiplex. Instead, each encoder outputs DASH segments with the statistically encoded video. The sum of the bitrates of the channels still equals the total bits allocated for video on the transmission, but the delivery is DASH. We should also note that ATSC 3.0 provides more options than just DASH; customers are welcome to use a statmuxed MPTS or an MPEG media transport (MMT) instead.
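As a worked version of the arithmetic above, here is a short sketch that splits a fixed total bitrate across channels by complexity and importance weight. The channel names, scores, and weights are invented for illustration; real statistical rate control runs continuously, frame by frame.

```python
# Sketch: statistical rate control across channels. Each channel's share of a
# fixed total is proportional to (complexity x importance weight); the shares
# always sum to the total, whether delivered as an MPTS or as DASH segments.

TOTAL_BPS = 19_200_000  # e.g. the video budget of the transmission

channels = {
    # name: (instantaneous complexity score, user-defined importance weight)
    "news":   (0.9, 1.5),  # high motion, high importance
    "movies": (0.7, 1.0),
    "kids":   (0.4, 1.0),
    "shop":   (0.2, 0.5),
    "guide":  (0.1, 0.5),
}

scores = {name: c * w for name, (c, w) in channels.items()}
total_score = sum(scores.values())
allocation = {name: int(TOTAL_BPS * s / total_score) for name, s in scores.items()}

for name, bps in allocation.items():
    print(f"{name:>6}: {bps / 1e6:.2f} Mbps")
print(f" total: {sum(allocation.values()) / 1e6:.2f} Mbps")  # ~= 19.20
```

Note that the shares always sum to the fixed total, which is exactly what lets the downstream scheduler reason about the STL budget regardless of whether the shares travel as an MPTS or as DASH segments.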
Recall that our ROUTE server and ATSC Gateway have the combined job of scheduling the various elements into the STL. Delivering separate streams, as is the case with DASH, is therefore perfectly acceptable: the elements are scheduled like any of the real-time or NRT pieces. Once more, the sum of the data must never exceed the defined STL/exciter bandwidth. Keep that sum in bounds, and everything stays happy.
So now we have a system that is entirely in the cloud except for the STL receiver and exciter (which can theoretically be combined into one device). We have customers testing this today using both AWS Partners and AWS Media Services, with which you can create digital content and build live and on-demand video workflows. Please get in touch if you’d like to discuss this further or to propose any interoperability testing.