How AWS live streams re:Invent

AWS re:Invent is the largest event of the year for Amazon Web Services and the keynote addresses are the centerpieces of the week-long event. Live streams from re:Invent command a large viewership from around the world and are the highlight of the year for the AWS Event Technology Team. In this blog post, we describe how AWS moved 90% of its live streaming workflow to the cloud to provide a more cost-effective, reliable, and flexible live event for viewers and customers as the number of AWS events grows.

In 2018, the AWS Event Technology Team used a new rate-control tool from AWS Elemental, Quality-Defined Variable Bitrate (QVBR), to deliver live video streaming of re:Invent sessions to a worldwide audience. QVBR enabled the team to reduce its streaming budget by more than 20 percent compared to prior years. Building on that success, for 2019 we re-architected the workflow to include AWS Elemental MediaConnect for contribution, AWS Elemental MediaLive for encoding, and AWS Elemental MediaPackage for video packaging. These services enabled us to deliver content to more destinations from the same contribution stream and to scale dynamically with viewership demand, more efficiently than the previous on-premises workflow used for AWS live event coverage. For AWS re:Invent 2019, AWS Media Services performed the majority of the video processing in the cloud, with only redundant contribution encoders remaining on-premises.

A shot from the live stream of Andy Jassy, CEO of Amazon Web Services, delivering the 2019 re:Invent keynote address in Las Vegas.

During AWS re:Invent 2019, live streams were delivered using AWS Elemental encoders on-premises together with AWS Media Services, the Amazon CloudFront CDN, AWS Lambda serverless compute, and Amazon Simple Storage Service (Amazon S3). This allowed for even more cost savings by eliminating the need to send multiple renditions from the event venue, while also building complete redundancy into the workflow for caption insertion, adaptive bitrate (ABR) creation, and content delivery.

We began with a step-by-step plan, then introduced one workflow segment at a time, testing each element. To prepare, we first applied the cloud workflow created for a mainstage keynote, “Sports Broadcasting With The Cloud”, which was produced and distributed at the 2019 NAB Show. Once that version of the workflow was set up, we were able to fine-tune it and apply additional tools, such as our HLS regional switcher and caption insertion solution. After we completed all testing, we combined the workflow segments and prepared the solution for a target test during an actual AWS event: the AWS Summit held in Sydney, Australia in April 2019. There we conducted the first parallel test of the new workflow and the previous workflow, applying load to the new workflow’s endpoints in real time to simulate viewership. While we made some minor adjustments during that test, it was an overall successful use of the new architecture.

After the target test concluded, we found that the new workflow reduced on-premises costs. Previously, we generated the HLS stack on-premises using AWS Elemental hardware encoders. Using AWS Media Services, we instead sent two large mezzanine streams into AWS Elemental MediaConnect in two different AWS regions for redundant ingest. As a result, we could transport live video to the cloud more reliably and at a lower cost, because we no longer needed large ISP connections at the venue.
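While the post does not include the exact contribution settings, the ingest step can be sketched with the AWS SDK for Python (boto3). The flow names, protocol, port, and CIDR below are hypothetical placeholders; the idea is simply one MediaConnect flow per region, each receiving a mezzanine stream pushed from the venue.

```python
import boto3

# Hypothetical regions; substitute the two regions used for redundant ingest.
REGIONS = ["us-west-2", "us-east-1"]

def create_contribution_flow(region: str) -> str:
    """Create a MediaConnect flow that accepts a mezzanine stream
    pushed from the on-premises contribution encoder."""
    mediaconnect = boto3.client("mediaconnect", region_name=region)
    response = mediaconnect.create_flow(
        Name=f"reinvent-contribution-{region}",
        Source={
            "Name": "venue-mezzanine",
            "Protocol": "zixi-push",             # reliable transport from the venue
            "IngestPort": 2088,
            "WhitelistCidr": "203.0.113.10/32",  # contribution encoder's public IP (placeholder)
        },
    )
    return response["Flow"]["FlowArn"]

# One flow per region gives two independent ingest paths into the cloud.
flow_arns = {region: create_contribution_flow(region) for region in REGIONS}
```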

Once the streams were in AWS Elemental MediaConnect, we pushed them to two different AWS Elemental MediaLive deployments in their respective regions. This allowed us to easily duplicate the primary workflow in about 30 minutes. The first AWS Elemental MediaLive deployment created a high-quality RTMP stream that was sent to our captioning vendor to generate captions for the stream. The second AWS Elemental MediaLive deployment created our HLS adaptive bitrate stack from the RTMP stream returned by the captioning vendor with embedded captions, along with a clean, non-captioned source as a backup.
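A minimal sketch of how those sources might be registered as MediaLive inputs, again using boto3. The flow ARNs, role ARN, and vendor URL are placeholders, and the exact flow-to-channel mapping is not described in this post; as a general rule, a standard (dual-pipeline) channel expects two MediaConnect flows for its input, one per pipeline.

```python
import boto3

medialive = boto3.client("medialive", region_name="us-west-2")

# Input fed by MediaConnect; a STANDARD (dual-pipeline) channel expects one flow per pipeline.
mezzanine_input = medialive.create_input(
    Name="reinvent-mezzanine",
    Type="MEDIACONNECT",
    MediaConnectFlows=[
        {"FlowArn": "arn:aws:mediaconnect:us-west-2:111122223333:flow:1-AAAA-BBBB:reinvent-a"},
        {"FlowArn": "arn:aws:mediaconnect:us-west-2:111122223333:flow:1-CCCC-DDDD:reinvent-b"},
    ],
    RoleArn="arn:aws:iam::111122223333:role/MediaLiveAccessRole",
)

# Return feed from the captioning vendor with embedded captions (placeholder URL).
captioned_input = medialive.create_input(
    Name="captioning-vendor-return",
    Type="RTMP_PULL",
    Sources=[{"Url": "rtmp://captions.example.com/live/reinvent"}],
)
```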

From there, we were able to cross-connect the two regions at the AWS Elemental MediaConnect level so we could switch sources in the cloud if there were any errors at the encoder or ISP level. This let us switch between sources at the transcoder, allowing for a speedy change with minimal impact to the viewer if a switch were needed. It is important to remember that once your video is in the AWS Cloud, you are really only limited by your imagination (or budget) as to what you can do with it.
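The post does not show how a switch was triggered, but one way to change sources at the transcoder is MediaLive's schedule API, which can perform an immediate input switch on a running channel. The channel ID and input attachment name below are hypothetical.

```python
import boto3

medialive = boto3.client("medialive", region_name="us-west-2")

# Immediately switch the running channel to the backup input attachment.
# "backup-clean-source" is a hypothetical input attachment name defined on the channel.
medialive.batch_update_schedule(
    ChannelId="1234567",
    Creates={
        "ScheduleActions": [
            {
                "ActionName": "failover-to-backup",
                "ScheduleActionStartSettings": {
                    "ImmediateModeScheduleActionStartSettings": {}
                },
                "ScheduleActionSettings": {
                    "InputSwitchSettings": {
                        "InputAttachmentNameReference": "backup-clean-source"
                    }
                },
            }
        ]
    },
)
```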

To create a fully redundant workflow with high availability, we used an AWS Elemental MediaLive standard deployment, which includes a redundant pipeline architecture. This allows the workflow to be spread across multiple AWS Availability Zones (AZs) for stream reliability and uptime. Both pipelines then egress the ABR stack simultaneously to their respective AWS Elemental MediaPackage deployment in that region. MediaPackage also has a dual-pipeline architecture, so you get the same redundancy built into it as you do with MediaLive. This ensures stream reliability across the workflow. If you are streaming a lower-priority show and want to save up to 40 percent on cost, you can use the AWS Elemental MediaLive single-pipeline configuration instead.
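The redundancy level is set by the MediaLive channel class: you pass ChannelClass="STANDARD" or "SINGLE_PIPELINE" when creating the channel. As a shorter illustration, the sketch below converts an existing, stopped standard channel to single-pipeline to reduce cost for a lower-priority show. The channel ID is a placeholder.

```python
import boto3

medialive = boto3.client("medialive", region_name="us-west-2")

# Convert an idle STANDARD (dual-pipeline) channel to SINGLE_PIPELINE to cut
# running cost. "1234567" is a placeholder channel ID; the channel must be
# stopped before its class can be changed.
medialive.update_channel_class(
    ChannelId="1234567",
    ChannelClass="SINGLE_PIPELINE",
)
```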

To reduce latency in the transcoding step, we delivered the content to MediaPackage in two-second segments. That is a little too aggressive for most HLS video players, so once the content was in AWS Elemental MediaPackage, we repackaged it into more manageable six-second segments. This allows for more reliable delivery to viewers.
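A sketch of that packaging step with boto3. The channel and endpoint IDs are placeholders and the playlist window is an assumed value; the key setting is SegmentDurationSeconds, which has MediaPackage repackage the two-second ingest segments into six-second segments for delivery.

```python
import boto3

mediapackage = boto3.client("mediapackage", region_name="us-west-2")

# Channel that receives the two-second HLS segments from both MediaLive pipelines.
channel = mediapackage.create_channel(
    Id="reinvent-keynote",
    Description="Dual-pipeline ingest from MediaLive",
)

# Origin endpoint that repackages the stream into six-second segments for playback.
endpoint = mediapackage.create_origin_endpoint(
    ChannelId="reinvent-keynote",
    Id="reinvent-keynote-hls",
    HlsPackage={
        "SegmentDurationSeconds": 6,   # repackage 2 s ingest into 6 s segments
        "PlaylistWindowSeconds": 60,   # assumed live window
    },
)

# The endpoint URL becomes the CloudFront origin for viewers.
print(endpoint["Url"])
```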

Live event streaming workflow with redundant pipeline architecture

Encoding and packaging are only half of this solution. The other half of the workflow involves an open-source project called Clustered Video Streams. This nifty solution allows us to switch between HLS streams in two different regions with minimal impact to the viewer, giving us a fully redundant workflow from encoding to player. Refer to the GitHub project for more information about Clustered Video Streams.

Remember the lack of limitations that media workflows in the cloud provide? With AWS Media Services, we can send as many outputs as needed from the transcoding instance. With traditional on-premises approaches, those additional outputs may not have been possible due to hardware and connectivity constraints. AWS Elemental MediaConnect for live video transport lets you send Zixi push or pull streams, RIST streams, and RTP with FEC directly from the deployment. This provides the benefit of a cloud-based conversion device without adding a transcoding step. You can also grant entitlements to receivers and have their accounts billed for the egress costs. With MediaLive encoding, we send out both an HLS transcoded ABR stack and an RTMP stream to our Twitch channel, all without worrying about capacity or connectivity limitations.
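Those fan-out and entitlement capabilities map to a couple of MediaConnect API calls. The flow ARN, destination IP, and subscriber account ID below are placeholders; the entitlement's DataTransferSubscriberFeePercent is what shifts the egress cost to the receiving account.

```python
import boto3

mediaconnect = boto3.client("mediaconnect", region_name="us-west-2")
flow_arn = "arn:aws:mediaconnect:us-west-2:111122223333:flow:1-AAAA-BBBB:reinvent-contribution"

# Push an additional Zixi output to a downstream partner straight from the flow.
mediaconnect.add_flow_outputs(
    FlowArn=flow_arn,
    Outputs=[
        {
            "Name": "partner-zixi-push",
            "Protocol": "zixi-push",
            "Destination": "198.51.100.25",   # partner's receiver (placeholder IP)
            "Port": 2088,
        }
    ],
)

# Grant an entitlement so another AWS account can subscribe to the flow
# and pay for its own egress.
mediaconnect.grant_flow_entitlements(
    FlowArn=flow_arn,
    Entitlements=[
        {
            "Name": "partner-entitlement",
            "Subscribers": ["444455556666"],          # partner AWS account ID (placeholder)
            "DataTransferSubscriberFeePercent": 100,  # bill all egress to the subscriber
        }
    ],
)
```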

The almost entirely cloud-based workflow with AWS Media Services was used in full production for the first time in July 2019 for AWS re:Inforce in Boston, Massachusetts. In October 2019, after more testing, we supported a successful proof of concept with the National Aeronautics and Space Administration (NASA) that marked the first use of cloud resources for data storage and video origination for streaming from space. Each of these milestones proved our workflow could handle rigorous live event demands, including the types of events that AWS produces. If it can deliver live interviews from the International Space Station, then it works for our biggest events.