AWS for M&E Blog

How AWS streamed re:Invent 2020

re:Invent is the largest event of the year for both Amazon Web Services (AWS) and those of us on the AWS Event Technology Team. Live streams of the leadership sessions boast roughly 500% more viewership than other AWS events. In 2020, due to the impacts of COVID-19, re:Invent pivoted from an in-person event to a virtual one for the first time ever. This placed an even greater emphasis on the live streaming services and hardware needed to ensure the reliable delivery of content at scale. In this blog post, I explore how AWS used an updated cloud workflow to deliver the 2020 event, covering the onsite encoding and architecture that made it possible for AWS to deliver live coverage of all the re:Invent 2020 content.

 

In 2019, I worked with the AWS Event Technology and AWS Elemental teams to migrate the entire re:Invent content delivery workflow from legacy solutions to AWS Media Services. It was a year-long project that included building new workflows, testing, shadowing operations, and finally switching completely to the new solution for the 2019 event. That migration allowed the AWS Event Technology team to take advantage of the benefits of AWS Media Services, scale to meet delivery needs, reassess the content delivery plan, and explore new features and outputs like cloud-managed closed captioning and automated failover for the AWS re:Invent team.

Along with the decision to deliver a 100% virtual event, there were additional content changes, like expanding the event to 10 days of programming spread over three weeks. This meant streaming more content than ever, including adding first-time streams for the Leadership Series and Executive Summit. The other major change was the requirement to replay previously live elements of the show in a follow-the-sun delivery model. Due to these changes and the overall scope of work, I had to adjust how I approached this event. I decided to look at it more like a large, global sporting event with 24-hour content distribution for three weeks, rather than the five-day show of years past. This led to onboarding a new content delivery platform, using multiple studios and production control rooms, running a broadcast control operation to manage the inbound and outbound signals, and adding closed captioning on the fly. All of this combined enabled a truly global team to work together to deliver this event.


A screenshot from AWS CEO, Andy Jassy as he presented on stage for the 2020 re:Invent Keynote presentation.

re:Invent Keynotes are usually hosted at the Sands Expo Center in Las Vegas, so the first change was venue, with the 2020 keynotes streaming from Seattle. This meant we needed a new operating plan and had to rebuild the entire process of show delivery. As Andy Jassy frequently mentions, “The key to reinvention is a combination of building the right reinvention culture and then knowing what technology is available to you to make that reinvention change and use it.” The culture of change at AWS, and our adoption of new technology in 2019, enabled us to pivot to this new format. With the help of multiple teams across the company, we put together three main production studios, with three production control rooms in different locations around the Amazon campus. Studio 1 was our main Keynote hall, Studio 2 was used for Leadership Sessions and other presentations, and Studio 3 was used for public relations, analyst relations, press events, and Executive Summit production. All three of these studios and control rooms fed content to our cloud broadcast control system, provided by Corrivium.

Due to safety protocols, and the need for our global team to manage content delivery over an entire 24-hour period, the decision was made to work with our vendor, Corrivium, to build a cloud-based broadcast control system using the AWS Media Services and additional vendor software. This allowed us to maximize flexibility, prioritize the safety of personnel and our presenters, and have teams back each other up from remote locations. With the cloud-based broadcast control in place, we could then send content to the cloud.

If you’ve followed my journey of blog posts on delivering re:Invent over the last four years (see the 2018 and 2019 posts), you can see that I’ve worked with our teams to relentlessly innovate, create a better workflow, and improve resiliency so we can raise the bar on your viewing experience. Over that time, not much changed onsite: I used our AWS Elemental Live appliance encoders to create a high-quality mezzanine stream to contribute into our cloud environment. While these are great encoders (especially for 4K content), we had to work within our existing, smaller production spaces in Seattle. This made it a perfect use case for AWS Elemental Link, a contribution encoder that is quiet and has low power consumption.


A look at the limited amount of space we had to work with and how the Link devices fit.

AWS Elemental Link, launched in May of 2020, is a small device that connects live video sources (via SDI or HDMI input) and sends them directly and securely to AWS Elemental MediaLive with easy-to-deploy, plug-and-play connectivity so you can start streaming quickly. We used 10 AWS Elemental Link devices across our campus to act as pipelines, delivering mezzanine-quality streams to the AWS Cloud for routing, monitoring, transcoding, and delivery. My strategy was to place two Link devices at each video source, with each one streaming through a different ISP for redundancy. When you couple two Link devices with a Standard AWS Elemental MediaLive channel, you have a redundant input solution that uses the automated failover and recovery features in AWS Elemental MediaLive.
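
As a rough illustration (not our exact configuration), the sketch below shows how two Link devices can be registered as MediaLive inputs with the AWS SDK for Python (boto3). The Region, input names, and hd-… device IDs are placeholders.

```python
import boto3

medialive = boto3.client("medialive", region_name="us-west-2")

# Each AWS Elemental Link device shows up in MediaLive as an input device.
# The hd-... IDs below are placeholders for the device IDs from the console.
link_device_ids = ["hd-0000000000000001", "hd-0000000000000002"]

link_input_ids = []
for i, device_id in enumerate(link_device_ids, start=1):
    response = medialive.create_input(
        Name=f"keynote-link-{i}",          # placeholder input name
        Type="INPUT_DEVICE",               # input type used for Link devices
        InputDevices=[{"Id": device_id}],
    )
    link_input_ids.append(response["Input"]["Id"])

# Both inputs are then attached to a single STANDARD (dual-pipeline)
# MediaLive channel, which is what enables automatic input failover.
print(link_input_ids)
```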

Once a stream was delivered to our cloud environment, we sent that output to our cloud-based routing solution built on AWS Elemental MediaConnect, where we routed streams to different delivery paths. These delivery paths included a proxy stream for monitoring, live closed captioning creation and insertion, transcoding, and finally, packaging for delivery.
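
To give a simplified sense of that routing (our actual flow topology was more involved), the sketch below adds outputs to an existing MediaConnect flow to fan a contribution stream out to monitoring, captioning, and transcoding paths. The flow ARN, destination addresses, and ports are all placeholders.

```python
import boto3

mediaconnect = boto3.client("mediaconnect", region_name="us-west-2")

# Placeholder ARN for the contribution flow carrying the mezzanine stream.
flow_arn = (
    "arn:aws:mediaconnect:us-west-2:111122223333:flow:1-EXAMPLE:contribution"
)

# Fan the stream out to separate delivery paths: a proxy for monitoring,
# the captioning service, and the transcoding channel.
mediaconnect.add_flow_outputs(
    FlowArn=flow_arn,
    Outputs=[
        {"Name": "proxy-monitoring", "Protocol": "rtp-fec",
         "Destination": "198.51.100.10", "Port": 5000},
        {"Name": "captioning-path", "Protocol": "rtp-fec",
         "Destination": "198.51.100.20", "Port": 5002},
        {"Name": "transcode-path", "Protocol": "rtp-fec",
         "Destination": "198.51.100.30", "Port": 5004},
    ],
)
```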


A high-level overview of the live and replay streaming architecture used to deliver re:Invent 2020.

We also deployed this solution in two Regions to ensure redundancy and failover, with rules in place to allow the streams at the video player to fail over to a backup Region if an issue occurred in the primary Region. Within each Region, we used MediaConnect to route inbound signals to different destinations. In some MediaLive deployments, we used the input switching option to route streams, similar to a traditional on-premises video router. Most of the time we used MediaLive’s Standard Channel configuration, since the two Link units worked together, across multiple ISPs, to feed our MediaLive channels redundantly. The Standard Channel configuration in MediaLive provides automated input switching in the event of an input loss or failure, and lets you prioritize the inputs. For example, if the primary Link encoder were to lose connectivity, MediaLive switches to the second Link encoder feed. When the primary Link recovers and reconnects, MediaLive switches back to the primary feed because we told it to do so in the prioritization settings.
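
A minimal sketch of that prioritization, assuming the two Link inputs registered earlier: the InputAttachments below pair a primary and a secondary input with PRIMARY_INPUT_PREFERRED so MediaLive returns to the primary feed once it recovers. The input IDs and timing values are illustrative, not our production settings.

```python
# Placeholder input IDs for the two Link inputs attached to the channel.
primary_input_id = "1234567"
secondary_input_id = "7654321"

# Illustrative InputAttachments for a STANDARD MediaLive channel. With
# PRIMARY_INPUT_PREFERRED, MediaLive fails over to the secondary input on
# input loss and switches back once the primary input recovers.
input_attachments = [
    {
        "InputId": primary_input_id,
        "InputAttachmentName": "link-primary",
        "AutomaticInputFailoverSettings": {
            "SecondaryInputId": secondary_input_id,
            "InputPreference": "PRIMARY_INPUT_PREFERRED",
            "ErrorClearTimeMsec": 2000,  # example value
            "FailoverConditions": [
                {"FailoverConditionSettings": {
                    "InputLossSettings": {"InputLossThresholdMsec": 3000}}},
            ],
        },
    },
    {
        "InputId": secondary_input_id,
        "InputAttachmentName": "link-secondary",
    },
]

# input_attachments is passed to medialive.create_channel(...,
# ChannelClass="STANDARD", InputAttachments=input_attachments, ...).
```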

Another use of this feature was for the AWS ON AIR broadcast team. The AWS ON AIR content originated from different locations around the world, but had to stream to the same output channel. We set them up with a primary and secondary configuration in MediaLive so when they streamed to the primary input and turned off their show in one location, the secondary input was streamed in its place. The secondary input in this case was an mp4 file stored on Amazon Simple Storage Service (Amazon S3) for MediaLive to loop. With the prioritization rule in place, when the primary stream returned, MediaLive switched back to that feed to deliver their show.
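
The snippet below sketches just the looping file side of that setup: an MP4 input sourced from Amazon S3 with SourceEndBehavior set to LOOP. The bucket, file, and input names are placeholders, and how the file input is paired with the live input follows the channel design described above.

```python
import boto3

medialive = boto3.client("medialive", region_name="us-west-2")

# Placeholder bucket and file name; MediaLive pulls MP4 file inputs from
# Amazon S3 using the s3ssl:// scheme.
slate_input = medialive.create_input(
    Name="onair-holding-loop",
    Type="MP4_FILE",
    Sources=[{"Url": "s3ssl://example-bucket/onair-holding-loop.mp4"}],
)

# Attached to the channel as the lower-priority source, the file plays on
# repeat because SourceEndBehavior is set to LOOP.
slate_attachment = {
    "InputId": slate_input["Input"]["Id"],
    "InputAttachmentName": "onair-slate",
    "InputSettings": {"SourceEndBehavior": "LOOP"},
}
```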


A screenshot of the Automatic Input Failover Settings in MediaLive.

MediaLive allows you to set failover conditions based not only on input loss, but also on “video black detection” and “audio loss detection”. These conditions let you lean on the automation built into the service for “always on” delivery.
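
As an illustration of those conditions, here is how input loss, video black, and audio silence checks can be expressed in the FailoverConditions portion of a channel’s AutomaticInputFailoverSettings; the thresholds are example values, not the ones we ran with.

```python
# Example FailoverConditions for a channel's AutomaticInputFailoverSettings.
failover_conditions = [
    # Fail over if the input disappears for more than 3 seconds.
    {"FailoverConditionSettings": {
        "InputLossSettings": {"InputLossThresholdMsec": 3000}}},
    # Fail over if the picture is black for more than 5 seconds.
    {"FailoverConditionSettings": {
        "VideoBlackSettings": {
            "BlackDetectThreshold": 0.0,
            "VideoBlackThresholdMsec": 5000}}},
    # Fail over if the selected audio is silent for more than 5 seconds.
    {"FailoverConditionSettings": {
        "AudioSilenceSettings": {
            "AudioSelectorName": "default",
            "AudioSilenceThresholdMsec": 5000}}},
]
```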

Once content was ingested and routed, it then went to our cloud-based closed captioning solution, where human- and ML-generated captions were inserted into the streams and sent back to the receiving MediaLive deployment. In that deployment, we again used MediaLive’s input failover solution and assigned the captioned stream as the primary and the non-captioned stream as the backup. This was a great addition to our workflow and allowed us to automate input source switching based on the parameters we assigned to the conditions. In the transcoding deployment, we used an adaptive bitrate (ABR) configuration with seven bitrates and a final stream configuration for delivery to AWS Elemental MediaPackage, which handled packaging, video-on-demand archive window settings, and distribution to make the stream available to the video players via the Amazon CloudFront CDN.
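
For a sense of what a seven-rendition ladder looks like, here is an illustrative example (not the actual re:Invent encoder settings); in practice each entry maps to a video description and output in the MediaLive channel configuration.

```python
# An illustrative seven-rendition ABR ladder. Resolutions and bitrates are
# example values only.
abr_ladder = [
    {"name": "1080p",   "width": 1920, "height": 1080, "bitrate": 6_000_000},
    {"name": "720p-hi", "width": 1280, "height": 720,  "bitrate": 4_000_000},
    {"name": "720p",    "width": 1280, "height": 720,  "bitrate": 2_500_000},
    {"name": "540p",    "width": 960,  "height": 540,  "bitrate": 1_600_000},
    {"name": "432p",    "width": 768,  "height": 432,  "bitrate": 1_000_000},
    {"name": "360p",    "width": 640,  "height": 360,  "bitrate": 600_000},
    {"name": "234p",    "width": 416,  "height": 234,  "bitrate": 300_000},
]
```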

In the last step of the workflow, we used MediaPackage to provide just-in-time packaging via HLS with 6-second segments. We also used MediaPackage’s “Live Playlist Window Duration” tool to assign a DVR window in the stream so viewers could scrub back and rewatch or listen to a segment within a 30-minute window. When used by a player with a DVR function, the keep window can be set to a wide array of durations for the same style of replay capability. Note that you need to request a limit increase in your account for a duration longer than 300 seconds.
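
A hedged sketch of that packaging step with boto3: the endpoint below uses 6-second HLS segments and a 1,800-second playlist window for the 30-minute DVR behavior. The channel and endpoint IDs are placeholders, and as noted above, windows longer than 300 seconds require a quota increase.

```python
import boto3

mediapackage = boto3.client("mediapackage", region_name="us-west-2")

# Placeholder channel and endpoint IDs. The 1,800-second playlist window
# provides the 30-minute DVR-style scrub-back described above.
mediapackage.create_origin_endpoint(
    ChannelId="reinvent-live",
    Id="reinvent-live-hls",
    ManifestName="index",
    HlsPackage={
        "SegmentDurationSeconds": 6,     # just-in-time HLS packaging
        "PlaylistWindowSeconds": 1800,   # 30-minute live playlist window
        "PlaylistType": "EVENT",
    },
)
```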

A screenshot of where to adjust the DVR window settings in MediaPackage.


Finally, once routed through CloudFront, we delivered our live and replay content to the specified media players on the video platform.

Overall, re:Invent 2020 was a huge success. The step-by-step migration in 2019 put our team in a great position to iterate on the foundation we built and continue to reinvent how we deliver engaging and informative content to you, our customers, using our own AWS Media Services and AWS Elemental Link devices. With these cloud tools and the expertise of our product teams, we were able to create a globally managed content delivery solution from scratch in only a few months.

To learn more about AWS Elemental Link devices and to place an order, visit the product page. If you already have a Link device and want to get started streaming with just a few clicks, check out the Live Streaming on AWS with MediaStore solution.