AWS for M&E Blog

The new world of watching machine learning-enhanced streaming sports

Gone are the days when watching sports on television meant tuning in to the single broadcaster that owned the rights to televise an event. Several sports-specific OTT offerings have launched in the past year as broadcasters realize that sports are the last bastion of streaming video for which viewers are willing to pay a premium. Indeed, 63% of sports fans say they would pay for an OTT subscription, and 50% watch supplemental sports programming [i]. However, unless you lock in the rights to stream an event on the scale of the Super Bowl, how do you differentiate yourself to attract and engage viewers? The answer could be machine learning. In this blog post, we’ll explore how machine learning might just be the tool you didn’t know you needed.

Increase Personalization and Accessibility

At its most basic, machine learning can be used to automate manual processes to streamline a workflow, ensure redundancy, and lower costs. A great everyday example is captioning and translation. Today’s speech-to-text workflows typically involve a third-party service provider creating the captions that accompany a live event. A service such as Amazon Transcribe, with its streaming transcription, can do the same not only at a fraction of the cost but with the resiliency and scalability of the cloud. Once the captions are available, translating them into multiple languages is equally simple with Amazon Translate, and a text-to-speech service such as Amazon Polly can then produce foreign-language audio tracks, all in real time. So viewers watching the Olympics in Korea, Russia, Germany, France, and Spain no longer have to wait for a translated broadcast; they can watch the simulcast in their native language.
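To make this concrete, here is a minimal Python (boto3) sketch of the translation and voicing steps, assuming a caption segment has already been produced by the live transcription step. The language-to-voice mapping, the caption text, and writing MP3 files to local disk are illustrative only; a production workflow would hand the audio to a mixer or packager.

import boto3

translate = boto3.client("translate")
polly = boto3.client("polly")

# Illustrative mapping of Amazon Translate target-language codes to Amazon Polly voices.
VOICES = {"ko": "Seoyeon", "ru": "Tatyana", "de": "Vicki", "fr": "Lea", "es": "Lucia"}

def localize_caption(caption_text, source_lang="en"):
    """Translate one caption segment and synthesize a matching audio snippet
    for each target language."""
    for lang, voice in VOICES.items():
        translated = translate.translate_text(
            Text=caption_text,
            SourceLanguageCode=source_lang,
            TargetLanguageCode=lang,
        )["TranslatedText"]

        speech = polly.synthesize_speech(
            Text=translated, OutputFormat="mp3", VoiceId=voice
        )
        # AudioStream holds the synthesized MP3 bytes.
        with open(f"caption_{lang}.mp3", "wb") as f:
            f.write(speech["AudioStream"].read())

# Example: a caption segment emitted during a live event.
localize_caption("And the home team takes the lead with two minutes to play!")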

Get Your Fan Fever On

For the 2018 royal wedding, the UK broadcaster Sky News launched a “Who’s Who” feature, powered by Amazon Rekognition, to highlight celebrities as they appeared on screen and present details about their connections to the royal family. Similarly, the Media Analysis solution – which combines Amazon Rekognition with AWS Elemental MediaConvert, Amazon Transcribe, and Amazon Comprehend – trains Rekognition on uploaded player images and uses facial recognition to identify and track players. Better yet, because the analysis records every point in a video at which a player appears, sports fans can jump straight to the moments featuring their favorite player.

Shown by permission from NEP Switzerland and VCS Switzerland

Tracking shots of players can surface updated player or game stats so fans are better informed and more engaged.
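If you want to experiment with this pattern, the sketch below shows the core Amazon Rekognition calls, assuming player headshots and game footage already sit in an S3 bucket. The bucket name, collection name, player ID, and object keys are placeholders, and pagination and error handling are omitted for brevity.

import time
import boto3

rekognition = boto3.client("rekognition")

BUCKET = "my-sports-media"     # placeholder bucket
COLLECTION = "player-faces"    # placeholder face collection

def index_players(headshots):
    """headshots: mapping of player ID -> S3 key of a reference image."""
    rekognition.create_collection(CollectionId=COLLECTION)
    for player_id, key in headshots.items():
        rekognition.index_faces(
            CollectionId=COLLECTION,
            Image={"S3Object": {"Bucket": BUCKET, "Name": key}},
            ExternalImageId=player_id,
        )

def find_player_appearances(video_key):
    """Return (timestamp_ms, player_id) pairs for every face match in the video."""
    job = rekognition.start_face_search(
        Video={"S3Object": {"Bucket": BUCKET, "Name": video_key}},
        CollectionId=COLLECTION,
    )
    while True:
        result = rekognition.get_face_search(JobId=job["JobId"])
        if result["JobStatus"] != "IN_PROGRESS":
            break
        time.sleep(10)

    return [
        (person["Timestamp"], match["Face"]["ExternalImageId"])
        for person in result.get("Persons", [])
        for match in person.get("FaceMatches", [])
    ]

index_players({"Player_23": "headshots/player23.jpg"})
print(find_player_appearances("games/week1.mp4"))

Each returned timestamp can become a seek point in the video player, which is exactly what a “jump to my favorite player” feature needs.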

Advanced Highlight Reel Creation

Now imagine automating, in real time, the compilation of the most interesting highlights for a post-game review. In traditional broadcast workflows, compilations require a staff member to mark the start and end points of a clip, cut the clip, store it in a repository, and then tag it with appropriate metadata so that it is available through a content management system. Machine learning can automate all of that. For example, highlight reels mostly consist of clips of touchdowns, runs, or baskets. By using Amazon Rekognition to detect changes on the scoreboard, you can identify those key scoring moments and then clip the corresponding video segments live from different camera angles, so that a viewer can later watch the play from any angle.
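One way to prototype that scoreboard-driven clipping, sketched below in Python, is to sample frames from a cropped scoreboard region and run Amazon Rekognition text detection on them; whenever the detected score changes, you record a timestamp to hand to your clipping step. The frame sampling and the actual clipping (for example, via an AWS Elemental service or your playout system) are assumed to live elsewhere in the pipeline.

import re
import boto3

rekognition = boto3.client("rekognition")

def read_score(frame_bytes):
    """Run text detection on a cropped scoreboard image and return the
    numbers it contains, e.g. ('21', '17')."""
    response = rekognition.detect_text(Image={"Bytes": frame_bytes})
    words = [
        d["DetectedText"]
        for d in response["TextDetections"]
        if d["Type"] == "WORD" and d["Confidence"] > 90
    ]
    return tuple(w for w in words if re.fullmatch(r"\d{1,3}", w))

def find_scoring_moments(frames):
    """frames: iterable of (timestamp_seconds, scoreboard_jpeg_bytes) samples.
    Yields the timestamps at which the score changed, i.e. candidate clip markers."""
    previous = None
    for timestamp, frame_bytes in frames:
        score = read_score(frame_bytes)
        if previous is not None and score and score != previous:
            yield timestamp
        previous = score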

Deliver the Stats

For those who want to geek out on stats, both the NFL and MLB have deployed AWS machine learning services to give their fans deeper insight into their games. NFL player tracking, known as Next Gen Stats, uses RFID tags to capture real-time location, speed, and acceleration for every player, on every play, across every inch of the field. With the computational assistance of Amazon SageMaker, Next Gen Stats turns that raw tracking data into the statistics reported to fans. In the future, Next Gen Stats will leverage machine learning to predict offensive and defensive formations, routes, and key events in games.
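As a simple illustration of how raw tracking samples turn into stats like these, the snippet below derives speed and acceleration from a series of timestamped player positions. This is not the NFL's implementation, just the basic arithmetic behind that class of metric.

import math

def derive_speed_and_acceleration(samples):
    """samples: list of (t_seconds, x_yards, y_yards) points for one player,
    e.g. from high-frequency RFID tracking. Returns (t, speed, acceleration)
    tuples in yards/s and yards/s^2."""
    stats = []
    prev_speed = None
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        speed = math.hypot(x1 - x0, y1 - y0) / dt   # distance covered over time
        accel = (speed - prev_speed) / dt if prev_speed is not None else 0.0
        stats.append((t1, speed, accel))
        prev_speed = speed
    return stats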

Similarly, MLBAM (MLB Advanced Media) Statcast uses machine vision and radar to track player and ball movement, giving fans, broadcasters, and teams a better understanding of the dynamics and athleticism of Major League Baseball in real time. MLBAM estimates that it produces about 7 terabytes of data per game and 17 petabytes of data over a single season [ii].

Put on Your Machine Learning Game Face

Artificial intelligence applied to sports could run the gamut from interest-based, real-time content recommendations – such as those based on your favorite team or fantasy football league – to live VR-enhanced platforms such as LiveLike, which puts you in a virtual stadium with your friends. The common thread in most of these solutions is machine learning. When used to enhance the viewing experience, machine learning can give live sports broadcasters a competitive edge in engaging their audiences without incurring significant cost or modification to existing broadcast workflows.
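For the recommendation piece specifically, a service such as Amazon Personalize can serve per-viewer suggestions at request time. The sketch below assumes a Personalize campaign has already been trained and deployed; the campaign ARN and user ID are placeholders.

import boto3

personalize_runtime = boto3.client("personalize-runtime")

def recommend_clips(user_id, num_results=10):
    """Fetch personalized content recommendations (for example, highlight clips)
    for a signed-in viewer from a deployed Amazon Personalize campaign."""
    response = personalize_runtime.get_recommendations(
        campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/sports-clips",  # placeholder ARN
        userId=user_id,
        numResults=num_results,
    )
    return [item["itemId"] for item in response["itemList"]]

print(recommend_clips("viewer-42"))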

To learn more about how to use the AWS cloud to deliver great fan experiences, download our latest eBook today or visit the live sports application page.

——————————————

[i] https://www.prnewswire.com/news-releases/usc-annenbergthepostgame-genz-and-younger-millennial-sports-fans-are-driving-fundamental-changes-in-programming-platforms-and-purchasing-300271545.html
[ii] https://pages.awscloud.com/rs/112-TZM-766/images/DataDrivenMedia_eBrief_March2017_Final.pdf?sc_channel=em&sc_campaign=himss_2017&sc_publisher=aws&sc_medium=em_&sc_content=event_ev_field&sc_country=us&sc_geo=namer&sc_outcome=event&trk=em_