AWS for M&E Blog

Using ML in the media supply chain to optimize content creation: How to improve efficiency for operations teams and automate workflows with ML metadata

Co-authored with Rose Sponder, Director of Marketing at SDVI

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post. 

The Challenge:

Use ML to assess and validate incoming content while helping manage re-versioning and distribution tasks.

Machine learning (ML) and its applications in the media industry are getting a lot of attention. Identification of objects and themes in archives, detection of objectionable or rights-restricted material, and identification of poor-quality content are frequently mentioned use cases. ML can also enrich content descriptors across entire content catalogs, giving consumers more sophisticated search options in their VOD experience.

So where do we start? How do we begin to handle newly gleaned metadata from content?  And how do we manage it in a way that truly helps a media organization do more?

Utilizing ML to speed up distribution

One global leader in Media & Entertainment serves content to fans across multiple screens and platforms, in 50 languages and more than 220 countries and territories. Programs are thoughtfully customized for viewers to meet the different cultural, technical and legal compliance standards of each territory. This level of customization results in hundreds of distribution versions for a single program. Multiply that by thousands of hours of original programming per year and you have an expensive challenge.

Already long-time users of the SDVI Rally™ Media Supply Chain Platform, this company had in place a highly optimized, elastic, cloud-based media supply chain. All incoming content is automatically analyzed, assessed, and normalized to a house format in preparation for platform distribution. The company also has a preferred toolset for hands-on content review: Adobe® Premiere Pro CC®.

As part of its ongoing engagement, SDVI was presented with a very specific challenge: use ML to assess and validate incoming content in a way that also helps operators manage tasks associated with international re-versioning and distribution. 

To meet this challenge, SDVI would use ML data in the context of work order management to accelerate manual re-versioning processes. By creating an interface between Rally and Premiere Pro CC, SDVI could surface ML data to operators and use it to guide them through the manual steps they need to perform on a given media file.

Automating Quality Control (QC)

One of the first steps in this automated supply chain is the technical assessment (known as quality control or QC) of a media file. The file is run through the customer’s choice of automated QC tools, each of which produces a comprehensive, machine-readable report detailing the location, type and severity of any technical quality issues found. SDVI Rally stores information about those issues as time-based metadata for each piece of media. When incoming media passes all of the requisite QC checks, the time-based metadata simply turns into a summary report and the file automatically continues on to the next step in the supply chain. However, when incoming content fails QC, the time-based metadata is used to flag the problematic portion of the content, alerting a QC team to its existence and allowing that team to quickly locate the offending portion, assess the nature of the problem, and perform the necessary fix.
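To make the idea of time-based metadata more concrete, here is a minimal sketch (in Python) of how a QC event and the pass/fail decision might look. The QcEvent class, field names, and severity values are illustrative assumptions for this post, not the actual Rally schema or any QC vendor's report format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class QcEvent:
    """One time-based QC finding on a media file (illustrative fields only)."""
    event_type: str  # e.g. "black_frame", "audio_silence", "loudness_out_of_tolerance"
    severity: str    # e.g. "info", "warning", "fail"
    start_tc: str    # SMPTE timecode where the issue begins
    end_tc: str      # SMPTE timecode where the issue ends
    detail: str      # human-readable description from the QC tool

def passes_qc(events: List[QcEvent]) -> bool:
    """Content continues automatically only if no event is a hard failure."""
    return not any(e.severity == "fail" for e in events)

events = [
    QcEvent("black_frame", "fail", "00:12:03:00", "00:12:05:12", "Unexpected black segment"),
    QcEvent("loudness_out_of_tolerance", "warning", "00:31:10:00", "00:31:18:00", "Programme loudness above target"),
]

if passes_qc(events):
    print("Summary report only; continue down the supply chain")
else:
    print("Route to a QC operator with the flagged timecodes")
```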

Adding Machine Learning

To add ML functionality to this media supply chain, Amazon Transcribe and Amazon Rekognition examine incoming content, and SDVI Rally processes the derived data in the same way it processes QC data. ML engines perform tasks such as object and location recognition, face detection, audio transcription, nudity and violence detection, and more. This new information becomes time-based metadata that flags relevant portions of content and assists the operator in the review process.
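As a rough illustration of what that ML step could look like, the sketch below starts an Amazon Rekognition content moderation job on a file in Amazon S3 and reshapes the results into time-based events. The bucket, object key, and event structure are placeholders; a production supply chain such as Rally would orchestrate this differently (for example, using notifications rather than polling).

```python
import time
import boto3

rekognition = boto3.client("rekognition")

# Placeholder S3 location for the incoming media file (illustrative only).
video = {"S3Object": {"Bucket": "example-media-bucket", "Name": "incoming/program_101.mp4"}}

# Start an asynchronous content moderation job (nudity, violence, and similar categories).
job_id = rekognition.start_content_moderation(Video=video)["JobId"]

# Poll until the job finishes; a production pipeline would use an SNS notification instead.
while True:
    result = rekognition.get_content_moderation(JobId=job_id)
    if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(15)

# Reshape Rekognition's millisecond timestamps into time-based metadata events
# (first page of results only; the event shape is an assumption, not Rally's schema).
ml_events = [
    {
        "event_type": "moderation:" + label["ModerationLabel"]["Name"],
        "timestamp_ms": label["Timestamp"],
        "confidence": label["ModerationLabel"]["Confidence"],
    }
    for label in result.get("ModerationLabels", [])
]
print(f"{len(ml_events)} moderation events ready to attach to the asset")
```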

Maximizing the Benefits of Data

To derive maximum value from your media supply chain, data is key. The more data points captured, the more granular you can be in constructing automated decision-making paths, and the more automated the media supply chain becomes. For example, when the transcription engine finds prohibited language, that event can trigger a work order to bleep out the offending language or to request redelivery of the file if production is still in process.

Automated collection, transcription, and normalization of QC information into usable time-based metadata allowed this media company to speed up content receipt and analysis. If QC metadata could speed up the top of the supply chain, could data from the ML engines be similarly used to improve downstream processes like review, editing, packaging, and delivery? Injecting time-based metadata into these processes could help the teams responsible for those tasks prioritize, locate, identify, and complete their work more quickly. Finished content would re-enter the automation process sooner, further optimizing the supply chain. This media company reports that its QC and compliance process, driven by Rally and Rekognition, has reduced review-and-approval time from 2 hours per hour of content to under 10 minutes per hour of content.
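A simplified sketch of one such decision path appears below: transcript words are checked against a prohibited list, and any hits open a work order rather than letting the file continue automatically. The find_prohibited_language and create_work_order names, the word list, and the event fields are hypothetical stand-ins, not SDVI Rally's or Amazon Transcribe's actual interfaces (though the item layout mirrors a typical Transcribe result).

```python
# Hypothetical decision path: the helper names, word list, and work order call
# are stand-ins for whatever the supply chain platform actually exposes.
PROHIBITED_TERMS = {"badword1", "badword2"}  # placeholder prohibited-language list

def find_prohibited_language(transcript_items):
    """Return time-based events for transcript words on the prohibited list."""
    events = []
    for item in transcript_items:  # items shaped like an Amazon Transcribe result
        if item.get("type") != "pronunciation":
            continue  # skip punctuation items, which carry no timestamps
        word = item["alternatives"][0]["content"].lower()
        if word in PROHIBITED_TERMS:
            events.append({
                "event_type": "prohibited_language",
                "start_s": float(item["start_time"]),
                "end_s": float(item["end_time"]),
                "word": word,
            })
    return events

def route_content(transcript_items, create_work_order):
    """Clean content continues automatically; any hit opens a manual work order."""
    hits = find_prohibited_language(transcript_items)
    if not hits:
        return "continue_supply_chain"
    # create_work_order is a hypothetical callable, e.g. a task to bleep the audio
    # or to request redelivery of the file if production is still in process.
    create_work_order(task_type="language_review", events=hits)
    return "manual_review"
```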

So how could ML data be made accessible and usable for editing teams? The team's use of Premiere Pro CC during its incoming media assessment process made it the obvious starting point. Adobe provides a convenient way to build custom panels that can be integrated into Premiere Pro CC and other Adobe applications. This provided the ideal framework for the creation of Rally Access™ Metadata-assisted Content Verification, the SDVI Rally panel for Adobe Premiere Pro CC.

Managing Work Orders

Rally Access builds on the work order capabilities of the Rally platform, like those illustrated by this company's QC process. While modern supply chains consist mostly of automated tasks, Rally can insert a manual step (such as a QC review) where appropriate; after the manual task is completed, the automated process picks up where it left off. The Rally work order system organizes manual tasks into a list arranged by need and urgency, such as due date or programming priority. It lets teams establish task types, assign users or groups, and define the data that should be included in a manual process. When an operator opens a task, Rally organizes all the metadata an asset has accumulated along the supply chain and makes it directly accessible to the operator inside a Premiere Pro session.
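Conceptually, building an operator's task list amounts to filtering work orders by user or group and ordering them by urgency, as in this small sketch. The record fields and the task_list_for helper are assumptions made for illustration, not Rally's work order API.

```python
from datetime import date

# Illustrative work order records; the field names are assumptions, not Rally's schema.
work_orders = [
    {"id": "wo-101", "task_type": "qc_review", "group": "qc_team", "priority": 2, "due": date(2019, 7, 1)},
    {"id": "wo-102", "task_type": "compliance_review", "group": "compliance", "priority": 1, "due": date(2019, 6, 28)},
    {"id": "wo-103", "task_type": "qc_review", "group": "qc_team", "priority": 1, "due": date(2019, 6, 30)},
]

def task_list_for(group, orders):
    """Filter to one group's tasks and order them by priority, then due date."""
    mine = [o for o in orders if o["group"] == group]
    return sorted(mine, key=lambda o: (o["priority"], o["due"]))

for order in task_list_for("qc_team", work_orders):
    print(order["id"], order["task_type"], order["due"])
```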

Rally Access

With the Rally Access panel installed in Premiere Pro, the first thing operators see is a task list. This critical feature makes more efficient use of staff time by allowing users to work in one tool – Premiere Pro CC – instead of switching between multiple systems and applications.

The task list filters information for a particular user or group, providing only the details relevant to the job required of that operator.

Upon starting the task, the operator is presented with media in the player window (in a proxy format if desired) and all the previously collected time-based metadata, categorized and ready to review in the Rally Access panel.

Operators can pull up sets of events and quickly navigate between them in the player window, rapidly determining if they are genuine issues. This functionality allows the operator to skip through the content timeline, going directly to the areas of concern raised by the QC tools and ML analysis. Operators can filter events by type, locating specific areas of concern such as:

  • black frame errors
  • instances of profanity
  • areas where audio is out of tolerance
  • areas where actor X is in shot

Operators may dismiss error flags for events that don't merit corrective action, or they may create new events for issues identified during their assessment. They choose which events to promote to the final report section of the panel, giving them a preview of their analysis. Once an operator completes the task, the changes are incorporated into the metadata record in Rally and can be accessed throughout the life of the file. The team now does its work in one data-assisted process instead of multiple passes by different people on the same content. Facial recognition allows easy identification of talent, and compliance events such as nudity, profanity, or violence are automatically flagged with time-based metadata.
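The logic behind that review flow can be pictured with a short sketch: filter events by type, dismiss the ones that don't need action, and promote what remains into the final report. The event structure and helper functions are illustrative assumptions, not the actual Rally Access panel code (which runs as an Adobe panel rather than a standalone script).

```python
# Events collected earlier in the supply chain; the structure is assumed for illustration.
events = [
    {"type": "black_frame", "start_s": 723.0, "dismissed": False},
    {"type": "profanity", "start_s": 1410.5, "dismissed": False},
    {"type": "face:actor_x", "start_s": 95.2, "dismissed": False},
]

def events_of_type(event_type):
    """Filter the panel's event list to one category, e.g. only profanity hits."""
    return [e for e in events if e["type"] == event_type and not e["dismissed"]]

def dismiss(event):
    """Operator decides a flag does not merit corrective action."""
    event["dismissed"] = True

def final_report():
    """Only events the operator keeps are written back to the asset's metadata record."""
    return [e for e in events if not e["dismissed"]]

dismiss(events[0])                  # e.g. the black frame was an intentional fade
print(events_of_type("profanity"))  # jump targets for the player window
print(final_report())               # events promoted back into Rally
```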

Changing the Assessment Footprint

The shift to automated analysis for classic monitoring functions, along with the shift to cloud-based media workflows, has had a dramatic impact on the physical environments in which assessments are conducted. Previously, performing assessments required a fully equipped technical evaluation suite, complete with tape decks, scopes, audio boards, and other monitoring instruments. These suites were expensive and time-consuming to build, and thus limited in availability within a facility. By shifting the bulk of monitoring responsibility to automated tools, and by presenting media, metadata, and work orders to operators in the Rally Access panel, the company was able to adopt open-plan QC areas. Each workstation is built on a standard PC platform and outfitted with a monitor, hardware control panels, and programmable touch interfaces.

The Payoff

Technical evaluation operators have evolved the way they work. Previously, operators would review an entire program in real time, worrying they might miss a critical issue if they looked away for a split second. Now, operators rely on automated processes to catch what they may not be able to see. ML engines have allowed the team to move content tagging further up the supply chain and significantly expand the types of content being tagged. Where in the past an ML engine may have identified only "car," tags now include the make, model, and even the specific model year of a car.

The new division of labor between automated assessment tools, ML algorithms, and operators relies on each piece of the process performing the tasks to which it is best suited. Automation detects errors that an operator may miss while ML efficiently identifies and catalogs content at scale. Operators spend their time where they add the most value: on assessment judgments that automation has no hope of making. The result of this new process is optimized QC operations that produce more valuable metadata and make teams more productive. Before, one hour of content required two hours of human review; today, one hour of content can be handled in as little as 10 minutes of human time.

Copyright © 2019 SDVI Corporation. Rally and Access are trademarks of SDVI Corporation. Adobe® and Premiere Pro CC® are registered trademarks of Adobe Inc. All other trademarks are the property of their respective owners.


Mark Stephens

Mark is a Partner Solutions Architect and Global Segment Lead for Media and Entertainment at AWS. His role is building a diverse ecosystem of partners on AWS for Media and Entertainment.