The Synchronized platform transforms linear and passive videos into smart videos. Its artificial intelligence understands the content of a video and enriches it with metadata.
This metadata is checked against editorial guidelines and converted into components that TF1's editorial and marketing teams can use immediately.
Once enriched, these videos are as interactive as a hypertext document, with search, indexing, browsing and web capabilities.
Synchronized’s customers use the platform to automate entire processes for the transformation of linear TV content into digital programmes, in order to better reach their consumers by customising and enhancing their experience.
The startup has rolled out its solution for MYTF1, the on-demand video platform of TF1, France’s leading free-to-air TV channel.
When AI supports editorial content
It was essential for TF1 to automate its video thumbnail creation process as part of the redesign of MYTF1’s architecture and user experience. With higher value-added tasks to handle, the editorial team could no longer manage the expanding catalogue on its own.
According to Thomas Bidet, Product Owner at MYTF1:
“In addition to the challenge of automation, we had to ensure the creation of high-quality thumbnails for each audiovisual programme: their role is as important as product images on e-commerce sites, since the click-through rate has an impact on the business. We also had to ensure overall cohesion and full control of the editorial line, as well as the same seamless user experience that the digital majors manage to achieve even with very extensive catalogues.”
“The editorial team at TF1 used to manually process hundreds of hours of content every month. The Synchronized platform’s Smart-Thumbnails service has automated all the tasks involved in the time-consuming process of creating and selecting video thumbnails”, explains Guillaume Doret, CEO of Synchronized.
The challenge involves:
- Automatically generating thumbnails for each programme on MYTF1
- Guaranteeing the quality of the thumbnails based on specific editorial criteria
- Being able to focus on the core business
All the video images are automatically analysed and processed by the Synchronized platform, which uses Amazon Rekognition to generate a very large volume of data: detected faces, emotions, objects and text.
This mass of data is then filtered against the editorial guidelines pre-defined in the platform. All the images matching the guidelines are retained as candidates, and the platform’s algorithm chooses the best one.
TF1 editors can access all the suggestions in Synchronized Studio in just three clicks, and choose other images if they so wish.
“The image is retrieved and sent to the TF1 content management system (CMS) which will automatically publish the thumbnail on MYTF1”, says Guillaume Doret.
Take TV movies as an example: “According to the editorial guideline, the main actor of the episode should be framed on the right, with their eyes open and looking to the left.”
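A guideline like this maps naturally onto the face attributes that Amazon Rekognition's DetectFaces API returns. The sketch below is a hypothetical illustration, not Synchronized's actual code: the field names follow the real DetectFaces response shape, but the confidence threshold and the assumption that a negative head yaw means "looking to the viewer's left" are ours.

```python
# Hypothetical sketch: applying the TV-movie thumbnail guideline to face data
# shaped like an Amazon Rekognition DetectFaces response. Thresholds and the
# yaw sign convention are assumptions, not Synchronized's implementation.

def meets_tv_movie_guideline(face: dict) -> bool:
    """True if a detected face satisfies the editorial rule:
    framed on the right, eyes open, looking to the left."""
    box = face["BoundingBox"]            # coordinates are ratios of frame size
    centre_x = box["Left"] + box["Width"] / 2
    framed_right = centre_x > 0.5        # face centre sits in the right half

    eyes = face["EyesOpen"]
    eyes_open = eyes["Value"] and eyes["Confidence"] > 90.0

    # Assumption: negative yaw = head turned towards the viewer's left.
    looking_left = face["Pose"]["Yaw"] < -10.0

    return framed_right and eyes_open and looking_left


# Sample frame data mimicking a DetectFaces response (not a real API call).
candidate = {
    "BoundingBox": {"Left": 0.55, "Top": 0.20, "Width": 0.25, "Height": 0.40},
    "EyesOpen": {"Value": True, "Confidence": 98.7},
    "Pose": {"Yaw": -22.0, "Pitch": 3.0, "Roll": 1.0},
}
rejected = {
    "BoundingBox": {"Left": 0.05, "Top": 0.20, "Width": 0.25, "Height": 0.40},
    "EyesOpen": {"Value": True, "Confidence": 98.7},
    "Pose": {"Yaw": -22.0, "Pitch": 3.0, "Roll": 1.0},
}
print(meets_tv_movie_guideline(candidate))  # True  (face on the right)
print(meets_tv_movie_guideline(rejected))   # False (face on the left)
```

In a real pipeline, every candidate frame would be scored this way and the best-scoring match sent on as the suggested thumbnail.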
“We initially built our architecture on AWS and outsourced other services, but soon realised that it was more efficient to use all their technologies”, explains Guillaume Doret, founder and CEO of Synchronized. “Working in a complete ecosystem saves time and money, from each development stage to maintenance.” He goes on: “We started out working on our own. AWS provides lots of resources and documentation; the whole process is clear and efficient. We can test prototypes and roll them out quickly.”
Guillaume Doret explains: “AWS technologies help us to focus on creating value by transforming AI-generated video data into components that our creative, editorial and marketing teams can use directly without needing any technical knowledge. AWS provides developers with very powerful, often almost state-of-the-art technological building blocks that allow us to put all our efforts into developing our video process and strengthening our unique position on the market.”
Synchronized believes that AWS is the ideal platform for startups seeking to prototype very quickly and build a sustainable architecture capable of providing industrial solutions to major challenges such as those of TF1. “We receive highly efficient support which gives our teams an enormous sense of security, enabling us to quickly confirm the choices we make or make appropriate improvements”, adds Guillaume Doret. “It gives us confidence in the face of new challenges, so that the transition from the proof of concept to the industrial and scalable product is both natural and seamless.”
Security is a critical matter for both the startup and TF1. “By using AWS for both our platform and our architecture, we were able to easily and quickly address the issue. If we had done otherwise, it could have slowed down our development.”
The platform provides various services for automating the entire chain:
- Content segmentation, where a video is trimmed according to the type of programme. This is a machine-learning function, and Synchronized has filed a patent for the technology.
- Smart-Thumbnails, the automatic creation of thumbnails, which is entirely based on Amazon Rekognition. “We don’t even need to train the AI. We provide a data set. The smart part is to prepare it for the application of editorial guidelines.” This AI part is entirely delegated to AWS.
- Smart-Tagging, which uses tags and labels to categorise and identify content, such as public figures in a video.
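For the Smart-Tagging step, identifying public figures is the kind of task Amazon Rekognition's RecognizeCelebrities API handles. The sketch below is a hypothetical illustration of turning a response of that shape into content tags; the field names match the real API, but the confidence threshold and tag format are assumptions.

```python
# Hypothetical sketch of a Smart-Tagging step: converting results shaped like
# an Amazon Rekognition RecognizeCelebrities response into content tags.
# The 90% threshold is an assumption, not Synchronized's actual setting.

def extract_person_tags(response: dict, min_confidence: float = 90.0) -> list[str]:
    """Keep only confidently recognised public figures, without duplicates."""
    tags: list[str] = []
    for celeb in response.get("CelebrityFaces", []):
        if celeb["MatchConfidence"] >= min_confidence and celeb["Name"] not in tags:
            tags.append(celeb["Name"])
    return tags


# Sample data mimicking a RecognizeCelebrities response (not a real API call).
celeb_sample = {
    "CelebrityFaces": [
        {"Name": "Actor A", "MatchConfidence": 99.2},
        {"Name": "Actor B", "MatchConfidence": 55.0},  # too uncertain: dropped
        {"Name": "Actor A", "MatchConfidence": 97.8},  # duplicate: dropped
    ],
    "UnrecognizedFaces": [],
}
print(extract_person_tags(celeb_sample))  # ['Actor A']
```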
In addition to automating workflows to save time and money and improve quality, Synchronized also generates new data that would have been impossible to create manually, opening up new uses.
More than 500 hours of TF1 programmes are processed every month. The data alone represent a total of 10TB: “Thanks to AWS, we can significantly increase the amount of data generated for a video, which means that we can cover more and more use cases. This is exactly what we are working on with TF1”, adds Guillaume Doret.
Building on the relationship with AWS, Synchronized has access to resources that can help it to optimise its architecture and prepare future developments. The startup tested and very quickly adopted new services offered by Amazon Rekognition, such as the Video Segment Detection module, which, amongst other things, automatically identifies segments such as shots, chapters and credits in a video.
“The beauty of it all is that a startup can embark on highly ambitious projects while building on and collaborating with companies like AWS, which provide access to the results of their work and research. That was impossible 15 years ago, when everyone had to keep reinventing the wheel. Today, these synergies are unleashing innovation”, concludes Guillaume Doret.
- 10TB of data generated for thumbnails
- 500 hours of videos analysed every month
- Hundreds of hours of editors’ time saved every month