What does this AWS Solutions Implementation do?
Live Streaming with Automated Multi-Language Subtitling automatically generates multi-language subtitles for live video streaming in real time. This media solution is easy to deploy, supports adaptive bitrate streaming, and is used only during the live event. When you finish streaming, you can delete the solution’s stack to help ensure that you pay only for the infrastructure you use.
The solution uses Live Streaming on AWS to encode and package your content for adaptive bitrate streaming across multiple screens, and AWS Lambda, Amazon Transcribe, and Amazon Translate to convert the audio to text and generate captions in multiple languages.
This solution is designed to be a framework for real-time subtitling, allowing you to focus on extending the solution's functionality rather than managing the underlying infrastructure operations. You can use this solution out-of-the-box, customize the solution to meet your specific use case, or work with AWS Partner Network (APN) partners to implement an end-to-end subtitling workflow.
AWS Solutions Implementation overview
AWS provides a real-time subtitling solution for live streaming video content that combines Amazon Transcribe, Amazon Translate, and AWS Lambda to build a serverless architecture that automatically generates multi-language subtitles for your live streaming videos. The diagram below presents the architecture you can automatically deploy using the solution's implementation guide and accompanying AWS CloudFormation template.
Live Streaming with Automated Multi-Language Subtitling architecture
The solution’s AWS CloudFormation template deploys Live Streaming on AWS, which includes AWS Elemental MediaLive, MediaPackage, and Amazon CloudFront; Amazon Simple Storage Service (Amazon S3) buckets; Amazon Transcribe; Amazon Translate; and two AWS Lambda functions: one that converts audio to text and one that generates the WebVTT subtitles that are sent to MediaPackage.
The subtitle generation process starts when MediaLive output is sent to the solution’s Amazon S3 bucket. The CaptionCreation Lambda function takes the manifest files from the bucket, extracts pulse-code modulation (PCM) audio from the TS video segments, and saves the PCM audio to Amazon S3. Then, the function invokes the TranscribeStreaming function and passes it the PCM audio.
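Streaming transcription consumes audio as a sequence of small frames rather than one large payload, so the extracted PCM is typically split into fixed-size chunks before it is handed off. The sketch below is illustrative only; the chunk size and audio format (16 kHz, 16-bit mono is assumed here) are not taken from the solution itself:

```python
def chunk_pcm(pcm_bytes: bytes, chunk_size: int = 3200) -> list:
    """Split raw PCM audio into fixed-size chunks for streaming.

    A chunk_size of 3200 bytes is roughly 100 ms of 16 kHz, 16-bit
    mono PCM audio. These values are illustrative assumptions, not
    parameters documented by the solution.
    """
    return [pcm_bytes[i:i + chunk_size]
            for i in range(0, len(pcm_bytes), chunk_size)]
```

Each chunk can then be sent to the streaming transcription service in order, which is what lets captions appear while the segment is still being encoded.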
The TranscribeStreaming function uses Amazon Transcribe streaming transcription to convert the audio stream to text in real time. The function then sends the transcript back to the CaptionCreation function. If multiple languages are required, the CaptionCreation function calls Amazon Translate to translate the transcript.
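The multi-language step amounts to fanning a single transcript out across several target languages. A minimal sketch of that fan-out is shown below; the function name and structure are hypothetical, but the `translate_text` call shape matches the real Amazon Translate API (in a deployment, `translate_client` would be `boto3.client("translate")`):

```python
def translate_transcript(transcript, source_lang, target_langs, translate_client):
    """Translate one transcript into each requested target language.

    translate_client is any object exposing translate_text() with the
    same shape as boto3's Amazon Translate client. Injecting it keeps
    the function testable without AWS credentials.
    """
    results = {}
    for lang in target_langs:
        response = translate_client.translate_text(
            Text=transcript,
            SourceLanguageCode=source_lang,
            TargetLanguageCode=lang,
        )
        results[lang] = response["TranslatedText"]
    return results
```

Because each language is an independent call, this loop is also a natural place to parallelize if caption latency across many languages becomes a concern.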
The CaptionCreation function creates the WebVTT subtitle files and the manifests and sends those and the video files to MediaPackage.
MediaPackage ingests the files and packages them into formats that are delivered to four MediaPackage custom endpoints.
An Amazon CloudFront distribution is configured to use the MediaPackage custom endpoints as its origin. The CloudFront distribution delivers your live stream to viewers with low latency and high transfer speeds.