AWS for M&E Blog

Low code workflows with AWS Elemental MediaConvert

AWS Elemental MediaConvert provides broadcast-grade video and audio file transcoding that customers can automate with code to suit their media workflows. With the optimized integration between MediaConvert and AWS Step Functions, it is now possible to orchestrate transcoding using the low-code visual tool Workflow Studio. This makes building and customizing media workflows accessible even with limited coding experience. This blog post outlines how to take advantage of the Step Functions integration, available at no additional cost, to automate a multi-step workflow that detects and removes SMPTE color bars from video content using MediaConvert.

Media workflows

Media supply chains encompass a range of workflow requirements, reflecting the complexity and diversity of file formats and integrations for content production, broadcast, and streaming. For example, publishing a video clip may require separate processes for ingestion, analysis, editing, transcoding, quality checks, and streaming. Building video pipelines previously relied on custom function code in AWS Lambda. With the optimized integration, video operators and solution architects can create and manage the business logic for rules-based pipelines entirely within Step Functions.

With AWS Step Functions, you can build distributed applications, automate IT and business processes, and orchestrate using Amazon Web Services (AWS) with minimal code. Media workflows typically have external dependencies or trigger post processing with a range of AWS Services. For example, uploading a media file to Amazon Simple Storage Service (Amazon S3) could initiate a MediaConvert job, which on completion updates Amazon DynamoDB with metadata and sends the data to generative artificial intelligence applications for analysis. With Step Functions, the characteristics of the content can flow through user-specified rules to determine how to handle subsequent processing steps.

Getting started

It is now possible to drag and drop the action for MediaConvert CreateJob to build a Step Functions state machine, as depicted in the following motion graphic. For an introduction to Workflow Studio, visit this tutorial. Note that you can toggle between Design mode, with a graphical interface, and Code mode, where you edit your workflow definitions using Amazon States Language (ASL).

In the Configuration tab, you can enter the MediaConvert API parameters required for CreateJob, which are outlined in the example use case below.

The optimized integration for Step Functions provides the Run a Job (.sync) integration pattern. With this pattern, your state machine execution will pause until the transcoding job is complete. This option eliminates the need to create callbacks or wait timers.
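In ASL, this pattern is expressed through the task state's Resource ARN. A minimal task state using the .sync pattern might look like the following sketch (the state name, role ARN, and empty Settings object are placeholders to fill in for your account):

```json
"MediaConvert CreateJob": {
  "Type": "Task",
  "Resource": "arn:aws:states:::mediaconvert:createJob.sync",
  "Parameters": {
    "Role": "arn:aws:iam::AWS_ACCOUNT_ID:role/service-role/MediaConvert_Default_Role",
    "Settings": {}
  },
  "End": true
}
```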

MediaConvert can be orchestrated in Workflow Studio to work with any of 200+ AWS services using AWS SDK integrations, or with third-party APIs called directly from a state machine, to customize based on your unique business requirements. Each state in the state machine passes JSON, which can be manipulated to extract the parameters required for each step. See Using JSONPath effectively in AWS Step Functions to learn more.

Example use case

Let’s take a simple use case for pre-processing video prior to transcoding to illustrate how to create a MediaConvert processing workflow using Step Functions Workflow Studio. In this example, a content provider delivers video files to Amazon S3, and the beginning of each video file has SMPTE color bars of unknown length that need to be automatically removed. The example clip timeline in the following graphic depicts the first few seconds as color bars, followed by the remainder of a video clip to transcode with MediaConvert.

In order to determine the appropriate clipping points in the source content, you can use  Amazon Rekognition for video analysis with machine learning via the Segments API. Video segment detection identifies technical cues in content such as black frames, color bars, opening credits and more. You can use asynchronous StartSegmentDetection and GetSegmentDetection API operations to start a segmentation job and fetch the results. Segment detection accepts videos stored in an Amazon S3 bucket and returns a JSON output.
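To illustrate the shape of the result, here is a minimal Python sketch that pulls the end timecode of a color bar segment out of a trimmed, hypothetical GetSegmentDetection-style response. The field names follow the Segments API; the values shown are made up for illustration.

```python
# Trimmed, hypothetical GetSegmentDetection-style response for illustration.
sample_response = {
    "JobStatus": "SUCCEEDED",
    "Segments": [
        {
            "Type": "TECHNICAL_CUE",
            "TechnicalCueSegment": {"Type": "ColorBars", "Confidence": 99.5},
            "StartTimecodeSMPTE": "00:00:00:00",
            "EndTimecodeSMPTE": "00:00:09:29",
        }
    ],
}


def color_bar_end_timecode(response):
    """Return the SMPTE timecode where the first color bar segment ends, or None."""
    for segment in response.get("Segments", []):
        cue = segment.get("TechnicalCueSegment", {})
        if segment.get("Type") == "TECHNICAL_CUE" and cue.get("Type") == "ColorBars":
            return segment["EndTimecodeSMPTE"]
    return None


print(color_bar_end_timecode(sample_response))  # → 00:00:09:29
```

This is the same extraction that the state machine performs later with a JSONPath expression; doing it once in plain code makes the response structure easier to see.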

This workflow, built using the drag-and-drop approach in Workflow Studio, is depicted in the following graphic. Reference ASL code is available at Serverless Land.

Step Functions state machine showing steps as 1) Start, 2) StartSegmentDetection with Rekognition, 3) Wait, 4) Rekognition GetSegmentDetection, 5) Choice to loop until step 4 is complete, and 6) MediaConvert CreateJob.

Let’s walk through the detailed steps to create this using Workflow Studio.

1. Create a State Machine in Workflow Studio

  1. Open the Step Functions console and choose Create state machine.
  2. In the Choose a template dialog box, select Blank.
  3. Choose Select. This opens Workflow Studio in Design mode.
  4. From the States browser on the left, choose the Actions tab.

This new state machine will also need an IAM role with appropriate permissions, and Workflow Studio can automatically generate roles for some use cases. For more details on custom roles, see Creating an IAM role for your state machine. Note that this example will need appropriate permissions for MediaConvert, the S3 buckets used, Rekognition, and CloudWatch events.

Additionally, MediaConvert will need its own role for reading and writing to your S3 bucket. To create this role, follow the Creating MediaConvert IAM Role instructions.
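If you create the MediaConvert role manually, its trust policy must allow the MediaConvert service to assume it; a sketch of such a trust policy follows (attach the S3 read/write permissions your buckets need as a separate permissions policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "mediaconvert.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```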

2. Start Segment Detection

  1. Search in Actions for Rekognition StartSegmentDetection.
  2. Drag and drop this action to connect with Start. This new workflow state is added to your workflow, and its code is auto-generated.
  3. Click the StartSegmentDetection action on the Workflow Canvas, and the Configuration pane appears on the right side.
  4. Copy the example API Parameters below, which can be configured as required per the API reference:
  "Filters": {
    "ShotFilter": {
      "MinSegmentConfidence": 95
  "SegmentTypes": [
  "Video": {
    "S3Object": {
      "Bucket.$": "$.Input.Bucket",
      "Name.$": "$.Input.Key"
  5. Go to the Output tab on the right side and check the box "Add original input to output using ResultPath". In the drop-down, choose the option Combine original input with result and enter $.SegmentJob in the input field.

3. Wait

The previous step will take some time to complete, so add a Wait state for 60 seconds before calling the GetSegmentDetection state. This Wait state will be part of a loop to handle longer videos. See the following instructions for more details on how to implement the loop.

  1. Search in Flow for Wait and drag it to follow the StartSegmentDetection step.
  2. Enter 60 seconds as the wait time.
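The Wait-plus-Choice loop built over the next two sections behaves like the following Python sketch, shown only to illustrate the polling logic (get_status stands in for a GetSegmentDetection call):

```python
import time


def wait_for_job(get_status, delay_seconds=60):
    """Poll a status function until the job leaves IN_PROGRESS, then return the status."""
    while True:
        status = get_status()
        if status != "IN_PROGRESS":
            return status
        time.sleep(delay_seconds)


# Stubbed status sequence standing in for successive GetSegmentDetection responses.
statuses = iter(["IN_PROGRESS", "IN_PROGRESS", "SUCCEEDED"])
print(wait_for_job(lambda: next(statuses), delay_seconds=0))  # → SUCCEEDED
```

In the state machine, the Wait state plays the role of time.sleep and the Choice state plays the role of the status check, so no custom code is actually required.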

4. Segment Detection

Next, you have to get the results of the video analysis from the prior step, using the JobId created by StartSegmentDetection.

  1. Search in Actions for Rekognition GetSegmentDetection.
  2. Drag and drop this action to follow the Wait step.
  3. Click the GetSegmentDetection action on the Workflow Canvas and copy the example API Parameters below, which can be configured as required per the API reference:
"JobId.$": "$.SegmentJob.JobId"
  4. Go to the Output tab on the right side and check the box "Add original input to output using ResultPath". In the drop-down, choose the option Combine original input with result and enter $.Segments in the input field.

5. Implement loop

Next, implement a simple loop to make sure the segment detection job has finished before moving to the next step. The time taken for the segment detection job will vary depending on the length of the video, so you want to make sure your workflow can handle different video lengths.

  1. Search in Flow for Choice.
  2. Drag and drop this action to follow the GetSegmentDetection step.
  3. Add two rules to Choice.
    • The first rule should look for the "SUCCEEDED" status of the previous GetSegmentDetection, and set MediaConvert CreateJob as the next step. You may need to drag and drop the MediaConvert CreateJob task before creating the rule.
    • The second, default rule should set the Wait step as the next step. This ensures the workflow iterates until the segment detection job finishes.

The Choice state ASL should look like this:

    "Choice": {
      "Type": "Choice",
      "Choices": [
          "Variable": "$.Segments.JobStatus",
          "StringEquals": "SUCCEEDED",
          "Next": "MediaConvert CreateJob"
      "Default": "Wait"

Note that you can implement this logic with more robust techniques, but for this case a simple loop is enough. In a production environment, you should also handle job failure.
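For example, one way to handle failure is a second Choice rule that routes a "FAILED" job status to a Fail state. The following is an illustrative sketch, and the SegmentDetectionFailed state name is hypothetical:

```json
"Choice": {
  "Type": "Choice",
  "Choices": [
    {
      "Variable": "$.Segments.JobStatus",
      "StringEquals": "SUCCEEDED",
      "Next": "MediaConvert CreateJob"
    },
    {
      "Variable": "$.Segments.JobStatus",
      "StringEquals": "FAILED",
      "Next": "SegmentDetectionFailed"
    }
  ],
  "Default": "Wait"
},
"SegmentDetectionFailed": {
  "Type": "Fail",
  "Error": "SegmentDetectionFailed",
  "Cause": "Rekognition segment detection did not succeed"
}
```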

6. MediaConvert CreateJob

The JSON output of GetSegmentDetection will include an array of Segments with type "TECHNICAL_CUE" to identify color bars. We can now use this with the MediaConvert input clipping feature to transcode only the content after the color bars.

For this example, we will take the first segment only and use the parameter EndTimecodeSMPTE from the previous step as the start timecode for InputClippings. Using a JSONPath expression, this can be filtered as follows:

        "InputClippings": [

Next, you can drag and drop the action for MediaConvert:

      1. Search in Actions for MediaConvert CreateJob.
      2. Drag and drop this action to follow the Choice step.
      3. Copy the example API Parameters below, which can be configured as required per the MediaConvert API reference.

The job specification has many configuration options. The easiest approach is to use the MediaConvert console to set up and run your initial job, then export the JSON job object per Creating Your AWS Elemental MediaConvert Job Specification. In this basic example, you will need to update REGION and AWS_ACCOUNT_ID, and ensure an appropriate service role is specified.
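Since a console-exported job specification hard-codes the input file, a small script can convert the static FileInput into a dynamic Step Functions parameter by renaming the key with the .$ suffix. This is an illustrative sketch, not an official tool:

```python
def make_input_dynamic(job):
    """Replace each static FileInput with a States.Format expression keyed by '.$'."""
    for media_input in job["Settings"]["Inputs"]:
        media_input.pop("FileInput", None)
        media_input["FileInput.$"] = (
            "States.Format('s3://{}/{}', $.Input.Bucket, $.Input.Key)"
        )
    return job


# Minimal exported-job stand-in, containing only the fields this sketch touches.
job = {"Settings": {"Inputs": [{"FileInput": "s3://my-bucket/clip.mp4"}]}}
make_input_dynamic(job)
print(job["Settings"]["Inputs"][0]["FileInput.$"])
```

The same .$ renaming applies to any other field you want resolved at execution time, such as the output Destination.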

  "Queue": "arn:aws:mediaconvert:REGION:AWS_ACCOUNT_ID:queues/Default",
  "UserMetadata": {},
  "Role": "arn:aws:iam::AWS_ACCOUNT_ID:role/service-role/MediaConvert_Default_Role",
  "Settings": {
    "TimecodeConfig": {
      "Source": "ZEROBASED"
    "OutputGroups": [
        "Name": "Apple HLS",
        "Outputs": [
            "Preset": "System-Ott_Hls_Ts_Avc_Aac_16x9_1280x720p_30Hz_5.0Mbps",
            "NameModifier": "stream"
        "OutputGroupSettings": {
          "Type": "HLS_GROUP_SETTINGS",
          "HlsGroupSettings": {
            "SegmentLength": 10,
            "Destination.$": "States.Format('s3://{}/{}', $.Output.Bucket, $.Output.Key)",
            "MinSegmentLength": 0
    "FollowSource": 1,
    "Inputs": [
        "InputClippings": [
            "StartTimecode.$": "$.Segments.Segments[0].EndTimecodeSMPTE"
        "AudioSelectors": {
          "Audio Selector 1": {
            "DefaultSelection": "DEFAULT"
        "VideoSelector": {},
        "TimecodeSource": "ZEROBASED",
        "FileInput.$": "States.Format('s3://{}/{}', $.Input.Bucket, $.Input.Key)"
  "BillingTagsSource": "JOB",
  "AccelerationSettings": {
    "Mode": "DISABLED"
  "StatusUpdateInterval": "SECONDS_60",
  "Priority": 0

Execute Workflow

To execute the Step Functions workflow, you will need to pass the input and output locations as JSON parameters, as follows.

  "Input": {
    "Bucket": "INPUT_BUCKET_NAME",
  "Output": {
    "Bucket": "OUTPUT_BUCKET_NAME",
    "Key": "OUTPUT_PREFIX/"

Once you start the workflow execution, a graph view visually shows the state progressing on the Execution Details console page. Each step displays as green if it has completed successfully, or red if there was an issue executing that step. In a typical workflow, uploading a video file to Amazon S3 as a new object can trigger an event, per Starting a State Machine Execution in Response to Amazon S3 Events.
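Assuming EventBridge notifications are enabled on the input bucket, an event pattern along these lines can match new uploads and start the state machine (the bucket name is a placeholder):

```json
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["INPUT_BUCKET_NAME"]
    }
  }
}
```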

Graph of the example use case Step Functions state machine having completed successfully; each step is green.

Next, go to the output Amazon S3 bucket: the transcoded video will have the color bars removed and start on program content. Congratulations, you have now built a low-code workflow with Step Functions and MediaConvert!


Building video file transcoding workflows previously relied on custom code to orchestrate. With an optimized integration for AWS Elemental MediaConvert, video operators and solution architects can create and manage rules-based workflows within Step Functions. This low-code approach removes the need for custom application code, so you can build solutions faster with less operational effort to maintain.

To learn more, please visit the documentation at optimized integration for MediaConvert.

Damian McNamara


Damian McNamara is a Senior Specialist Solution Architect for AWS Edge Services, with two decades experience in Broadcast and Digital Media.

Aryam Gutierrez


Aryam Gutierrez is a Senior Partner Solutions Architect at AWS who specializes in Serverless technologies. He supports strategic partners to either build highly-scalable solutions or navigate through the various partner programs to differentiate their business, with the ultimate goal of growing business with AWS.