AWS DevOps Blog

Part 4: Develop, Deploy, and Manage for Scale with Elastic Beanstalk and CloudFormation

by Evan Brown | in Best practices, How-to

Today’s Topic: Scaling Storage and Transcoding with Amazon S3 and Elastic Transcoder

Welcome to the fourth part of this 5-part series, where we cover best practices and practical tips & tricks for developing, deploying, and managing a web application with an eye toward application performance and operational efficiency, using AWS CloudFormation and Elastic Beanstalk. This week we focus on the media side of the application: how to scale large volumes of video uploads, and how to convert those videos into thumbnails and formats suitable for streaming.

All application source and accompanying CloudFormation templates are available on GitHub at

Last week (blog post and Office Hours video) we explored approaches to managing application configuration – including storing config in S3 – as well as best practices for writing Java code that works well in any AWS region. If this is the first post you’ve read in the series, be sure to check out Part 1, Part 2, or Part 3 for more info on the app, including basic functionality and how to deploy it yourself.

We’ll be discussing this blog post – including your Q&A – during a live Office Hours Hangout at 9a Pacific on Thursday, May 1, 2014. Sign up at

Storing Videos in Amazon S3

aMediaManager allows customers to store their videos. S3 is the logical place to put video content, and video metadata (e.g., owner, tags, S3 URL, created date) will be stored in RDS, making it easy to search and query. How we efficiently and scalably get these videos from a customer’s computer to S3 is what we’ll focus on here.

Here’s what the upload UI looks like:

A Typical Upload

It’s easy to build a video upload form that uploads the video from the user’s browser back to our Java app running in Elastic Beanstalk, which then creates an AmazonS3Client and uploads the video to S3. Here’s what that HTML form might look like:

<form method="post" enctype="multipart/form-data" action="/video/upload">
  <input type="file" name="file" class="form-control" />
  <input type="submit" value="Upload" />
</form>

When the user clicks the Upload button, the browser POSTs the content back to the servlet at /video/upload. The request goes through your environment’s ELB, then to an EC2 Instance, and finally to S3. The app server also writes video metadata to RDS. Here’s an illustration:

Shortcomings of the Typical Upload

Although handling file uploads in the traditional fashion is easy and straightforward, there are a few downsides:

  1. Cost: At the time of publishing, ELB charges $0.008 per GB of data processed, so there will be a data transfer cost (in addition to storage cost) associated with every video uploaded by your customers.
  2. Performance: Also consider that every video upload is another TCP connection your EC2 Instances have to handle. In the case of many uploads and/or long-running uploads, this will require you to scale your EC2 capacity to keep up.
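
To put the cost item in concrete terms, here’s a quick back-of-envelope calculation. The $0.008/GB rate is the one quoted above; the upload volume and average video size are made-up example numbers:

```java
// Back-of-envelope ELB data processing cost for proxied uploads.
// The $0.008/GB rate is from the pricing quoted above; the volumes
// in main() are made-up examples.
public class UploadCostEstimate {
    static final double ELB_COST_PER_GB = 0.008;

    static double monthlyElbCost(long uploadsPerMonth, double avgSizeGb) {
        return uploadsPerMonth * avgSizeGb * ELB_COST_PER_GB;
    }

    public static void main(String[] args) {
        // e.g., 10,000 half-GB videos per month proxied through the ELB
        System.out.printf("ELB data processing: $%.2f%n",
                monthlyElbCost(10_000, 0.5));
    }
}
```

The dollar figure is small, but it grows linearly with upload volume, and it buys you nothing: the ELB and app servers are just relaying bytes that could go straight to S3.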

I wouldn’t point out these shortcomings without a solution! Let’s get to it…

Offload Video Uploads Directly to S3

S3 is the right place to store videos, and you can create an HTML form that will POST files and form fields directly to S3 (documented here), bypassing your environment’s ELB and EC2 instances entirely. After the video upload is complete, S3 will redirect the customer’s browser back to your application, giving you the opportunity to store metadata about the uploaded video in RDS.

Here’s a diagram illustrating the steps in the process. We’ll go into detail on each step below the diagram:

  1. POST to S3: When a user visits the /video/upload route in our application (defined in com.amediamanager.controller.VideoController), the controller uses com.amediamanager.util.VideoUploadFormSigner to generate and render a signed HTML form to the user’s browser. This form defines important things like the name of the video once it’s uploaded, its maximum size, etc, and uses a secret key to sign and protect the form:

    <form role="form" method="post" enctype="multipart/form-data" action="">
        <!-- The key (name) the video file will have once in S3 -->
        <input type="hidden" name="key" value="uploads/original/" />
        <!-- Access Key -->
        <input type="hidden" name="AWSAccessKeyId" value="ASIAI5WSQIKOSMLA27EA" />
        <!-- Where S3 will redirect the user after the video upload completes -->
        <input type="hidden" name="success_action_redirect" value="" />
        <!-- The base64-encoded policy that defines constraints (e.g., max size, key name) of the upload -->
        <input type="hidden" name="policy" value="eyAiZXhwaXJhdGlvbiI6ICIyMDE0LTA0LTI3VDIzO...=" />
        <!-- The policy signed with the secret key (prevents tampering) -->
        <input type="hidden" name="signature" value="UGo7L7BN37qwZhqzhV0qJ9iZUnc=" />
        <!-- Target bucket -->
        <input type="hidden" name="x-amz-meta-bucket" value="amediamanager-appresources-1h7ffhqohdvo-appbucket-brs3akmmnp0s" />
        <!-- Some metadata about the video to be uploaded -->
        <input type="hidden" name="x-amz-meta-owner" value="" />
        <input type="hidden" name="x-amz-meta-uuid" value="ee4a1404-a810-4f52-91b4-de48833c5d7b" />
        <!-- User input fields -->
        <input type="text" class="form-control" name="x-amz-meta-title" id="title" />
        <input type="text" name="x-amz-meta-tags" id="tags" />
        <input type="file" name="file" class="form-control" />
        <input type="submit" class="btn" value="Upload" />
    </form>

    When the user completes the ‘Upload Video’ form in their browser and clicks Upload, the form POSTs directly to S3. The file is stored as an object in S3, and any form field with the x-amz-meta- prefix is stored as metadata attached to the video object. In this example, that includes hidden fields like the owner and UUID, as well as fields the customer filled out, like the title and tags.

  2. Redirect After Upload Complete: After the video from the POST has been received and stored, S3 will look for a special hidden form input called success_action_redirect in the POST. If present, S3 will issue an HTTP 302 redirect to the user’s browser, instructing it to go to that URL next. When our application rendered the upload form in Step 1 above, it included a success_action_redirect that will redirect the user to /video/ingest upon completion of the upload to S3:

    <form role="form" method="post" enctype="multipart/form-data" action="">
        <!-- Where S3 will redirect the user after the video upload completes -->
        <input type="hidden" name="success_action_redirect" value="" />

    In the redirect, S3 will append the name of the bucket and object that were just uploaded, for example:
  3. Ingesting Video Metadata: Users are automatically redirected by S3 to the /video/ingest route after an upload. We’ll write code here to retrieve the bucket and key that they just uploaded a video to, then use the S3 API to get the metadata for that object and store it in RDS. Here’s the code for the route handler (com.amediamanager.controller.VideoController):

    @RequestMapping(value = "/video/ingest", method = RequestMethod.GET)
    public String videoIngest(ModelMap model,
            @RequestParam(value = "bucket") String bucket,
            @RequestParam(value = "key") String videoKey) throws ParseException {
        // Save the video
        Video video = videoService.save(bucket, videoKey);
        // Kick off preview encoding
        videoService.createVideoPreview(video);
        return "redirect:/";
    }

    In com.amediamanager.service.VideoServiceImpl you can see how the save method calls the getObjectMetadata S3 API against the object that was just uploaded, parses the result (which includes the tags, date, and description text the customer provided in the form), and saves the metadata in RDS:

    public Video save(String bucket, String videoKey) throws ParseException {
        // From bucket and key, get metadata from video that was just uploaded
        GetObjectMetadataRequest metadataReq =
            new GetObjectMetadataRequest(bucket, videoKey);
        ObjectMetadata metadata = s3Client.getObjectMetadata(metadataReq);
        Map<String, String> userMetadata = metadata.getUserMetadata();

        // The SDK strips the x-amz-meta- prefix from user metadata keys
        // (the Video setters and videoDao shown here are illustrative)
        Video video = new Video();
        video.setOwner(userMetadata.get("owner"));
        video.setTitle(userMetadata.get("title"));
        video.setTags(userMetadata.get("tags"));
        video.setUploadedDate(new Date());
        // Save to RDS
        videoDao.save(video);
        return video;
    }
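
Looking back at the signed form in Step 1: the hidden policy field is just the base64-encoded policy document, and signature is an HMAC-SHA1 of that encoded policy computed with the account’s secret key – this is the work com.amediamanager.util.VideoUploadFormSigner does for us. Here’s a minimal sketch of that signing step using only the JDK; the policy JSON and secret key below are placeholders, not the post’s real values:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of producing the hidden 'policy' and 'signature' fields for a
// browser-based POST to S3 (signature version 2). The policy document and
// secret key here are placeholders.
public class PolicySigner {

    // Base64-encode the policy document; this becomes the 'policy' field
    public static String encodePolicy(String policyJson) {
        return Base64.getEncoder()
                .encodeToString(policyJson.getBytes(StandardCharsets.UTF_8));
    }

    // HMAC-SHA1 the encoded policy with the secret key; this becomes 'signature'
    public static String sign(String base64Policy, String secretKey) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        return Base64.getEncoder()
                .encodeToString(hmac.doFinal(base64Policy.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String policy = "{\"expiration\": \"2014-04-27T23:59:59Z\", \"conditions\": []}";
        String encoded = encodePolicy(policy);
        System.out.println("policy:    " + encoded);
        System.out.println("signature: " + sign(encoded, "placeholder-secret-key"));
    }
}
```

Because S3 verifies the signature against the same secret key, a user can’t tamper with the policy (say, to raise the max upload size) without invalidating the form.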

When I visit the home page of my app after uploading a video, I see the video with a generic “Video conversion in progress” thumbnail:

Now let’s talk about how we transcode the video.

Transcoding Videos

After an uploaded video has been stored in S3 and its metadata recorded in RDS, the /video/ingest route in com.amediamanager.controller.VideoController schedules a transcoding job with Amazon Elastic Transcoder that will convert the video into a format suitable for streaming and generate thumbnail images. But before we can schedule a transcoding job, our application has to configure a Pipeline and Preset in Elastic Transcoder.

Setting up the Pipeline and Preset

We used CloudFormation to provision almost every dependency our application has (e.g., RDS, DynamoDB, etc), but as of the creation of this application CloudFormation did not support creating and managing Elastic Transcoder resources. No problem, though: we’ll build a simple admin/config page and use the AWS SDK for Java to configure the Pipeline and Preset. And do note that Elastic Transcoder is available in the AWS Management Console; you could create the Pipeline and Preset resources using that UI, but we – of course! – want to automate all of this.

Here’s what that config page looks like (at the /config route in our environment) with the Elastic Transcoder creation piece highlighted:

Creating the Pipeline

From the Elastic Transcoder documentation, pipelines are “queues that manage your transcoding jobs. When you create a job, you specify the pipeline to which you want to add the job. Elastic Transcoder starts processing the jobs in a pipeline in the order in which you added them.”

Clicking the Create button in the /config page will invoke com.amediamanager.config.ElasticTranscoderPipelineResource and use the Elastic Transcoder API to create our pipeline:

private String provisionPipeline() {
    String pipelineId = config.getProperty(ConfigProps.TRANSCODE_PIPELINE);

    if (pipelineId == null) {"Provisioning ETS Pipeline.");
        state = ProvisionState.PROVISIONING;

        // Notify our SNS topic when jobs complete or fail
        // (topicArn, pipelineName, bucket, and roleArn are fields elided here)
        Notifications notifications = new Notifications()
            .withCompleted(topicArn)
            .withError(topicArn)
            .withProgressing("")
            .withWarning("");

        CreatePipelineRequest pipelineRequest = new CreatePipelineRequest()
            .withName(pipelineName)
            .withInputBucket(bucket)
            .withOutputBucket(bucket)
            .withRole(roleArn)
            .withNotifications(notifications);

        try {
            CreatePipelineResult pipelineResult =
                transcoderClient.createPipeline(pipelineRequest);
            pipelineId = pipelineResult.getPipeline().getId();

            persistNewProperty(ConfigProps.TRANSCODE_PIPELINE, pipelineId);
        } catch (AmazonServiceException e) {
            state = ProvisionState.UNPROVISIONED;
        }
    }
    return pipelineId;
}

The Notifications object we created above associates a Pipeline with an Amazon SNS Topic. When the status of a job submitted to a Pipeline changes (e.g., a transcode completes), Elastic Transcoder will publish a message to this SNS topic. We’ll talk more about that workflow in a bit.

I can see the Pipeline created by this code in the Elastic Transcoder Management Console:

Creating the Preset

From the Elastic Transcoder documentation, a Preset is “a template that contains the settings that you want Elastic Transcoder to apply during the transcoding process, for example, the number of audio channels and the video resolution that you want in the transcoded file. When you create a job, you specify which preset you want to use.”

We create the preset programmatically in com.amediamanager.config.ElasticTranscoderPipelineResource:

private String provisionPreset() {
    String presetId = config.getProperty(ConfigProps.TRANSCODE_PRESET);

    if (presetId == null) {"Provisioning ETS Preset.");
        state = ProvisionState.PROVISIONING;
        Map<String, String> codecOptions = new HashMap<String, String>();
        codecOptions.put("Profile", "main");
        codecOptions.put("Level", "3.1");
        codecOptions.put("MaxReferenceFrames", "3");

        // (presetName is a field elided here; the output settings below are illustrative)
        VideoParameters video = new VideoParameters()
            .withCodec("H.264")
            .withCodecOptions(codecOptions)
            .withBitRate("2200")
            .withFrameRate("30");

        AudioParameters audio = new AudioParameters()
            .withCodec("AAC")
            .withSampleRate("44100")
            .withChannels("2");

        Thumbnails thumbnails = new Thumbnails()
            .withFormat("png")
            .withInterval("60");

        CreatePresetRequest presetRequest = new CreatePresetRequest()
            .withName(presetName)
            .withContainer("mp4")
            .withVideo(video)
            .withAudio(audio)
            .withThumbnails(thumbnails);

        try {
            CreatePresetResult result = transcoderClient.createPreset(presetRequest);
            presetId = result.getPreset().getId();
            persistNewProperty(ConfigProps.TRANSCODE_PRESET, presetId);
        } catch (AmazonServiceException e) {
            state = ProvisionState.UNPROVISIONED;
        }
    }
    return presetId;
}

Among other things, this preset defines the output format of a transcoded video, as well as how to generate thumbnail preview images for a video.

I can see the Preset created by this code in the Elastic Transcoder Management Console:

Persisting the Pipeline and Preset

The IDs of the Pipeline and Preset are important configuration values that every application server needs to know. Recall from Part 3 of this series, where we discussed how we store our application configuration file in S3.

The com.amediamanager.config.ConfigurationProvider abstract class we defined to help manage configuration declares a method that implementations must provide in order to persist new configuration:

public abstract void persistNewProperty(String key, String value);

Our com.amediamanager.config.S3ConfigurationProvider class implements that method by persisting new config data to the config file in S3.

When we create the Pipeline and Preset in the code samples above, we’re always sure to persist their values. Here we persist the Pipeline resource:

config.getConfigurationProvider().persistNewProperty(ConfigProps.TRANSCODE_PIPELINE, pipelineId);

And the Preset:

config.getConfigurationProvider().persistNewProperty(ConfigProps.TRANSCODE_PRESET, presetId);
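
To make the persistNewProperty contract concrete, here’s a small file-backed sketch using java.util.Properties. This is not the app’s S3ConfigurationProvider – just an illustration of the same persist-and-reload idea, with made-up class and key names:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// File-backed sketch of persistNewProperty; the app's real
// S3ConfigurationProvider persists to the config file in S3 instead.
public class FileConfigurationProvider {
    private final Path configFile;
    private final Properties props = new Properties();

    public FileConfigurationProvider(Path configFile) throws IOException {
        this.configFile = configFile;
        if (Files.exists(configFile)) {
            try (InputStream in = Files.newInputStream(configFile)) {
                props.load(in);
            }
        }
    }

    public String getProperty(String key) {
        return props.getProperty(key);
    }

    // Analogous to ConfigurationProvider.persistNewProperty(key, value)
    public void persistNewProperty(String key, String value) throws IOException {
        props.setProperty(key, value);
        try (OutputStream out = Files.newOutputStream(configFile)) {
  "aMediaManager config");
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("amm-config", ".properties");
        FileConfigurationProvider config = new FileConfigurationProvider(file);
        config.persistNewProperty("TRANSCODE_PIPELINE", "example-pipeline-id");
        // A fresh provider re-reads the persisted value
        System.out.println(new FileConfigurationProvider(file)
                .getProperty("TRANSCODE_PIPELINE"));
    }
}
```

The important property in either implementation is the same: a value persisted by one application server is visible to every other server the next time it reads the config source.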

Starting a Job

After a user uploads a video to S3 and is redirected to /video/ingest, we schedule a transcode job with Elastic Transcoder. We do that in the videoService.createVideoPreview method of com.amediamanager.service.VideoServiceImpl:

public void createVideoPreview(Video video) {
    // (pipelineId, presetId, and the Video accessors shown are illustrative)
    CreateJobRequest encodeJob = new CreateJobRequest()
        .withPipelineId(pipelineId)
        .withInput(new JobInput().withKey(video.getOriginalKey()))
        .withOutputKeyPrefix("uploads/converted/" + video.getOwner() + "/")
        .withOutput(new CreateJobOutput()
            .withKey(video.getId() + ".mp4")
            .withPresetId(presetId)
            .withThumbnailPattern(video.getId() + "-{count}"));
    try {
        CreateJobResult result = transcoderClient.createJob(encodeJob);
        // Associate the job ID with the video and update the video's
        // thumbnail to indicate its conversion is in progress
        video.setTranscodeJobId(result.getJob().getId());
        save(video);
    } catch (AmazonServiceException e) {
        LOG.error("Error creating transcode job", e);
    }
}

We can use the Elastic Transcoder Management Console to track jobs in our pipeline:

Search for the job:

Choose the search result:

View job details:

Polling SQS for Job Status

After you’ve started a job with Elastic Transcoder, the service reports job status to the SNS Topic you defined when you created the Pipeline. CloudFormation created this SNS Topic when we deployed our initial template, and also created and subscribed an SQS Queue to that topic. So, when Elastic Transcoder publishes status messages to SNS, they are ultimately buffered in the SQS Queue.
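
Each of those status messages carries a JSON document with fields like state and jobId. As an illustration only, here’s a naive sketch of pulling those fields out with a regular expression; the sample message is made up, and a real application should unwrap the SNS envelope and use a proper JSON library:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Naive sketch of extracting fields from an Elastic Transcoder status
// message. The sample message in main() is made up; a real app should
// unwrap the SNS envelope and use a JSON library instead of a regex.
public class StatusMessageParser {
    private static final Pattern FIELD =
            Pattern.compile("\"(state|jobId)\"\\s*:\\s*\"([^\"]+)\"");

    public static Map<String, String> parse(String json) {
        Map<String, String> fields = new HashMap<String, String>();
        Matcher m = FIELD.matcher(json);
        while (m.find()) {
            fields.put(, m.group(2));
        }
        return fields;
    }

    public static void main(String[] args) {
        String msg = "{\"state\" : \"COMPLETED\", \"jobId\" : \"1111111111111-abc123\"}";
        System.out.println(parse(msg));
    }
}
```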

Here’s what that looks like:

In com.amediamanager.scheduled.ElasticTranscoderTasks we poll SQS every 20 seconds looking for status messages from Elastic Transcoder:

protected void checkStatus() {
  String sqsQueue = config.getProperty(ConfigProps.TRANSCODE_QUEUE);
  ReceiveMessageRequest request = new ReceiveMessageRequest(sqsQueue)
      .withMaxNumberOfMessages(10);

  ReceiveMessageResult result = sqsClient.receiveMessage(request);

  for (Message msg : result.getMessages()) {
    // Update the video in RDS from the job status, then delete the
    // message from the queue (handled elsewhere in this class)
    handleMessage(msg);
  }
}
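
The 20-second cadence itself can be driven by a ScheduledExecutorService. The sketch below is illustrative – the class name is made up, and the Runnable stands in for the SQS receive-and-handle logic above:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of running a checkStatus-style poll on a fixed schedule, in the
// spirit of the app's 20-second SQS poll. The Runnable passed to start()
// stands in for the SQS receive/handle logic.
public class TranscoderStatusPoller {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(Runnable checkStatus, long period, TimeUnit unit) {
        // First run immediately, then repeat at the given period
        scheduler.scheduleAtFixedRate(checkStatus, 0, period, unit);
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        TranscoderStatusPoller poller = new TranscoderStatusPoller();
        poller.start(() -> System.out.println("polling SQS..."), 20, TimeUnit.SECONDS);
        Thread.sleep(100);
        poller.stop();
    }
}
```

A single polling thread is enough here because each poll drains up to ten messages at a time, and SQS will simply buffer anything that arrives between polls.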

When a video transcode job is complete, we update the video’s metadata in RDS with the new thumbnail and the link to the streamable video. We can see the result in our app’s landing page:

Coming Up: Part 5

First, don’t forget to join us for the live Office Hours Hangout later this week (or view the recording if it’s past May 1 2014 and you don’t have a time machine).

Next week in Part 5 of this series (blog post and Office Hours links forthcoming), we’ll look at how to improve application performance with the use of RDS Read Replicas and ElastiCache clusters.