AWS Media Blog
Securing production workflows in AWS: Aligning to the MovieLabs Common Security Architecture for Production (CSAP)
In the previous blog in the series, Building a strong identity foundation: Aligning to the MovieLabs Common Security Architecture for Production (CSAP), we discussed the importance of building a strong identity foundation in Amazon Web Services (AWS). We did this by demonstrating how AWS services map to core components in the MovieLabs CSAP through a practical use case in production workflows: dailies editing. In this blog, we expand on the dailies editing use case to include asset ingest and asset exchange to vendors for workflows like Visual Effects (VFX), color correction, and final mastering.
The first principle in the MovieLabs 2030 Vision is that all assets are created or ingested straight into the cloud and do not need to be moved. To understand the impact of this principle, we discuss how asset ingestion works today for Original Camera Files (OCFs) from set to on-premises storage. During principal photography, camera cards are pulled from the camera and moved to the Digital Imaging Technician (DIT) cart, where data is copied from the cards to physical hard drives. Once the data is copied, the DIT typically backs up the data twice, verifies the data copied successfully, and then clears the camera card for reuse. The physical hard drives and backup copies are brought to a post-production facility once shooting has concluded for the day, where the data is loaded into a central storage system. The post-production facility transcodes the OCFs to create lower-resolution proxy files for dailies editing the next day. Sometimes transcoding is done on the DIT cart itself, depending on the available software.
While this process has worked for many years, it is manual and can result in production delays if drives get lost or damaged. In addition, dailies workflows are blocked until the media is delivered to the post-production facility, transcoded, and made available to editorial. AWS provides durable cloud object storage using Amazon Simple Storage Service (Amazon S3) with options to replicate data to a geographically distant region as a second copy and to an archival tier for a third copy. As we discussed in the previous blog, Amazon S3 allows customers to protect their data with robust access controls and unmatched security capabilities. There are many ways to get data into Amazon S3, including AWS-native services like AWS Direct Connect and AWS DataSync or AWS Partner solutions. AWS Direct Connect provides dedicated, private network connections between your data centers and AWS that do not traverse the public internet. AWS DataSync is a service that allows you to discover and migrate data securely to AWS; in this blog, we explore how you can use it to transport assets from set to AWS. Regardless of the solution you select, there are common best practices to ensure your data is secured as it’s moved into AWS.
The following architecture demonstrates how you can ingest on-set assets to AWS using AWS Direct Connect and AWS DataSync. To transfer data, you define a task that is tied to a specific set of agents, a source storage system, and a destination storage system.
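As a sketch of how those pieces fit together, the request payloads for an agent-backed source location and a task might be assembled as shown below, following the shape of the DataSync API as exposed by boto3. All ARNs, hostnames, and names here are hypothetical placeholders, and the source is assumed to be NFS storage on the DIT cart.

```python
# Sketch of the DataSync request payloads for the ingest workflow, assuming
# boto3. ARNs, hostnames, and names are hypothetical placeholders.

def build_source_location(agent_arn: str, on_set_server: str) -> dict:
    """On-set NFS storage exposed through the local DataSync agent."""
    return {
        "ServerHostname": on_set_server,            # DIT cart / on-set storage host
        "Subdirectory": "/ocf",                     # where camera originals land
        "OnPremConfig": {"AgentArns": [agent_arn]}, # agents that can reach this storage
    }

def build_ingest_task(source_location_arn: str, dest_location_arn: str) -> dict:
    """Ties the agent-backed source location to the Amazon S3 destination."""
    return {
        "SourceLocationArn": source_location_arn,
        "DestinationLocationArn": dest_location_arn,  # S3 bucket location
        "Name": "on-set-ocf-ingest",
    }

# In practice these dicts would be passed to the DataSync client, e.g.:
#   datasync = boto3.client("datasync")
#   loc = datasync.create_location_nfs(**build_source_location(agent_arn, host))
#   datasync.create_task(**build_ingest_task(loc["LocationArn"], dest_arn))
task = build_ingest_task(
    "arn:aws:datasync:us-west-2:111122223333:location/loc-src-EXAMPLE",
    "arn:aws:datasync:us-west-2:111122223333:location/loc-dst-EXAMPLE",
)
```
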
Figure 1: Asset ingest workflow using AWS DataSync and AWS Direct Connect
Note: This example assumes principal photography is conducted at a location with network connectivity to Direct Connect
This type of transport requires a local DataSync agent (step 1) running on the DIT cart or an on-set server that interfaces with the on-set storage system. The agent facilitates data transfer, using TLS for secure transport, from set to AWS over a private, low-latency, dedicated connection powered by AWS Direct Connect (step 2). To ensure traffic remains private between the on-site network and your Amazon Virtual Private Cloud (Amazon VPC), we recommend using a VPC endpoint (step 3) to enable private communication between the on-set agent and the DataSync service in the cloud. For more information on configuring secure transport from on-premises to AWS, refer to this blog post.
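A minimal sketch of creating that interface endpoint for the DataSync service, assuming boto3 and hypothetical VPC, subnet, and security group IDs:

```python
# Sketch of an interface VPC endpoint for the DataSync service (step 3),
# assuming boto3. The VPC, subnet, and security group IDs are hypothetical.

def build_datasync_endpoint_request(vpc_id: str, subnet_id: str,
                                    sg_id: str, region: str = "us-west-2") -> dict:
    """Request body for ec2.create_vpc_endpoint."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.datasync",
        "SubnetIds": [subnet_id],
        # This security group must allow inbound traffic from the on-set agent
        "SecurityGroupIds": [sg_id],
    }

request = build_datasync_endpoint_request("vpc-EXAMPLE", "subnet-EXAMPLE", "sg-EXAMPLE")
# boto3.client("ec2").create_vpc_endpoint(**request)
```
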
To verify the integrity of the transfers, AWS DataSync validation checks can be configured to calculate and compare checksums (step 4). AWS DataSync calculates the checksum of transferred files and metadata on the source system and then compares this to the calculated checksum on the files at the destination location during the transfer or when the transfer is complete. For guidance on which data validation setting to select, refer to the documentation.
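The validation behavior is selected through the task's VerifyMode option. A sketch of the choice, using the mode values the DataSync API exposes:

```python
# Checksum validation is controlled by the VerifyMode setting in a task's
# Options. The three values below are the modes the DataSync API exposes.

VERIFY_MODES = {
    # Scan the entire destination after the transfer and compare it to the
    # source; the most thorough (and slowest) option.
    "POINT_IN_TIME_CONSISTENT": "full post-transfer scan of the destination",
    # Compare checksums only for the files that were actually transferred.
    "ONLY_FILES_TRANSFERRED": "validate transferred files and metadata",
    # Skip validation entirely.
    "NONE": "no validation",
}

def build_execution_options(verify_mode: str) -> dict:
    """Options payload for create_task or a start_task_execution override."""
    if verify_mode not in VERIFY_MODES:
        raise ValueError(f"unknown VerifyMode: {verify_mode}")
    return {"Options": {"VerifyMode": verify_mode}}

opts = build_execution_options("ONLY_FILES_TRANSFERRED")
# datasync.start_task_execution(TaskArn=task_arn, **opts)
```
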
Once the assets have been transferred to AWS, you can use Amazon S3 cross-region replication to automatically copy assets to a separate region for disaster recovery (step 5). To ensure that data stays encrypted wherever it lands, you should enable bucket encryption on the destination region bucket, and S3 will handle re-encrypting the data as it moves between regions. To optimize costs, you can copy your assets to a colder storage tier in Amazon S3 such as Amazon S3 Glacier Instant Retrieval, Amazon S3 Glacier Flexible Retrieval, or Amazon S3 Glacier Deep Archive. In addition to creating copies of the assets, you can enable event notifications on the primary Amazon S3 bucket that generate events when dailies proxy files are added to the bucket to automate additional processing like transcoding using AWS Elemental MediaConvert. For data stored in S3, MediaConvert can be configured so that the output assets it produces are encrypted by S3. For downstream use cases like asset distribution, MediaConvert can also be configured to encrypt output assets using Digital Rights Management (DRM) solutions. In this case, MediaConvert uses keys provided by the DRM solution to encrypt the transcoded assets. For more information on MediaConvert and DRM, refer to the documentation. This ensures that only authorized viewers can consume content generated by MediaConvert. In the context of CSAP, this falls under the concept of explicit encryption, where “assets are encrypted individually or as a group such that the encryption is independent of how the assets are held.” When an asset is created and encrypted using a DRM solution, it can only be accessed or decrypted by an authorized user who holds a license for the content.
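As one concrete sketch of step 5, a replication rule that re-encrypts replicas with a destination-region KMS key and lands them in a colder storage class might look like the following. The dict follows the shape of S3's put_bucket_replication API; all ARNs and bucket names are hypothetical placeholders.

```python
# Sketch of an S3 cross-region replication configuration (step 5), following
# the shape of boto3's put_bucket_replication API. ARNs are hypothetical.

def build_replication_config(role_arn: str, dest_bucket_arn: str,
                             dest_kms_key_arn: str) -> dict:
    return {
        "Role": role_arn,  # IAM role S3 assumes to replicate on your behalf
        "Rules": [{
            "ID": "dr-copy-to-second-region",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},                                  # replicate everything
            "DeleteMarkerReplication": {"Status": "Disabled"},
            # Only replicate objects that are SSE-KMS encrypted at the source
            "SourceSelectionCriteria": {
                "SseKmsEncryptedObjects": {"Status": "Enabled"},
            },
            "Destination": {
                "Bucket": dest_bucket_arn,
                "StorageClass": "GLACIER",                 # colder tier for the DR copy
                # Re-encrypt replicas with a key in the destination region
                "EncryptionConfiguration": {"ReplicaKmsKeyID": dest_kms_key_arn},
            },
        }],
    }

config = build_replication_config(
    "arn:aws:iam::111122223333:role/s3-replication-EXAMPLE",
    "arn:aws:s3:::dailies-dr-bucket-EXAMPLE",
    "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
)
# boto3.client("s3").put_bucket_replication(
#     Bucket="dailies-primary-EXAMPLE", ReplicationConfiguration=config)
```
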
In the previous blog, we discussed the importance of layered security policies to build a secure data perimeter around your media assets. The same concepts are applicable in this extended asset ingest workflow. Policies applied to the Amazon S3 bucket, AWS Key Management Service (KMS) encryption keys, VPC endpoints, and IAM roles work together to secure access to your media and enforce access conditions like requiring TLS and requests to originate from VPC endpoints.
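A minimal sketch of two of those layered conditions, expressed as S3 bucket policy statements built in Python (the bucket name and VPC endpoint ID are hypothetical):

```python
import json

# Sketch of a bucket policy enforcing two data perimeter conditions:
# TLS-only access and requests originating from a known VPC endpoint.
# The bucket name and endpoint ID are hypothetical placeholders.

def build_perimeter_policy(bucket: str, vpce_id: str) -> dict:
    arns = [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"]
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Reject any request made without TLS
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": arns,
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            },
            {   # Reject requests that do not arrive via the expected VPC endpoint
                "Sid": "DenyRequestsOutsideVpcEndpoint",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": arns,
                "Condition": {"StringNotEquals": {"aws:sourceVpce": vpce_id}},
            },
        ],
    }

policy_json = json.dumps(build_perimeter_policy("dailies-media-EXAMPLE", "vpce-EXAMPLE"))
# boto3.client("s3").put_bucket_policy(Bucket="dailies-media-EXAMPLE", Policy=policy_json)
```

In a real deployment, the VPC endpoint deny statement would typically be scoped or paired with exceptions for administrative and replication principals, since as written it blocks every request that does not traverse that endpoint.
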
It’s common for multiple companies or studios to be involved in the post-production phase. Typically, many companies are involved across editorial, VFX, color correction/finishing, and quality control. To enable this collaboration, media needs to be accessible to vendors who might be geographically dispersed. Anytime media is being accessed by a party outside of the studio that owns the content, extra scrutiny is placed on the assets and the workflow that the vendor uses to retrieve the assets they need. Similar to asset ingest, there are many ways to share or exchange assets with vendors including AWS-native services like AWS DataSync or cross-account roles, AWS solutions including Media Exchange on AWS, and AWS Partner solutions. Regardless of the solution, there are common best practices to ensure your data is secured and only accessible by the individuals who need it.
The following architecture demonstrates how to exchange media assets with another vendor using AWS DataSync. With DataSync, customers can transfer to and from different storage systems including AWS storage services and object storage systems outside of AWS. For a full list of supported locations, refer to the documentation.
Figure 2: Asset exchange workflow using AWS DataSync
In this example, we use a shared S3 bucket as a temporary staging area for media assets to be consumed by vendors. This approach provides logical isolation from resources in the main account and the vendor account. It also reduces the impact of security events that may occur if the shared account is compromised. Only the assets pushed via a DataSync task will be accessible to the vendor using the shared bucket. Amazon S3 bucket policies, IAM roles, and KMS key policies enforce the data perimeter as we’ve discussed in previous blogs.
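As a sketch of that enforcement, the shared bucket's policy can grant a vendor's role read-only access to just the staged prefix. The account ID, role name, bucket, and prefix below are hypothetical placeholders.

```python
# Sketch of a staging bucket policy granting a vendor's cross-account role
# read-only access to a single project prefix. The account ID, role, bucket,
# and prefix are hypothetical placeholders.

def build_staging_policy(bucket: str, vendor_role_arn: str, prefix: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Let the vendor list only the staged prefix
                "Sid": "VendorListPrefix",
                "Effect": "Allow",
                "Principal": {"AWS": vendor_role_arn},
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": f"{prefix}/*"}},
            },
            {   # Let the vendor download staged assets, and nothing else
                "Sid": "VendorGetObjects",
                "Effect": "Allow",
                "Principal": {"AWS": vendor_role_arn},
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            },
        ],
    }

policy = build_staging_policy(
    "asset-exchange-staging-EXAMPLE",
    "arn:aws:iam::444455556666:role/vendor-vfx-EXAMPLE",
    "project-alpha/vfx-plates",
)
```
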
In this blog, we built upon the dailies editing use case from the previous blog in the series and highlighted common best practices for securing asset ingest and asset sharing workflows in AWS. For both of these workflows, there are many solutions customers can choose from including AWS-native services and solutions or AWS Partner solutions (like the Media2Cloud solution). Regardless of the technology, common security best practices apply when operating in AWS. In this blog, we discussed enforcing the principle of least privilege by building a data perimeter with layered policies, ensuring assets are encrypted in transit and at rest, and that network paths are well-defined and leverage private connectivity where possible. In the final blog in the series, we will discuss monitoring and observability best practices for production workflows and how AWS services align to the supporting components of the CSAP.