AWS for M&E Blog
Designing for studio-grade security
Media and Entertainment (M&E) customers are using the cloud for a wide range of use cases, including many aspects of content creation, post production, and large parts of the business-to-consumer, camera-to-screen supply chain. As these core media workloads move to the cloud, it's important to examine the security implications of a multi-tenant public cloud environment in light of the different asset classifications and workflows involved. The industry has already modernized many of these workflows: visual effects (VFX) studios use the cloud for burst rendering, and supply chains have adopted serverless/microservices patterns that use the cloud primarily as a data plane, even though the resulting content is typically written back to on-premises storage.

In this article, we discuss the feasibility of secure architecture patterns for high-value content creation in the cloud and walk through examples. A well-architected workflow that keeps both the content and the processing fleet in the cloud can improve the efficiency of the underlying content creation pipelines by leveraging the content lake concept: instead of moving content back and forth, the processing is moved closer to the content. This article assumes that readers are familiar with basic security concepts. Recommended prerequisite reading includes the Security Pillar of the AWS Well-Architected Framework and the AWS compliance whitepaper. The best practices discussed here are applicable to all M&E workflows, including content production and distribution.
Specific to M&E, the Motion Picture Association of America (MPAA) has created a set of content security guidelines for storage and processing workloads in the cloud. These best practices are drawn selectively from a set of industry security standards, including ISO, OWASP, CSA, PCI, and NIST 800-53. Alignment with the MPAA guidelines requires a self-assessment or inspection; there is no formal audit process. AWS has published a detailed security mapping report that shows how alignment with the MPAA controls is achieved; see this guidance document. In addition to the MPAA best practices, most major studios maintain their own security requirements, and service providers must follow them in their application environments for any given studio project. AWS has worked with a third-party auditor (Independent Security Evaluators, ISE) to assess AWS security controls and build reference templates of AWS security controls for content production workloads, including rendering for content creation and media asset management and archive in the cloud. These templates can be downloaded from AWS Artifact and are based on the security standards of major studios for tier-1 assets across these workloads. The goal here is not to comprehensively cover the security controls of each underlying AWS service, but rather to point the reader to a few common architecture patterns and security controls that can be used in conjunction with the above-mentioned templates to create secure workload environments that can be audited by the content/rights owner or their designated auditors.
Studio security requires access and infrastructure isolation across project environments (or across customers, in the case of service providers). AWS offers flexible options for structuring the AWS account(s) that manage the underlying resources; these options are designed to support cost allocation, agility, and security. Multiple accounts provide the highest level of resource and security isolation. Customers can use organizational units (OUs) with AWS Organizations to create hierarchical, logical account groupings, and service control policies (SCPs) to restrict, at the account level, which services and actions the users, groups, and roles in those accounts can use. A Cross-Account Manager solution can automate the provisioning of IAM roles and policies for cross-account access. In many cases, service providers such as VFX or post-production houses can leverage this automation to create a separate account for each studio project.
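As a minimal sketch of this multi-account pattern, the following Python (boto3) snippet creates a per-project organizational unit and attaches a simple SCP that denies everything outside a short allow-list of services. The root ID, OU name, and the allowed services are placeholder assumptions for illustration, not a recommended baseline.

```python
import json
import boto3

org = boto3.client("organizations")

# Placeholder values -- replace with your organization's root ID and project name.
ROOT_ID = "r-examplerootid"
PROJECT_NAME = "studio-project-a"

# Create a dedicated OU for the project's account(s).
ou = org.create_organizational_unit(ParentId=ROOT_ID, Name=PROJECT_NAME)

# Example SCP: deny any action outside a small set of approved services.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedServices",
            "Effect": "Deny",
            "NotAction": [
                "ec2:*", "s3:*", "kms:*",
                "cloudtrail:*", "cloudwatch:*", "logs:*", "config:*"
            ],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name=f"{PROJECT_NAME}-guardrails",
    Description="Restricts project accounts to approved services",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to the project OU so it applies to all accounts under it.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=ou["OrganizationalUnit"]["Id"],
)
```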
One concern that content owners often have with cloud-based persistent storage is the proper removal of content after it has been processed or created at the end of the workflow. The AWS security and compliance whitepaper discusses in depth how underlying storage devices are wiped before being re-provisioned to other customers. There are also several best practices for deleting content from block volumes after it has been processed and for processing the content directly from Amazon S3; we refer readers to the Security Pillar of the AWS Well-Architected Framework. Additionally, services like AWS Key Management Service (AWS KMS) provide the capability to build an end-to-end encrypted workflow with managed key distribution across the different processing nodes, including with customer-provided keys.
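The sketch below illustrates one such pattern in Python (boto3): content is written to Amazon S3 with server-side encryption under a customer-managed KMS key, processed directly from S3, and explicitly deleted when the workflow completes. The bucket name, object key, and KMS key ARN are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder identifiers for illustration.
BUCKET = "studio-content-bucket"
OBJECT_KEY = "project-a/plates/shot_010.exr"
KMS_KEY_ARN = "arn:aws:kms:us-west-2:111122223333:key/example-key-id"

# Write the asset with server-side encryption under the customer-managed KMS key.
with open("shot_010.exr", "rb") as f:
    s3.put_object(
        Bucket=BUCKET,
        Key=OBJECT_KEY,
        Body=f,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId=KMS_KEY_ARN,
    )

# ... processing nodes read the object directly from S3 (decryption is transparent
# to principals that hold both s3:GetObject and kms:Decrypt permissions) ...

# Remove the asset once the workflow completes. If versioning is enabled on the
# bucket, each version must also be deleted to fully remove the content.
s3.delete_object(Bucket=BUCKET, Key=OBJECT_KEY)
```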
To keep different project environments logically isolated, it is a best practice to use a separate virtual private cloud (VPC) per project. There are several best practices for VPC design depending on the customer scenario. One example is to use Amazon Simple Storage Service (Amazon S3) VPC endpoints and restrict content access based on the VPC's application function, as shown in Figure 1. VPC and network controls can be used to restrict access to an S3 bucket, or to the underlying storage, to private connectivity only (over VPN or AWS Direct Connect). This enables customers to build an access hierarchy per workload or content tier and grant access on an as-needed basis. Figure 1 details such a scenario, in which the content-holding bucket is restricted by appropriate policies and accessible only via AWS Direct Connect or VPN.
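A minimal sketch of the bucket-level control in Python (boto3): a bucket policy that denies any request not arriving through a specific S3 VPC endpoint. The bucket name and VPC endpoint ID are placeholders for illustration.

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "studio-content-bucket"          # placeholder
VPC_ENDPOINT_ID = "vpce-0example1234567"  # placeholder S3 gateway endpoint

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideProjectVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            # Reject any request that does not come through the project's VPC endpoint.
            "Condition": {"StringNotEquals": {"aws:sourceVpce": VPC_ENDPOINT_ID}},
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```

Note that a deny statement like this also blocks requests from the AWS Management Console and other paths that do not traverse the endpoint, so it should be tested carefully before being applied to a production bucket.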
One of the most powerful security differentiators of the AWS environment is the maturity of the platform with respect to value-added services such as comprehensive logging, monitoring, and automation. AWS CloudTrail can log API calls across deployed services (including compute, storage, and AWS KMS). Amazon S3 offers not only comprehensive server access logs but also bucket- and object-level API logging, which lets customers monitor failed attempts to access content. Customers can also enable VPC Flow Logs to capture information about the IP traffic going to and from network interfaces in their VPCs. Amazon CloudWatch can then be used to monitor these logs for specific patterns and to raise alarms on criteria of interest. CloudWatch can send notifications based on specific rule sets and can also trigger AWS Lambda functions that take action to self-remediate problems. A customer can take the AWS security template approved by a content owner for a specific environment and continuously assess the deployment against it using AWS Config, which records AWS resource configurations and automates their evaluation against the desired configuration. Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
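As an illustrative sketch in Python (boto3), the snippet below creates a CloudWatch Logs metric filter that counts S3 AccessDenied errors recorded by CloudTrail, plus an alarm that notifies an SNS topic when any occur. The CloudTrail log group name and SNS topic ARN are placeholder assumptions.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Placeholder names -- adjust to your CloudTrail log group and SNS topic.
LOG_GROUP = "CloudTrail/studio-project-a"
SNS_TOPIC_ARN = "arn:aws:sns:us-west-2:111122223333:security-alerts"

# Count S3 API calls recorded by CloudTrail that failed with AccessDenied.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="S3AccessDenied",
    filterPattern='{ ($.eventSource = "s3.amazonaws.com") && ($.errorCode = "*AccessDenied*") }',
    metricTransformations=[
        {
            "metricName": "S3AccessDeniedCount",
            "metricNamespace": "StudioSecurity",
            "metricValue": "1",
        }
    ],
)

# Alarm as soon as a failed content access attempt is observed.
cloudwatch.put_metric_alarm(
    AlarmName="S3AccessDeniedAlarm",
    Namespace="StudioSecurity",
    MetricName="S3AccessDeniedCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)
```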
A content owner can leverage a PaaS/SaaS or service provider application hosted in the AWS Cloud while keeping full control of, and visibility into, access to their content. The content owner encrypts the content in the various storage tiers using their own AWS KMS key. The content owner then creates an AWS IAM role that grants read permissions on their S3 bucket to the authenticated/authorized user, along with permission to call AWS KMS to obtain a data key for the encrypted content. The role is made available to the third-party environment via cross-account access, which the content owner must grant explicitly. The processing instances in the third-party environment are launched with an IAM instance profile carrying the above permissions. This allows for seamless key handling and end-to-end encryption of content within the workflow, along with constant monitoring and logging.
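A minimal sketch of such a cross-account role in Python (boto3): the content owner creates a role that the service provider's account is allowed to assume, scoped to reading the content bucket and decrypting with the content KMS key. The account IDs, bucket name, and key ARN are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder identifiers for illustration.
SERVICE_PROVIDER_ACCOUNT = "444455556666"
CONTENT_BUCKET = "studio-content-bucket"
CONTENT_KEY_ARN = "arn:aws:kms:us-west-2:111122223333:key/example-key-id"

# Trust policy: only the service provider's account may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{SERVICE_PROVIDER_ACCOUNT}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Permissions: read the content and decrypt it with the owner's KMS key -- nothing more.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{CONTENT_BUCKET}",
                f"arn:aws:s3:::{CONTENT_BUCKET}/*",
            ],
        },
        {
            "Effect": "Allow",
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": CONTENT_KEY_ARN,
        },
    ],
}

iam.create_role(
    RoleName="ContentReadCrossAccountRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="ContentReadCrossAccountRole",
    PolicyName="ContentReadAndDecrypt",
    PolicyDocument=json.dumps(permissions_policy),
)
```

Depending on how the content owner's KMS key policy is written, the key policy may also need to allow this role to use the key; the IAM policy alone is not always sufficient.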
The service provider can also leverage the cloud as persistent storage for writing high-value output content. Cross-account access and AWS KMS allow content owners to define and own a master key, and the data keys derived from it, that service providers (internal or external/third party) can use on an as-needed basis through the fully managed AWS KMS service for processing content within the content lake. The high-level architecture for leveraging a persistent store (EBS volumes) is shown in Figure 3. The following two approaches can be used to build such a system:
- Custom AMIs (Amazon Machine Images) can be created with encrypted EBS volumes. These AMIs can be created by an admin user group/role and shared across Regions, as well as across accounts (third parties or other organizations), as appropriate.
- The data volumes can also be encrypted and, with appropriate KMS key management, shared across accounts where needed, as sketched in the example below.
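The sketch below shows one way to share an encrypted data volume's snapshot with another account using Python (boto3): a KMS grant lets the target account use the customer-managed key, and the snapshot's create-volume permission is extended to that account. The account ID, snapshot ID, and key ARN are placeholders.

```python
import boto3

kms = boto3.client("kms")
ec2 = boto3.client("ec2")

# Placeholder identifiers for illustration.
TARGET_ACCOUNT = "444455556666"
KMS_KEY_ID = "arn:aws:kms:us-west-2:111122223333:key/example-key-id"
SNAPSHOT_ID = "snap-0example123456789"   # snapshot of an encrypted data volume

# Grant the target account permission to use the customer-managed key
# when copying the snapshot or creating volumes from it.
kms.create_grant(
    KeyId=KMS_KEY_ID,
    GranteePrincipal=f"arn:aws:iam::{TARGET_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)

# Share the encrypted snapshot itself with the target account.
ec2.modify_snapshot_attribute(
    SnapshotId=SNAPSHOT_ID,
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[TARGET_ACCOUNT],
)
```

Note that snapshots encrypted with the default AWS-managed key cannot be shared across accounts; a customer-managed KMS key, as assumed here, is required.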
We have showcased some important security controls available on the AWS Cloud and relevant design patterns. These controls can enable content owners and their service providers to build secure content production environments for M&E workloads on AWS. Across the best practices discussed here, the common theme is that less (or no) movement of content makes for a more secure content production workflow. Using the cloud with proper security controls lets content owners keep their content, and the environments that process it, within their own purview. Various parties can then provide their services against the content in the content owner's environment, with end-to-end encryption, full auditability, logging, and monitoring.