AWS for M&E Blog

Multi-region workflows for Flame on AWS using Hammerspace

Scaling storage globally

Building out a multi-site studio has traditionally been a challenging task for companies and productions. Driven by the need to win additional projects, hire world-renowned talent, or capture location-specific incentives, companies inevitably confront the realities of large capital expenditures, forecasting for maximum capacity, and onboarding large numbers of local IT staff. As production needs grow, infrastructure requirements scale past original estimates. Once a production ends, talent may depart, local incentives may become less attractive, and scaling down static on-premises infrastructure becomes a daunting and costly task.

Over time, this scenario has gone from being the exception to becoming the norm – where Visual Effects (VFX) and post-production facilities open and close satellite locations on a regular cadence.

This blog post describes an architecture that allows customers on Amazon Web Services (AWS) to elastically expand and contract their cloud infrastructure (to as many locations as required) to achieve maximum impact for their business, using the Hammerspace Global File System (or Global Data Environment), powered by AWS.

Hammerspace on AWS

Hammerspace provides a software-defined, automated data orchestration system with a complete set of data services to unify and manage data in a global data environment. This is accomplished via Hammerspace’s scalable software, which provides a single, global namespace for referring to an end user’s stored data. Essentially, this global namespace allows users to configure a single mount point per geographic site that can be accessed from as many sites as needed. Hammerspace’s software provides the ability to share all of this stored data (regardless of where it was created) across all applicable sites when required. For a deeper overview of Hammerspace and its Global File System, please visit this previous blog post: Multi-Region Rendering with Deadline and Hammerspace.

Hammerspace drastically reduces the additional orchestration required to keep stored data fully synchronized across multiple independent sites. Depending on the Hammerspace configuration used, hosted files can be sent to the required region on first access, or files can be pre-synchronized according to specified criteria (known as an ‘Objective’). Through these approaches, artists and production staff spend less time verifying data locality and more time on their core tasks. The following diagram outlines how files and folders are created at sites, and how they can be made accessible from all desired sites.

A diagram in two sections outlining the creation of files and folders at two different sites (Studio East and Studio West), where files and folders created at one site seamlessly become accessible at the other site

Figure 1: Hammerspace Global Namespace. Files and folders can be created at any site, and they are accessible through the same file path, across all sites

Using Amazon Elastic Compute Cloud (Amazon EC2) instances and AWS networking capabilities, this global namespace can be leveraged to provide a platform to quickly scale up the required resources for a VFX or finishing session, in any of the 32 global AWS Regions or 33 Local Zones (at the time of writing). If a studio needs to unlock talent in a region where they currently have no cloud infrastructure, then they can launch an appropriate set of cloud-based workstations, connect them to the Hammerspace global namespace, and then quickly access required data from a larger production.

Although this general technique is applicable to any workflow, this blog post focuses on leveraging Hammerspace’s capabilities to unlock collaboration in Flame specifically – through unmanaged media workflows.

Media support for Flame on AWS using Hammerspace

Flame can read and generate managed and unmanaged media.

In a managed workflow, Flame owns and manages media directly. If media is no longer used in a Flame project, Flame is responsible for deleting it. To ensure that Flame delivers real-time performance, managed media is also subject to performance expectations – it must be stored on high-performance storage.

In an unmanaged workflow, Flame references media files, but it does not directly own them. If media is no longer used in a Flame project, Flame does not delete it. This is generally the case for external media, such as a mounted Network File System (NFS) share. Because this media is not subject to the same performance expectations as managed media, Flame provides transparent tools to ‘cache’ unmanaged media – a seamless way to convert it to managed media.

A diagram showing Flame’s directly-connected storage, where Flame owns this data (labelled ‘Managed Media’) and an external storage volume (labelled ‘Unmanaged Media’)

Figure 2: A comparison of Flame managed media and unmanaged media

Hammerspace support

At the 2023 National Association of Broadcasters (NAB) show, AWS, Hammerspace, and Autodesk announced support for unmanaged media workflows in Flame on AWS using Hammerspace. Autodesk’s validation confirms that users can reliably use Hammerspace global mount points for unmanaged media workflows throughout the Flame product family. This capability is especially useful in conjunction with Flame’s Shot Publish workflow.

“Hammerspace’s qualification further enables our customers to move their productions to the cloud with a higher degree of flexibility around data and essence management” – Steve McNeill, Director of Engineering, Autodesk

Leveraging Flame’s Shot Publish workflow across regions

Flame provides a powerful unmanaged media workflow called ‘Shot Publish’, which is designed for multiple artists collaborating on work within the same timeline.

Using Shot Publish, Flame users can identify the shots within a timeline that need to be exported for collaboration. These shots can then be published to an unmanaged folder (such as a Hammerspace mount point). Exported data includes:

  • A folder/directory structure with consistent naming convention
  • Pre-configured Batch setups for remote artists
  • Open Clip files – used for simple versioning of renders

A diagram outlining the folder structures, Open Clip files and Renders – these are shared across multiple regions using Hammerspace technology

Figure 3: High level workflow for Shot Publish using multiple sites in Hammerspace

These files and directories are visible within seconds when Hammerspace is configured to accommodate multiple sites – that is, for any regions that share a Hammerspace global namespace and have a mount point.

One or more Flame/Flare artists, working from home or at a satellite office, can then browse to their assigned shots, open the pre-configured Batch setup, and begin their creative workflows. When further data is needed by these artists, it will be transferred to their local data repository.

A picture of the Flame user interface showing light beauty work being created at a remote site

Figure 4: Flame user interface showing light beauty work being created at a remote site

Once an artist is finished working on their version of a clip, rendering will update a corresponding Open Clip file. Shortly after, this new (updated) version will appear in the originating Flame timeline.

Under the hood, Flame reads an Open Clip file to detect a newly created version. As the Flame timeline reads newly rendered frames, they are transferred in parallel to the timeline’s local data repository. This synchronization is made possible by leveraging Hammerspace’s configuration options for multi-region workflows. If a Flame artist wants to transfer all frames from a version of a clip, without stepping through frame by frame, that artist can choose to cache (manage) that version – all of those frames are then transferred to high-performance storage.
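
To make this mechanism concrete, the sketch below is a hypothetical, simplified watcher (not Autodesk tooling) that polls a published Open Clip file on the shared mount and reports when a new version has been written. The path is an illustrative placeholder; since Open Clip (.clip) files are XML documents, a production tool would parse the version entries rather than only checking the modification time.

import os
import time

clip_path = "/hs-mount/project/shot_010/flame/shot_010.clip"   # hypothetical published clip

last_mtime = os.stat(clip_path).st_mtime
while True:
    time.sleep(5)                          # poll every few seconds
    mtime = os.stat(clip_path).st_mtime
    if mtime != last_mtime:                # file rewritten – likely a new version
        print(f"Open Clip updated: {clip_path}")
        last_mtime = mtime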

Configuring Hammerspace to support multi-region Flame

This section describes the configuration required to achieve the example workflow previously outlined.

An architectural diagram outlining a ‘multi-region Flame on AWS’ setup – with a Hammerspace configuration applied. On the East Coast, Flame artists connect to the AWS Cloud using a NICE DCV or HP Anyware client. Cloud-based Flame workstations mount the Hammerspace cluster using NFS. This cluster is connected to another Hammerspace cluster via VPC peering and shared object storage in the us-west-2 (Oregon) Region. In this Region, Flame and Flare EC2 instances mount a Hammerspace cluster through NFS. Remote artists connect to those EC2 instances interactively via the internet

Figure 5: A high-level architectural diagram of Flame on AWS using the Hammerspace unmanaged workflow

Using Amazon S3 for shared object storage

For Hammerspace to quickly transfer data between regions when needed, shared object storage is used. In this scenario, an Amazon Simple Storage Service (Amazon S3) bucket is used in one of the regions included in the collaboration setup. When configuring access to this bucket, it’s recommended that appropriate permissions, as well as AWS PrivateLink for Amazon S3, are applied – this ensures that all network traffic remains within AWS.
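
If the shared bucket is being provisioned programmatically, the following minimal boto3 sketch shows one way to create it – the bucket name and Region are illustrative placeholders, not values from this post:

import boto3

region = "us-west-2"                      # Region that will host the shared bucket
bucket = "shared-object-bucket-example"   # hypothetical bucket name

s3 = boto3.client("s3", region_name=region)

# Create the bucket in the chosen Region
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": region},
)

# Keep the bucket private; Hammerspace accesses it with IAM credentials and,
# optionally, through the PrivateLink interface endpoint described later
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)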

Permissions

When configuring Hammerspace, it’s necessary to create an IAM user and generate an access key with appropriate permissions to Amazon S3 (specifically, the shared object storage bucket).

This example IAM policy (in the JSON block below) will:

  • Allow the Hammerspace UI to browse ALL of the S3 buckets in a given account
  • Allow creating a Hammerspace volume on a specific S3 bucket
  • Allow transferring data to/from a specific S3 bucket
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BucketSpecific",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3::: [[shared-object-bucket]]"
            ]
        },
        {
            "Sid": "BucketContents",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3::: [[shared-object-bucket]]/*"
            ]
        },
        {
            "Sid": "HammerSpaceBrowseEverything",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets"
            ],
            "Resource": "*"
        }
    ]
}
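
For teams that prefer to script this setup, here is a minimal boto3 sketch under assumed names: it creates a dedicated IAM user, attaches the policy above as an inline policy, and generates the access key pair that is later entered into the Hammerspace UI. The user name, policy name, and bucket name are illustrative placeholders:

import json
import boto3

iam = boto3.client("iam")

bucket = "shared-object-bucket-example"   # hypothetical shared bucket name
user_name = "hammerspace-s3-user"         # hypothetical IAM user name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "BucketSpecific", "Effect": "Allow",
         "Action": ["s3:ListBucket"],
         "Resource": [f"arn:aws:s3:::{bucket}"]},
        {"Sid": "BucketContents", "Effect": "Allow",
         "Action": ["s3:GetObject", "s3:DeleteObject", "s3:PutObject"],
         "Resource": [f"arn:aws:s3:::{bucket}/*"]},
        {"Sid": "HammerSpaceBrowseEverything", "Effect": "Allow",
         "Action": ["s3:ListAllMyBuckets"],
         "Resource": "*"},
    ],
}

# Create the user, attach the policy inline, and generate an access key
iam.create_user(UserName=user_name)
iam.put_user_policy(
    UserName=user_name,
    PolicyName="HammerspaceSharedBucketAccess",
    PolicyDocument=json.dumps(policy),
)
access_key = iam.create_access_key(UserName=user_name)["AccessKey"]
print(access_key["AccessKeyId"])          # the key pair is entered in the Hammerspace UI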

Private Amazon S3 access

In its default configuration, Hammerspace uses public Amazon S3 endpoints to access the S3 buckets used in its collaboration setup. This is neither a secure nor a performant configuration, as data leaves the AWS network and traverses the public internet when it is transferred from site to site.

An architectural diagram illustrating the default options in Hammerspace, where data travels from Hammerspace services via the public internet to search for the shared S3 storage that is used for collaboration

Figure 6: Default public endpoints in Hammerspace allow data to traverse the public internet

For improved security and performance, it is recommended that data remain within the AWS Cloud while being transferred between sites. Leveraging VPC peering or AWS Transit Gateway, together with an AWS PrivateLink interface endpoint for Amazon S3, ensures these improvements.

An architectural diagram outlining the use of VPC peering to connect multiple regions, and a PrivateLink VPC Interface Endpoint to reach the shared S3 storage that is used for collaboration. Shared data (and its transmission) are confined to the AWS Cloud

Figure 7: VPC peering to connect multiple regions, and a PrivateLink VPC Interface Endpoint to reach the shared S3 storage that is used for collaboration via Hammerspace

To apply these improvements to the Hammerspace configuration, follow these steps:

  1. Configure VPC Peering or AWS Transit Gateway between all of the appropriate VPCs in order to provide access to the shared S3 bucket (from a VPC in another AWS region).
  2. Add a PrivateLink Interface Endpoint for S3 in the region containing the shared S3 bucket.
  3. Set this interface endpoint as the S3 endpoint in the Hammerspace UI (when adding its storage system).
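
For reference, the following is a minimal boto3 sketch of steps 1 and 2. The VPC, subnet, and security group IDs are placeholders, and the Regions are examples only:

import boto3

shared_region = "us-west-2"   # Region that hosts the shared S3 bucket
remote_region = "us-east-1"   # Region of a collaborating site

ec2_shared = boto3.client("ec2", region_name=shared_region)
ec2_remote = boto3.client("ec2", region_name=remote_region)

# Step 1: request VPC peering from the remote site's VPC to the shared site's VPC,
# then accept it in the Region that owns the shared bucket
peering = ec2_remote.create_vpc_peering_connection(
    VpcId="vpc-remote-placeholder",
    PeerVpcId="vpc-shared-placeholder",
    PeerRegion=shared_region,
)["VpcPeeringConnection"]

ec2_shared.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)
# Route table entries and security group rules still need to be added on both sides

# Step 2: create the S3 interface endpoint in the shared bucket's Region
endpoint = ec2_shared.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-shared-placeholder",
    ServiceName=f"com.amazonaws.{shared_region}.s3",
    SubnetIds=["subnet-placeholder"],
    SecurityGroupIds=["sg-placeholder"],
    PrivateDnsEnabled=False,              # use the endpoint-specific DNS names (next section)
)["VpcEndpoint"]
print(endpoint["VpcEndpointId"])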

Configure the Amazon S3 endpoint in the Hammerspace UI

The default ‘Amazon S3’ setting in Hammerspace assumes a public endpoint for all Amazon S3 communication. To use private endpoints, it is necessary to select the ‘Generic S3’ option and enter the AWS PrivateLink interface endpoint generated in the previous step. That endpoint should resemble this example format:

https://bucket.vpce-12a34bc5678d-abc12345.s3.us-west-2.vpce.amazonaws.com
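
The endpoint-specific DNS names can also be looked up programmatically. This hedged boto3 sketch retrieves an existing interface endpoint and derives a bucket-style hostname in the format shown above; the endpoint ID is a placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

resp = ec2.describe_vpc_endpoints(VpcEndpointIds=["vpce-placeholder"])
dns_name = resp["VpcEndpoints"][0]["DnsEntries"][0]["DnsName"]
# dns_name typically resembles "*.vpce-12a34bc5678d-abc12345.s3.us-west-2.vpce.amazonaws.com";
# replacing the wildcard with "bucket" yields the bucket-style endpoint for the Hammerspace UI
bucket_endpoint = "https://" + dns_name.replace("*", "bucket", 1)
print(bucket_endpoint)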

Hammerspace UI – assigning an S3 bucket storage system. ‘Generic S3’ is selected as the storage type. Also specified: Name, Access Key, Secret Key, and VPC Private Interface Endpoint

Figure 8: Hammerspace UI – adding an S3 bucket storage system

Once this S3 endpoint is configured, the corresponding shared volume can be added in Hammerspace to further expand its global filesystem.

Custom data synchronization via ‘Objectives’

Hammerspace provides flexible tools to manage the orchestration of data across sites. The logic that governs this orchestration is defined in ‘Objectives’. Using Flame as an example, it’s ideal to have the contents of Open Clip files always available at the location where the timeline is being used – regardless of which site updated them. This ensures that Flame always sees new versions of clips as soon as they are created. To do this, a custom Objective is used, and it’s added to the Anvil (the Hammerspace metadata server) at the location owning the timeline.

IF FNMATCH("./hs-mount/project/*/flame/*.clip",PATH)

THEN {SLO('place-on-local-volumes')}

Hammerspace UIs for creating and editing Objectives. The screenshot on the right side includes the ‘FNMATCH’ expression from the example above

Figure 9: Hammerspace UIs for creating and editing Objectives

Any files that are created or updated within a site and that match this Objective’s FNMATCH expression will have their contents transferred within seconds to the site managed by the Anvil on which the Objective was set. Since Open Clip files are generally small text files, the amount of data to transfer is trivial. Overall, this Objective mechanism can be applied to significantly improve collaborative workflows in Flame through intelligent data synchronization.

Summary

Using the approaches outlined in this blog post, geographically separated users can iterate and collaborate on the same Flame timeline in near real-time. As users create new versions of clips in Open Clip files, those files appear almost immediately in the Flame timeline. If a Flame user wants to play back these newly created versions, they can cache them locally – the underlying renders are transferred from the region where they were created, making for a truly collaborative workflow.

Prefer to see a demo of this workflow?

Watch Autodesk, AWS, and Hammerspace present this architecture on YouTube or visit the Hammerspace stand 7.B59 at IBC 2023, September 15-18.

About Autodesk

As a world leader in 3D design, engineering, and entertainment software, Autodesk delivers the broadest product portfolio, helping over 10 million customers, including 99 of the Fortune 100, to continually innovate through the digital design, visualization, and simulation of real-world project performance.

Learn More about Autodesk


About Hammerspace

Hammerspace is the data orchestration system that unlocks innovation and opportunity within unstructured data. It orchestrates the data to build new products, uncover new insights, and accelerate time to revenue across industries like AI, scientific discovery, machine learning, extended reality, autonomy, video production and more. Hammerspace delivers the world’s first and only solution to connect global users with their data and applications in any region or data center, breaking down data silos.


Mike Owen

Mike is a Principal Solutions Architect, Visual Computing at AWS.

Andy Hayes

Andy is a Senior Solutions Architect, Visual Computing at AWS.

DJ Rahming

DJ is a Senior Solutions Architect, Visual Computing at AWS.

David Israel

David is a Senior Architect, Spatial Engineering at AWS. David is passionate about spatial computing. His forte is backend software engineering, and he provides consulting expertise in M&E, reality capture, and enterprise-grade immersive technologies (AR/VR/XR) applied to Unreal Engine and VRED.

Sean Wallitsch

Sean is a Senior Solutions Architect, Visual Computing at AWS.