AWS for M&E Blog

Set up LucidLink with service managed fleet scripts for AWS Deadline Cloud

Modern creative workflows demand high-performance rendering capabilities that can scale on demand without the operational overhead of managing infrastructure. AWS Deadline Cloud service-managed fleets address this challenge. They handle the complexity of Amazon Elastic Compute Cloud (Amazon EC2) instance management, operating system updates, virtual private cloud (VPC) configuration, and networking, so you can focus on your rendering workloads rather than infrastructure operations.

When combined with the cloud-native file system of LucidLink, you gain a powerful solution that eliminates infrastructure management both in the cloud and on-premises. This integration provides creative teams a way to perform large-scale rendering operations with minimal infrastructure footprint, while maintaining instant access to project files from anywhere.

We will demonstrate how to configure AWS Deadline Cloud service-managed fleets with LucidLink integration, providing you with a solution for scalable, high-performance rendering workflows.

AWS Deadline Cloud now supports service-managed fleet configuration scripts, making it quicker than ever to integrate third-party storage solutions, such as LucidLink, into your rendering workflows. We will walk you through setting up a complete Deadline Cloud environment with LucidLink for high-performance, scale-out, cloud-based rendering.

Prerequisites

Before you begin, make certain you have:

  • An AWS account with access to AWS Deadline Cloud, AWS Secrets Manager, and AWS Identity and Access Management (IAM)
  • A LucidLink account with an existing Filespace, and credentials for a user that can access it

In this example, we will use the job execution flow to:

  • Install LucidLink using administrative privileges
  • Mount the appropriate LucidLink Filespace on the render node before job execution
  • Unmount the Filespace when the job completes
  • Create a custom job to validate connectivity to the Filespace

To do this, we will use these four touchpoints in the render lifecycle:

Figure 1. AWS Deadline Cloud LucidLink Integration Workflow

What you’ll build

By the end of this tutorial, you’ll have:

  • A Deadline Cloud farm with LucidLink-enabled fleets
  • Automated LucidLink client installation through fleet configuration scripts
  • OpenJD job templates that dynamically mount the filesystem for each job

Step 1: Store LucidLink credentials in AWS Secrets Manager

Securely store your LucidLink credentials using AWS Secrets Manager. These credentials will be retrieved by your rendering jobs to mount the LucidLink filesystem:

  • Open the AWS Secrets Manager console
  • Click on: Store a new secret
  • Select: Other type of secret
  • Add the following key-value pairs:

{
  "username": "your-lucidlink-username",
  "password": "your-lucidlink-password"
}

  • Name your secret: lucidlink-credentials
  • Complete the creation process
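
You can also create the same secret from the command line. Here is a minimal AWS CLI sketch, assuming the same secret name and placeholder credentials:

aws secretsmanager create-secret \
    --name lucidlink-credentials \
    --secret-string '{"username":"your-lucidlink-username","password":"your-lucidlink-password"}'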

Step 2: Set up your Deadline Cloud farm

Create your Deadline Cloud farm that will host your rendering workloads:

  • Navigate to the AWS Deadline Cloud console
  • Click on: Create farm
  • Configure your farm settings:
    • Farm name: lucidlink-render-farm
  • Click on: Create farm
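
If you are automating the setup, the equivalent AWS CLI call is a one-liner (a sketch; it assumes a recent AWS CLI version that includes the deadline commands):

aws deadline create-farm --display-name lucidlink-render-farm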

Step 3: Create a service-managed fleet with LucidLink support

Step 3a: Create the LucidLink installation script

Create a configuration script that installs the LucidLink client on your fleet instances. This script focuses only on installation and daemon setup. It is important to test this script on a standalone EC2 instance first to confirm it works in your environment before deploying to your fleet.

#!/bin/bash

# LucidLink Client Installation Script 
# This script only installs the LucidLink client and sets up the daemon service 
# Filesystem mounting is handled separately via OpenJD job templates 

set -ex

# -----------------------------Install tree utility-----------------------------

# Install tree for directory visualization
yum install tree -y

# ----------------------Install and configure LucidLink 3-----------------------
echo "Installing LucidLink client..." 
# Download latest stable LucidLink RPM package
wget -q https://www.lucidlink.com/download/new-ll-latest/linux-rpm/stable/ \
    -O lucidinstaller.rpm
ls -l lucidinstaller.rpm
# Install the LucidLink client
yum install -y lucidinstaller.rpm 

# Create systemd service for persistent daemon
echo "Creating systemd service for LucidLink daemon..." 
cat << EOF > /etc/systemd/system/lucidlink.service 
[Unit]
Description=LucidLink Filespace Daemon 
After=network-online.target
Wants=network-online.target

[Service] 
Type=simple
ExecStart=/usr/local/bin/lucid3 daemon
ExecStop=/usr/local/bin/lucid3 exit
Restart=on-failure
RestartSec=5
User=root
Group=root
[Install]
WantedBy=multi-user.target
EOF

# Enable and start the LucidLink daemon service
echo "Enabling and starting LucidLink daemon service..." 
systemctl daemon-reload
systemctl enable lucidlink
systemctl start lucidlink

# Create base mount directory with proper permissions
echo "Creating mount point..." 
mkdir -p /mnt/lucid
chmod a+rwx /mnt/lucid

# Verify installation
echo "LucidLink client installation complete" 
lucid3 status
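
To follow the advice above about testing on a standalone EC2 instance first, you can run the script manually and confirm the daemon came up (a sketch; the script filename is a placeholder):

# Run the installation script as root on a test instance
sudo bash lucidlink-install.sh

# Confirm the systemd service and client daemon are healthy
systemctl status lucidlink
lucid3 status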

Now create your fleet with the LucidLink configuration script:

  • In your Deadline Cloud farm, click on: Create fleet
  • Configure basic settings:
    • Fleet name: lucidlink-render-fleet
    • Fleet type: Service-managed
  • In the Worker capabilities section:
    • Select CPU or GPU instances
    • Operating system: Linux
    • Click on: Next
  • In the Set Up Additional configurations – Optional page:
    • Check: Enable Worker Configuration script
    • Script content: Paste your LucidLink configuration script

Step 4: Create the LucidLink mount queue environment

Step 4a: Create LucidLink Job Queue

  • In your Deadline Cloud farm, click on: Create queue
  • Configure basic settings:
    • Name: lucidlink-queue
    • Job attachments bucket name: lucidlink-attachments
    • Associate fleets: lucidlink-render-fleet

This creates a job queue that users can submit render jobs to, backed by the render fleet we created earlier.
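
If you prefer to script this step, a hedged AWS CLI sketch looks like the following (the farm, queue, and fleet IDs are placeholders, and the exact job attachment settings depend on your account):

# Create the queue with a job attachments bucket
aws deadline create-queue \
    --farm-id farm-0123456789abcdef \
    --display-name lucidlink-queue \
    --job-attachment-settings s3BucketName=lucidlink-attachments,rootPrefix=DeadlineCloud

# Associate the render fleet created earlier
aws deadline create-queue-fleet-association \
    --farm-id farm-0123456789abcdef \
    --queue-id queue-0123456789abcdef \
    --fleet-id fleet-0123456789abcdef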

Step 4b: Update Queue IAM role permissions

Find the IAM role that the Queue is using, and add permission to access the secret in Secrets Manager:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "arn:aws:secretsmanager:*:*:secret:lucidlink-credentials*"
    }
  ]
}
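
One way to attach this policy is as an inline policy on the queue role using the AWS CLI (a sketch; the role name is a placeholder, and the JSON above is assumed saved as lucidlink-secret-policy.json):

aws iam put-role-policy \
    --role-name YourDeadlineCloudQueueRole \
    --policy-name LucidLinkSecretAccess \
    --policy-document file://lucidlink-secret-policy.json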

We can use Queue Environments to run custom scripts directly in Deadline Cloud whenever a job runs on a specific queue. In this example, we will run Bash scripts at two points during job execution:

  • As the job starts (Queue Environment - Enter)
  • After the job is complete (Queue Environment - Exit)

In the Queue Environment, we can expose parameters to the user through parameterDefinitions. In this case, we're allowing the user to change the following settings, and also setting defaults:

  • Lucid Secret Name (the name of the secret stored in AWS Secrets Manager)
  • The LucidLink Workspace and Filespace

In Open Job Description YAML, this is what that looks like:

parameterDefinitions:
- name: LucidSecretName
  type: STRING
  description: AWS Secrets Manager secret name containing LucidLink credentials
  default: lucidlink-credentials
- name: LucidWorkspace
  type: STRING
  description: LucidLink workspace name to mount
  userInterface:
    control: LINE_EDIT
    label: Workspace Name
  default: lucid-workspace
- name: LucidFilespace
  type: STRING
  description: LucidLink filespace name to mount
  userInterface:
    control: LINE_EDIT
    label: Filespace Name
  default: lucid-filespace

On Queue Environment Enter

#!/bin/bash
# Exit on error, undefined vars, pipe failures
set -euo pipefail

echo "Mounting LucidLink filesystem..."
# Create mount point directory
MOUNTPOINT="/mnt/lucid/{{Param.LucidWorkspace}}/{{Param.LucidFilespace}}"

mkdir -p "${MOUNTPOINT}"

# Get credentials from AWS Secrets Manager
SECRET=$(aws secretsmanager get-secret-value \
    --secret-id "{{Param.LucidSecretName}}" \
    --query 'SecretString' --output text)

# Parse credentials from JSON
LUCID_USERNAME=$(echo "$SECRET" | jq -r '.username')
LUCID_PASSWORD=$(echo "$SECRET" | jq -r '.password')

# Mount filesystem
echo "${LUCID_PASSWORD}" | lucid3 link \
    --fs "{{Param.LucidFilespace}}.{{Param.LucidWorkspace}}" \
    --user "${LUCID_USERNAME}" \
    --mount-point "${MOUNTPOINT}" \
    --fuse-allow-other

# Verify mount is accessible
if [ -d "${MOUNTPOINT}" ] && [ "$(ls -A ${MOUNTPOINT})" ]; then
    echo "Mount verification successful"
else
    echo "Warning: Mount point appears empty or inaccessible"
    exit 1
fi

On Queue Environment Exit

echo "Unmounting LucidLink filesystem..."
lucid3 unlink --fs "{{Param.LucidFilespace}}.{{Param.LucidWorkspace}}" || true
echo "LucidLink filesystem unmounted"

If we put this together into a single Open Job Description environment, we can add it for use in our Deadline Queue.

specificationVersion: 'environment-2023-09'
parameterDefinitions:
- name: LucidSecretName
  type: STRING
  description: AWS Secrets Manager secret name containing LucidLink credentials
  default: lucidlink-credentials
- name: LucidWorkspace
  type: STRING
  description: LucidLink workspace name to mount
  userInterface:
    control: LINE_EDIT
    label: Workspace Name
  default: lucid-workspace
- name: LucidFilespace
  type: STRING
  description: LucidLink filespace name to mount
  userInterface:
    control: LINE_EDIT
    label: Filespace Name
  default: lucid-filespace
environment:
  name: LucidLinkMount
  description: Environment for mounting LucidLink filesystem
  script:
    actions:
      onEnter:
        command: "{{Env.File.MountLucidLink}}"
      onExit:
        command: "{{Env.File.UnmountLucidLink}}"
    embeddedFiles:
    - name: MountLucidLink
      type: TEXT
      runnable: true
      data: |
        #!/bin/bash
        # Exit on error, undefined variables, and pipe failures
        set -euo pipefail

        echo "Mounting LucidLink filesystem..."
        # Create mount point directory
        MOUNTPOINT="/mnt/lucid/{{Param.LucidWorkspace}}/{{Param.LucidFilespace}}"
        mkdir -p "${MOUNTPOINT}"

        # Get credentials from AWS Secrets Manager
        SECRET=$(aws secretsmanager get-secret-value \
            --secret-id "{{Param.LucidSecretName}}" \
            --query 'SecretString' --output text)

        # Parse credentials from JSON
        LUCID_USERNAME=$(echo "$SECRET" | jq -r '.username')
        LUCID_PASSWORD=$(echo "$SECRET" | jq -r '.password')

        # Mount filesystem
        echo "${LUCID_PASSWORD}" | lucid3 link \
            --fs "{{Param.LucidFilespace}}.{{Param.LucidWorkspace}}" \
            --user "${LUCID_USERNAME}" \
            --mount-point "${MOUNTPOINT}" \
            --fuse-allow-other

        # Verify mount is accessible
        if [ -d "${MOUNTPOINT}" ] && [ "$(ls -A ${MOUNTPOINT})" ]; then
            echo "Mount verification successful"
        else
            echo "Warning: Mount point appears empty or inaccessible"
            exit 1
        fi
    - name: UnmountLucidLink
      type: TEXT
      runnable: true
      data: |
        echo "Unmounting LucidLink filesystem..."
        lucid3 unlink --fs "{{Param.LucidFilespace}}.{{Param.LucidWorkspace}}" || true
        echo "LucidLink filesystem unmounted"

To load this in AWS Deadline Cloud, select the desired queue and navigate to the 'Queue Environments' tab:

Figure 2. AWS Deadline Cloud queue environments table

Select Actions, Create New from YAML, and paste in the queue environment script.

Figure 3. AWS Deadline Cloud edit queue environment page
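
You can also register the environment from the command line instead of the console (a hedged sketch; it assumes the YAML above is saved as lucidlink-environment.yaml and that you have your farm and queue IDs at hand):

aws deadline create-queue-environment \
    --farm-id farm-0123456789abcdef \
    --queue-id queue-0123456789abcdef \
    --priority 1 \
    --template-type YAML \
    --template file://lucidlink-environment.yaml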

Step 5: Test your setup

Figure 4. Completed AWS Deadline Cloud task execution

To validate your LucidLink integration, create and submit a test job using the AWS Deadline Cloud CLI.

Install the Deadline Cloud CLI
First, install the Deadline Cloud CLI tools: pip install 'deadline[gui]'
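
Optionally, you can store default farm and queue IDs so later CLI commands can omit them (a sketch; the IDs shown are placeholders):

deadline config set defaults.farm_id farm-0123456789abcdef
deadline config set defaults.queue_id queue-0123456789abcdef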

Install the Deadline Cloud Monitor
The Deadline Cloud Monitor is required for job authentication and monitoring. Download and install it from the AWS Deadline Cloud console:

  • Navigate to the AWS Deadline Cloud console.
  • Go to the service page.
  • Download the Deadline Cloud Monitor for your operating system.
  • Install the application by following the provided instructions.

Create a test job template
Create a basic OpenJD job template to verify your LucidLink filesystem is properly mounted and accessible. Save the following content as template.yaml in a new directory:

specificationVersion: 'jobtemplate-2023-09'
name: Test LucidLink Mount
steps:
- name: Print Tree of LucidLink Mount
  script:
    actions:
      onRun:
        command: '{{Task.File.runScript}}'
    embeddedFiles:
    - name: runScript
      type: TEXT
      runnable: true
      data: |
        #!/usr/bin/env bash
        set -ex
        tree --du -h -L 6 /mnt/lucid

Submit and monitor the test job

Submit your test job using one of the following methods:

  • Command line submission: deadline bundle submit --farm-id <farm-id> --queue-id <queue-id> <bundle-directory>
  • GUI submission (allows parameter customization): deadline bundle gui-submit <directory-name>

Verify the integration
Once the job is completed, verify that your LucidLink filesystem was successfully mounted:

  • Open the Deadline Cloud Monitor application
  • Locate your test job in the job list
  • Click on View Logs to examine the job output
  • Confirm that the logs display the directory tree structure of your LucidLink mount point at /mnt/lucid

A successful test will show the complete directory hierarchy of your LucidLink filespace, confirming that the filesystem was properly mounted and is accessible during the job execution.

Considerations

This modular architecture separates installation from mounting, allowing different jobs to use different filesystem configurations. The LucidLink daemon runs as a systemd service for persistence across job executions, while mount points need permissive access (the installation script applies chmod a+rwx) and the --fuse-allow-other flag so render processes can reach them. Credentials are securely retrieved from AWS Secrets Manager, and the OpenJD template automatically handles mounting and unmounting for clean resource management. Always test your configuration script on a standalone EC2 instance before deploying to your fleet.

Troubleshooting

If you encounter issues during setup or job execution, start by checking the fleet configuration script logs in the AWS Deadline Cloud console to verify the LucidLink client installed correctly. For mounting problems, right-click on a failed task and select 'View Worker Logs' to confirm the daemon is running, and review job-specific logs for any filesystem access errors. Permission issues typically indicate that either the fleet's IAM role lacks access to your Secrets Manager secret or the mount point permissions need adjustment. Template-related problems can be identified by validating your OpenJD YAML syntax and testing job templates with the Deadline Cloud CLI before submitting production workloads.

Cleanup

If you decide to discontinue using Deadline Cloud with LucidLink, delete your fleet and queue in the AWS Deadline Cloud console. If you created a test EC2 instance, terminate it from the EC2 console. Optionally, delete the AWS Secrets Manager secret containing your LucidLink credentials if it's no longer needed.
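
A hedged CLI sketch of the same cleanup (IDs are placeholders; depending on state, you may need to stop and delete the queue-fleet association before the queue and fleet can be removed):

# Remove the queue, fleet, and farm, in that order
aws deadline delete-queue --farm-id farm-0123456789abcdef --queue-id queue-0123456789abcdef
aws deadline delete-fleet --farm-id farm-0123456789abcdef --fleet-id fleet-0123456789abcdef
aws deadline delete-farm --farm-id farm-0123456789abcdef

# Optionally remove the stored credentials
aws secretsmanager delete-secret --secret-id lucidlink-credentials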

Conclusion

By separating LucidLink installation from filesystem mounting, you achieve a more flexible and maintainable setup. The service-managed fleet handles the one-time client installation, while OpenJD job templates manage filesystem mounting per job, providing:

  • Modularity: Different jobs can mount different filesystems or use different configurations
  • Resource Efficiency: Filesystems are only mounted when needed and automatically unmounted
  • Security: Credentials are handled securely within job environments
  • Reliability: Proper error handling and cleanup in both installation and mounting phases

This architecture provides high-performance shared storage for your rendering workflows, while maintaining security best practices and streamlining operations. The combination of the global file system of LucidLink and the scalable rendering infrastructure of AWS Deadline Cloud provides creative teams a way to collaborate seamlessly across geographic boundaries.

Contact an AWS Representative to learn how we can help accelerate your business.

Further reading

For more information about AWS Deadline Cloud service-managed fleets and configuration scripts, visit the AWS Deadline Cloud documentation. To learn more about Open Job Description (OpenJD), visit the OpenJD specifications GitHub page, or the OpenJD documentation page.

About LucidLink

LucidLink is a storage collaboration platform that frees creative teams to work together from anywhere. With a single shared filespace protected by zero-knowledge encryption, your team can instantly and securely access, edit and share projects of any size. Learn more about LucidLink on AWS.

Zach Willner

Zach is a Senior Partner Solutions Architect for Media and Entertainment at AWS. His role is building a diverse ecosystem of partners on AWS for Media and Entertainment.

DJ Rahming

DJ is a Senior Solutions Architect, Visual Computing at AWS.