AWS for M&E Blog

Create a conda package and channel for AWS Deadline Cloud

AWS Deadline Cloud is a fully managed service from Amazon Web Services (AWS) that gets a scalable visual compute farm up and running in minutes. Executing render jobs for digital content creation (DCC) applications like Blender, Houdini, Maya, and Nuke can be a rapid, turnkey experience with Deadline Cloud Service-Managed Fleets (SMF). Those applications are included with the service as conda packages for workers in the SMF deployment. But what if you are running a different DCC version, using third-party plugins, or customizing your pipeline code?

This blog post walks through how to build your own conda package, then host a conda channel in an Amazon S3 bucket to make the package available to Deadline Cloud render workers. You can create packages that bundle entire applications and run without dependencies, or build upon the thousands of packages maintained and hosted by the conda-forge community. With the ability to have custom conda packages and channels, you can extend your Deadline Cloud pipeline to support virtually any creative tool you use.

Following the steps in this blog post, you will use a Deadline Cloud queue to build a Blender 4.1 conda package starting from the official upstream binary builds, configure the production queue to use a new custom conda channel to find Blender 4.1, and render one of the Blender demo scenes on your farm.

Package build architecture

Architectural diagram depicting two queues - a production queue with read-only access to conda packages in an Amazon S3 bucket, and a package build queue with read-write access to upload new packages. Both queues share the same S3 bucket but have separate IAM roles controlling their S3 bucket permissions. This separation allows the production queue to use custom packages while the package build queue can create and manage them.

Figure 1: The relationship between the production queue (where Deadline Cloud rendering jobs normally execute) and the package building queue we create in this blog post.

The architecture deployed in this blog post adds a new package building queue to your farm, intended for use exclusively by conda package build jobs.

Highlights of this architecture:

  • Production queues have read-only access to the /Conda prefix of the S3 bucket, so they can use but not modify any of your custom conda packages.
  • The package building queue has read/write access to the /Conda prefix of the S3 bucket, so package build jobs can upload newly built packages and reindex the conda channel.
  • The package building queue has a separate job attachments prefix in the S3 bucket, so its data is separated from production data.
  • Package build jobs use the same fleet you already created for your production queue, reducing the number of separate infrastructure components you need to manage.

Prerequisites

This walkthrough requires the following:

  • An AWS Deadline Cloud farm with a production queue and an associated fleet (the quickstart onboarding fleet works).
  • Permissions in your AWS account to edit AWS Identity and Access Management (IAM) roles and policies.
  • The Deadline Cloud CLI installed, with access through the Deadline Cloud monitor or another form of AWS authentication.
  • git and a bash-compatible shell (on Windows, Git Bash).

Configure queue permissions for custom conda packages

While you can build conda packages locally, in this blog post we build packages with AWS Deadline Cloud. This simplifies delivery of the finished packages to the Amazon S3 bucket that we use as our conda channel, reduces the dependencies for building on your own compute, and allows you to build multiple packages without tying up your computer with build processes.

A custom conda channel for AWS Deadline Cloud requires an Amazon S3 bucket. You can create a new one, or reuse an existing S3 bucket from one of your queues. You can find the S3 bucket information for a queue on the job attachments tab of the queue details page in the Deadline Cloud console:
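
If you decide to create a new bucket, one way is with the AWS CLI; the bucket name and Region here are placeholders to replace with your own:

$> aws s3 mb s3://my-conda-channel-bucket --region us-west-2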

A screenshot of the AWS Deadline Cloud console showing the "Queue details" page for a queue named "Production Queue". It displays various configuration details about the queue, including its status, ID, ARN, creation date/time, and the AWS account/farm it belongs to. Notably, it highlights the "Job attachments bucket" name ("default-queue-s3-bucket") which is an S3 bucket used by this queue to store job data and attachments.

Figure 2: An example queue details page for “Production Queue”, which has a Job attachments bucket named “default-queue-s3-bucket”. 

The job attachments tab lists the currently configured bucket. Also note the queue service role, “Awsdeadlinecloudqueuerole”, located above the job attachments bucket. Your bucket name and queue role name will be different.

We need both the bucket name and the queue service role from the queue details page to configure the production queue. The goal is for the production queue to have read-only access to the new /Conda prefix in the S3 bucket, while the package build queue has read/write permissions. To edit the role permissions, select the queue service role on this page, which takes us straight to the AWS Identity and Access Management (IAM) page for that role.

When viewing the queue service role, select [+] to expand the policy starting with the name AWSDeadlineCloudQueuePolicy, and then select “Edit”.

By default, you will see a limited number of permissions for this queue role, as it obeys the principle of least privilege and is limited to accessing only specific resources in your AWS account. You can use either the visual or the JSON editor to add a new section like the following example, replacing the bucket name and account number with your own. The new addition to the policy must allow the s3:GetObject and s3:ListBucket permissions for both the bucket and the new /Conda prefix.

		{
			"Effect": "Allow",
			"Sid": "CustomCondaChannelReadOnly",
			"Action": [
				"s3:GetObject",
				"s3:ListBucket"
			],
			"Resource": [
				"arn:aws:s3:::default-queue-s3-bucket",
				"arn:aws:s3:::default-queue-s3-bucket/Conda/*"
			],
			"Condition": {
				"StringEquals": {
					"aws:ResourceAccount": "111122223333"
				}
			}
		},

Create a package building queue

Next, we create a new package building queue, to which we send jobs that build specific conda packages for the conda channel. From the farm page in the Deadline Cloud console, select “Create queue”.

For the S3 bucket, you can use the same bucket as the production queue or create a new one. We recommend creating a new prefix, such as /DeadlineCloudPackageBuild, so the artifacts here stay separate from your normal Deadline Cloud job attachments. For fleet associations, you can use one of your existing fleets or you can create an entirely new fleet if your current fleet is unsuitable.

For the queue service role, we recommend creating and using a new one, which is automatically granted read/write permissions to the S3 bucket and prefix you specified.
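
If you prefer scripting the setup, the AWS CLI can create an equivalent queue and associate it with an existing fleet. This is a sketch under the assumption that you already have the farm, fleet, and role identifiers at hand; all of the IDs and names below are placeholders:

$> aws deadline create-queue \
     --farm-id farm-0123456789abcdef0123456789abcdef \
     --display-name "Package Build Queue" \
     --role-arn arn:aws:iam::111122223333:role/PackageBuildQueueRole \
     --job-attachment-settings s3BucketName=default-queue-s3-bucket,rootPrefix=DeadlineCloudPackageBuild
$> aws deadline create-queue-fleet-association \
     --farm-id farm-0123456789abcdef0123456789abcdef \
     --queue-id queue-0123456789abcdef0123456789abcdef \
     --fleet-id fleet-0123456789abcdef0123456789abcdef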

Configure the package build queue permissions

Just as we modified the production queue role previously, we must similarly modify the package build queue’s role to give it read/write access to the /Conda prefix.

From the queue details page for the package build queue, select the queue service role, then [+], then “Edit”. Since this set of permissions needs to be read/write, the policy addition includes all four permissions that the default queue policy grants: s3:GetObject, s3:PutObject, s3:ListBucket, and s3:GetBucketLocation, plus the ability to delete objects with s3:DeleteObject. These permissions are needed for package build jobs to upload new packages and to reindex the channel. Replace the bucket name and account number in the example with your own.

		{
			"Effect": "Allow",
			"Sid": "CustomCondaChannelReadWrite",
			"Action": [
				"s3:GetObject",
				"s3:PutObject",
				"s3:DeleteObject",
				"s3:ListBucket",
				"s3:GetBucketLocation"
			],
			"Resource": [
				"arn:aws:s3:::default-queue-s3-bucket",
				"arn:aws:s3:::default-queue-s3-bucket/Conda/*"			],
			"Condition": {
				"StringEquals": {
					"aws:ResourceAccount": "111122223333"
				}
			}
		},

Build your own Blender 4.1 package

The following instructions use git from a bash-compatible shell to get an Open Job Description (OpenJD) package build job and some conda recipes from the Deadline Cloud samples GitHub repository. Windows installations of git include a version of bash, Git Bash, that you can use. You also need the Deadline Cloud CLI installed, and to be logged in through the Deadline Cloud monitor or have some other form of AWS authentication. The last step is submitting those OpenJD job bundles to the queue using the Deadline Cloud CLI.

Run deadline config gui in the bash-compatible shell to open the configuration GUI, and set the default farm and queue to the package building queue that you created.

With git clone, clone the Deadline Cloud samples GitHub repository, switch to its conda_recipes directory, and find a script called submit-package-job. Running this script the first time provides instructions for downloading Blender, as shown in the following example. Follow the instructions, and when the download is complete, run the script again to create the submission.

$> deadline config gui
$> git clone https://github.com/aws-deadline/deadline-cloud-samples.git
$> cd deadline-cloud-samples/conda_recipes
$> ./submit-package-job --recipe blender-4.1/
No S3 channel provided, using job attachments bucket default
ERROR: File blender-4.1/blender-4.1.1-linux-x64.tar.xz not found.
To submit the blender-4.1 package build, you need the archive blender-4.1.1-linux-x64.tar.xz
To acquire this archive, follow these instructions...
$> ./submit-package-job --recipe blender-4.1/
No S3 channel provided, using job attachments bucket default
Building package into conda channel s3://default-queue-s3-bucket/Conda/Default

+ deadline bundle submit build_linux_package -p RecipeName=blender-4.1 -p OverrideSourceArchive=blender-4.1/blender-4.1.1-linux-x64.tar.xz -p RecipeDir=blender-4.1/blender-4.1 -p 'S3CondaChannel=s3://default-queue-s3-bucket/Conda/Default' -p CondaChannels=
Submitting to Queue: Package Build Queue
...
Job creation completed successfully

Use the Deadline Cloud monitor to view the progress and status of the job as it runs. With the default 2 vCPU and 8 GiB RAM instance size specifications, it took 22 minutes to build the package, upload it to the S3 bucket, and then reindex the channel. The default fleet settings are relatively small for building conda packages and rendering, so we recommend increasing them.

A screenshot from the AWS Deadline Cloud Monitor showing details of a conda package build job named "CondaBuild: blender-4.1". The job had two steps - "PackageBuild" which took around 21 minutes to complete successfully, and "ReindexConda" which took under a minute. There was a single task under the "PackageBuild" step that also completed successfully in around 20 minutes. The monitor displays the job's priority, start/end times, and duration.

Figure 3. The Deadline Cloud monitor, with the package build job highlighted.

In the lower left of the Deadline Cloud monitor are the two steps of the job: building the package and then reindexing. In the lower right are the individual tasks for each step. In this example, there is only one task for each step.

Right click on the task for the package building step, and select “View logs”. On the right, a list of session actions shows how the task is scheduled on the worker host. The session actions are:

  1. Sync attachments. This action copies the input job attachments data or mounts a virtual file system, depending on the setting used for the job attachments file system.
  2. Launch Conda. This action is from the OpenJD queue environment that the Deadline Cloud console onboarding flow adds by default. Because the job specifies no conda packages, it finishes quickly and does not create a conda virtual environment.
  3. Launch CondaBuild Env. This action creates a customized conda virtual environment that includes the software needed to build a conda package and to reindex a channel. It installs them from the conda-forge channel.
  4. Task run. This action runs the Blender package build and uploads the resulting package to S3.

A screenshot from the AWS Deadline Cloud Monitor showing the logs for the "PackageBuild" step of the "CondaBuild: blender-4.1" job. This log displays details of the task execution, including the resource usage summary with total run time, CPU, memory, and disk utilization. It shows the successful upload of the built "blender-4.1.1-0.conda" package to the specified S3 bucket and "Conda/Default" prefix for the custom conda channel.

Figure 4. The log viewer within the Deadline Cloud monitor. Logs are stored in a structured format within Amazon CloudWatch. After a job completes, check “View logs for all tasks” to view additional logs regarding the setup and teardown of the environment that the job runs in.

If you’re curious about how the package building job is implemented, take a look at its source code. For example, within the channel reindexing step you will find a mutex that ensures only one package building job performs reindexing at a time. It’s implemented as an OpenJD environment, and uses Amazon S3 strong consistency for a correct implementation without needing additional infrastructure to run the job.
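
The sample implements this mutex inside the job’s OpenJD environment, but the core idea can be sketched with S3 conditional writes. The following is a minimal illustration, not the sample’s actual code; it assumes a recent AWS CLI that supports the --if-none-match condition on put-object, and the lock key name is hypothetical:

$> # Acquire the lock: the conditional PUT fails if another job already created the lock object.
$> aws s3api put-object --bucket default-queue-s3-bucket \
     --key Conda/Default/.reindex-lock --if-none-match '*'
$> # ... safely reindex the channel here ...
$> # Release the lock so the next package build job can reindex.
$> aws s3api delete-object --bucket default-queue-s3-bucket \
     --key Conda/Default/.reindex-lock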

Add a conda channel to the production queue environment

To use the S3 conda channel and the Blender 4.1 package, add the s3://<job-attachments-bucket>/Conda/Default channel location to the CondaChannels parameter of the jobs you submit to Deadline Cloud. The pre-built Deadline Cloud submitters provide fields where you can specify custom conda channels and conda packages.
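
For example, a job bundle submission from the CLI could pass the channel explicitly; the bucket name and scene path below are placeholders:

$> deadline bundle submit blender_render \
     -p CondaChannels="s3://default-queue-s3-bucket/Conda/Default deadline-cloud" \
     -p CondaPackages=blender=4.1 \
     -p BlenderSceneFile=/path/to/scene.blend \
     -p Frames=1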

You can avoid modifying every job by making a small edit to the conda queue environment for your production queue. Open the Deadline Cloud console and navigate to the queue environments tab for your production queue. Enable the checkbox in the list for the “Conda” queue environment, and then select “Edit”. For a Customer-Managed Fleet, you can enable the usage of conda packages by using one of the OpenJD conda queue environment samples in the Deadline Cloud samples GitHub.

In the section that specifies the CondaChannels parameter is a line that sets its default value as follows:

default: "deadline-cloud"

Edit that line to start with your newly created S3 conda channel:

default: "s3://<job-attachments-bucket>/Conda/Default deadline-cloud"

Because Service-Managed Fleets enable strict channel priority for conda by default, building blender in your S3 channel stops conda from considering blender packages from the lower-priority deadline-cloud channel at all. This means that a job that includes blender=3.6, which previously succeeded by using the deadline-cloud channel, will fail now that you have built Blender 4.1.
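
Strict channel priority is the same behavior you get locally with this conda setting, shown here only to illustrate what SMF workers enable by default:

$> conda config --set channel_priority strict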

Submit a Blender 4.1 job to the production queue

Now that you have a package built and your queue configured to use the channel it’s in, it is time to render with the package. First, switch to your production queue by running the CLI command deadline config gui once more, and selecting your production queue.

If you don’t have a Blender scene already, head over to the Blender demo files page and choose one to download. We chose the Blender 3.5 – Cozy Kitchen scene in the Blender Splash Artwork section, created by Nicole Morena and released under a CC-BY-SA license. The download consists of a file called blender-3.5-splash.blend, and it renders easily even on the quickstart onboarding fleet. To render other scenes, you may need to increase the fleet configuration values from the Deadline Cloud console.

The Deadline Cloud samples GitHub repository contains a sample job that can render a Blender scene by using the following commands.

$> deadline bundle submit blender_render \
     -p CondaPackages=blender=4.1 \
     -p BlenderSceneFile=/path/to/downloaded/blender-3.5-splash.blend \
     -p Frames=1
Submitting to Queue: Production Queue
...
Job creation completed successfully

In the Deadline Cloud monitor, select the task for the job you submitted, and then select the option to view the log. On the right side of the log view, select the session action called “Launch Conda.” You can see that it searched for Blender 4.1 in the two conda channels configured in the queue environment, and it found the package in the S3 channel.

When the job finishes, you can download the output to view the result.
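
You can fetch the output from the Deadline Cloud monitor, or with the CLI; this assumes your default farm and queue are configured, and the job ID shown is a placeholder:

$> deadline job download-output --job-id job-0123456789abcdef0123456789abcdef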

Clean up

  1. To delete the queue, follow the “Delete a queue” instructions from the Deadline Cloud documentation.
  2. To remove the /Conda prefix, navigate to the S3 bucket in the AWS console, open the bucket, select the /Conda prefix, select “Delete”, and follow the instructions (or use the CLI command shown after this list).
  3. Delete any files downloaded or git cloned by running the normal deletion steps for your OS.
  4. To remove the added permissions from the IAM policy, use the previous instructions to navigate to each policy and remove the sections you added.
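
Alternatively, this single AWS CLI command removes everything under the /Conda prefix; substitute your own bucket name:

$> aws s3 rm s3://default-queue-s3-bucket/Conda --recursive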

Conclusion

In this blog post we described how to modify queue role permissions, build a custom conda package for a new version of software, and add an S3 bucket to act as a conda channel for your production render queue. We designed Open Job Description and AWS Deadline Cloud to handle a large variety of compute and render jobs, expanding your pipeline far beyond the built-in SMF support and pre-built submitters we provide. Working from the provided examples, start with something simple such as a small plugin or Nuke gizmo, and start customizing the capabilities of your Deadline Cloud farm today.

Mark Wiebe

Mark Wiebe is a Principal Engineer at AWS Thinkbox, focused on solutions for creative content.

Sean Wallitsch

Sean is a Senior Solutions Architect, Visual Computing at AWS.