AWS Spatial Computing Blog

Deploying NVIDIA Omniverse Nucleus on Amazon EC2

Introduction

This post aims to get users up and running with NVIDIA Omniverse Enterprise Nucleus Server on Amazon Elastic Compute Cloud (Amazon EC2). Here I’ll outline the requirements for Enterprise Nucleus Server deployment and dive deep into the technical steps for getting Nucleus running in your Amazon Web Services (AWS) account.

What is Omniverse?

NVIDIA Omniverse is a scalable, multi-GPU, real-time platform for building and operating metaverse applications, based on Pixar’s Universal Scene Description (USD) and NVIDIA RTX technology.

NVIDIA Omniverse Nucleus is the database and collaboration engine of Omniverse. With Omniverse Nucleus, teams can have multiple live users connected using different applications at once. This allows people to use the application they are most comfortable with and opens a lot of doors for rapid iteration. Learn more on the NVIDIA Omniverse Introduction Documentation.

Nucleus operates under a publish-and-subscribe model and enables efficient live synchronization between NVIDIA Omniverse applications. Connected clients subscribe to changes, so edits to USD scenes are transmitted between them in near real-time.
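As a loose illustration of the pattern (not Nucleus's actual API), a minimal publish-and-subscribe channel can be sketched in Python:

```python
from collections import defaultdict

class Channel:
    """Minimal publish-and-subscribe sketch: every subscriber to a topic
    receives each change published to it, similar in spirit to how Nucleus
    pushes USD scene deltas to connected clients. Illustration only."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to be invoked on every publish to this topic.
        self.subscribers[topic].append(callback)

    def publish(self, topic, change):
        # Fan the change out to all current subscribers.
        for callback in self.subscribers[topic]:
            callback(change)

channel = Channel()
received = []
channel.subscribe("scene.usd", received.append)
channel.publish("scene.usd", {"prim": "/World/Cube", "attr": "size", "value": 2.0})
```

In a real deployment the subscribers are Omniverse applications on different machines and the transport is Nucleus's own services, but the fan-out shape is the same.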

Other Nucleus features include user and group management, asset access control lists (ACLs) for fine-grained access control, versioning with checkpoints, single sign-on (SSO) with SAML authentication, and TLS encryption support.

Why AWS?

There are multiple reasons to deploy Nucleus on the AWS Global Cloud Infrastructure. With AWS you can connect distributed users all over the globe. Our security, identity, and access management controls allow you to retain complete control over your data. Also, with the variety of compute instance types and storage solutions AWS offers, you can right-size your infrastructure and fine-tune performance as needed.

Solution Overview

The following steps outline a solution that implements the basic components of a Nucleus deployment. To handle communication from end users, an Amazon EC2 instance configured as an NGINX reverse proxy is deployed in a public subnet. The reverse proxy accepts TLS traffic and has a TLS certificate from AWS Certificate Manager (ACM). Typically, this component would be an Elastic Load Balancer (ELB), but the Nucleus Server requires path rewrites in the request, which are not currently supported by an ELB.

The Enterprise Nucleus Server is an Amazon EC2 instance deployed to a private subnet that only accepts traffic from the reverse proxy's subnet. It runs the Nucleus Enterprise stack, which is deployed as a Docker Compose stack. The Nucleus instance needs a NAT gateway and an internet gateway to communicate with NVIDIA NGC. This procedure uses the basic Nucleus stack with TLS support, not SSO.

Prerequisites

To follow along, you will need an AWS account with the AWS CLI configured, the AWS CDK installed (this walkthrough uses TypeScript), and the Enterprise Nucleus Server stack archive, which NVIDIA provides to Omniverse Enterprise customers.

Deploying Omniverse Nucleus on Amazon EC2

Omniverse Enterprise Nucleus on Amazon EC2 Architecture

Register a domain and create a hosted zone with Amazon Route 53

First, you will need a hosted zone and a domain for the Nucleus Server. Amazon Route 53 (Route 53) allows registration of a domain, such as my-omniverse.com, and creation of a subdomain, such as nucleus.my-omniverse.com, for the Nucleus Server. When registering a domain, communication occurs with the domain registrar. It is best to do this step manually and then reference the Hosted Zone ID, created by Route 53, in the subsequent configuration steps.

See this page for more information on registering a domain and creating a hosted zone: Registering a new domain.

Configure the CDK Stack

Next, you will configure a CDK stack with basic resources for the Nucleus deployment.

Step 1. Open a terminal and create a project folder for your CDK app

The name of the folder will become the application name. For this procedure, nucleus-app is the name used.

Step 2. Change directory into the folder created in Step 1 and initialize your CDK app with the following command:

cdk init sample-app --language=typescript

Now your project structure should be the following:

nucleus-app/
    bin/
        nucleus-app.ts
    lib/
        nucleus-app-stack.ts
    additional config files …

nucleus-app.ts is the main entry point for the app and the file that subsequent CDK commands will reference. When viewing this file, you can see it imports lib/nucleus-app-stack.ts, which is where you’ll put custom code for your deployment.

Step 3. Run a basic “Hello World” test

Deploy the starter CDK stack with cdk deploy. This produces a basic stack and confirms your CLI and CDK are properly configured. If the target account and Region have never been used with the CDK before, run cdk bootstrap once before deploying.

Step 4. Set default account and AWS Region environment values

Open bin/nucleus-app.ts and set the default account and Region environment (env) values. The contents of the file should look like the following:

#!/usr/bin/env node
import * as cdk from 'aws-cdk-lib';
import { NucleusAppStack } from '../lib/nucleus-app-stack';

const app = new cdk.App();
new NucleusAppStack(app, 'NucleusAppStack', {
    env: { account: process.env.CDK_DEFAULT_ACCOUNT, region: process.env.CDK_DEFAULT_REGION },
});

Step 5. Remove sample resources

Open lib/nucleus-app-stack.ts and remove the sample Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS) resources. Your file should now look like the following:

import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class NucleusAppStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    
    // custom resources go here

  }
}

Step 6. Add the below CDK libraries, as these are required in subsequent steps

import * as route53 from 'aws-cdk-lib/aws-route53'
import { Tags, CfnOutput } from 'aws-cdk-lib'
import * as s3 from 'aws-cdk-lib/aws-s3'
import * as ec2 from 'aws-cdk-lib/aws-ec2'
import * as iam from 'aws-cdk-lib/aws-iam'
import * as acm from 'aws-cdk-lib/aws-certificatemanager'

Define Stack Resources

Next, you will define the custom infrastructure resources required for the deployment. Code samples in this section need to be added inside the constructor of the NucleusAppStack class.

Step 1. Create an Amazon Simple Storage Service (Amazon S3) bucket for artifacts

First, create a simple Amazon S3 bucket that will be used to transfer artifacts from our local client to the Amazon EC2 instances. Per security best practices, enable encryption, enforce SSL, and block public access. Then create an AWS Identity and Access Management (IAM) policy that allows listing the bucket and getting objects from it. This policy will be attached to our Amazon EC2 instance profile role.

const artifactsBucket = new s3.Bucket(this, 'artifactsBucket', {
    encryption: s3.BucketEncryption.S3_MANAGED,
    enforceSSL: true,
    blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
});

const getBucketObjectsPolicy = new iam.PolicyDocument({
    statements: [
    new iam.PolicyStatement({
        actions: [
            "s3:GetObject",
            "s3:ListBucket",
        ],
        resources: [
            `${artifactsBucket.bucketArn}`,
            `${artifactsBucket.bucketArn}/*`
        ]
    })]
});

Step 2. Add an Amazon Virtual Private Cloud (VPC) configuration

Create a VPC with a public subnet containing a NAT gateway and a private subnet that routes to the internet through it. Then provision two security groups, one for the proxy server and one for the Nucleus Server.

const eip = new ec2.CfnEIP(this, 'natGatewayElasticIP', {
      domain: 'vpc'
    });

    const vpc = new ec2.Vpc(this, 'nucleusVpc', {
      cidr: '10.0.0.0/20',
      natGateways: 1,
      subnetConfiguration: [{
        name: 'publicSubnetNatGateway',
        subnetType: ec2.SubnetType.PUBLIC,
        cidrMask: 24
      }, {
        name: 'privateSubnet',
        subnetType: ec2.SubnetType.PRIVATE_WITH_NAT,
        cidrMask: 24
      }],
      natGatewayProvider: ec2.NatProvider.gateway({
          eipAllocationIds: [eip.attrAllocationId]
      }),
    });

    const proxySG = new ec2.SecurityGroup(this, 'reverseProxySG', {
      vpc: vpc,
      allowAllOutbound: true,
      description: "Reverse Proxy Security Group"
      });

    const nucleusSG = new ec2.SecurityGroup(this, 'nucleusSG', {
      vpc: vpc,
      allowAllOutbound: true,
      description: "Nucleus Server Security Group"
    });

Step 3. Add security group ingress rules

Configure the proxy and Nucleus security groups to allow traffic on the required ports. The Nucleus security group only allows traffic from the proxy security group, while the proxy security group allows traffic from a specific CIDR range. Set this range to one you will connect to the server from; for example, take the IP address of the client machine you plan to connect from and append a network mask to form the CIDR range. For this solution, the recommended network mask is /32.

const allowedCidrRange = 'ip-address/network-mask'
    proxySG.addIngressRule(
      ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(443), "HTTPS Traffic");
    proxySG.addIngressRule(
      ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3180), "Auth login");
    proxySG.addIngressRule(
      ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3100), "Auth Service");
    proxySG.addIngressRule(
      ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3333), "Discovery Service");
    proxySG.addIngressRule(
      ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3030), "LFT");
    proxySG.addIngressRule(
      ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3019), "Core API");
    proxySG.addIngressRule(
      ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3020), "Tagging Service");
    proxySG.addIngressRule(
      ec2.Peer.ipv4(allowedCidrRange), ec2.Port.tcp(3400), "Search Service");

    const proxySGId = proxySG.securityGroupId
    nucleusSG.addIngressRule(
      ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(8080), "HTTP Traffic");
    nucleusSG.addIngressRule(
      ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3180), "Auth login");
    nucleusSG.addIngressRule(
      ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3100), "Auth Service");
    nucleusSG.addIngressRule(
      ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3333), "Discovery Service");
    nucleusSG.addIngressRule(
      ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3030), "LFT");
    nucleusSG.addIngressRule(
      ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3019), "Core API");
    nucleusSG.addIngressRule(
      ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3020), "Tagging Service");
    nucleusSG.addIngressRule(
      ec2.Peer.securityGroupId(proxySGId), ec2.Port.tcp(3400), "Search Service");
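To illustrate the /32 recommendation above, Python's standard ipaddress module shows that such a range admits exactly one address (the IP here is from the documentation range, not a real client):

```python
import ipaddress

client_ip = "203.0.113.10"                 # illustrative client IP
allowed_cidr_range = f"{client_ip}/32"     # IP appended with a /32 mask

net = ipaddress.ip_network(allowed_cidr_range)
print(net.num_addresses)  # 1: the ingress rule admits only the client machine itself
```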

Step 4. Add TLS Certificate and set the domain from Step 1 for validation

Note: the root-domain variable must be set to the domain registered with the Route 53 hosted zone from Step 1.

const rootDomain = 'root-domain'; // e.g., 'my-omniverse.com'
    const fullDomain = 'nucleus.'+rootDomain;
    const hostedZone = route53.HostedZone.fromLookup(this, 
        'PublicHostedZone', {domainName: rootDomain}
    );

    const certificate = new acm.Certificate(this, 'PublicCertificate', {
      domainName: rootDomain,
      subjectAlternativeNames: [`*.${rootDomain}`],
      validation: acm.CertificateValidation.fromDns(hostedZone),
    });

Note: Currently there is no additional management of this CNAME record, meaning that when you no longer require it, you'll have to remove it manually from your Route 53 hosted zone.

Step 5. Add reverse proxy resources

Configure the reverse proxy with Nitro Enclaves enabled. Nitro Enclaves provides isolated compute environments to protect and securely process highly sensitive data; in this case, that's our TLS certificate. In addition, Nitro Enclaves integrates with AWS Certificate Manager, which can then automatically handle rotation of the certificate. For more information, see the AWS Nitro Enclaves User Guide.

Starting from the AWS Certificate Manager for Nitro Enclaves AMI, create a c5.xlarge instance with 32GB of storage. The c5.xlarge was chosen because it is one of the smallest instance types available that supports the Nitro Enclaves AMI. Configure a basic instance role with the AmazonSSMManagedInstanceCore policy. This allows you to connect to the instance with AWS Systems Manager (SSM) and avoid opening the instance to SSH traffic over the internet.

Finally, attach a "dummy" IAM policy to the reverse proxy. This is an empty policy that the configuration scripts will update later.

Note: if your Region is not in the list below, review the AWS Marketplace listing for AWS Certificate Manager for Nitro Enclaves, or see the AWS documentation on finding the correct AMI ID: Finding AMI IDs.

// AWS Certificate Manager for Nitro Enclaves AMI
    const proxyServerAMI = new ec2.GenericLinuxImage({
      'us-west-1': 'ami-0213075968e811ea7',    //california
      'us-west-2': 'ami-01c4415fd6c2f0927',    //oregon
      'us-east-1': 'ami-00d96e5ee00daa484',    //virginia
      'us-east-2': 'ami-020ea706ac260de21',    //ohio
      'ca-central-1': 'ami-096dd1150b96b6125', //canada
      'eu-central-1': 'ami-06a2b19f6b97762cb', //frankfurt
      'eu-west-1': 'ami-069e205c9dea19322',    //ireland
      'eu-west-2': 'ami-069b79a2d7d0d9408'     //london
    })

    const proxyInstanceRole = new iam.Role(this, 'proxyInstanceRole', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
      description: 'EC2 Instance Role',
      managedPolicies: [
          iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMManagedInstanceCore')
      ],
      inlinePolicies: {
        getBucketObjectsPolicy: getBucketObjectsPolicy
      }
    });

    const cfnInstanceProfile = new iam.CfnInstanceProfile(this, 'proxyInstanceProfile', {
      roles: [proxyInstanceRole.roleName],
    });

    // using CfnInstance because it exposes enclaveOptions
    const cfnInstance = new ec2.CfnInstance(this, 'reverseProxyServer',  {
      blockDeviceMappings: [{
          deviceName: '/dev/xvda',
          ebs: {
            encrypted: true,
            volumeSize: 32,
          }
        }],
      enclaveOptions: {
          enabled: true,
      },
      imageId: proxyServerAMI.getImage(this).imageId,
      instanceType: 'c5.xlarge',
      securityGroupIds: [proxySG.securityGroupId],
      subnetId: vpc.selectSubnets({subnetGroupName: 'publicSubnetNatGateway'}).subnetIds[0],
      tags: [{
          key: 'Name',
          value: 'Nucleus-ReverseProxy',
        }],
      iamInstanceProfile: cfnInstanceProfile.ref
    })

    new route53.CnameRecord(this, `CnameApiRecord`, {
      recordName: fullDomain,
      zone: hostedZone,
      domainName: cfnInstance.attrPublicDnsName,
    });


    const revProxyCertAssociationPolicy = new iam.ManagedPolicy(this, 'revProxyCertAssociationPolicy', {
        statements: [
        new iam.PolicyStatement({
          actions: ["s3:GetObject"], resources: ["*"]
        })
      ]
    })
    proxyInstanceRole.addManagedPolicy(revProxyCertAssociationPolicy)

Step 6. Add Nucleus Server resources

Next, configure the Nucleus Server. Start with the Ubuntu 20.04 LTS AMI and the c5.4xlarge instance type. C5 instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price-per-compute ratio. The instance has 16 vCPUs and 32GB of memory. An Amazon Elastic Block Store (Amazon EBS) volume with 512GB of storage is attached to the instance. These specs were chosen to be sufficiently large for a proof of concept.

The instance user data script is configured to install Docker, Docker Compose, and the AWS CLI.

const nucleusServerAMI = new ec2.GenericLinuxImage({
      'us-west-1': 'ami-0dc5e9ff792ec08e3',    //california
      'us-west-2': 'ami-0ee8244746ec5d6d4',    //oregon
      'us-east-1': 'ami-09d56f8956ab235b3',    //virginia
      'us-east-2': 'ami-0aeb7c931a5a61206',    //ohio
      'ca-central-1': 'ami-0fb99f22ad0184043', //canada
      'eu-central-1': 'ami-015c25ad8763b2f11', //frankfurt
      'eu-west-1': 'ami-00c90dbdc12232b58',    //ireland
      'eu-west-2': 'ami-0a244485e2e4ffd03'     //london
    })

    const nucleusEbsVolume: ec2.BlockDevice = {
      deviceName: '/dev/sda1',
      volume: ec2.BlockDeviceVolume.ebs(512, {
          encrypted: true,
      })
    };

    const nucleusInstanceRole = new iam.Role(this, 'nucleusInstanceRole', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
      description: 'EC2 Instance Role',
      managedPolicies: [
          iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonSSMManagedInstanceCore')
      ],
      inlinePolicies: {
        getBucketObjectsPolicy: getBucketObjectsPolicy
      }
    });

    const nucleusUserData = `
    #!/bin/bash
    sudo apt-get update

    # docker
    sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt-get -y update
    sudo apt-get -y install docker-ce docker-ce-cli containerd.io

    # docker compose
    sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose

    # aws cli
    sudo curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    sudo apt-get -y install unzip
    sudo unzip awscliv2.zip
    sudo ./aws/install
    sudo rm awscliv2.zip
    sudo rm -rf ./aws
    `

    const nucleusServerInstance = new ec2.Instance(this, "NucleusServer", {
      instanceType: new ec2.InstanceType("c5.4xlarge"),
      machineImage: nucleusServerAMI,
      blockDevices: [nucleusEbsVolume],
      vpc: vpc,
      role: nucleusInstanceRole,
      securityGroup:  nucleusSG,
      userData: ec2.UserData.custom(nucleusUserData),
      vpcSubnets: vpc.selectSubnets({subnetGroupName: 'privateSubnet'}),
      detailedMonitoring: true,
    });

    Tags.of(nucleusServerInstance).add("Name", "Nucleus-Server");

Step 7. Configure stack outputs

Next, add output values so you can easily reference them later.

new CfnOutput(this, 'region', {
      value: this.region
    }).overrideLogicalId('region');

    new CfnOutput(this, 'artifactsBucketName', {
      value: artifactsBucket.bucketName
    }).overrideLogicalId('artifactsBucketName');

    new CfnOutput(this, 'tlsCertifcateArn', {
      value: certificate.certificateArn
    }).overrideLogicalId('tlsCertifcateArn');

    new CfnOutput(this, 'proxyInstanceRoleArn', {
      value: proxyInstanceRole.roleArn
    }).overrideLogicalId('proxyInstanceRoleArn');

    new CfnOutput(this, 'proxyCertAssociationPolicyArn', {
      value: revProxyCertAssociationPolicy.managedPolicyArn
    }).overrideLogicalId('proxyCertAssociationPolicyArn');

    new CfnOutput(this, 'nucleusServerPrivateDnsName', {
      value: nucleusServerInstance.instancePrivateDnsName
    }).overrideLogicalId('nucleusServerPrivateDnsName');

    new CfnOutput(this, 'domain', {
      value: fullDomain
    }).overrideLogicalId('domain');

Step 8. Deploy the stack

cdk deploy

Once this is complete, you will have the basic resources required and next you will configure them.

If you encounter the following CDK deploy error:

[Error at /NucleusAppStack] Found zones: [] for dns:DOMAIN, privateZone:undefined, vpcId:undefined, but wanted exactly 1 zone

Check that you have the correct domain specified and that your hosted zone exists under Hosted zones in the Route 53 console.

Step 9. Note the stack output values; you'll use them in later steps

Outputs:
NucleusAppStack.*artifactsBucketName*  = nucleusappstack-artifactsbucket…
NucleusAppStack.*domain*  = nucleus.my-omniverse.com
NucleusAppStack.*nucleusServerPrivateDnsName*  = ip-...us-west-2.compute.internal
NucleusAppStack.*proxyCertAssociationPolicyArn*  = arn:aws:iam::...:policy/...
NucleusAppStack.*proxyInstanceRoleArn*  = arn:aws:iam::...:role/...
NucleusAppStack.*region*  = ...
NucleusAppStack.*tlsCertifcateArn*  = arn:aws:acm:...:...:certificate/...

Configure The Reverse Proxy Server

Step 1. Associate Enclave certificate with proxy instance IAM role

The first step for the reverse proxy is to associate your certificate with the IAM role that the Nitro Enclave uses. In the following script, replace tls-certificate-arn, proxy-instance-role-arn, proxy-cert-association-policy-arn, and region with the stack output values from above.

Note: The following script was written in Python 3.9. If you have issues with conflicting Python versions, it's recommended that you set up a local virtualenv. For more information, see the Python tutorial on Virtual Environments and Packages.

import boto3
import json

CERT_ARN = 'tls-certificate-arn'
ROLE_ARN = 'proxy-instance-role-arn'
ROLE_POLICY_ARN = 'proxy-cert-association-policy-arn'
REGION = 'region'

ec2_client = boto3.client('ec2')
iam_client = boto3.client('iam')
iam_rsrc = boto3.resource('iam')
 
response = ec2_client.associate_enclave_certificate_iam_role(
    CertificateArn=CERT_ARN,
    RoleArn=ROLE_ARN
)
 
print(response)
 
bucket = response['CertificateS3BucketName']
s3object = response['CertificateS3ObjectKey']
kmskeyid = response['EncryptionKmsKeyId']
 
# update policy with association resources
policy = iam_rsrc.Policy(ROLE_POLICY_ARN)
policyJson = policy.default_version.document
cur_version = policy.default_version_id
 
policyJson['Statement'] = [
{
    "Effect": "Allow",
    "Action": [
    "s3:GetObject"
    ],
    "Resource": [f"arn:aws:s3:::{bucket}/*"]
},{
    "Sid": "VisualEditor0",
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": f"arn:aws:kms:{REGION}:*:key/{kmskeyid}"
},{
        "Effect": "Allow",
        "Action": "iam:GetRole",
        "Resource": ROLE_ARN
}]
 
response = iam_client.create_policy_version(
    PolicyArn = ROLE_POLICY_ARN,
    PolicyDocument= json.dumps(policyJson),
    SetAsDefault= True
)
 
print(response)
 
response = iam_client.delete_policy_version(
    PolicyArn = ROLE_POLICY_ARN,
    VersionId = cur_version
)
 
print(response)

This script associates an AWS Identity and Access Management (IAM) role with an AWS Certificate Manager (ACM) certificate. This enables the certificate to be used by the ACM for Nitro Enclaves application inside an enclave. For more information, see Certificate Manager for Nitro Enclaves in the AWS Nitro Enclaves User Guide. The script then updates the IAM role's policy with permissions to get its own role, download the certificate, and decrypt it.

Save the script to a file and run it from the terminal:

python ./associate_enclave_cert.py
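The policy document the script installs contains three statements. As a pure-Python sketch of their shape (all names and ARN components here are illustrative, not real resources):

```python
def build_association_statements(bucket, kms_key_id, region, role_arn):
    """Recreate the three statements the association script writes:
    download the certificate object, decrypt it, and read its own role."""
    return [
        {"Effect": "Allow", "Action": ["s3:GetObject"],
         "Resource": [f"arn:aws:s3:::{bucket}/*"]},
        {"Effect": "Allow", "Action": ["kms:Decrypt"],
         "Resource": f"arn:aws:kms:{region}:*:key/{kms_key_id}"},
        {"Effect": "Allow", "Action": "iam:GetRole", "Resource": role_arn},
    ]

statements = build_association_statements(
    "example-cert-bucket",                               # illustrative bucket
    "1234abcd-12ab-34cd-56ef-1234567890ab",              # illustrative KMS key id
    "us-west-2",
    "arn:aws:iam::111122223333:role/proxyInstanceRole",  # illustrative role ARN
)
print(len(statements))  # 3
```

The first statement lets the instance download the encrypted certificate from the association bucket, the second lets it decrypt it with the KMS key, and the third lets the enclave read its own role.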

Step 2. Configure Nginx conf

NVIDIA provides a sample Nginx config for the Nucleus deployment. It is packaged within a provided archive file. At the time of writing this, the latest is nucleus-stack-2022.4.0+tag-2022.4.0-rc.1.gitlab.6522377.48333833.tar.gz

Open the archive and look for: ssl/nginx.ingress.router.conf

This file needs to be updated and then placed at /etc/nginx/conf.d/nginx.conf on the reverse proxy instance.

First, update the config as outlined in the AWS Certificate Manager for Nitro Enclaves guide: Nitro Enclaves application: AWS Certificate Manager for Nitro Enclaves.

At the top of the file, in the main context add the following:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

ssl_engine pkcs11;

After the line, # Configure your SSL options as required by your security practices, add the below snippet:

ssl_protocols TLSv1.2;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout  10m;
ssl_prefer_server_ciphers on;

# Set this to the stanza path configured in /etc/nitro_enclaves/acm.yaml
include "/etc/pki/nginx/nginx-acm.conf";

Next, update the config file with the Nucleus Server private DNS address and the fully qualified domain for your server. Replace instances of my-ssl-nucleus.my-company.com with your domain. Then, replace instances of BASE_STACK_IP_OR_HOST with the nucleusServerPrivateDnsName from the stack outputs above.

Step 3. Copy the .conf file to Amazon S3

aws s3 cp ssl/nginx.ingress.router.conf s3://*artifactsBucketName*/nginx.conf

Step 4. Connect to the proxy instance

From your web browser, navigate to the EC2 Dashboard in the AWS Console, select the Nucleus-ReverseProxy instance, and press the Connect button.

Select the Session Manager tab, then press the Connect button.

Session Manager tab of the Nucleus Reverse Proxy instance

Step 5. In the terminal, copy the nginx.conf file from Amazon S3 to /etc/nginx/conf.d/

sudo aws s3 cp s3://*artifactsBucketName*/nginx.conf ./nginx.conf
sudo mv ./nginx.conf /etc/nginx/conf.d/nginx.conf

Step 6. While still in the proxy server terminal, rename the sample ACM for Nitro Enclaves configuration file from /etc/nitro_enclaves/acm.example.yaml to /etc/nitro_enclaves/acm.yaml using the following command:

sudo mv /etc/nitro_enclaves/acm.example.yaml /etc/nitro_enclaves/acm.yaml

Step 7. Update the acm.yaml certificate_arn value

Using your preferred text editor, open /etc/nitro_enclaves/acm.yaml. In the ACM section, update certificate_arn, with the ARN of the certificate from our stack. This is the tls-certificate-arn from the stack outputs above. Save and close the file.

Step 8. Start the Nginx server

sudo systemctl start nitro-enclaves-acm.service
sudo systemctl enable nitro-enclaves-acm

Step 9. Confirm the server is accepting TLS requests to your domain

curl https://nucleus.my-omniverse.com

You’ll see a generic HTML template as output.

Configure Nucleus Server

Much of the following comes from NVIDIA’s documentation on deploying a Nucleus Server. Review these docs for more information: Enterprise Nucleus Server Quick Start Tips.

Step 1. From your local computer using the AWS CLI, copy the Nucleus Stack archive to Amazon S3

aws s3 cp ./nucleus-stack-2022.4.0+tag-2022.4.0-rc.1.gitlab.6522377.48333833.tar.gz s3://*artifactsBucketName*/nucleus-stack-2022.4.0+tag-2022.4.0-rc.1.gitlab.6522377.48333833.tar.gz

Step 2. Connect to the Nucleus Server with EC2 Session Manager

With your web browser, navigate to the EC2 Dashboard in the AWS Console, select the Nucleus-Server instance, press the Connect button, and then press the Connect button again on the Session Manager tab.

Step 3. In the Nucleus-Server terminal, change directory to the home directory, and then copy the Nucleus Stack from S3

cd ~
aws s3 cp s3://*artifactsBucketName*/nucleus-stack-2022.4.0+tag-2022.4.0-rc.1.gitlab.6522377.48333833.tar.gz ./nucleus-stack-2022.4.0+tag-2022.4.0-rc.1.gitlab.6522377.48333833.tar.gz

Step 4. Unpack the archive to an appropriate directory, then cd into that directory

omniverse_root=/opt/ove
sudo mkdir -p $omniverse_root
sudo tar xzvf nucleus-stack-2022.4.0+tag-2022.4.0-rc.1.gitlab.6522377.48333833.tar.gz -C $omniverse_root --strip-components=1
cd ${omniverse_root}/base_stack

Step 5. Update nucleus-stack.env

With your preferred text editor, open the nucleus-stack.env file and review it in its entirety. You will use this file to confirm that you accept the NVIDIA Omniverse end user license agreement.

Then update the following nucleus-stack.env variables as needed

ACCEPT_EULA          Review the notes in the .env file
SECURITY_REVIEWED    Review the notes in the .env file
SERVER_IP_OR_HOST    Set to the Nucleus Server private DNS name
SSL_INGRESS_HOST     Set to the fully qualified domain name, e.g., nucleus.my-omniverse.com
MASTER_PASSWORD      Omniverse master user password
SERVICE_PASSWORD     Omniverse service user password
INSTANCE_NAME        Omniverse instance name
DATA_ROOT            Omniverse data root directory
WEB_PORT             NVIDIA recommends that you set this to 8080; this is also what the nginx.conf is configured to expect
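For illustration only, a filled-in fragment might look like the following. Every value here is hypothetical (the exact accepted values for ACCEPT_EULA and SECURITY_REVIEWED are documented in the .env file itself), so substitute your own stack outputs, names, and passwords:

```shell
ACCEPT_EULA=1
SECURITY_REVIEWED=1
SERVER_IP_OR_HOST=ip-10-0-1-123.us-west-2.compute.internal   # nucleusServerPrivateDnsName output
SSL_INGRESS_HOST=nucleus.my-omniverse.com                    # domain output
INSTANCE_NAME=my_nucleus                                     # any name you choose
DATA_ROOT=/opt/ove/data                                      # any directory with enough space
WEB_PORT=8080
```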

Step 6. Generate secrets required for authentication

Note the following is required because you are not using SSO integration at this time. See the security notes in nucleus-stack.env for more information.

sudo chmod +x ./generate-sample-insecure-secrets.sh
sudo ./generate-sample-insecure-secrets.sh

Step 7. Pull the Nucleus Docker images

sudo docker-compose --env-file ${omniverse_root}/base_stack/nucleus-stack.env -f ${omniverse_root}/base_stack/nucleus-stack-ssl.yml pull

Step 8. Start the Nucleus stack

sudo docker-compose --env-file ${omniverse_root}/base_stack/nucleus-stack.env -f ${omniverse_root}/base_stack/nucleus-stack-ssl.yml up -d

Usage

Back on your local machine, test a connection to your Nucleus Server by pointing your web browser to the domain you specified in the .env file. You should be greeted with the following login dialog:

Omniverse Login Window

Here you can use the Master or Service username and password configured in nucleus-stack.env, or press Create Account. Then you'll be presented with a navigator view of your Nucleus Server content.

Omniverse Nucleus Server content window

Cleanup

Step 1. Disassociate the Nitro Enclave certificate by running the disassociate_enclave_cert.py script

import boto3

CERT_ARN = "*tlsCertifcateArn*"
ROLE_ARN = "*proxyInstanceRoleArn*"

ec2_client = boto3.client('ec2')

response = ec2_client.disassociate_enclave_certificate_iam_role(
    CertificateArn=CERT_ARN,
    RoleArn=ROLE_ARN
)

print(response)

Step 2. Delete the stack by running cdk destroy from the nucleus-app application folder.

Conclusion

This post provided the basics to get up and running with NVIDIA Omniverse Nucleus on Amazon EC2 using Docker Compose. It walked through the setup of the Amazon EC2 Nucleus and reverse proxy servers, used Amazon S3 for storage and retrieval of configuration files, and used a Route 53 hosted zone to provide secure access to your Omniverse data.

This deployment of Nucleus on Amazon EC2 allows your teams, no matter where they are located, to collaborate and interact in real-time while building 3D products, applications, and experiences.

To learn more about spatial computing at AWS, continue following along here on the Spatial Computing Blog channel.

Additional Reading

This information may also be found on the AWS GitHub repository, NVIDIA Omniverse Nucleus on Amazon EC2.

AWS Services

Amazon EC2, secure and resizable compute capacity for virtually any workload
Amazon Route 53, a reliable and cost-effective way to route end users to Internet applications
Amazon S3, object storage built to retrieve any amount of data from anywhere
Amazon EBS, easy to use, high performance block storage at any scale
AWS Certificate Manager for Nitro Enclaves, public and private TLS certificates with web servers running on Amazon EC2 instances

NVIDIA Omniverse Nucleus

Nucleus Overview
Nucleus Documentation