AWS for Industries

Protect 5G subscriber credentials in the Cloud with AWS Nitro Enclaves

Strong security, as required by industries such as telecommunications or financial services, can depend on keeping certain cryptographic material, such as keys, secret. On-premises, this has traditionally been done by using hardware security modules (HSMs). This post presents a cloud-native solution using AWS Nitro Enclaves to fulfill the same function as an HSM (keeping cryptographic material secret) and support the migration of these services to the Cloud. Our solution supports elasticity and high availability, and its deployment can be fully automated. We present our solution in the context of a 5G mobile network use case.

With mobile networks, a vital function is the mutual authentication of the subscriber device (such as a smartphone) with the network. Authentication is performed by a cryptographic challenge-and-response protocol based on a symmetric key shared between the subscriber and the network. On the subscriber device, the key is protected within the universal subscriber identity module (USIM, or simply SIM). The SIM has cryptographic capabilities to compute the authentication data needed for the challenge-and-response authentication protocol.

In the network, the key is protected within the Unified Data Management (UDM) network function. The UDM hosts functions related to data management. In particular, it hosts the Authentication Credential Repository and Processing Function (ARPF). One task of the ARPF is to compute the authentication data (also called the authentication vector, or AV) needed for the challenge-and-response authentication protocol on the network side. For this task, the ARPF processes the key in its secure environment. The key is protected from physical attacks and never leaves the secure environment of the ARPF unprotected. When implemented on-premises, the cryptographic capabilities of the ARPF are generally provided by a custom HSM. HSMs for 5G networks are specialized because the cryptographic functions that implement the authentication protocol are specific to 5G mobile networks. They are standardized by the 3rd Generation Partnership Project (3GPP) organization (see 3GPP technical specification 33.501 for details on the authentication protocol).
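
To illustrate the principle, here is a minimal, conceptual sketch of a challenge-and-response exchange based on a shared symmetric key. It is only an analogy: real 5G networks use the 3GPP-standardized Milenage or TUAK algorithm suites (see TS 33.501), not the HMAC-SHA-256 stand-in used here, and the function names are purely illustrative.

import hashlib
import hmac
import os

def generate_challenge(k: bytes):
    # Network side (ARPF): draw a random challenge and precompute the expected response
    rand = os.urandom(16)
    xres = hmac.new(k, rand, hashlib.sha256).digest()
    return rand, xres

def subscriber_response(k: bytes, rand: bytes) -> bytes:
    # Device side (SIM): compute the response with the same shared key
    return hmac.new(k, rand, hashlib.sha256).digest()

k = os.urandom(16)                     # shared secret key, provisioned in both the SIM and the ARPF
rand, xres = generate_challenge(k)     # the network sends rand to the device
res = subscriber_response(k, rand)     # the device answers with res
assert hmac.compare_digest(res, xres)  # the network accepts the subscriber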

In on-premises deployments, the implementation of 3GPP cryptography for mobile networks depends on physical HSMs. When moving to the Cloud, this forces network function suppliers to implement the ARPF cryptographic capabilities outside of HSM boundaries (such as on general purpose compute instances), or to use a hybrid deployment with an on-premises custom HSM. This increases complexity and creates a challenge to protect the shared secret key while in use (see also the discussion on the two dimensions of AWS confidential computing). Nitro Enclaves, an Amazon Elastic Compute Cloud (Amazon EC2) capability to set up and manage isolated compute environments, is meant to address this challenge. Thanks to Nitro Enclaves, network operators can forgo the use of an on-premises HSM without compromising the security of the ARPF.

In this post, we show you how to use Nitro Enclaves to implement the cryptographic capabilities of the ARPF on AWS. Furthermore, given the high-availability requirements of mobile networks, we show you how to deploy Nitro Enclaves in a high-availability configuration. We provide a prototype implementation of an ARPF within an enclave, as well as a patch for the Open5GS 5G Core implementation to support the enclave-based ARPF. We use Open5GS because it can be readily installed on AWS and lets you easily experiment with 5G networks. Moreover, its open-source nature allows for the modifications needed to support our enclave-based ARPF.

Before you jump into the rest of this post, note that we assume you have familiarity with Nitro Enclaves and its terminology. For an introduction, consult our documentation (and video), the related concepts, the build process, and our getting started guide.

Solution overview

You deploy a 5G network on AWS. The deployment comprises a 5G Core network, along with a software-based base-station and subscriber device. To support high availability, you deploy a pool of enclave-enabled EC2 instances across multiple Availability Zones (AZs). The enclaves, which provide secure and isolated compute environments, implement and run the ARPF cryptographic capabilities. And with their attestation feature, you can verify an enclave’s identity and make sure that only authorized code is running inside it. These EC2 instances are managed by an Amazon EC2 Auto Scaling group combined with an AWS Network Load Balancer (NLB). Provisioning of the Nitro Enclaves, within the parent EC2 instances, along with the database of shared secret keys, is automated.

By default, Nitro Enclaves do not provide observability: there are no ready-made metrics or logs. We instrument the code running inside the enclave to provide an API for status monitoring, exposed over the Nitro Enclaves secure local communication channel. Software on the parent EC2 instance monitors the health of the enclave through this API and exposes an endpoint for NLB health-checks. Therefore, in case of a faulty or unhealthy enclave or parent instance, the whole instance can be replaced automatically. Here is a sketch of the code for exposing the monitoring endpoint (the CID and port values are deployment-specific). If the /status endpoint is queried, then a ping command is sent to the enclave in a JSON payload. Upon a successful answer from the enclave, a 200 response code is sent back.

import json
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

ENCLAVE_CID = 16       # CID assigned to the enclave at launch (deployment-specific)
ENCLAVE_PORT = 5005    # vsock port the enclave listens on (deployment-specific)

def send(cid, port, command, timeout=2.0):
    # Forward a JSON command to the enclave over the local vsock channel
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect((cid, port))
        s.sendall(command.encode("utf-8"))
        return s.recv(4096).decode("utf-8")

class StatusHandler(BaseHTTPRequestHandler):

    def do_GET(self):
        # Parse the URI to get the command
        if self.path == '/status':
            command = '{ "command":"ping" }'
            # Check the enclave
            try:
                enclave_status = send(ENCLAVE_CID, ENCLAVE_PORT, command)
                test_status = json.loads(enclave_status)['Status']
            except (socket.timeout, ConnectionError):
                self.send_response(503)
                self.send_header("Content-type", "text/json")
                self.end_headers()
                self.wfile.write(bytes('{ "Status": "Fail", "Message": "Enclave timed-out" }', "utf-8"))
                return
            if test_status == 'success':
                self.send_response(200)
            else:
                self.send_response(503)
            self.send_header("Content-type", "text/json")
            self.end_headers()
            self.wfile.write(bytes(enclave_status, "utf-8"))

HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()

Our sample code, available at this GitHub link, provides all the artefacts to deploy the complete solution in this post. We use the AWS Cloud Development Kit (AWS CDK) to automate the deployment. The base-station and subscriber device are provided by the UERANSIM open-source project. Our sample code replaces the Open5GS ARPF implementation with an implementation running within Nitro Enclaves. Our patch to the UDM implementation of Open5GS replaces two function calls used during authentication with API calls directed to the Nitro Enclaves implementation. With this patch, the shared secret keys are never exposed in plaintext outside of the enclave, ensuring protection while in use. Namely, in the UDM, the Open5GS function,

milenage_generate(udm_ue->opc, udm_ue->amf, udm_ue->k, …)

where opc and k are the secrets, is replaced by a function,

enclave_generate_auth_vector(udm_ue->supi, udm_ue->amf, …)

that communicates with the enclave. Note that opc and k are no longer used directly (they are the secrets protected by the enclave). Instead, the supi is used within the enclave to look up the corresponding opc and k values.
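
To make the division of labor concrete, here is a minimal sketch of what the enclave-side request handling could look like. The in-memory credential table, the compute_5g_aka_av() helper, and the test key values are assumptions for illustration; in the sample code, the credentials come from the encrypted subscriber database and the AV computation follows the 3GPP-specified functions.

import json

SUBSCRIBERS = {  # illustrative test credentials, decrypted only inside the enclave
    "999700000000090": {
        "k": bytes.fromhex("465b5ce8b199b49faa5f0a2ee238a6bc"),
        "opc": bytes.fromhex("e8ed289deba952e4283b54e88e6183ca"),
    },
}

def compute_5g_aka_av(k: bytes, opc: bytes, amf: str) -> dict:
    # Placeholder for the 3GPP 5G AKA computation (TS 33.501); a real
    # implementation derives RAND, AUTN, XRES*, and KAUSF from k and opc here
    raise NotImplementedError

def handle_request(payload: str) -> str:
    request = json.loads(payload)
    if request.get("command") == "ping":       # health-check support
        return json.dumps({"Status": "success", "Response": "pong"})
    creds = SUBSCRIBERS[request["supi"]]       # k and opc never leave the enclave
    av = compute_5g_aka_av(creds["k"], creds["opc"], request["amf"])
    return json.dumps({"Status": "success", "AV": av})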

Before you can deploy the 5G network, the Open5GS and UERANSIM binaries must be compiled from source. In addition, the enclave image file must be generated. Our sample code automates both actions and creates two golden Amazon Machine Images (AMIs), following the immutable infrastructure pattern. The first AMI supports the deployment of the 5G Core and radio access network (RAN). The second AMI supports the deployment of the Nitro Enclaves.

Because Nitro Enclaves supports multiple CPU architectures, we use EC2 T4g and R6g instances with AWS Graviton2 CPUs, taking advantage of their high performance and low cost. AWS Graviton-based instances cost up to 20% less than comparable x86-based EC2 instances (T3 and R6i) and use up to 60% less energy.

The following diagram depicts the high-level architecture of the deployment. In the following description, we assume light knowledge of the 3GPP network function nomenclature and architecture (see Figure 2 of this 5G system overview for more detail). The deployment creates two AWS virtual private clouds (VPCs), one for the RAN part and one for the core network part (Core). The RAN VPC comprises an EC2 instance that hosts the subscriber device and the base-station. The Core VPC comprises the 5G Core network and deploys it over at least six EC2 instances. The access control and mobility gateway instance contains the 3GPP AMF and SMF. The user data service instance contains the 3GPP UDM, AUSF, and UDR. The ARPF instances belong to an Auto Scaling group and are the parent instances of the Nitro Enclaves containing the ARPF capabilities, along with the database of subscriber identifiers and related shared secret keys. The traffic gateway router instance comprises the 3GPP UPF. Finally, one instance hosts additional 3GPP control-plane functions (such as NRF, SCP, and NSSF). The traffic gateway router handles user-plane traffic; all other instances handle control-plane traffic.

Figure 1: High-level architecture of the deployment.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • An AWS account
  • The AWS Command Line Interface (AWS CLI) installed and configured
  • Node.js and npm installed, along with the AWS CDK Toolkit
  • Git installed

Alternatively, you can use AWS Cloud9 to create an environment with all dependencies already installed.

Make sure to use an AWS Region where Enclaves are supported. For a list of AWS Regions, see the Nitro Enclaves User Guide.

Walkthrough

To deploy the preceding architecture, the sample code at this GitHub link deploys a series of AWS CDK apps, namely:

  • A CDK app to build the necessary software binaries and create an AMI to deploy and run the 5G network. The stacks make use of AWS CodeBuild and Amazon EC2 Image Builder.
  • A CDK app to automatically build the enclave image file and create an AMI with the image file already embedded. The stack uses Amazon EC2 Image Builder.
  • A CDK app using AWS Systems Manager to automate runtime configuration tasks of the EC2 instances. These tasks comprise the deployment of the Amazon CloudWatch agent configuration files, the Open5GS configuration files and related scripts, and the scripts that manage the lifecycle of the Enclaves.
  • A CDK app that deploys the 5G core and RAN, along with artefacts to support the encryption and decryption of the database of subscriber identifiers and related shared secret keys.

Clone the repository and bootstrap

Before AWS CDK apps can be deployed in your AWS environment, you must provision preliminary resources. This process is called bootstrapping.

Clone the git repository, change directory, and bootstrap
1. Clone the repository.

2. Within your terminal, change directory to the root of the related source directory.

3. Run the following commands at your terminal prompt:

ACCOUNT=$(aws sts get-caller-identity --query "Account" --output text)
echo AWS Account number: ${ACCOUNT}
REGION=$(aws configure get region)
echo Default AWS Region: ${REGION}
APPLICATION=ArpfEnclave
cdk bootstrap aws://${ACCOUNT}/${REGION} -t Application=${APPLICATION}

Deploy the AWS CDK App to compile the software binaries and build the AMI

CodeBuild is used to automate the compilation of the Open5GS and UERANSIM binaries. CodeBuild is a fully managed build service that eliminates the need to provision, manage, and scale your own build servers. Once compilation is complete, the binaries are stored in an Amazon Simple Storage Service (Amazon S3) bucket. EC2 Image Builder then fetches the binaries and integrates them, with further artefacts, into a custom AMI ready to be deployed. EC2 Image Builder is a fully managed AWS service that helps you automate the creation, management, and deployment of customized, secure, and up-to-date server images.

The CDK app comprises two stacks: the first deploys the CodeBuild resources, and the second deploys the EC2 Image Builder resources.

Deploy the Stacks for CodeBuild Compilation and AMI generation
1. Synthesize the CDK app.

AMI_PIPELINE_DIR=ami_build_pipeline
cd ${AMI_PIPELINE_DIR}/cdk
npm install
cdk synth

2. Deploy the first Stack and copy the CodeBuild build specifications to Amazon S3.

cdk deploy CodebuildVpcStack CodebuildCoreRanStack --require-approval never
bin/push_buildspecs_to_s3.sh codebuild

3. Deploy the second Stack.

bin/push_dockerfiles_to_s3.sh
cdk deploy ImageBuilderCoreRanStack --require-approval never

Deploy the AWS CDK App to build the Enclave image file and AMI
A second EC2 Image Builder pipeline is created to build the Enclave image file, store the related PCR measurement in the Systems Manager Parameter Store, and finally build an AMI containing the Enclave image file and further artefacts.
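
For illustration, here is a minimal sketch of what this pipeline step could look like, assuming a local Docker image named arpf:latest and a hypothetical parameter name. The recorded PCR0 value is what allows an AWS KMS key policy to be scoped to this exact enclave image, through the kms:RecipientAttestation:PCR0 condition key.

import json
import subprocess

import boto3

# Build the enclave image file (EIF); nitro-cli prints the PCR measurements as JSON
build = subprocess.run(
    ["nitro-cli", "build-enclave",
     "--docker-uri", "arpf:latest",   # assumption: local Docker image with the ARPF code
     "--output-file", "arpf.eif"],
    capture_output=True, text=True, check=True,
)
pcr0 = json.loads(build.stdout)["Measurements"]["PCR0"]

# Record PCR0 so that the AWS KMS key policy can later be scoped to this exact
# enclave image (parameter name is an assumption)
boto3.client("ssm").put_parameter(
    Name="/arpf/enclave/pcr0", Value=pcr0, Type="String", Overwrite=True,
)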

Deploy the Stacks for Enclave image file and AMI generation
1. Change directory back to the root of the source directory.

2. Synthesize the CDK app.

ENCLAVE_PIPELINE_DIR=enclave_build_pipeline
cd ${ENCLAVE_PIPELINE_DIR}/cdk
npm install
cdk synth

3. Push artefacts to Amazon S3 and deploy the Stack.

bin/push_artefacts_to_s3.sh
cdk deploy ImageBuilderEnclaveStack --require-approval never

Build the AMIs

Now that the needed infrastructure is in place, you can build the AMIs. The AMIs have to be built before you can deploy the 5G Core. Compilation of the software and creation of the AMIs take approximately 45 minutes; during this time, you can proceed with the next two sections.

1. From the root of the source directory, run the following command to trigger the compilation of the software binaries and of the Nitro Enclaves SDK. The Nitro Enclaves SDK is a prerequisite for building the Enclave image file.
utils/bin/trigger_artefacts_build.sh

2. You can check the status of the software binaries build processes using
utils/bin/check_artefacts_build_status.sh
This is a wrapper around the AWS CLI for EC2 Image Builder and CodeBuild that returns `SUCCEEDED` or `AVAILABLE` when the compilation terminates successfully.

3. When complete, build the AMIs. The generation of the Enclave image file takes place during the build of the AMI for the parent instance.
utils/bin/trigger_images_build.sh

4. You can check the status of the AMI build processes using
utils/bin/check_images_build_status.sh

The utils/bin/check_images_build_status.sh script displays an output similar to the following when complete:

arn:aws:imagebuilder:eu-central-1:ACCOUNT:image/arpfnitroenclavearm64/0.0.21
AVAILABLE
arn:aws:imagebuilder:eu-central-1:ACCOUNT:image/coreranarm64/0.0.27
AVAILABLE

Deploy the AWS CDK App to automate configuration management

We use Systems Manager State Manager with associations that run Ansible playbooks to automate runtime configuration tasks of the EC2 instances. These tasks comprise the deployment of the CloudWatch agent configuration files, the Open5GS configuration files and related scripts, and the scripts that manage the lifecycle of the Enclaves.
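
As an illustration, here is a hedged sketch of creating such an association with boto3 and the AWS-ApplyAnsiblePlaybooks managed document. The tag filter, S3 path, and playbook name are assumptions; the sample code creates the associations through the CDK app instead.

import boto3

ssm = boto3.client("ssm")

# Create an association that runs an Ansible playbook on the ARPF parent
# instances; the tag filter, S3 path, and playbook name are assumptions
ssm.create_association(
    AssociationName="ArpfEnclaveConfiguration",
    Name="AWS-ApplyAnsiblePlaybooks",  # managed document that runs Ansible playbooks
    Targets=[{"Key": "tag:Application", "Values": ["ArpfEnclave"]}],
    Parameters={
        "SourceType": ["S3"],
        "SourceInfo": ['{"path": "https://s3.amazonaws.com/my-config-bucket/playbooks.zip"}'],
        "InstallDependencies": ["True"],
        "PlaybookFile": ["arpf_enclave.yml"],
    },
)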

Deploy the Stack for configuration automation
1. Synthesize the CDK app.

CONFIGURATION_DEPLOYMENT_DIR=configuration_deployment_automation
cd ${CONFIGURATION_DEPLOYMENT_DIR}/cdk
npm install
cdk synth

2. Deploy the stack.

cdk deploy ConfigurationAssociationAutomation --require-approval never

Deploy the Systems Manager Session Document

Session Manager is a Systems Manager capability. We use Session Manager to securely access EC2 instances for development purposes and for interactive demo sessions. The Session Manager configuration is controlled by a session document, which we deploy before using Session Manager.

  1. From the root of the repository, run the following command:
SESSION_DOCUMENT_NAME=SessionRegionalSettings
aws ssm create-document --content file://utils/ssm/session_manager/session_document.yaml --document-type "Session" --name ${SESSION_DOCUMENT_NAME} --document-format YAML --region ${REGION}

Deploy the AWS CDK App to deploy the 5G Core and RAN with ARPF running within Nitro Enclaves

Now that the AMI is available and the infrastructure for automation is in place, we can deploy the 5G Core, with the ARPF running within Nitro Enclaves, and the RAN.

Deploy the Stacks for Encryption and 5G Core and RAN
1. From the root of the repository, run the following command:

APPLICATION_DIR=core_ran_arpf_enclave
cd ${APPLICATION_DIR}/cdk
npm install
cdk synth

2. Push the configuration to the configuration S3 bucket.
bin/push_configuration_to_s3.sh

3. Then deploy the stacks that instantiate the VPCs used by the 5G Core and RAN and that support encryption. This deployment creates an AWS Key Management Service (AWS KMS) key for encrypting the subscriber database used by the ARPF.
cdk deploy EnclaveArpfVpcStack EnclaveArpfBucketStack EnclaveArpfKeyInfrastructureStack --require-approval never

4. Next, deploy only the management instance. The management instance simulates a dedicated host that performs client-side envelope encryption of the subscriber database used by the ARPF (a sketch of this pattern follows these steps).
cdk deploy EnclaveArpfManagementStack  --require-approval never

5. Once the management host is deployed, update the AWS KMS key policy to let the management host perform encryption and decryption operations.
bin/update-key-policy-for-management-host.sh

6. Then run the following script to encrypt the subscriber database on the management host and push it to an S3 bucket. This allows the ARPF nodes to pull the encrypted database when starting the Enclave.
bin/generate_encrypted_data.sh

7. Finally, deploy the instances supporting the 5G Core and the RAN.
cdk deploy EnclaveArpfCoreRanStack --require-approval never

8. Then update the key policy once more to allow the Enclaves to decrypt the subscriber database.
bin/update-key-policy.sh
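
Here is a minimal sketch of the client-side envelope encryption pattern performed in step 4, assuming a hypothetical key alias and file names (the actual logic lives in bin/generate_encrypted_data.sh). The encrypted data key travels with the encrypted database, and only a caller authorized by the key policy, such as an enclave presenting the expected attestation document, can ask AWS KMS to decrypt the data key again.

import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# Request a fresh data key from AWS KMS; the plaintext copy is used once and
# discarded, while the encrypted copy travels with the database (key alias and
# file names are assumptions)
data_key = kms.generate_data_key(KeyId="alias/arpf-subscriber-db", KeySpec="AES_256")

# Encrypt the subscriber database locally with the plaintext data key
nonce = os.urandom(12)
with open("subscribers.csv", "rb") as f:
    ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, f.read(), None)

# Bundle nonce, encrypted data key, and ciphertext into one object for S3
blob = data_key["CiphertextBlob"]
with open("subscribers.csv.enc", "wb") as f:
    f.write(nonce + len(blob).to_bytes(2, "big") + blob + ciphertext)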

Your 5G Core is ready to be started and experimented with!

Thanks to the State Manager association deployed for the parent instances of the Nitro Enclaves, the Nitro Enclave running the ARPF is automatically started when the parent instance is operational.

The install.sh and deploy.sh scripts can be used to automate the complete deployment.

Start and stop the 5G Core

For this section, make sure you change directory to core_ran_arpf_enclave/cdk.

To start the 5G Core and RAN, with the ARPF deployed in Nitro Enclaves, run the following script:

bin/run_ran_enclave_core.sh

This script starts all the 5G Core network functions (the ARPF is already running), along with the RAN gNB and UE. Once the UE is operational, an iperf3 session is started from the UE to the UPF to generate traffic. To stop the 5G Core and RAN, run the following script:

bin/stop_ran_core.sh

If you want to start the 5G Core and RAN without the ARPF deployed in Nitro Enclaves, then use the bin/run_ran_core.sh script instead.

Validate your setup

For this section, make sure you change directory to core_ran_arpf_enclave/cdk.

To get the list of running instances (AWS CLI):

  • Run
    bin/connect-to-intance-via-ssm-session.sh list
  • The output should show
    Available instances:
    amf arpf arpf management nrf ran udm upf
    There are two arpf instances because of the Auto Scaling group deployment.

To validate that the iperf3 session is running (console):

  • Open the CloudWatch console.
  • In the navigation pane, under Metrics, choose All metrics.
  • On the Browse tab, in the Custom namespaces section, choose Open5g/InstanceMetrics.
  • Then choose ImageId, InstanceId, InstanceType, interface.
  • In the search box, enter ogstun and net_bytes_recv. The Instance name column should display UPF.
  • Select the checkbox to display the metric in the graph.

The Open5GS UDM network function displays logs related to the communication with the ARPF running in the Nitro Enclaves. To see the logs generated by the Open5GS network functions (console):

  • Open the CloudWatch console
  • In the navigation pane, under Logs, choose Log groups
  • In the search box, enter open5gs
  • Choose /ec2/instances/var/log/open5gs/udm.log
  • On the Log streams tab, choose the available Log stream
  • In the search box to filter events, enter Status to filter events related to communication between the UDM and the ARPF running in the Nitro Enclaves

The ARPF is running within a Nitro Enclave on the parent instances. A proxy running on TCP port 8012 allows commands to be sent to the ARPF within the Nitro Enclave. The parent instances belong to an EC2 Auto Scaling group and are reachable through an NLB, also on TCP port 8012. The NLB is configured with the DNS alias arpf.local, so you can validate that the UDM network function can communicate with the Nitro Enclaves running on the ARPF instances behind the NLB.

  • Connect to the UDM instance and run the following command:
    bin/connect-to-intance-via-ssm-session.sh udm
    You are now connected to the UDM instance.
  • To check connectivity to an ARPF running within a Nitro Enclave, run the following command

echo -n '{ "command":"ping" }' | socat - tcp-connect:arpf.local:8012

The output should show

{"Status": "success", "Response": "pong"}

  • To disconnect from the UDM instance, enter Ctrl+D twice

Diving deeper into the Nitro Enclaves ARPF implementation

The ARPF is running within Nitro Enclaves on the parent instances. If you followed the preceding instructions, then you should have two instances running. Use the following command to list available nodes (AWS CLI):

  • Run
    bin/connect-to-intance-via-ssm-session.sh list | grep arpf
  • The output should show two arpf entries in the list:
    arpf arpf management nrf ran udm upf

Each parent instance runs four components:

1. An AWS KMS proxy to allow the Nitro Enclave to reach AWS KMS.

2. A series of shell commands to launch the Nitro Enclave containing the ARPF.

3. An ARPF proxy, allowing the 5G Core UDM to reach the ARPF Nitro Enclave from outside the instance. This ARPF proxy listens on TCP port 8012 (a sketch follows this list).

4. A health-check proxy that exposes an HTTP server on port 8080. This proxy supports the AWS NLB health-check mechanism.
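
For illustration, here is a minimal sketch of what the ARPF proxy in component 3 could look like: a TCP-to-vsock forwarder that relays each request and response between the UDM and the enclave. The enclave CID and vsock port are deployment-specific assumptions, and the real proxy handles framing and concurrency more carefully.

import socket
import threading

ENCLAVE_CID = 16      # assumption: CID assigned when the enclave was launched
ENCLAVE_PORT = 5005   # assumption: vsock port the ARPF listens on
PROXY_PORT = 8012     # TCP port exposed to the UDM through the NLB

def forward(client):
    # Relay one request/response pair between the TCP client and the enclave
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as enclave:
        enclave.connect((ENCLAVE_CID, ENCLAVE_PORT))
        enclave.sendall(client.recv(4096))
        client.sendall(enclave.recv(4096))
    client.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", PROXY_PORT))
listener.listen()
while True:
    conn, _ = listener.accept()
    threading.Thread(target=forward, args=(conn,), daemon=True).start()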

We are using tmux, a popular terminal multiplexer, to launch and manage the components.

You can connect to one of the instances to explore further. To connect to the first instance (indexed by 0), use the following command (AWS CLI):

  • Run
    bin/connect-to-intance-via-ssm-session.sh arpf 0
    You can replace index 0 with index 1 to connect to the other instance.
  • You should be connected to a shell
    [ec2-user@ip-10-11-13-186 ~]$
    Your shell prompt will differ because your instance has a different IP address.
  • To see the tmux session currently running, run
    tmux ls

The output should be similar to the following (the session name is “arpf”; your creation date will differ):
arpf: 5 windows (created Wed Oct 18 09:38:22 2023)

  • Within the arpf tmux session, to list the currently running windows, run
    tmux list-windows -t arpf
    The output should be similar to
    0: shell @0
    1: KMS proxy @1
    2: ARPF Enclave-@2
    3: Enclave proxy @3
    4: Enclave status frontend @4
  • To access the tmux window running the Enclave health-check proxy, run
    tmux a -t arpf:4
    To access any other window, change the digit after “arpf:”. For example, to access the window running the AWS KMS proxy, run
    tmux a -t arpf:1
    Or simply run
    tmux a
    to access the last selected window.

Once you are within a tmux window, enter Ctrl+b n to change to the next window, Ctrl+b p to change to the previous window, and enter Ctrl+b d to leave tmux.

Validating high availability
We combine the golden AMI pattern with Systems Manager State Manager associations that run Ansible playbooks to automate starting a new parent instance and the ARPF Nitro Enclave. The association starts the tmux session once the parent instance is available.

Thanks to the AWS NLB health-check capability, a new parent instance is started if the health-check fails or if the parent instance fails (or is terminated). You can trigger a failed health-check in two ways: stop the health-check proxy script, or terminate the Nitro Enclave. A sketch of the corresponding Auto Scaling and load-balancer wiring follows.
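
To make this wiring concrete, here is a minimal AWS CDK sketch (in Python) of an Auto Scaling group fronted by an NLB whose health-check targets the proxy on port 8080. It assumes vpc and arpf_ami are already defined inside a Stack's __init__; construct names and the instance size are illustrative assumptions, and the sample code's stacks implement the equivalent configuration.

from aws_cdk import Duration
from aws_cdk import aws_autoscaling as autoscaling
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_elasticloadbalancingv2 as elbv2

# Inside a Stack's __init__, with vpc and arpf_ami already defined
asg = autoscaling.AutoScalingGroup(
    self, "ArpfAsg",
    vpc=vpc,
    instance_type=ec2.InstanceType("r6g.large"),
    machine_image=arpf_ami,          # the golden AMI built earlier
    min_capacity=2,                  # spread the fleet across AZs
    health_check=autoscaling.HealthCheck.elb(grace=Duration.minutes(5)),
)

nlb = elbv2.NetworkLoadBalancer(self, "ArpfNlb", vpc=vpc, internet_facing=False)
listener = nlb.add_listener("ArpfListener", port=8012)
listener.add_targets(
    "ArpfTargets",
    port=8012,
    targets=[asg],
    health_check=elbv2.HealthCheck(  # HTTP health-check against the proxy
        protocol=elbv2.Protocol.HTTP,
        port="8080",
        path="/status",
    ),
)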

Here is how to terminate the Nitro Enclave to trigger a failed health-check (AWS CLI):

  • Connect to the first parent instance. Run
    bin/connect-to-intance-via-ssm-session.sh arpf 0
  • Access the tmux window used to start the Nitro Enclave. Run
    tmux a -t arpf:2
  • Terminate the Nitro Enclave. Run

nitro-cli terminate-enclave --enclave-name arpf

  • Switch to the tmux window for the health-check proxy. Enter Ctrl+b 4. Observe how the status changes from 200 to 503.
  • Leave tmux and close the session. Enter Ctrl+b d to leave tmux. Enter Ctrl+D twice to close the session to the parent instance.
  • After one minute, display the list of autoscaling activities. Run

aws autoscaling describe-scaling-activities --query 'Activities[:2].[Description,Cause]'

You should see a mention that one instance was terminated, with the cause described as an Elastic Load Balancing (ELB) health-check failure.

If you want to test another failure scenario, connect to one of the parent instances and stop the health-check proxy.

A secure Nitro Enclaves service

Our sample code provides an alternative implementation of the highly available ARPF Nitro Enclave. Instead of an AWS NLB, the alternative uses an AWS Application Load Balancer (ALB), and communication with the ARPF enclave is available through a REST API. This implementation exposes an ARPF service.

Deploy (AWS CLI):

  • From the root of the repository, run the following command:
    APPLICATION_DIR=core_ran_arpf_enclave
    cd ${APPLICATION_DIR}/cdk
  • To deploy the alternative implementation, run
    cdk deploy EnclaveArpfArpfEnclaveShardsStack --require-approval never
  • Then update the key policy
    bin/update-key-policy-for-shards.sh

The alternative implementation can be tested from the 5G Core UDM node. The ARPF service is available on port 8080 at shards.local. The implementation also demonstrates the use of sharding: four parent instances are deployed, and the space of subscriber identifiers is split between two pairs of instances (see the routing sketch after these steps). To test the ARPF service (AWS CLI):

  • Connect to the UDM. Run
    bin/connect-to-intance-via-ssm-session.sh udm
  • To test connectivity to the service, run
    curl shards.local:8080/status
  • A Python script is available to test the API endpoints of the ARPF service. To replicate the previous command, run
    query_shards.py status
  • To obtain authentication data in the form of an authentication vector for subscriber number 999700000000090, run
    query_shards.py av -i 999700000000090
    Try different SUPI/IMSI values between 999700000000001 and 999700000000200 to observe that different pairs of instances handle the lower and upper halves of the 200 supported subscriber identifiers.
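
For illustration, here is a minimal sketch of the range-based shard routing this demonstrates, assuming the 200 test subscribers from the walkthrough are split between two shards; the sample implementation performs the equivalent routing behind the ALB.

# Minimal sketch of range-based shard routing (shard names and ranges are
# assumptions based on the 200 test subscribers in the walkthrough)
SHARDS = {
    "shard-0": (999700000000001, 999700000000100),  # lower half
    "shard-1": (999700000000101, 999700000000200),  # upper half
}

def shard_for_supi(supi: str) -> str:
    # Route a SUPI/IMSI to the shard whose identifier range contains it
    value = int(supi)
    for shard, (low, high) in SHARDS.items():
        if low <= value <= high:
            return shard
    raise KeyError(f"no shard configured for SUPI {supi}")

print(shard_for_supi("999700000000090"))   # -> shard-0
print(shard_for_supi("999700000000150"))   # -> shard-1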

Cleaning up

From the root of the Git repository, launch the following script:

./uninstall.sh

This script removes all the resources created by this deployment in your AWS account.

Conclusion

In this post, you created a highly available deployment of AWS Nitro Enclaves for a 5G mobile network scenario. This deployment model provides a cloud-native equivalent of a custom HSM. The enclaves protect the 5G network shared secret keys and implement the 3GPP ARPF cryptographic capabilities. Because an enclave can run any trusted and verified software, implementation is eased, and network operators can forgo the deployment of appliance-based HSMs on-premises. High availability is provided by deploying a fleet of enclave-enabled Amazon EC2 instances across Availability Zones. Our prototype implementation does not implement the client retry pattern on the UDM. We welcome pull requests and suggestions.

Thanks to Nitro Enclaves, network operators can forgo the use of an on-premises HSM without compromising security.

You can also enhance the ALB-based ARPF service to support HTTPS, or expose it through AWS PrivateLink.

There are further 5G mobile network procedures that can be protected and run within a Nitro Enclave. For example, you can try to implement the subscription permanent identifier (SUPI) de-concealment procedure (see 3GPP TS 33.501). The solution is also applicable in the context of SIM provisioning, to replace on-premises custom HSMs.

And for more information on Nitro Enclaves, see the official product documentation or the introductory videos on YouTube.

Ruben Merz

Ruben Merz is a Principal Solutions Architect in the AWS Industries Telecom Business Unit. He works with global telecom customers and regulated industries to help them transform with AWS.