AWS Open Source Blog
Open source mobile core network implementation on Amazon Elastic Kubernetes Service
As introduced in the Amazon Web Services (AWS) whitepapers Carrier-grade Mobile Packet Core Network on AWS and 5G Network Evolution with AWS, implementing a 4G Evolved Packet Core (EPC) and 5G Core (5GC) on AWS can bring significant value and benefits, such as scalability, flexibility, and programmable orchestration, as well as automation of the underlying infrastructure layer.
This blog post focuses on practical implementation steps for creating a 4G core network using the open source project Open5gs.
In addition to showing the benefit of easy installation steps, we introduce how the following AWS services can help the mobile packet network operate efficiently in the cloud environment: Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Route 53 (DNS service), Amazon DocumentDB, Amazon Elastic Container Registry (Amazon ECR), AWS CloudFormation, Amazon CloudWatch, and AWS Lambda.
This generalized example of an open source-based 4G core network implementation provides guidance for mobile network function developers, and it can also serve as a reference for developers of orchestration, service assurance, and Operations Support System (OSS) solutions that need a general example of a mobile packet core network running on AWS.
Time to read | About 10-15 minutes |
Time to complete | About 45-60 minutes |
Cost to complete (estimated) | $489 (per month, based on On-Demand Instance pricing) |
Learning level | Advanced (300) |
Services used | AWS CloudFormation, Amazon Elastic Kubernetes Service, Amazon DocumentDB, AWS Lambda, Amazon CloudWatch |
Solution overview
In this implementation, we have chosen Open5gs as a sample mobile packet core application. Open5gs is an open source project that provides 4G and 5G mobile packet core network functionality for building a private LTE/5G network under the GNU AGPL 3.0 license. Currently, it supports 3GPP Release 16, providing 5G Core (AMF, SMF+PGW-c, UPF+PGW-u, PCF, UDR, UDM, AUSF, NRF) network functions and Evolved Packet Core (MME, SGW-c, SGW-u, HSS, and PCRF) network functions.
Among the components in Open5gs, only the network functions in the following table are used for this 4G EPC network demonstration, each with its 3GPP logical interfaces shown in the diagram. Note that even though the Network Repository Function (NRF) is a 5G-only network function, it is included because the SMF and UPF, which play the roles of PGW-c and PGW-u in the Open5gs project, require it.
Network Function | Role |
MME | Mobility Management Entity |
HSS | Home Subscriber Server |
PCRF | Policy and Charging Rules Function |
SGW-c | Serving Gateway Control Plane |
SGW-u | Serving Gateway User Plane |
SMF+PGW-c | Session Management Function + PDN Gateway Control Plane |
UPF+PGW-u | User Plane Function + PDN Gateway User Plane |
NRF | Network Repository Function (used only for NF registration of the 5G network functions) |
Web-UI | GUI for configuring subscribers and their profiles for HSS/PCRF |
If we use container-based network functions on Kubernetes (K8s), we can generally standardize the deployment process of these network functions in the flow of VPC creation→EKS cluster and worker node creation→Helm deployment→CNF configuration, as in the following diagram, which can be automated with various automation tools and scenarios.
In this example, we use AWS CloudFormation to create an Amazon Virtual Private Cloud (VPC), an Amazon EKS cluster, and two worker node groups (one for the 3GPP control plane, the other for the 3GPP user plane). Importantly, when we deploy these types of open source EPC/5GC on EKS, we have to leverage the Multus CNI plugin, because the network functions mostly use multiple network interfaces to serve a different protocol on each interface with network separation. As guided in the AWS GitHub repository, we can automate this process through an AWS Lambda function and an Amazon CloudWatch Event rule. The bottom line is that two AWS CloudFormation templates create the following resources:
- Infrastructure creation template
- EpcVpc: A VPC that will be used for the deployment.
- PublicSubnet1/2: These subnets host the bastion host, which has public internet access and is used to run kubectl commands. They also host the NAT gateway that provides internet access for the private subnets.
- PrivateSubnetAz1/2: Subnets for the EKS control-plane in AZ1 and AZ2.
- MultusSubnet1Az1: The first subnet that Multus will use to create secondary interfaces in the EPC control plane pods.
- MultusSubnet2Az1: The second subnet that Multus will use to create secondary interfaces in the EPC user plane pods.
- EksCluster: EKS cluster that will host network functions.
- DocumentDBCluster: For the subscriber profile store, Open5gs originally uses MongoDB for HSS and PCRF. In this implementation, Amazon DocumentDB is used instead because it is compatible with MongoDB.
- Route53 Private Hosted Zones: For the discovery of service interfaces, such as the S6a, Gx, S11, and S5-c/u IP addresses, Amazon Route 53 is used as one central DNS.
- EKS worker node group creation template
- Worker node group for control plane network functions, such as MME, SGW-c, and SMF, with an additional control plane subnet network.
- Worker node group for user plane network functions, such as SGW-u and UPF, with additional control plane and user plane subnet networks.
- Lambda function for attaching additional Multus subnet networks to worker node groups.
- CloudWatch Event rule for monitoring instance scale-up and scale-down, triggering the Lambda hook to attach additional Multus networks to the worker node groups (an illustrative CLI equivalent follows this list).
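For illustration, the following AWS CLI sketch shows an event rule and Lambda target conceptually equivalent to what the template provisions; the rule name, matched instance states, and function ARN are placeholders, not the names the template actually uses.

# Match EC2 instances entering or leaving the running state (illustrative
# pattern; the CloudFormation template defines the actual rule).
aws events put-rule \
  --name open5gs-multus-attach \
  --event-pattern '{"source":["aws.ec2"],"detail-type":["EC2 Instance State-change Notification"],"detail":{"state":["running","shutting-down"]}}'

# Route matching events to the Lambda hook that attaches the Multus ENIs
# (placeholder function ARN).
aws events put-targets \
  --rule open5gs-multus-attach \
  --targets 'Id=multus-hook,Arn=arn:aws:lambda:us-east-1:123456789012:function:open5gs-attach-multus-eni'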
Additionally, two controllers have been developed and introduced to automate further steps.
- DNS update controller: Because we use Amazon Route 53 to resolve the service IPs assigned to the Multus interfaces, we created a controller that automatically registers each service IP in the respective Route 53 private hosted zone. Each EPC service interface uses a separate DNS private hosted zone, created by the open5gs-infra CFN template.
- Multus IP update controller: The other controller associates the Multus secondary IPs with the EC2 instance on which the pod is running. The controller listens for pods with designated annotations, looks up their secondary IPs, and then calls the Amazon EC2 API to associate each IP on the pod's Multus interface with the respective ENI of the host instance. It also disassociates the IP from the host ENI when the pod is deleted. (The AWS CLI equivalents of both controllers' API calls are sketched after this list.)
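For illustration, the following AWS CLI calls are rough manual equivalents of the API operations the two controllers automate; the hosted zone ID, record name, IP address, and ENI ID are all placeholders.

# DNS update controller equivalent: UPSERT the pod's service IP into the
# Route 53 private hosted zone.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"mme.s6a.open5gs.","Type":"A","TTL":10,"ResourceRecords":[{"Value":"10.0.4.10"}]}}]}'

# Multus IP update controller equivalent: associate the pod's Multus
# secondary IP with the host instance's ENI (the reverse, on pod deletion,
# uses unassign-private-ip-addresses).
aws ec2 assign-private-ip-addresses \
  --network-interface-id eni-0123456789abcdef0 \
  --private-ip-addresses 10.0.4.10 \
  --allow-reassignment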
After a successful deployment of Open5gs, the functionality of the 4G core network can be tested with a tester or simulator. In this article, we use the srsLTE simulator as an example, but you can choose one according to your preference.
Walkthrough
Summary of installation steps:
- Run the CloudFormation for infra creation (open5gs-infra.yaml).
- Bastion host configuration and K8s ConfigMap update.
- DocumentDB initialization.
- CoreDNS ConfigMap update to use Route 53 for 3GPP service interfaces.
- Run the CloudFormation for Multus worker node group creation (open5gs-worker.yaml).
- DNS controller and Multus-IP update controller deployment for the automation.
- Run shell script for cluster initialization (setting up namespace, etc.).
- Helm installation for all network functions.
Refer to the GitHub repo throughout this tutorial.
Prerequisites
For this walkthrough, you should have the following prerequisites:
- An AWS account.
- Download the GitHub repo to your local machine to build images.
- You have to build the Docker images for Open5gs and the DNS/Multus-IP controllers, and then push them to your Amazon ECR.
- Container images: Dockerfiles for the application components are in the Dockerfiles sub-folder of the GitHub repository, organized per processor architecture (ARM-Architecture and x86-Architecture folders). The ARM-based files can be used with AWS Graviton2 instances, which can deliver the best price performance.
- Note that the Dockerfiles for the Open5gs components were created from the master branch because of a glitch that occurs when Open5gs v2.0.22 is deployed using containers. The commit that was used is 41fd851; alternatively, you can use any version higher than v2.0.22.
- As guided in the GitHub repository for Multus in Amazon EKS, upload the lambda_function.zip file from the repo to an Amazon S3 bucket in your account (see the example after this list). You must use this S3 bucket name as one of the input parameters when running the infrastructure-creation CloudFormation template described in the procedure section.
- Basic understanding of AWS services, such as CloudFormation, VPC, and EKS.
- Be mindful that some services used in this example, such as EKS and DocumentDB, incur service charges.
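As an example of the S3 upload mentioned above, assuming a bucket name of my-open5gs-lambda-bucket (any globally unique name works), the AWS CLI steps would be:

# Create the bucket and upload the Lambda package from the repo.
aws s3 mb s3://my-open5gs-lambda-bucket
aws s3 cp lambda_function.zip s3://my-open5gs-lambda-bucket/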
Detailed implementation steps
You can refer to the service documentation topics for basic procedures or more information.
Run the CloudFormation for infra creation (open5gs-infra.yaml)
- Log in to AWS Console, CloudFormation service.
- Run the infra template; a CLI equivalent is sketched below.
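A CLI equivalent of running the infra template looks like the following; the parameter key for the Lambda S3 bucket is a placeholder here, so use the key actually defined in open5gs-infra.yaml:

# Create the infrastructure stack (stack name is your choice; IAM
# capabilities are required for the roles the template creates).
aws cloudformation create-stack \
  --stack-name open5gs-infra \
  --template-body file://open5gs-infra.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=LambdaS3Bucket,ParameterValue=my-open5gs-lambda-bucket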
Bastion host configuration and K8s ConfigMap update
- Kubectl installation as outlined in the user guide.
- Helm version 3 installation is also required.
- AWS credential configuration at the instance as outlined in the user guide.
- Update kubeconfig at bastion to communicate with the created EKS cluster.
aws eks update-kubeconfig --name eks-Open5gsInfra
- Run the ConfigMap update so that the Lambda SAR application can handle the worker node groups' automatic joining (a manual equivalent is sketched after this list).
- Having a Git clone of the repo on this bastion host will help you later when executing the Helm installation.
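For reference, a manual equivalent of the aws-auth update looks like the following sketch; the account ID and role name are placeholders, and in practice you would merge the entry into any existing mapRoles list rather than overwriting it:

# Map the worker node instance role so the nodes can register with the
# cluster (placeholder ARN; merge with existing entries in production).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/open5gs-worker-NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF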
DocumentDB initialization
- Log in to AWS Console, DocumentDB service.
- The DocumentDB cluster needs to be initialized before Open5gs can use it. This is done by creating an “open5gs” database in the cluster, which can be done from the bastion host; a sketch follows below. More information on how to install the Mongo client can be found in the documentation. To create a database, please refer to the basics guide.
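A minimal sketch of this initialization from the bastion host, assuming the mongo client is already installed (the cluster endpoint, username, and password below are placeholders):

# Download the CA bundle required for TLS connections to DocumentDB.
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem

# Creating a collection implicitly creates the open5gs database.
mongo --ssl --host mydocdb.cluster-xxxxxxxxxxxx.us-east-1.docdb.amazonaws.com:27017 \
  --sslCAFile rds-combined-ca-bundle.pem \
  --username open5gs --password '<password>' \
  --eval 'db.getSiblingDB("open5gs").createCollection("subscribers")'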
CoreDNS ConfigMap update for 3GPP service interfaces in Route 53
- Update the cluster coreDNS ConfigMap with the Route 53 zones that were created by the CloudFormation template. A representative entry is sketched below (replace the zone names and IDs with yours). Note that coreDNS pods need to be restarted for the Route 53 configuration to take effect.
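The exact sample entry is in the repo; as an assumption-laden sketch, a server block using the CoreDNS route53 plugin would look like the following, with one block per service zone (the zone name and hosted zone ID are placeholders, and the nodes need IAM permissions to read the zones):

kubectl -n kube-system edit configmap coredns

# Add a server block like this to the Corefile for each Route 53 private
# hosted zone (placeholder zone name and ID):
#
#   s6a.open5gs:53 {
#       errors
#       cache 30
#       route53 s6a.open5gs.:Z0123456789EXAMPLE
#   }

# Restart the coreDNS pods so the configuration takes effect.
kubectl -n kube-system rollout restart deployment coredns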
Run the CloudFormation for worker node group creation (open5gs-worker.yaml)
- Log in to AWS Console, CloudFormation service.
- Run the worker node group template. At this point, you'll need to specify the stack name you used for infrastructure creation so that its resources can be properly referenced (a CLI equivalent is sketched after this list).
- Optional: If the worker node groups don't join the EKS cluster, manually update the aws-auth ConfigMap so that the EKS cluster control plane can register the worker nodes. (This step is usually not required if the ConfigMap update step of the bastion host configuration was done properly.)
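A CLI equivalent of running the worker template looks like the following; the parameter key that carries the infra stack name is a placeholder, so use the key defined in open5gs-worker.yaml:

# Create the worker node group stack, referencing the infra stack by name.
aws cloudformation create-stack \
  --stack-name open5gs-worker \
  --template-body file://open5gs-worker.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=InfraStackName,ParameterValue=open5gs-infra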
Staging environment
- Edit the controllers/deployments/aws-secondary-int-controller-deployment.yaml and controllers/deployments/svc-watcher-route53-deployment.yaml files to point to the ECR repo of your controller images, which you pushed in the prerequisite step.
- Run ./cluster_initializer.sh (this must be done in the root folder of the repo) to install the prerequisite Kubernetes resources, such as the Open5gs namespace, the Multus daemonset, the Multus network attachment definitions, and the service discovery and secondary interface controllers. The service discovery and secondary interface controllers are installed in the kube-system namespace. You must run this script before installing Open5gs via the Helm chart. (A verification sketch follows this list.)
- Install CloudWatch Container Insight for the container monitoring and log collection. For installation details, refer to the setup guide.
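To verify the staging step, a quick check like the following should show the Multus daemonset, the two controllers, and the network attachment definitions (the daemonset name follows the upstream Multus manifest and may differ in your deployment):

# Multus daemonset deployed by cluster_initializer.sh.
kubectl -n kube-system get daemonset kube-multus-ds
# The service discovery and secondary interface controllers.
kubectl -n kube-system get pods | grep -E 'svc-watcher-route53|aws-secondary-int-controller'
# Network attachment definitions for the EPC interfaces.
kubectl -n open5gs get network-attachment-definitions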
Helm deployment
- Edit the image repos in values.yaml to point to your ECR images (open5gs.image.repository, open5gs.image.tag, webui.image.repository, webui.image.tag).
- Install Helm chart with the following command:
helm -n open5gs install -f values.yaml epc-core ./
- Wait for all the pods to reach the Running state; this can take around 5-10 minutes (a watch command is sketched below). During this time, some pods will restart more than once, which is expected behavior while the Route 53 records and Multus IPs are being updated.
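You can watch the rollout with:

# Watch until all EPC pods reach Running; restarts during the Route 53 and
# IP updates are expected.
kubectl -n open5gs get pods -w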
Verifying the whole setup
- To test the environment, we can use any LTE UE/eNB emulator from open source or AWS partners.
- When using srsLTE, we can verify a result like the one below with the EPC core network created on EKS. (Note that the simulator's subscriber profile (IMSI, OPc, K value), MCC, MNC, and APN configurations must match those configured in the core network on EKS.)
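For example, a successful attach can be cross-checked on the EKS side by inspecting the MME logs (the pod naming here is an assumption about the Helm chart; adjust the filter to your deployment):

# Locate the MME pod and look for the S1 setup and UE attach messages.
kubectl -n open5gs get pods | grep mme
kubectl -n open5gs logs <mme-pod-name> --tail=100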
Clean up
To avoid incurring future charges, delete all of the resources created through the CloudFormation service. Go back to the AWS console, open the CloudFormation service, and delete the stacks one by one, in the order of the worker-node stack and then the infra stack, as sketched below.
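With the stack names used earlier in this walkthrough, the cleanup from the CLI would be:

# Delete the worker node group stack first, then the infra stack.
aws cloudformation delete-stack --stack-name open5gs-worker
aws cloudformation wait stack-delete-complete --stack-name open5gs-worker
aws cloudformation delete-stack --stack-name open5gs-infra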
Conclusion
In this blog post, we've shown the benefit and power of using AWS for implementing a mobile packet core network by demonstrating how to set up the environment easily, without any hardware, separate database, or underlying infrastructure preparation. This standardized process can be further automated with the AWS Cloud Development Kit (AWS CDK) and AWS CodePipeline, or with third-party tools and an orchestrator through API integration.