
Exposing Kubernetes Applications, Part 3: NGINX Ingress Controller

Introduction

The Exposing Kubernetes Applications series focuses on ways to expose applications running in a Kubernetes cluster for external access.

In Part 1, we explored Service and Ingress resource types that define two ways to control the inbound traffic in a Kubernetes cluster. We discussed handling of these resource types via Service and Ingress controllers, followed by an overview of advantages and drawbacks of some of the controllers’ implementation variants.

In Part 2, we had a walkthrough of the setup, configuration, possible use cases, and limitations of the AWS open-source implementation of an Ingress controller, AWS Load Balancer Controller.

In this post, Part 3, we focus on an additional open-source implementation of an Ingress controller: NGINX Ingress Controller. We walk through some of its features and the ways it differs from AWS Load Balancer Controller.

NGINX Ingress Controller Architecture

In Part 1, we described an Ingress controller type that uses an in-cluster Layer 7 reverse proxy, as represented in the following diagram:

Ingress controller implementation using an in-cluster reverse proxy

NGINX Ingress Controller’s implementation follows the above architecture:

NGINX Ingress Controller implementation of an in-cluster reverse proxy

The controller deploys, configures, and manages Pods that contain instances of nginx, which is a popular open-source HTTP and reverse proxy server. These Pods are exposed via the controller’s Service resource, which receives all the traffic intended for the relevant applications represented by the Ingress and backend Services resources. The controller translates Ingress and Services’ configurations, in combination with additional parameters provided to it statically, into a standard nginx configuration. It then injects the configuration into the nginx Pods, which route the traffic to the application’s Pods.

The NGINX Ingress Controller Service is exposed for external traffic via a load balancer. That same Service can be consumed internally via the usual <service-name>.<namespace-name>.svc.cluster.local cluster DNS name.
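
For example, once the controller and the sample Services from this walkthrough are deployed, any Pod in the cluster can reach the controller by that name. A quick sketch (the Service name and namespace match the installation we perform below):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
    wget -qO- http://ingress-nginx-controller.kube-system.svc.cluster.local/first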

Walkthrough

Now that we understand how NGINX Ingress Controller operates, it’s time to put it to work.

Prerequisites

1. Obtain Access to an AWS Account

You will need an AWS account and the ability to communicate with it from your terminal, using the AWS Command Line Interface (AWS CLI) and similar tools.

In the following code examples, we encounter several tokens that can't be given generic values (e.g., those referring to your AWS account ID or Region). These should be replaced with values that match your environment.

2. Create the Cluster

We will use eksctl to provision an Amazon EKS cluster; in addition to creating the cluster itself, it also provisions and configures the necessary network resources (a Virtual Private Cloud (VPC), subnets, and security groups).

The following eksctl configuration file defines the Amazon EKS cluster and its settings:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: nginx-ingress-controller-walkthrough
  region: ${AWS_REGION}
  version: '1.23'
iam:
  withOIDC: true
managedNodeGroups:
  - name: main-ng
    instanceType: m5.large
    desiredCapacity: 1
    privateNetworking: true

Put the code above in the config.yml file.

Verify that the AWS_REGION and AWS_ACCOUNT environment variables are set, then create the cluster:

envsubst < config.yml | eksctl create cluster -f -

The walkthrough uses Amazon EKS platform version eks.3 for Kubernetes version 1.23.

For brevity, the configuration above doesn't consider many aspects of Kubernetes cluster provisioning and management (e.g., security and monitoring). For more information and best practices, please explore the Amazon EKS and eksctl documentation.

Verify that the cluster is up and running:

kubectl get nodes
kubectl get pods -A

This should return a single Amazon EKS node and four running Pods.

3. Install Helm

We will use Helm, a popular package manager for Kubernetes, to install and configure the controller. Follow the Helm installation instructions here.

Install the NGINX Ingress Controller

1. Install the Controller using Helm

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \
    --version 4.2.3 \
    --namespace kube-system \
    --set controller.service.type=ClusterIP

kubectl -n kube-system rollout status deployment ingress-nginx-controller

kubectl get deployment -n kube-system ingress-nginx-controller

We are setting the controller’s Service to ClusterIP to avoid re-creating the load balancer as we change various configuration parameters of the controller during the walkthrough. We’ll address load balancer creation towards the end of the article.
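
Since the controller translates Ingress resources into nginx configuration, it can be instructive to inspect the result of that translation. A quick sketch, assuming the Deployment name shown above:

kubectl exec -n kube-system deploy/ingress-nginx-controller -- \
    cat /etc/nginx/nginx.conf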

Deploy the Testing Services

1. Create the Services Namespace

kubectl create namespace apps

2. Create the Service Manifest File

Place the following code in the service.yml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${SERVICE_NAME}
  namespace: ${NS}
  labels:
    app.kubernetes.io/name: ${SERVICE_NAME}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ${SERVICE_NAME}
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ${SERVICE_NAME}
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: ${SERVICE_NAME}
          image: hashicorp/http-echo
          imagePullPolicy: IfNotPresent
          args:
            - -listen=:3000
            - -text=${SERVICE_NAME}
          ports:
            - name: app-port
              containerPort: 3000
          resources:
            requests:
              cpu: 0.125
              memory: 50Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ${SERVICE_NAME}
  namespace: ${NS}
  labels:
    app.kubernetes.io/name: ${SERVICE_NAME}
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: ${SERVICE_NAME}
  ports:
    - name: svc-port
      port: 80
      targetPort: app-port
      protocol: TCP

The Service above, based on the http-echo image, answers any request with the Service's name, as defined by the ${SERVICE_NAME} token. We also define a single replica for simplicity.

3. Deploy and Verify the Services

Execute the following commands (we use these Services throughout the post):

SERVICE_NAME=first NS=apps envsubst < service.yml | kubectl apply -f -
SERVICE_NAME=second NS=apps envsubst < service.yml | kubectl apply -f -
SERVICE_NAME=third NS=apps envsubst < service.yml | kubectl apply -f -
SERVICE_NAME=fourth NS=apps envsubst < service.yml | kubectl apply -f -
SERVICE_NAME=error NS=apps envsubst < service.yml | kubectl apply -f -
SERVICE_NAME=another-error NS=apps envsubst < service.yml | kubectl apply -f -

Verify that all the resources are deployed:

kubectl get pod,svc -n apps

A screenshot of a list of Pods and Services in the applications’ namespace

Deploy a Simple Ingress

1. Create the Ingress Manifest file and Deploy the Ingress

Place the following code into the ingress.yml file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${NS}-ingress
  namespace: ${NS}
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /first
            pathType: Prefix
            backend:
              service:
                name: first
                port:
                  name: svc-port
          - path: /second
            pathType: Prefix
            backend:
              service:
                name: second
                port:
                  name: svc-port

As we saw with AWS Load Balancer Controller, we target NGINX Ingress Controller by setting the ingressClassName property to nginx, the name of the default IngressClass installed with the controller.

Deploy the Ingress by running the following:

NS=apps envsubst < ingress.yml | kubectl apply -f -

After a short delay (IP address binding may take a bit longer) we should be able to see the state of the Ingress resource:

kubectl get ingress -n apps

The output should be similar to:

A screenshot of the list of Ingress objects in the applications’ namespace

The ADDRESS and PORT columns above are set to those of the controller’s Service.

Since we configured the controller to create its Service with the ClusterIP type, we need a way to communicate with it. We can do that by setting up port forwarding to the Service:

kubectl port-forward -n kube-system svc/ingress-nginx-controller 8080:80

2. Test the Ingress

Now, we can send requests to the controller service:

curl -sS localhost:8080/first
curl -sS localhost:8080/second
curl -sS localhost:8080/third

The first two requests are answered by the corresponding Services, while the third, whose path isn't handled by the Ingress, receives a 404 response from nginx. This outcome indicates that the Ingress resource was deployed and configured correctly:

A screenshot of a 404 HTTP response from the NGINX server

A Word on IngressClass

As we mentioned, a default IngressClass named nginx is installed alongside the controller. It should look like this:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: nginx
  ...
spec:
  controller: k8s.io/ingress-nginx

In contrast to AWS Load Balancer Controller, NGINX Ingress Controller doesn’t support IngressClass parameters.

We can make any IngressClass the default one by adding the ingressclass.kubernetes.io/is-default-class: "true" annotation to it, or make the one installed with the controller the default via Helm:

helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \
    --namespace kube-system \
    --set controller.ingressClassResource.default=true \
    ...
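
For reference, the annotation-based option looks like this on the IngressClass resource itself (a sketch based on the resource shown above):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx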

Default Backend and Error Handling

We have seen that when we send a request to a path that isn't handled by one of the Ingress resources, nginx responds with a 404. This response comes from the default backend installed with the controller. One way of customizing it is to set the controller.defaultBackend configuration property, for example via Helm's values.yml file, usage of which is shown later in this post. Another way is to set the nginx.ingress.kubernetes.io/default-backend annotation on the Ingress resource.
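
As a sketch of the annotation-based approach, reusing the error Service we deployed earlier (the annotation value is the name of a Service in the Ingress's namespace; the rest of the manifest is elided):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${NS}-ingress
  namespace: ${NS}
  annotations:
    nginx.ingress.kubernetes.io/default-backend: error
spec:
  ...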

Finally, we can configure it according to the specification, which is shown next.

1. Update and Deploy the Ingress

Update the ingress.yml file with the following:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${NS}-ingress
  namespace: ${NS}
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: error
      port:
        name: svc-port
  rules:
    - http:
        paths:
          - path: /first
            pathType: Prefix
            backend:
              service:
                name: first
                port:
                  name: svc-port
          - path: /second
            pathType: Prefix
            backend:
              service:
                name: second
                port:
                  name: svc-port

Deploy:

NS=apps envsubst < ingress.yml | kubectl apply -f -

2. Test the Ingress

Now, we can send requests again:

curl -sS localhost:8080/first
curl -sS localhost:8080/second
curl -sS localhost:8080/third

This time, all three requests are answered, with the unhandled path routed to the error Service, our default backend:

A screenshot of the responses of the Services behind the Ingress

Multiple Ingress Resources

Often there are multiple Ingress resources that may belong to different teams or separate parts of a larger application. They need to be developed and deployed separately, but do not require different configurations and can be handled by a single controller installation.

NGINX Ingress Controller supports merging of Ingress resources, but without being able to specifically define the ordering and grouping of these resources, as we’ve seen with AWS Load Balancer Controller.

Host-based Routing

So far, all examples assumed that all requests are routed to the same domain, with the Ingress rules merged under a single catch-all host. You can also explicitly define which Services are served under which domains and segment them (and their merging) by the host setting in the Ingress.

1. Update and Deploy the Ingress

Update the ingress.yml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${NS}-ingress
  namespace: ${NS}
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: error
      port:
        name: svc-port
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /first
            pathType: Prefix
            backend:
              service:
                name: first
                port:
                  name: svc-port
    - host: b.example.com
      http:
        paths:
          - path: /second
            pathType: Prefix
            backend:
              service:
                name: second
                port:
                  name: svc-port

Run:

NS=apps envsubst < ingress.yml | kubectl apply -f -

2. Test the Ingress

We can simulate requests to different domains with curl:

curl localhost:8080/first -H 'Host: a.example.com'
curl localhost:8080/second -H 'Host: b.example.com'
curl localhost:8080/first -H 'Host: b.example.com'
curl localhost:8080/first -H 'Host: b.example.net'

The output should be as follows:

A screenshot of the responses of the Services behind the Ingress

We expect the last two requests to be routed to the default backend, as one is sent to a path that isn’t defined under that host and the other provides a non-existent host value.

Pointing the DNS records for a.example.com and b.example.com to the NGINX Ingress Controller Service allows us to handle both hosts. To complete this task, we expose the Service to external traffic (e.g., via an external load balancer). We discuss this in detail later in this post.
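
As a sketch with the AWS CLI and Amazon Route 53, once the load balancer exists (the HOSTED_ZONE_ID variable is a placeholder for your hosted zone, and NLB_URL is captured later in this post):

aws route53 change-resource-record-sets \
    --hosted-zone-id ${HOSTED_ZONE_ID} \
    --change-batch '{
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "a.example.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": "'"${NLB_URL}"'"}]
            }
        }]
    }'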

Ingress Path Types, Regex and Rewrite

So far, we've defined the path types for our Ingress rules to be Prefix. We can also set them to Exact, use regular expressions in the paths, or define rewrite rules.

1. Update and Deploy the Ingress

Let's change the Ingress definition in the ingress.yml file and redeploy:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${NS}-ingress
  namespace: ${NS}
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: error
      port:
        name: svc-port
  rules:
    - http:
        paths:
          - path: /first(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: first
                port:
                  name: svc-port

The nginx.ingress.kubernetes.io/rewrite-target annotation defines which of the capturing groups, as defined in the path of our rules, should be sent to the corresponding Service. So /$2 sends the contents of the second capturing group as the path of the request to the Service.
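
For example, with the path above, the requests we are about to send would be rewritten as follows (a sketch of the expected behavior):

# request path         path sent to the first Service
# /first               /
# /first/foo           /foo
# /first/bar/foo       /bar/foo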

Deploy the Ingress:

NS=apps envsubst < ingress.yml | kubectl apply -f -

2. Test the Ingress

Execute:

curl -sS localhost:8080/first
curl -sS localhost:8080/first/foo
curl -sS localhost:8080/first/bar
curl -sS localhost:8080/first/bar/foo

Now we’d get the following results:

A screenshot of Service responses

Exposing NGINX Ingress Controller via a Load Balancer

Using In-tree Service Controller

The simplest way is to let the in-tree controller, which we discussed in Part 1, handle the Service. To do that, we set the Service type to LoadBalancer, which provisions an AWS Classic Load Balancer:

helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \
    --namespace kube-system \
    --set controller.service.type=LoadBalancer \
    ...

We could also switch to the recommended, more modern AWS Network Load Balancer instead:

helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \
    --namespace kube-system \
    --set controller.service.type=LoadBalancer \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"="nlb" \
    ...

To the same effect, we could provide these configuration parameters more conveniently via Helm's values.yml file. We see an example of such usage in the next section.

Usage with AWS Load Balancer Controller

If we want more control over the Network Load Balancer provisioned for the Service, we can install a Service controller. AWS Load Balancer Controller is the recommended choice.

While AWS Load Balancer Controller also handles Ingress resources, it does so for a different Ingress class (alb), so there should be no clashes with NGINX Ingress Controller.

Provision AWS NLB for the NGINX Ingress Controller

We already discussed installation of the controller in Part 2 of the series, so this should be familiar.

1. Create the AWS Load Balancer Controller AWS Identity and Access Management (IAM) Policy

Create the AWSLoadBalancerControllerIAMPolicy using the following instructions (steps #2 and #3 only), which set up IAM Roles for Service Accounts to provide permissions for the controller.
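
Those steps boil down to the following commands (a sketch; the controller version in the policy URL is an assumption, check the linked instructions for the current one):

curl -o iam_policy.json \
    https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json

aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json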

Note that OIDC IAM provider registration is done automatically by eksctl using the cluster configuration above and does not need to be explicitly handled.

2. Create a Service Account for AWS Load Balancer Controller

In Part 2 of the series, we created a service account for the AWS Load Balancer Controller as part of the eksctl cluster creation. This time we’ll create it separately:

eksctl create iamserviceaccount \
    --cluster=nginx-ingress-controller-walkthrough \
    --name=aws-load-balancer-controller \
    --namespace=kube-system \
    --attach-policy-arn=arn:aws:iam::${AWS_ACCOUNT}:policy/AWSLoadBalancerControllerIAMPolicy \
    --approve

3. Install the CRDs

The following installs the CustomResourceDefinitions necessary for the controller to function:

kubectl apply -k \
    "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"

4. Install the Controller Using Helm

helm repo add eks https://aws.github.io/eks-charts

helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --set clusterName=nginx-ingress-controller-walkthrough \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller

kubectl -n kube-system rollout status deployment aws-load-balancer-controller

kubectl get deployment -n kube-system aws-load-balancer-controller

5. Redeploy NGINX Ingress Controller

This time, we provide the configuration for the controller's Helm chart via a values.yml file. Create it with the following content:

controller:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-name: apps-ingress
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "10254"

Here we change the Service type, define the name for the load balancer (which will be a Network Load Balancer), make it internet-facing so we can access it, define its target type to be ip, and configure the health check for the NGINX server.

For more information on AWS Load Balancer Controller Service annotations, see here.

Redeploy the controller:

helm upgrade -i ingress-nginx ingress-nginx/ingress-nginx \
    --version 4.2.3 \
    --namespace kube-system \
    --values values.yml
    
kubectl -n kube-system rollout status deployment ingress-nginx-controller

kubectl get deployment -n kube-system ingress-nginx-controller

6. Test the Ingress

Note that we are using the same Ingress definition we used to illustrate the path types and rewrite features of the controller.

Save the Network Load Balancer URL:

export NLB_URL=$(kubectl get -n kube-system service/ingress-nginx-controller \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

After a couple of minutes, the load balancer is provisioned and we can send requests:

curl ${NLB_URL}/first
curl ${NLB_URL}/first/foo
curl ${NLB_URL}/first/bar
curl ${NLB_URL}/first/bar/foo

This produces the same results we received before, as expected:

A screenshot of the Services responses

Attaching the NGINX Ingress Controller to an existing Load Balancer

In addition to the Service controller-driven way described previously, we can also provision an AWS Application or Network Load Balancer via AWS CLI or Infrastructure-as-Code tools (e.g., AWS CloudFormation, AWS CDK, or Terraform).

We can then use TargetGroupBinding, one of the custom resource definitions installed above. This resource binds a Service (selected by its name and namespace) to a load balancer's target group (selected by its Amazon Resource Name (ARN)) by registering the Service's Pods (or their underlying instances) as targets in that group.
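
A minimal sketch of such a binding for the controller's Service (the resource name is hypothetical, and the target group ARN is a placeholder for one you created):

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: ingress-nginx-tgb
  namespace: kube-system
spec:
  serviceRef:
    name: ingress-nginx-controller
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:${AWS_REGION}:${AWS_ACCOUNT}:targetgroup/nginx-ingress/1234567890123456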

This may be useful when the load balancer is shared with other compute resources. In rarer cases, it's also useful if you need to place an Application Load Balancer in front of the in-cluster Layer 7 proxy to utilize one of its unique features.

Multiple Ingress Controllers

In some cases, we may want multiple installations of NGINX Ingress Controller in the cluster, each handling different Ingress resources. In contrast to AWS Load Balancer Controller, NGINX Ingress Controller supports such a setup, which is achieved by providing each installation with a different configuration.

The following example installs an additional controller with a unique release name, IngressClass name, and controller value referenced by that IngressClass:

helm upgrade -i ingress-nginx-one ingress-nginx/ingress-nginx \
    --namespace kube-system \
    --set controller.ingressClassResource.controllerValue=k8s.io/ingress-nginx-one \
    --set controller.ingressClassResource.name=nginx-one \
    ...

Ingress resources can now target this controller installation by setting their ingressClassName to nginx-one.
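
For example, a hypothetical Ingress reusing the first Service from this walkthrough would look like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-one-ingress
  namespace: apps
spec:
  ingressClassName: nginx-one
  rules:
    - http:
        paths:
          - path: /first
            pathType: Prefix
            backend:
              service:
                name: first
                port:
                  name: svc-port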

Cleanup

This concludes our walkthrough. To remove the resources created during the walkthrough, execute the following:

helm uninstall -n kube-system ingress-nginx

helm uninstall -n kube-system aws-load-balancer-controller

envsubst < config.yml | eksctl delete cluster -f -

aws iam delete-policy --policy-arn arn:aws:iam::${AWS_ACCOUNT}:policy/AWSLoadBalancerControllerIAMPolicy

Conclusion

Throughout this series, we showcased various Ingress controllers, highlighting some of the things each does differently.

NGINX Ingress Controller harnesses the power of nginx. While it is a more flexible controller, that flexibility comes with the burden of maintaining, patching, and scaling a component that sits on the request's data path.

In contrast, AWS Load Balancer Controller outsources that burden to AWS Elastic Load Balancing, a highly available, scalable, and battle-tested managed service, relying on its feature set to provide the needed configuration options.

The choice between extreme flexibility and operational simplicity depends on the requirements of the deployed applications. These two controllers, along with the other Service and Ingress controllers we haven't covered in this series, provide a plethora of options for exposing applications to external traffic.

Dmitry Nutels

Dmitry Nutels is a Senior Solutions Architect at Amazon Web Services (AWS). He has 17+ years of experience with a wide spectrum of software engineering disciplines and has a hard time choosing a favorite one. For the last several years he has been focusing on development and operational aspects of running containerized applications in the cloud.

Tsahi Duek

Tsahi Duek is a Principal Container Specialist Solutions Architect at Amazon Web Services. He has over 20 years of experience building systems, applications, and production environments, with a focus on reliability, scalability, and operational aspects. He is a system architect with a software engineering mindset.