Implement AWS IAM authentication with Amazon VPC Lattice and Amazon EKS

Introduction

Amazon VPC Lattice is a fully managed application networking service built directly into the AWS network infrastructure that you use to connect, secure, and monitor all of your services across multiple accounts and virtual private clouds (VPCs).

With Amazon Elastic Kubernetes Service (Amazon EKS), customers can use Amazon VPC Lattice through the AWS Gateway API Controller, an implementation of the Kubernetes Gateway API. Using Amazon VPC Lattice, Amazon EKS customers can set up cross-cluster connectivity with standard Kubernetes semantics in a simple and consistent manner.

At the same time, some customers want to strengthen security at the application layer. The design of Amazon VPC Lattice is secure by default because it requires you to be explicit about which services you want to share and which VPCs you want to provide access to.

Amazon VPC Lattice provides a framework that lets you implement a defense-in-depth strategy at multiple layers of the network. These layers include the VPC association with a service network, security groups, network access control lists (ACLs), and AWS Identity and Access Management (AWS IAM) auth policies, so you can configure Amazon VPC Lattice to meet your security and compliance objectives.

Both the security group on the VPC-to-service-network association and the auth policy are optional. You can associate a service network with a VPC without configuring a security group, and you can leave the auth type set to NONE to forgo an auth policy.

In this post, we’ll focus on the third layer: applying VPC Lattice auth policies to the service network and to individual services. Typically, the auth policy on the service network is operated by the network or cloud administrator, who implements coarse-grained authorization, for example allowing only authenticated requests from a specified organization in AWS Organizations. An auth policy on the service lets the service owner set fine-grained controls that might be more restrictive than the coarse-grained rules the network or cloud administrator applied at the service network level. Using auth policies, customers can define who can perform which actions on which services, without changing their application code.

In our implementation, we demonstrate how to:

  • Build an Amazon VPC Lattice service network on Amazon EKS and enable auth policies on Amazon VPC Lattice services.
  • Build a solution to automatically enable the service caller to make HTTP requests to Amazon VPC Lattice services with AWS IAM authentication, using a sidecar and init container pattern in Amazon EKS and Amazon VPC Lattice. No source code changes are required in the caller apps.
  • Verify that the service caller is able to connect to multiple services in the Amazon VPC Lattice service network.

Solution overview

Amazon VPC Lattice integrates with AWS IAM to give you the same authentication and authorization capabilities you are familiar with when interacting with AWS services today, but for your own service-to-service communication.

To configure service access controls, you can use access policies. An access policy is an AWS IAM resource policy that can be associated with a service network and individual services. With access policies, you can use the PARC (principal, action, resource, and condition) model to enforce context-specific access controls for services. For example, you can use an access policy to define which services can access a service you own.
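
For example, the Allow only authenticated access template that we apply later in this walkthrough rejects anonymous callers. A representative policy document, based on that template in the VPC Lattice documentation (adjust the statement to your own requirements), looks like the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "vpc-lattice-svcs:Invoke",
      "Resource": "*",
      "Condition": {
        "StringNotEqualsIgnoreCase": {
          "aws:PrincipalType": "anonymous"
        }
      }
    }
  ]
}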

Controlling access to VPC Lattice services with access policies at two levels: service network-level and service-level.

Amazon VPC Lattice uses AWS Signature Version 4 (SigV4) for client authentication. After an auth policy is enabled on an Amazon VPC Lattice service, changes are also necessary on the service caller side, so that HTTP requests include the signed Authorization header, as well as other headers such as x-amz-content-sha256, x-amz-date, and x-amz-security-token. The details of AWS SigV4 can be found here.
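
To get a feel for what a signed request involves, here is a minimal sketch using the built-in SigV4 support in recent curl releases. It assumes your AWS credentials are exported as environment variables and that you substitute your own service endpoint; the x-amz-security-token header is only needed for temporary credentials:

## curl signs the request itself; the provider string is aws:amz:<region>:<service>
curl --aws-sigv4 "aws:amz:us-west-2:vpc-lattice-svcs" \
     --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
     -H "x-amz-security-token: $AWS_SESSION_TOKEN" \
     http://<your-lattice-service-endpoint>/get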

To sign requests for Amazon VPC Lattice services, customers currently have the following options:

  • Use the AWS SDK to sign the request in the corresponding programming language. This solution has the best performance, but it requires the developer to change code inside the application. An implementation can be found in the Amazon VPC Lattice docs.
  • Use the AWS SigV4 Proxy Admission Controller to deploy the AWS SigV4 Proxy, which forwards HTTP requests and adds AWS SigV4 headers. The details are covered in this post. However, this solution comes with a limitation: when the AWS SigV4 Proxy Admission Controller is used, only a single host is supported. In the example manifest, the front-end container makes requests to localhost:8005, and the host header is replaced with datastore-lambda.sarathy.io, which is statically defined in the sidecar.aws.signing-proxy/host annotation. In other words, the caller service can connect to only one Amazon VPC Lattice service, which poses challenges if the client connects to multiple Amazon VPC Lattice services.

In this post, we demonstrate an optimized solution that is fully transparent to the application and supports connecting to multiple Amazon VPC Lattice services.

First, we introduce an init and sidecar container in the Kubernetes pod:

  • init container: runs the iptables utility to redirect any traffic destined for Amazon VPC Lattice services to the AWS SigV4 proxy, which listens on port 8080.
  • sigv4 proxy: runs with args including --name vpc-lattice-svcs, the --unsigned-payload flag, and logging options. The proxy container automatically signs requests using the credentials obtained through IAM roles for service accounts (IRSA) in Amazon EKS.

Second, we inject the init and sidecar containers automatically, so that developer teams don’t have to modify their existing Kubernetes manifests. We use Kyverno as the policy engine, which is designed for Kubernetes and runs as a dynamic admission controller in a Kubernetes cluster. In this case, Kyverno receives mutating admission webhook HTTP callbacks from the Kubernetes API server and applies matching policies to enforce admission policies. In other words, Kyverno can inject the sidecar and init containers without requiring any coding.

The architecture of injecting the AWS SigV4 proxy sidecar into the caller service using Kyverno.

Walkthrough

Amazon VPC Lattice with Auth Policy in Amazon EKS

Prerequisites

  • An AWS account with administrator permissions
  • Installation of AWS Command Line Interface (AWS CLI), kubectl, eksctl, and Git

Prepare the Amazon EKS cluster and Amazon VPC Lattice services

We need to prepare the environment to test our solution.
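
If you don’t already have an Amazon EKS cluster with the AWS Gateway API Controller running, a minimal preparation sketch looks like the following (the cluster name is our choice; install the controller by following the deployment steps in the aws-application-networking-k8s repository, which vary by release):

export CLUSTER_NAME=my-cluster
export AWS_REGION=us-west-2

## An OIDC provider is required later for IAM roles for service accounts (IRSA)
eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION --with-oidc

## Install the AWS Gateway API Controller per the project's deployment docs:
## https://github.com/aws/aws-application-networking-k8s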

Deploy the sample httpbin application as the Amazon VPC Lattice Service

Run the following commands to deploy httpbin as the Amazon VPC Lattice Service:

git clone https://github.com/aws/aws-application-networking-k8s.git
cd aws-application-networking-k8s/examples
## Create the GatewayClass, Gateway, HTTPRoute, Service and Deployment objects
kubectl apply -f gatewayclass.yaml
kubectl apply -f my-hotel-gateway.yaml
kubectl apply -f httpbin.yaml
kubectl apply -f httpbin-route.yaml

## Create another VPC Lattice Service (HTTPRoute), Service and Deployment object  
cat << EOF > another-httpbin.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: another-httpbin
  labels:
    app: another-httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: another-httpbin
  template:
    metadata:
      labels:
        app: another-httpbin
    spec:
      containers:
      - name: httpbin
        image: mccutchen/go-httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: another-httpbin
spec:
  selector:
    app: another-httpbin
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
EOF

cat << EOF > another-httpbin-route.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: another-httpbin
spec:
  parentRefs:
  - name: my-hotel
    sectionName: http
  rules:
  - backendRefs:
    - name: another-httpbin
      kind: Service
      port: 80
EOF

## Another VPC Lattice Service
kubectl apply -f another-httpbin.yaml
kubectl apply -f another-httpbin-route.yaml
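
As a quick sanity check before moving on, you can confirm that the controller has reconciled both routes (the names match the manifests above):

kubectl get httproute httpbin another-httpbin
kubectl get gateway my-hotel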

Securing the service network

To demonstrate this feature, we apply an auth policy on the httpbin service that allows only authenticated access. You can define more granular policies by referring to the documentation.

  • Go to the VPC section of the AWS Management Console, select Services under VPC Lattice, and then choose the service httpbin-default in the right pane.
  • On the next page, choose Access and then Edit access settings.
  • In the resulting Service access screen, select AWS IAM, then select Apply policy template > Allow only authenticated access. Then choose Save changes.

Enable a VPC Lattice Service with IAM Auth Policy
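
If you prefer the AWS CLI, a roughly equivalent sketch looks like the following; it assumes you look up the service identifier first, and the policy JSON matches the template shown earlier:

SERVICE_ID=$(aws vpc-lattice list-services \
  --query "items[?name=='httpbin-default'].id" --output text)

aws vpc-lattice update-service --service-identifier $SERVICE_ID --auth-type AWS_IAM

aws vpc-lattice put-auth-policy --resource-identifier $SERVICE_ID --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "vpc-lattice-svcs:Invoke",
    "Resource": "*",
    "Condition": {"StringNotEqualsIgnoreCase": {"aws:PrincipalType": "anonymous"}}
  }]
}'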

Now we run a test to show that the service requires AWS IAM authentication; otherwise it returns an HTTP 403 Forbidden error.

kubectl run curl --image alpine/curl -ti -- /bin/sh

curl -v http://httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws
*  Trying 169.254.171.32:80...
* Connected to httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws (169.254.171.32) port 80 (#0)
> GET / HTTP/1.1
> Host: httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws
> User-Agent: curl/8.0.1
> Accept: */*

< HTTP/1.1 403 Forbidden
< content-length: 253
< content-type: text/plain
< date: Mon, 31 Jul 2023 07:24:10 GMT
< 
* Connection #0 to host httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws left intact
AccessDeniedException: User: anonymous is not authorized to perform: vpc-lattice-svcs:Invoke on resource: arn:aws:vpc-lattice:us-west-2:091550601287:service/svc-09ff9bb5d43b72048/ because no service-based policy allows the vpc-lattice-svcs:Invoke action

Prepare the caller app deployment

Here we configure the proxy to use AWS IAM roles for service accounts (IRSA), so that the proxy signs requests with the credentials of an AWS IAM role. By attaching the VPCLatticeServicesInvokeAccess identity-based policy to that AWS IAM role, we grant it permission to call the Amazon VPC Lattice service.

Creating AWS IAM role for service account

export CLUSTER_NAME=my-cluster
export NAMESPACE=default
export SERVICE_ACCOUNT=default

eksctl create iamserviceaccount \
  --cluster=$CLUSTER_NAME \
  --namespace=$NAMESPACE \
  --name=$SERVICE_ACCOUNT \
  --attach-policy-arn=arn:aws:iam::aws:policy/VPCLatticeServicesInvokeAccess \
  --override-existing-serviceaccounts \
  --approve 
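
To confirm that eksctl annotated the service account with the role, run a quick check:

kubectl describe serviceaccount $SERVICE_ACCOUNT -n $NAMESPACE
## Expect an eks.amazonaws.com/role-arn annotation pointing at the created role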

After the preparation is done, we prepare our service caller app deployment with the proxy container. The proxy container listens on port 8080 and runs as user 101. The YAML snippet looks like the following:

      - name: sigv4proxy
        image: public.ecr.aws/aws-observability/aws-sigv4-proxy:latest
        args: [
          "--unsigned-payload",
          "--log-failed-requests",
          "-v", "--log-signing-process",
          "--name", "vpc-lattice-svcs",
          "--region", "us-west-2",
          "--upstream-url-scheme", "http"
        ]
        ports:
        - containerPort: 8080
          name: proxy
          protocol: TCP
        securityContext:
          runAsUser: 101 

Now we intercept traffic from the main app: the iptables utility routes traffic destined for the Amazon VPC Lattice CIDR 169.254.171.0/24 to the EGRESS_PROXY chain and redirects it to local port 8080. To avoid infinite loops, traffic sent by the proxy container itself is identified by its UID (101) and returned without being redirected again.

      initContainers: # IPTables rules are updated in init container
      - image: public.ecr.aws/d2c6w7a3/iptables
        name: iptables-init
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
        command: # Adding --uid-owner 101 here to prevent traffic from the SigV4 proxy itself from being redirected, which prevents an infinite loop
        - /bin/sh
        - -c
        - >
          iptables -t nat -N EGRESS_PROXY;
          iptables -t nat -A OUTPUT -p tcp -d 169.254.171.0/24 -j EGRESS_PROXY;
          iptables -t nat -A EGRESS_PROXY -m owner --uid-owner 101 -j RETURN;
          iptables -t nat -A EGRESS_PROXY -p tcp -j REDIRECT --to-ports 8080;

The container image public.ecr.aws/d2c6w7a3/iptables is simply an Ubuntu base image with iptables installed:

FROM ubuntu:focal
RUN apt update && apt install -y iptables
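
If you would rather build and host this image yourself, a sketch (the registry name is a placeholder):

docker build -t <your-registry>/iptables:latest .
docker push <your-registry>/iptables:latest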

The complete YAML manifest looks like the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-app
  labels:
    app: client-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-app
  template:
    metadata:
      labels:
        app: client-app
    spec:
      serviceAccountName: default
      initContainers: # IPTables rules are updated in init container
      - image: public.ecr.aws/d2c6w7a3/iptables
        name: iptables-init
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
        command: # Adding --uid-owner 101 here to prevent traffic from the SigV4 proxy itself from being redirected, which prevents an infinite loop
        - /bin/sh
        - -c
        - >
          iptables -t nat -N EGRESS_PROXY;
          iptables -t nat -A OUTPUT -p tcp -d 169.254.171.0/24 -j EGRESS_PROXY;
          iptables -t nat -A EGRESS_PROXY -m owner --uid-owner 101 -j RETURN;
          iptables -t nat -A EGRESS_PROXY -p tcp -j REDIRECT --to-ports 8080;
      containers:
      - name: app
        image: alpine/curl
        command: ["/bin/sh", "-c", "sleep infinity"]
      - name: sigv4proxy
        image: public.ecr.aws/aws-observability/aws-sigv4-proxy:latest
        args: [
          "--unsigned-payload",
          "--log-failed-requests",
          "-v", "--log-signing-process",
          "--name", "vpc-lattice-svcs",
          "--region", "us-west-2",
          "--upstream-url-scheme", "http"
        ]
        ports:
        - containerPort: 8080
          name: proxy
          protocol: TCP
        securityContext:
          runAsUser: 101
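
Save the manifest (for example as client-app.yaml, a filename we choose here) and apply it:

kubectl apply -f client-app.yaml
kubectl rollout status deploy/client-app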

We can verify the setup by running curl against the /get endpoint of the service; it responds with HTTP 200 OK.

➜  kubectl get gateway -o yaml | yq '.items[0].status.addresses[].value'
another-httpbin-default-03422a15c25e5fca4.7d67968.vpc-lattice-svcs.us-west-2.on.aws
httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws

➜ VPC_LATTICE_SERVICE_ENDPOINT=http://httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws/get
➜ kubectl exec -c app -ti deploy/client-app -- curl $VPC_LATTICE_SERVICE_ENDPOINT

{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip", 
    "Host": "httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws", 
    "User-Agent": "curl/8.0.1", 
    "X-Amz-Content-Sha256": "UNSIGNED-PAYLOAD", 
    "X-Amzn-Source-Vpc": "vpc-027db8599a32b83e2"
  }, 
  "origin": "192.168.46.245", 
  "url": "http://httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws/get"
}

We can verify that the proxy added the headers by checking the logs of the proxy container: the Authorization header, as well as other headers such as x-amz-content-sha256, x-amz-date, and x-amz-security-token, are added to the request.

➜ kubectl logs deploy/client-app -c sigv4proxy
time="2023-08-07T10:14:59Z" level=debug msg="signed request" region=us-west-2 service=vpc-lattice-svcs
time="2023-08-07T10:14:59Z" level=debug msg="proxying request" request="GET /get HTTP/1.1\r\nHost: httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws\r\nAccept: */*\r\nAuthorization: AWS4-HMAC-SHA256 Credential=ASIARKUGXKBDQVWU6BFX/20230807/us-west-2/vpc-lattice-svcs/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-security-token, Signature=<redacted>\r\nUser-Agent: curl/8.0.1\r\nX-Amz-Content-Sha256: UNSIGNED-PAYLOAD\r\nX-Amz-Date: 20230807T101459Z\r\nX-Amz-Security-Token: IQoJb3JpZ2luX2VjEMr//////////<redacted>==\r\n\r\n"

Now we can point VPC_LATTICE_SERVICE_ENDPOINT at the second hostname. Because no application code changes are involved, the same client can connect to multiple Amazon VPC Lattice services.

➜ VPC_LATTICE_SERVICE_ENDPOINT=http://another-httpbin-default-03422a15c25e5fca4.7d67968.vpc-lattice-svcs.us-west-2.on.aws/get
➜ kubectl exec -c app -ti deploy/client-app -- curl $VPC_LATTICE_SERVICE_ENDPOINT

{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip", 
    "Host": "another-httpbin-default-03422a15c25e5fca4.7d67968.vpc-lattice-svcs.us-west-2.on.aws", 
    "User-Agent": "curl/8.0.1", 
    "X-Amz-Content-Sha256": "UNSIGNED-PAYLOAD", 
    "X-Amzn-Source-Vpc": "vpc-027db8599a32b83e2"
  }, 
  "origin": "192.168.32.152", 
  "url": "http://another-httpbin-default-03422a15c25e5fca4.7d67968.vpc-lattice-svcs.us-west-2.on.aws/get"
}

Using Kyverno to auto-inject sidecar and init containers

Now we use Kyverno to inject the sidecar and init containers automatically. For clusters with Kyverno installed, we can write a ClusterPolicy for the injection. If a Deployment object is annotated with vpc-lattices-svcs.amazonaws.com/agent-inject set to "true", the Deployment is patched with the sidecar and init containers.

The environment variable AWS_REGION needs to be specified as well.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-sidecar
  annotations:
    policies.kyverno.io/title: Inject Sidecar Container
spec:
  rules:
  - name: inject-sidecar
    match:
      any:
      - resources:
          kinds:
          - Deployment
    mutate:
      patchStrategicMerge:
        spec:
          template:
            metadata:
              annotations:
                (vpc-lattices-svcs.amazonaws.com/agent-inject): "true"
            spec:
              initContainers: # IPTables rules are updated in init container
              - image: public.ecr.aws/d2c6w7a3/iptables
                name: iptables-init
                securityContext:
                  capabilities:
                    add:
                    - NET_ADMIN
                command: # Adding --uid-owner 101 here to prevent traffic from the SigV4 proxy itself from being redirected, which prevents an infinite loop
                - /bin/sh
                - -c
                - >
                  iptables -t nat -N EGRESS_PROXY;
                  iptables -t nat -A OUTPUT -p tcp -d 169.254.171.0/24 -j EGRESS_PROXY;
                  iptables -t nat -A EGRESS_PROXY -m owner --uid-owner 101 -j RETURN;
                  iptables -t nat -A EGRESS_PROXY -p tcp -j REDIRECT --to-ports 8080;
              containers: 
              - name: sigv4proxy
                env:
                 - name: AWS_REGION
                   value: "us-west-2"
                image: public.ecr.aws/aws-observability/aws-sigv4-proxy:latest
                args: [
                  "--unsigned-payload",
                  "--log-failed-requests",
                  "-v", "--log-signing-process",
                  "--name", "vpc-lattice-svcs",
                  "--region", \$(AWS_REGION),
                  "--upstream-url-scheme", "http"
                ]
                ports:
                - containerPort: 8080
                  name: proxy
                  protocol: TCP
                securityContext:
                  runAsUser: 101
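
Save the policy (for example as inject-sidecar-policy.yaml, a name we choose here) and apply it to the cluster:

kubectl apply -f inject-sidecar-policy.yaml
kubectl get clusterpolicy inject-sidecar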

With this approach, when a Kubernetes Deployment YAML is annotated with vpc-lattices-svcs.amazonaws.com/agent-inject: "true", the Deployment is injected with the sidecar and init containers.

The client YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vpc-lattice-client
  labels:
    app: vpc-lattice-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vpc-lattice-client
  template:
    metadata:
      labels:
        app: vpc-lattice-client
      annotations:
       vpc-lattices-svcs.amazonaws.com/agent-inject: "true"
    spec:
      serviceAccountName: default
      containers:
      - name: app
        image: alpine/curl
        command: ["/bin/sh", "-c", "sleep infinity"]

The sidecar is injected automatically.

➜ kubectl describe deploy vpc-lattice-client

Name: vpc-lattice-client
Namespace: default
CreationTimestamp: Thu, 20 Jul 2023 11:01:32 +0800
Labels: app=vpc-lattice-client
Annotations: deployment.kubernetes.io/revision: 1
policies.kyverno.io/last-applied-patches: inject-sidecar.inject-sidecar.kyverno.io: added /spec/template/spec/containers/0
...

The patched YAML manifest looks like the following:

➜ kubectl get deploy vpc-lattice-client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"vpc-lattice-client"},"name":"vpc-lattice-client","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"vpc-lattice-client"}},"template":{"metadata":{"annotations":{"vpc-lattices-svcs.amazonaws.com/agent-inject":"true"},"labels":{"app":"vpc-lattice-client"}},"spec":{"containers":[{"command":["/bin/sh","-c","sleep infinity"],"env":[{"name":"HTTP_PROXY","value":"localhost:8080"}],"image":"nicolaka/netshoot","name":"app"}],"serviceAccountName":"default"}}}}
    policies.kyverno.io/last-applied-patches: |
      inject-sidecar.inject-sidecar.kyverno.io: added /spec/template/spec/containers/0
  creationTimestamp: "2023-07-20T03:01:
  labels:
    app: vpc-lattice-client
  name: vpc-lattice-client
  namespace: default
spec:
  selector:
    matchLabels:
      app: vpc-lattice-client
  template:
    metadata:
      annotations:
        vpc-lattices-svcs.amazonaws.com/agent-inject: "true"
      creationTimestamp: null
      labels:
        app: vpc-lattice-client
    spec:
      containers:
      - args:
        - --unsigned-payload
        - --log-failed-requests
        - -v
        - --log-signing-process
        - --name
        - vpc-lattice-svcs
        - --region
        - $(AWS_REGION)
        - --upstream-url-scheme
        - http
        env:
        - name: AWS_REGION
          value: us-west-2
        image: public.ecr.aws/aws-observability/aws-sigv4-proxy:latest
        imagePullPolicy: Always
        name: sigv4proxy
        ports:
        - containerPort: 8080
          name: proxy
          protocol: TCP
        resources: {}
        securityContext:
          runAsUser: 101
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - command:
        - /bin/sh
        - -c
        - sleep infinity
        image: alpine/curl
        imagePullPolicy: Always
        name: app
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          iptables -t nat -N EGRESS_PROXY; iptables -t nat -A OUTPUT -p tcp -d 169.254.171.0/24 -j EGRESS_PROXY; iptables -t nat -A EGRESS_PROXY -m owner --uid-owner 101 -j RETURN; iptables -t nat -A EGRESS_PROXY -p tcp -j REDIRECT --to-ports 8080;
        image: public.ecr.aws/d2c6w7a3/iptables
        name: iptables-init
        securityContext:
          capabilities:
            add:
            - NET_ADMIN

  ...

We can also verify that the client can access the Amazon VPC Lattice service successfully:

❯ VPC_LATTICE_SERVICE_ENDPOINT=http://httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws/get
kubectl exec -c app -ti deploy/vpc-lattice-client -- curl $VPC_LATTICE_SERVICE_ENDPOINT

{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip", 
    "Host": "httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws", 
    "User-Agent": "curl/8.0.1", 
    "X-Amz-Content-Sha256": "UNSIGNED-PAYLOAD", 
    "X-Amzn-Source-Vpc": "vpc-027db8599a32b83e2"
  }, 
  "origin": "192.168.32.152", 
  "url": "http://httpbin-default-09ff9bb5d43b72048.7d67968.vpc-lattice-svcs.us-west-2.on.aws/get"
}

Cleaning up

To avoid incurring future charges, delete all resources, including the Amazon VPC Lattice resources and the Amazon EKS cluster, using the following commands:

kubectl delete -f httpbin.yaml
kubectl delete -f httpbin-route.yaml
kubectl delete -f another-httpbin.yaml
kubectl delete -f another-httpbin-route.yaml
kubectl delete -f my-hotel-gateway.yaml
kubectl delete -f gatewayclass.yaml
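## If you also created the client Deployments, the Kyverno policy, and the IAM role
## for the service account in this walkthrough, remove them as well (names as used above)
kubectl delete deploy client-app vpc-lattice-client
kubectl delete clusterpolicy inject-sidecar
eksctl delete iamserviceaccount --cluster=$CLUSTER_NAME --namespace=$NAMESPACE --name=$SERVICE_ACCOUNT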

eksctl delete cluster --name $CLUSTER_NAME

Conclusion

In this post, we showed you how to implement AWS IAM authentication for Amazon VPC Lattice services with the following solution:

  • Use an init container to run iptables commands that intercept traffic to VPC Lattice.
  • Use Kyverno to inject the sidecar and init containers automatically.
  • Enable the caller service to connect to multiple services in the VPC Lattice service network with AWS IAM authentication.

We hope the information shared in this blog is useful if you are building a solution based on VPC Lattice and would like to take advantage of its AWS IAM authentication to enhance your security posture. For more information about Amazon VPC Lattice, refer to the documentation and additional blogs.

Darren Lin

Darren Lin is a Cloud Native Specialist Solutions Architect at AWS who focuses on domains such as Linux, Kubernetes, containers, observability, and open source technologies. In his spare time, he likes to work out and have fun with his family.

Frank Fan

Frank Fan, an AWS Senior Container Specialist Solutions Architect, brings over 20 years of experience in designing and implementing large-scale technology transformations. As an advocate for application modernization, he specializes in containerization and oversees large-scale migration and modernization initiatives. Frank holds certifications as a Certified Kubernetes Application Developer, AWS Certified Solutions Architect - Professional, and VMware Certified Design Expert.