Cross-account IAM roles for Kubernetes service accounts

With the introduction of IAM roles for service accounts (IRSA), you can create an IAM role specific to your workload’s requirements in Kubernetes. This also enables the security principle of least privilege by creating fine-grained roles at the pod level instead of the node level. In this blog post, we explore a use case where a pod needs to assume a cross-account role.

Use case

Let’s consider the following scenario with two AWS accounts: developer and shared_content. A development workload running as a pod in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster in the developer account needs to access images stored in the pics S3 bucket in the shared_content account.

Earlier procedure

Prior to IRSA, to access the pics bucket in the shared_content account, we would perform the following steps:

  1. Create an S3_Pics role in the shared_content account with a trust relationship between the shared_content and developer accounts.
  2. Attach a policy to the S3_Pics role that allows read-only access to the pics bucket.
  3. Attach an Amazon EC2 trust relationship policy to the Amazon EKS worker node role in the developer account.
  4. Attach a policy to the EKS worker node role that allows the worker nodes to perform the sts:AssumeRole operation.
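The two policy documents in steps 3 and 4 can be sketched as plain JSON documents. The account ID and role name below are hypothetical placeholders, not values from this walkthrough:

```python
import json

# Hypothetical placeholder for the shared_content account ID
SHARED_CONTENT_ACCOUNT_ID = "111122223333"

# Step 3: EC2 trust relationship attached to the EKS worker node role
ec2_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Step 4: policy allowing the worker nodes to assume S3_Pics in shared_content
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": f"arn:aws:iam::{SHARED_CONTENT_ACCOUNT_ID}:role/S3_Pics",
    }],
}

print(json.dumps(assume_role_policy, indent=2))
```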

One of the major drawbacks of this approach is that all the pods running on the EKS worker nodes have access to the pics bucket in the shared_content account. Additionally, since the Kubernetes scheduler might schedule pods on any of the worker nodes (unless you are using additional pod placement configuration), you would need to attach the policy to all the EKS worker nodes.

IRSA procedure

1. We associate an IAM OpenID Connect (OIDC) provider with our EKS cluster in the developer account.

eksctl utils associate-iam-oidc-provider --cluster development-cluster --approve

2. Create an IAM OIDC provider in the shared_content account. I will use the IAM console since it automatically fetches the root CA thumbprint of the OIDC IdP. However, we recommend verifying the thumbprint by fetching it manually as well.

The Provider URL corresponds to the OpenID Connect provider URL of the EKS cluster in the developer account, and the audience should be set to sts.amazonaws.com.

You can capture the Provider URL using the following command:

aws eks describe-cluster --name development-cluster --query "cluster.identity.oidc.issuer" --output text
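IAM identifies the OIDC provider by the issuer URL without its https:// scheme, so the provider ARN in the shared_content account can be derived directly from the issuer. A small sketch — the issuer URL and account ID below are made-up examples:

```python
# Example issuer URL of the form returned by `aws eks describe-cluster`
issuer = "https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"

# IAM identifies the OIDC provider by the issuer URL without the scheme
oidc_provider = issuer.removeprefix("https://")

# Hypothetical shared_content account ID
shared_content_account_id = "111122223333"
provider_arn = (
    f"arn:aws:iam::{shared_content_account_id}:oidc-provider/{oidc_provider}"
)
print(provider_arn)
```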

3. Create a role in the shared_content account that provides read-only access to objects in the pics bucket. Users federated by the OIDC provider are allowed to assume this role. Sample trust relationship, where OIDC_PROVIDER_URL stands for the provider URL captured above (without the https:// prefix):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::shared_content_account_id:oidc-provider/OIDC_PROVIDER_URL"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "OIDC_PROVIDER_URL:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}

Sample policy document:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "s3-bucket",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::pics/*"
    }
  ]
}
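The trust relationship in this step can also be generated programmatically, which avoids typos in the condition key. A sketch, with a hypothetical account ID and provider URL:

```python
import json

# Hypothetical values for illustration
shared_content_account_id = "111122223333"
oidc_provider = "oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Federated": f"arn:aws:iam::{shared_content_account_id}"
                         f":oidc-provider/{oidc_provider}"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        # The audience condition restricts who can assume the role
        "Condition": {
            "StringEquals": {f"{oidc_provider}:aud": "sts.amazonaws.com"}
        },
    }],
}

print(json.dumps(trust_policy, indent=2))
```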

4. Create a service account in the development-cluster and annotate it with the ARN of the role created in step 3.

$ kubectl create sa s3-shared-content

$ kubectl annotate sa s3-shared-content eks.amazonaws.com/role-arn=arn:aws:iam::shared_content_account_id:role/s3-read-object
serviceaccount/s3-shared-content annotated

5. We now specify the service account in the pod specification. Here’s a snippet of my pod spec:

      serviceAccountName: s3-shared-content
      containers:
      - image: nginx
        name: nginx

Verify the procedure works

Let’s attach a bash shell to this pod and assume the role with an sts:AssumeRoleWithWebIdentity call. The commands below require the AWS CLI and jq to be available inside the pod.

$ kubectl exec -it nginx-8578f9978-7dhdx -- bash

Access the temporary credentials inside your pod.

root@nginx-8578f9978-7dhdx:/# aws sts assume-role-with-web-identity --role-arn $AWS_ROLE_ARN --role-session-name x-account --web-identity-token file://$AWS_WEB_IDENTITY_TOKEN_FILE --duration-seconds 1500 > /tmp/temp_creds.txt
root@nginx-8578f9978-7dhdx:/# export AWS_ACCESS_KEY_ID="$(cat /tmp/temp_creds.txt | jq -r ".Credentials.AccessKeyId")"
root@nginx-8578f9978-7dhdx:/# export AWS_SECRET_ACCESS_KEY="$(cat /tmp/temp_creds.txt | jq -r ".Credentials.SecretAccessKey")"
root@nginx-8578f9978-7dhdx:/# export AWS_SESSION_TOKEN="$(cat /tmp/temp_creds.txt | jq -r ".Credentials.SessionToken")"
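The jq/export steps above can equivalently be done in a few lines of Python by parsing the JSON that assume-role-with-web-identity wrote to /tmp/temp_creds.txt. A sketch — the sample payload here is fabricated for illustration, not real credentials:

```python
import json
import os

def export_credentials(creds_json: str) -> dict:
    """Map an STS assume-role* response onto the env vars the AWS CLI reads."""
    creds = json.loads(creds_json)["Credentials"]
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }

# Example response shape (placeholder values, not real credentials)
sample = json.dumps({
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "example-secret",
        "SessionToken": "example-token",
    }
})

env = export_credentials(sample)
os.environ.update(env)  # subsequent AWS SDK/CLI calls pick these up
```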

Perform a get object operation.

root@nginx-8578f9978-7dhdx:/# aws s3api get-object --bucket pics --key indoor-cat.jpg mypic.jpg
{
    "AcceptRanges": "bytes",
    "ContentType": "image/jpeg",
    "LastModified": "Sat, 04 Jan 2020 22:18:56 GMT",
    "ContentLength": 5556,
    "ETag": "\"f9af61fc922ae24a3ad143e316a67d2d\"",
    "Metadata": {}
}

Thus, we were able to successfully access cross-account resources from our pod using IRSA.

Note: If you perform blue/green updates of your EKS cluster, that is, provisioning a new cluster with a different version and migrating applications to it, the new cluster is associated with a different OIDC provider URL, which requires updating the roles that trust the old provider. To avoid this, you can create a policy in the developer account that allows the pod to assume a role in the shared_content account, and then perform a standard sts:AssumeRole operation within the application.
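That fallback — a plain sts:AssumeRole from application code — might look like the following boto3 sketch. The role ARN is a hypothetical placeholder, and the STS client is passed in as a parameter so it can be stubbed out in tests:

```python
def assume_shared_content_role(sts_client, role_arn: str,
                               session_name: str = "x-account") -> dict:
    """Perform a standard sts:AssumeRole and return the temporary credentials."""
    response = sts_client.assume_role(
        RoleArn=role_arn,
        RoleSessionName=session_name,
        DurationSeconds=1500,
    )
    return response["Credentials"]

# Typical usage inside the pod (requires boto3 and a policy permitting the call):
# import boto3
# creds = assume_shared_content_role(
#     boto3.client("sts"),
#     "arn:aws:iam::111122223333:role/S3_Pics",  # hypothetical role ARN
# )
```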


IRSA can easily be extended to let our pods access cross-account resources in a secure manner. An OIDC IdP is created implicitly with the Amazon EKS cluster, and pods running in the cluster are authenticated with that IdP; roles that trust entities federated by the OIDC provider therefore enable cross-account access. This significantly improves the overall security posture, since you no longer have to assign roles at the Kubernetes worker node level.

Amit Borulkar


Amit is a Principal Solutions Architect with Amazon Web Services (AWS) focused on helping customers craft highly resilient and scalable cloud architectures that address their business problems. He also holds a Master’s degree in Computer Science from North Carolina State University.