AWS Open Source Blog
Deploying the AWS IAM Authenticator to kops
This post is an updated version of Deploying the Heptio Authenticator to kops. Heptio Authenticator has since been donated to the Cloud Provider Special Interest Group (SIG), allowing the project to be worked on collaboratively. Now, instead of manually configuring the Authenticator, you can use kops primitives to deploy it automatically when a cluster is created. This post describes this newer, simpler process.
Managing authentication protocols is typically an onerous task, requiring admins to maintain a list of acceptable users, validate permissions on an ongoing basis for each user, prune users that don’t need access, and even periodically recycle token- and certificate-based access. The more systems that need to be managed, the more complicated these tasks become. That is why Heptio, an AWS partner in the AWS Partner Network, and AWS created the AWS IAM Authenticator, which allows you to have federated authentication using AWS Identity and Access Management (IAM).
Getting Started
To get started, you’ll need a Kubernetes cluster, and the easiest way to get one up and running is to use kops. The first step is to install the kops binary (the various installation options are explained in the kops documentation). If you’re using macOS, you can follow along here:
brew update && brew install kops
After the install completes, verify it by running:
$ kops version
Version 1.11.1 (git-0f2aa8d30)
You will also need the Kubernetes command line tool, kubectl; you can install this using Homebrew as well:
brew install kubernetes-cli
Next, you need to have an IAM user with the following permissions:
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
Alternatively, a new IAM user may be created and the policies attached as explained in Set up your [kops] environment.
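If you prefer the CLI, the managed policies above can be attached along these lines. This is a sketch, not the official setup steps: the user name kops is an assumption (substitute your own), and the commands require credentials that are allowed to modify IAM.

```shell
# Hypothetical sketch: attach each managed policy listed above to an existing
# IAM user named "kops" (substitute your own user name).
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess \
              IAMFullAccess AmazonVPCFullAccess; do
  aws iam attach-user-policy \
    --user-name kops \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
```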
The last dependency you need to install is the aws-iam-authenticator. The easiest way to install it today is with go get, which requires that you have Go installed on your machine. If you do not, please follow the Go install instructions appropriate to your operating system. Once Go is installed, you can install the authenticator:
go get -u -v sigs.k8s.io/aws-iam-authenticator
Make sure aws-iam-authenticator is in your $PATH by trying to run the binary:
aws-iam-authenticator help
If this fails with -bash: aws-iam-authenticator: command not found, you will need to export a PATH that includes the $GOPATH/bin directory (otherwise, continue to Create Cluster below):
export PATH=${PATH}:$GOPATH/bin
Create Cluster
Now that you have all the dependencies out of the way, let’s create the scaffold for your kops cluster. This takes just a few commands:
export ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export NAME=authenticator.$(cat /dev/random | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 10).k8s.local
export KOPS_STATE_STORE=s3://${NAME}
aws s3 mb $KOPS_STATE_STORE
kops create cluster \
--zones us-west-1a \
--name ${NAME}
If you’d like to deploy your cluster in a region other than us-west-1, make sure to change the --zones flag to an Availability Zone in your region.
These commands create a random $NAME used for both the bucket and the cluster, create the Amazon S3 bucket that stores cluster state, and write the cluster manifest to that bucket.
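The random-name pipeline can be run on its own to see what it produces. This sketch uses a bounded read from /dev/urandom (which, unlike /dev/random, never blocks) instead of an unbounded cat, but is otherwise the same idea: keep only ASCII letters, lowercase them, and take the first ten characters.

```shell
# Draw 1000 random bytes, strip everything but letters, lowercase, keep 10.
suffix=$(head -c 1000 /dev/urandom | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 10)
NAME="authenticator.${suffix}.k8s.local"
echo "$NAME"   # e.g. authenticator.qjzrwxkvbp.k8s.local (suffix varies)
```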
Now that you have the cluster manifest, you can modify it to automatically deploy the aws-iam-authenticator. To do this, run kops edit cluster:
kops edit cluster --name ${NAME}
This command opens an $EDITOR session displaying the cluster manifest stored in Amazon S3. From here, add two keys under spec: authorization.rbac and authentication.aws. When applied, this configures the control plane to enable Kubernetes RBAC and deploy the AWS IAM Authenticator.
# ...
spec:
  # ...
  authentication:
    aws: {}
  authorization:
    rbac: {}
Now save and close this file. Once it’s saved, create the kops cluster with kops update cluster:
kops update cluster ${NAME} --yes
Once that is complete, you can verify the status of the cluster by running the validate command:
kops validate cluster
This process can take five to ten minutes, and validation will initially report an error for the aws-iam-authenticator pod. Inspect it with kubectl describe pod:
kubectl describe po -n kube-system -l k8s-app=aws-iam-authenticator
This shows that the cluster is up, but the aws-iam-authenticator pod couldn’t start: it is waiting for a ConfigMap to be created before it can boot. We’ll now create the AWS IAM policy, role, and ConfigMap.
Create Policy
Before you can give anyone access to the cluster, you first need to create the AWS IAM role and trust policy for your additional admin user. You can do this either via the AWS Console or with the AWS CLI:
export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')
export POLICY=$(echo -n '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::'; echo -n "$ACCOUNT_ID"; echo -n ':root"},"Action":"sts:AssumeRole","Condition":{}}]}')
aws iam create-role \
--role-name KubernetesAdmin \
--description "Kubernetes administrator role (for AWS IAM Authenticator for AWS)." \
--assume-role-policy-document "$POLICY" \
--output text \
--query 'Role.Arn'
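The echo-concatenation above works, but the same trust policy can be built more readably with a heredoc. This is a hypothetical alternative, not the post's official commands; it falls back to a placeholder account ID when $ACCOUNT_ID is unset, and uses python3 only to confirm the result is valid JSON.

```shell
# Build the same sts:AssumeRole trust policy with a heredoc.
ACCOUNT_ID=${ACCOUNT_ID:-123456789012}   # placeholder if not already exported
POLICY=$(cat <<EOF
{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::${ACCOUNT_ID}:root"},"Action":"sts:AssumeRole","Condition":{}}]}
EOF
)
echo "$POLICY" | python3 -m json.tool > /dev/null && echo "valid JSON"
```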
Now you can create a ConfigMap that defines the AWS IAM roles that have access to the cluster:
cat >aws-auth.yaml <<EOF
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
data:
  config.yaml: |
    clusterID: ${NAME}
    server:
      mapRoles:
      - roleARN: arn:aws:iam::${ACCOUNT_ID}:role/KubernetesAdmin
        username: kubernetes-admin
        groups:
        - system:masters
EOF
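Note that the EOF delimiter is deliberately left unquoted: the shell expands ${NAME} and ${ACCOUNT_ID} while writing the file, so both must be set in the current shell. A small demonstration, with a hypothetical cluster name:

```shell
# With an unquoted delimiter, the variable is substituted into the file;
# a quoted delimiter ('EOF') would write the literal "${DEMO_NAME}" instead.
DEMO_NAME=authenticator.example.k8s.local
cat > /tmp/demo-expansion.yaml <<EOF
clusterID: ${DEMO_NAME}
EOF
cat /tmp/demo-expansion.yaml   # clusterID: authenticator.example.k8s.local
```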
With this file created, you can now apply the config:
kubectl apply -f aws-auth.yaml
Once this is deployed, you need to add a new user to your kubeconfig. Do this by opening ~/.kube/config in your favorite editor and creating a user entry, replacing ${NAME} with your cluster name and ${ACCOUNT_ID} with your account ID:
# ...
users:
- name: ${NAME}.exec
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "${NAME}"
      - "-r"
      - "arn:aws:iam::${ACCOUNT_ID}:role/KubernetesAdmin"
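If you'd rather not edit the file by hand, newer kubectl releases can write an equivalent exec-based user entry for you. This is a sketch under the assumption that you have kubectl v1.20 or later, where the --exec-* flags were introduced; older versions require the manual edit above.

```shell
# Hypothetical equivalent of the YAML edit above (requires kubectl v1.20+).
kubectl config set-credentials ${NAME}.exec \
  --exec-api-version=client.authentication.k8s.io/v1alpha1 \
  --exec-command=aws-iam-authenticator \
  --exec-arg=token \
  --exec-arg=-i --exec-arg=${NAME} \
  --exec-arg=-r --exec-arg=arn:aws:iam::${ACCOUNT_ID}:role/KubernetesAdmin
```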
Then modify your context to reference this new user:
kubectl config set-context $NAME --user=$NAME.exec
With all of this in place, you can test authenticating against your cluster:
$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-172-20-53-49.us-west-1.compute.internal   Ready    node     1h    v1.11.7
ip-172-20-54-14.us-west-1.compute.internal   Ready    node     1h    v1.11.7
ip-172-20-62-94.us-west-1.compute.internal   Ready    master   1h    v1.11.7
If you see your cluster’s nodes listed, the authenticator was deployed properly and is using AWS STS to verify each user’s identity.
Teardown
If you’d like to continue using this cluster, you can leave it running. If you’d like to shut the instances down, call the kops delete cluster command:
kops delete cluster --name ${NAME} --yes
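Deleting the cluster removes its AWS resources but leaves the S3 state-store bucket behind. A sketch for removing it as well; note that --force empties the bucket before deleting it, so make sure nothing else lives there:

```shell
# Remove the state-store bucket created at the start of this post.
aws s3 rb ${KOPS_STATE_STORE} --force
```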
Conclusion
The AWS IAM Authenticator gives you the ability to federate your Kubernetes apiserver authentication out to AWS IAM, allowing you to define IAM role-based groups that map to granular Kubernetes RBAC rules. No longer will you have to issue complex commands to manage keys and certificates to grant kubectl access.
Thanks to Peter Rifel for creating the initial write-up on the AWS IAM Authenticator.