AWS Open Source Blog

Deploying the Heptio Authenticator to kops


[Diagram: Authenticator Flow]

The Kubernetes 1.10 release includes alpha support in the client-go library for external ExecCredential providers. This is what powers authentication against Amazon Elastic Container Service for Kubernetes (EKS) clusters while still following one of the core tenets of EKS: providing 100% upstream Kubernetes. To make all this possible, we're using the Heptio Authenticator. This post will walk you through deploying the authenticator to a kops cluster, allowing you to use AWS Identity and Access Management (IAM) to authenticate kubectl.
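
Under the hood, client-go invokes a configured external command and reads an ExecCredential object from its stdout. A minimal sketch of the JSON shape client-go expects (the token value here is a placeholder, not a real STS token):

```shell
# Emit an ExecCredential of the shape client-go expects from an exec plugin.
# The token value is a placeholder; the real authenticator returns a presigned STS token.
CRED='{
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "kind": "ExecCredential",
  "status": {
    "token": "k8s-aws-v1.EXAMPLE"
  }
}'
echo "$CRED"
```

The authenticator prints an object like this when invoked, and client-go attaches the token from .status.token as a bearer token on requests to the apiserver.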

Managing authentication is typically an onerous task: admins must maintain a list of acceptable users, validate each user's permissions on an ongoing basis, prune users that no longer need access, and periodically recycle token- and certificate-based access. The more systems that need to be managed, the more complicated these tasks become. That is why Heptio, an AWS Partner Network (APN) partner, and AWS created the Heptio Authenticator, which allows you to federate authentication through AWS IAM.

Getting Started

To get started, we’re going to need a Kubernetes cluster, and the easiest way to get this up and running is to use kops. First, you’re going to need to install the kops binary. The different installation options are explained in the kops documentation. If you are using macOS, you can follow along here.

brew update && brew install kops

Once it has installed, verify the version by running:

$ kops version
Version 1.9.0

You will also need the Kubernetes command line tool, kubectl; you can install this using Homebrew as well.

brew install kubernetes-cli

Next, we need an IAM user with the following managed policies attached:

  • AmazonEC2FullAccess
  • AmazonRoute53FullAccess
  • AmazonS3FullAccess
  • IAMFullAccess
  • AmazonVPCFullAccess

Alternatively, a new IAM user may be created and the policies attached as explained at github.com/kubernetes/kops/blob/master/docs/aws.md#setup-iam-user.

The last dependency we need to install is the Heptio Authenticator. The easiest way to install it today is using go get, which requires that you have Go installed on your machine. If you do not, follow the Go installation directions for your operating system. Once Go is installed, we can install the authenticator:

go get -u -v github.com/heptio/authenticator/cmd/heptio-authenticator-aws

Make sure heptio-authenticator-aws is in your $PATH by trying to run the binary:

heptio-authenticator-aws help

If this fails with -bash: heptio-authenticator-aws: command not found, then you will need to add the $GOPATH/bin directory to your PATH. Otherwise, continue to Create Cluster.
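
For example, with a default Go setup (where $GOPATH falls back to $HOME/go), the following adds Go's binary directory to your shell's PATH; add it to your shell profile to make it permanent:

```shell
# Add Go's bin directory (where `go get` installs binaries) to PATH
export GOPATH="${GOPATH:-$HOME/go}"
export PATH="$PATH:$GOPATH/bin"
```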

Create Cluster

Now that we have all the dependencies out of the way, let’s create your kops cluster. This is as simple as running one command:

export NAME=authenticator.$(cat /dev/random | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 10).k8s.local
export KOPS_STATE_STORE=s3://${NAME}
aws s3 mb $KOPS_STATE_STORE
kops create cluster \
    --zones us-west-1a \
    --name ${NAME} \
    --yes

If you'd like to deploy your cluster in a region other than us-west-1, make sure to change the --zones flag to an Availability Zone in your region.

These commands generate a random $NAME that is used for both the bucket and the cluster, create the Amazon S3 bucket for storing cluster state, and, finally, create the cluster.
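
As a local illustration of the name generation, the generated name has the shape authenticator.&lt;ten lowercase letters&gt;.k8s.local. This sketch reads from /dev/urandom, which, unlike /dev/random on some systems, never blocks:

```shell
# Generate a cluster name of the same shape as $NAME above
RAND=$(cat /dev/urandom | LC_ALL=C tr -dc "[:alpha:]" | tr '[:upper:]' '[:lower:]' | head -c 10)
NAME="authenticator.${RAND}.k8s.local"
echo "$NAME"
```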

Now you can verify the status of the cluster by running the validate command.

kops validate cluster

This process can take five to ten minutes. When the output ends with Your cluster {{NAME}} is ready, you can start using the cluster. To install the Heptio Authenticator, we first need to modify the kops configuration:

kops edit cluster

This will open your $EDITOR with the kops cluster manifest file. In this file, under .spec, we're going to add the following:

# ...
kubeAPIServer:
  authenticationTokenWebhookConfigFile: /srv/kubernetes/heptio-authenticator-aws/kubeconfig.yaml
hooks:
- name: kops-hook-authenticator-config.service
  before:
    - kubelet.service
  roles: [Master]
  manifest: |
    [Unit]
    Description=Download Heptio AWS Authenticator configs from S3
    [Service]
    Type=oneshot
    ExecStart=/bin/mkdir -p /srv/kubernetes/heptio-authenticator-aws
    ExecStart=/usr/local/bin/aws s3 cp --recursive {{KOPS_STATE_STORE}}/{{NAME}}/addons/authenticator /srv/kubernetes/heptio-authenticator-aws/

Make sure to replace {{KOPS_STATE_STORE}} and {{NAME}} with the values from your created cluster. This adds a new systemd unit that downloads the configuration files from your KOPS_STATE_STORE Amazon S3 bucket; we'll upload those files next. It also tells the apiserver to use the webhook authentication config that we'll provide.

If you are adding this to an existing cluster and you are using a non-default AMI, you need to verify that the AWS command line interface is available. If not, you can change the ExecStart to use Docker instead.

ExecStart=/usr/bin/docker run --net=host --rm -v /srv/kubernetes/heptio-authenticator-aws:/srv/kubernetes/heptio-authenticator-aws quay.io/coreos/awscli@sha256:7b893bfb22ac582587798b011024f40871cd7424b9026595fd99c2b69492791d aws s3 cp --recursive {{KOPS_STATE_STORE}}/{{NAME}}/addons/authenticator /srv/kubernetes/heptio-authenticator-aws/

After you are done, save and close this file. Next, we’ll create the files needed for the authentication configuration.

heptio-authenticator-aws init -i $NAME
aws s3 cp cert.pem ${KOPS_STATE_STORE}/${NAME}/addons/authenticator/cert.pem
aws s3 cp key.pem ${KOPS_STATE_STORE}/${NAME}/addons/authenticator/key.pem
aws s3 cp heptio-authenticator-aws.kubeconfig ${KOPS_STATE_STORE}/${NAME}/addons/authenticator/kubeconfig.yaml
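
The kubeconfig.yaml uploaded above is a standard Kubernetes webhook-authentication config that points the apiserver at the authenticator's local endpoint. It looks roughly like the sketch below; the port, paths, and names shown here are illustrative of the authenticator's defaults, so inspect the file that heptio-authenticator-aws init actually generated rather than copying this:

```yaml
# Illustrative webhook kubeconfig; your generated file is authoritative
apiVersion: v1
kind: Config
clusters:
- name: heptio-authenticator-aws
  cluster:
    certificate-authority: /srv/kubernetes/heptio-authenticator-aws/cert.pem
    server: https://localhost:21362/authenticate
users:
- name: apiserver
contexts:
- name: webhook
  context:
    cluster: heptio-authenticator-aws
    user: apiserver
current-context: webhook
```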

Now that these are created and uploaded to the state store, we can issue an update and then a rolling update to roll the changes out to the cluster. These commands can take five to ten minutes to complete.

kops update cluster --yes
kops rolling-update cluster --yes

Create Policy

Now we can test that our default KubernetesAdmin user still has access to the cluster by running kubectl get nodes; this should return the nodes that are connected to your cluster.

Before we can give anyone access to the cluster, we first need to create the AWS IAM Role and Trust Policy for our additional admin user. This can be done either via the AWS Console or using the AWS CLI.

# Get your account ID
ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')

# Define a role trust policy that opens the role to users in your account (limited by IAM policy)
POLICY=$(echo -n '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::'; echo -n "$ACCOUNT_ID"; echo -n ':root"},"Action":"sts:AssumeRole","Condition":{}}]}')

# Create a role named KubernetesAdmin (will print the new role's ARN)
aws iam create-role \
  --role-name KubernetesAdmin \
  --description "Kubernetes administrator role (for Heptio Authenticator for AWS)." \
  --assume-role-policy-document "$POLICY" \
  --output text \
  --query 'Role.Arn'
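
Because the trust policy above is assembled by string concatenation, it's easy to introduce a stray quote. A quick local sanity check (using a placeholder account ID, and assuming python3 is available) confirms the result parses as valid JSON before you pass it to IAM:

```shell
# Build the trust policy with a placeholder account ID and verify it is valid JSON
ACCOUNT_ID=123456789012  # placeholder; substitute your real account ID
POLICY=$(echo -n '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::'; echo -n "$ACCOUNT_ID"; echo -n ':root"},"Action":"sts:AssumeRole","Condition":{}}]}')
echo "$POLICY" | python3 -m json.tool > /dev/null && echo "valid JSON"
```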

Now we can create a ConfigMap that defines the AWS IAM roles that have access to the cluster.

# authenticator.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: heptio-authenticator-aws
  labels:
    k8s-app: heptio-authenticator-aws
data:
  config.yaml: |    
    clusterID: {{NAME}}
    server:
      mapRoles:
      - roleARN: arn:aws:iam::{{ACCOUNT_ID}}:role/KubernetesAdmin
        username: kubernetes-admin
        groups:
        - system:masters

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: heptio-authenticator-aws
  labels:
    k8s-app: heptio-authenticator-aws
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        k8s-app: heptio-authenticator-aws
    spec:
      # run on the host network (don't depend on CNI)
      hostNetwork: true

      # run on each master node
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists

      # run `heptio-authenticator-aws server` with three volumes
      # - config (mounted from the ConfigMap at /etc/heptio-authenticator-aws/config.yaml)
      # - state (persisted TLS certificate and keys, mounted from the host)
      # - output (output kubeconfig to plug into your apiserver configuration, mounted from the host)
      containers:
      - name: heptio-authenticator-aws
        image: gcr.io/heptio-images/authenticator:v0.3.0
        args:
        - server
        - --config=/etc/heptio-authenticator-aws/config.yaml
        - --state-dir=/var/heptio-authenticator-aws
        - --kubeconfig-pregenerated

        resources:
          requests:
            memory: 20Mi
            cpu: 10m
          limits:
            memory: 20Mi
            cpu: 100m

        volumeMounts:
        - name: config
          mountPath: /etc/heptio-authenticator-aws/
        - name: state
          mountPath: /var/heptio-authenticator-aws/
        - name: output
          mountPath: /etc/kubernetes/heptio-authenticator-aws/
      volumes:
      - name: config
        configMap:
          name: heptio-authenticator-aws
      - name: output
        hostPath:
          path: /srv/kubernetes/heptio-authenticator-aws/
      - name: state
        hostPath:
          path: /srv/kubernetes/heptio-authenticator-aws/

Now we can apply this config to our cluster to deploy the service and the allowed roles for authentication:

kubectl apply -f authenticator.yaml

Once this is deployed, we need to add a new user to our kubeconfig. Open ~/.kube/config with your favorite editor and add the following user, replacing {{NAME}} with your cluster name and {{ACCOUNT_ID}} with your AWS account ID.

# ...
users:
- name: {{NAME}}.exec
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "{{NAME}}"
        - "-r"
        - "arn:aws:iam::{{ACCOUNT_ID}}:role/KubernetesAdmin"

Then we’ll want to modify our context to reference this new user:

kubectl config set-context {{NAME}} --user={{NAME}}.exec

With all of this in place, we can test authenticating against our cluster:

$ kubectl get nodes
NAME                                          STATUS    ROLES     AGE       VERSION
ip-172-20-39-241.us-west-1.compute.internal   Ready     node      5h        v1.9.3
ip-172-20-39-253.us-west-1.compute.internal   Ready     master    3h        v1.9.3
ip-172-20-48-164.us-west-1.compute.internal   Ready     node      5h        v1.9.3

If you see the nodes of your cluster listed, the authenticator was deployed properly and is using AWS STS to verify your identity.

Teardown

If you'd like to continue using this cluster, you can leave it running. If you'd like to shut the instances down, you can do so with the kops delete cluster command:

kops delete cluster --name ${NAME} --yes

Conclusion

The Heptio Authenticator gives you the ability to federate your Kubernetes apiserver authentication out to AWS IAM, allowing you to set up IAM role-based groups that map to granular Kubernetes RBAC rules. No longer will you have to issue complex commands to manage the keys and certificates that grant kubectl access.

Thanks to Peter Rifel for creating the original Heptio Authenticator write-up on GitHub.


Chris Hein


Chris Hein is a Partner Solutions Architect for the AWS Partner Network, where he specializes in all things containers. Before Amazon, Chris worked for a number of large and small companies like GoPro, Sproutling, and Mattel. Follow him at @christopherhein.