AWS Open Source Blog

Authenticating to EKS Using GitHub Credentials with Teleport

July 15, 2020 update: Gravitational has updated the instructions for using Teleport with EKS to account for the latest changes in both products. Please see the Gravitational documentation for further details. 

This post describes how to configure Gravitational’s Teleport as an authentication proxy for Amazon Elastic Kubernetes Service (Amazon EKS), using GitHub as the identity provider for authenticating users. In this example, Teleport is installed onto a stand-alone EC2 instance and configured to authenticate users’ identities with GitHub. Once a user is authenticated, the IAM role assigned to the EC2 instance allows them to impersonate a Kubernetes group, giving them access to the EKS cluster.

Teleport

Teleport is an open source solution from Gravitational that can be configured as a proxy for administering a Kubernetes cluster. The open source version includes support for GitHub authentication, i.e., you can use GitHub as an identity provider for authenticating users. You can also extend Teleport’s session recording and audit log to Kubernetes. For example, regular kubectl exec commands are logged into the audit log, and interactive commands are recorded as regular sessions that can be stored and replayed in the future. The commercial version of Teleport adds support for RBAC and SSO using OAuth/OIDC or SAML; see Teleport Enterprise for more information.

EKS authentication

EKS currently supports two types of authentication: bearer/service account tokens and IAM authentication, which uses webhook token authentication. When users call the Kubernetes API, a webhook passes the authentication token included in the request to IAM. The token, a base64-encoded signed URL, is generated by the AWS Command Line Interface (AWS CLI). In earlier versions of EKS, this was accomplished using the aws-iam-authenticator binary.
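If you want to see this token for yourself, a recent AWS CLI (1.16.156 or later) can generate it on demand; the cluster name below is a placeholder:

# Print the bearer token kubectl would present to the cluster
aws eks get-token --cluster-name [cluster_name]

# Or, with the older aws-iam-authenticator binary
aws-iam-authenticator token -i [cluster_name]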

While IAM authentication is adequate for a majority of use cases, interest in other types of authentication like OIDC has been steadily growing, particularly among organizations where creating new IAM user accounts is politically challenging. With a solution like Teleport, these organizations can use alternate types of authentication, such as GitHub, with EKS.

How does it work?

Teleport leverages Kubernetes user impersonation, where one user can act as another by sending impersonation headers. When you integrate Teleport with EKS, you assign the instance that Teleport runs on an IAM role that is mapped to a Kubernetes RBAC role granting it the ability to impersonate other users and groups. Once users are authenticated by Teleport, the role granted to the instance allows them to assume the permissions of the Kubernetes group specified in the Teleport configuration.
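To see the mechanism Teleport relies on, you can send impersonation headers yourself with kubectl; the user and group names below are illustrative only:

# Make a single request while acting as a member of the teleport-group group.
# kubectl adds Impersonate-User and Impersonate-Group headers to the request.
kubectl get pods --as teleport-user --as-group teleport-group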

Use cases

An alternative to Route 53 resolver endpoints

When you configure the EKS cluster endpoint to be private, the cluster’s endpoint name can only be resolved from within the worker node VPC. As an alternative to creating Route 53 inbound resolver endpoints (which cost approximately $90/month), you can run a Teleport proxy (in each worker node VPC) on a t3.small instance for as little as $14/month.

Dynamic team environments

There are times when creating IAM users is not practical, e.g. when the composition of a development team is changing frequently or when the time to create an IAM user is excessively long. Delegating cluster access to a team lead who can control access through Teleport and GitHub can streamline the onboarding process for new team members.

Prerequisites

You’ll need a functioning EKS cluster. If you’re unfamiliar with creating an EKS cluster, see eksctl.io.
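If you don’t have a cluster handy, eksctl can create one with its default settings; the cluster name and region below are placeholders:

# Create a basic EKS cluster (this typically takes 10-15 minutes)
eksctl create cluster --name [cluster_name] --region [region]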

Integrating Teleport with EKS

Create cluster role and role binding

The first step is to create a cluster role and role binding that will allow the EC2 instance to impersonate other users, groups, and service accounts.

cat << 'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: teleport-impersonation
rules:
- apiGroups:
  - ""
  resources:
  - users
  - groups
  - serviceaccounts
  verbs:
  - impersonate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: teleport-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: teleport-impersonation
subjects:
- kind: Group
  name: teleport-group
- kind: User
  name: system:anonymous
EOF

Requests that are not rejected by other configured authentication methods are treated as anonymous requests, and given a username of system:anonymous and a group of system:unauthenticated.
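As a quick sanity check of the binding, you can ask the API server what an impersonated identity would be allowed to do; teleport-user is just a placeholder identity for these checks:

# Both commands should print "yes" once the ClusterRole and ClusterRoleBinding above are applied
kubectl auth can-i impersonate users --as teleport-user --as-group teleport-group
kubectl auth can-i impersonate groups --as teleport-user --as-group teleport-group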

Create IAM trust policy document

This is the trust policy that allows the EC2 instance to assume a role.

cat > [filename] << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

Create IAM role

ROLE_ARN=$(aws iam create-role --role-name teleport-role --assume-role-policy-document file://[filename] | jq -r '.Role.Arn')

Create IAM policy granting list-clusters and describe-cluster permissions (optional)

This policy is necessary to create a kubeconfig file using the aws eks update-kubeconfig command. If you have another mechanism to create a kubeconfig file on the instance that runs Teleport, this step is not required.

cat > [filename] << 'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "eks:DescribeCluster",
                "eks:ListClusters"
            ],
            "Resource": "*"
        }
    ]
}
EOF
POLICY_ARN=$(aws iam create-policy --policy-name teleport-policy --policy-document file://[filename] | jq -r '.Policy.Arn')
aws iam attach-role-policy --role-name teleport-role --policy-arn $POLICY_ARN

Update aws-auth configmap

This maps the IAM role teleport-role to the Kubernetes group teleport-group.

If you used eksctl to create your cluster, you may need to add the mapUsers section to the aws-auth ConfigMap before executing these commands.

ROLE="    - userarn: $ROLE_ARN\n      username: teleport\n      groups:\n        - teleport-group"
kubectl get -n kube-system configmap/aws-auth -o yaml | awk "/mapUsers: \|/{print;print \"$ROLE\";next}1" > /tmp/aws-auth-patch.yml
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/aws-auth-patch.yml)"

After it’s patched, your aws-auth ConfigMap should look similar to:

mapUsers: |
  - userarn: arn:aws:iam::123456789012:role/teleport-role
    username: teleport
    groups:
      - teleport-group

Installing Teleport

Create EC2 instance

Create an EC2 instance using the Amazon Linux 2 (AL2) AMI in a public subnet in your VPC. Modify the security group associated with that instance to allow port 22 inbound so that you can SSH to the instance after it’s running. You will also need security group rules allowing access to ports 3080 and 3022-3026 so that users can reach the Teleport server from the Internet; this also allows GitHub to post a response back to the Teleport server. Finally, open port 80 so that Let’s Encrypt can perform HTTP validation when issuing SSL certificates.

Type         Protocol   Port Range   Source
Custom TCP   TCP        3022-3026    0.0.0.0/0
Custom TCP   TCP        3080         0.0.0.0/0
HTTP         TCP        80           0.0.0.0/0
SSH          TCP        22           your IP

If you don’t modify the EKS control plane security group to allow port 443 inbound from the Teleport security group, your Teleport instance will not be able to communicate with the Kubernetes API.
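If you prefer the AWS CLI to the console, the rules in the table above (plus the control plane rule just mentioned) can be added along these lines; the security group IDs and your IP address are placeholders:

# Teleport instance security group: Teleport service ports, web UI, Let's Encrypt HTTP validation, and SSH
aws ec2 authorize-security-group-ingress --group-id [teleport_sg_id] --protocol tcp --port 3022-3026 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id [teleport_sg_id] --protocol tcp --port 3080 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id [teleport_sg_id] --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id [teleport_sg_id] --protocol tcp --port 22 --cidr [your_ip]/32

# EKS control plane security group: allow 443 inbound from the Teleport security group
aws ec2 authorize-security-group-ingress --group-id [eks_control_plane_sg_id] --protocol tcp --port 443 --source-group [teleport_sg_id]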

Assign role to instance:

aws iam create-instance-profile --instance-profile-name teleport-role
aws iam add-role-to-instance-profile --instance-profile-name teleport-role --role-name teleport-role
aws ec2 associate-iam-instance-profile --iam-instance-profile Name=teleport-role --instance-id [instance_id]

Replace [instance_id] with the ID of the instance where you intend to install Teleport.

SSH to the instance once it’s in a RUNNING state.

Download pip:

curl -O https://bootstrap.pypa.io/get-pip.py

Install pip:

sudo python get-pip.py

Upgrade AWS CLI:

sudo pip install --upgrade awscli

Download kubectl:

curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin

Download Teleport:

curl https://get.gravitational.com/teleport-v4.0.0-linux-amd64-bin.tar.gz -o teleport-v4.0.0-linux-amd64-bin.tar.gz
tar -xzf ./teleport-v4.0.0-linux-amd64-bin.tar.gz

Install Teleport:

sudo ./teleport/install
PATH=$PATH:/usr/local/bin 
source ~/.bashrc

Create kubeconfig:

aws eks update-kubeconfig --name [cluster_name] --region [region]

Configuring Teleport

Create a systemd unit file (with a unit file in place, Teleport will start automatically when the instance is rebooted):

cat > teleport.service << 'EOF'
[Unit]
Description=Teleport SSH Service
After=network.target

[Service]
Type=simple
Restart=on-failure
ExecStart=/usr/local/bin/teleport start --config=/etc/teleport.yaml --pid-file=/var/run/teleport.pid
ExecReload=/bin/kill -HUP $MAINPID
PIDFile=/var/run/teleport.pid

[Install]
WantedBy=multi-user.target
EOF

Copy to systemd/system and reload:

sudo mv teleport.service /etc/systemd/system/teleport.service
sudo systemctl daemon-reload

Create an SSL certificate for HTTPS. It is absolutely crucial to properly configure TLS for HTTPS when you use Teleport Proxy in production. For simplicity, we are using Let’s Encrypt to issue certificates and nip.io for simple DNS resolution. However, using an Elastic IP and a Route 53 domain name would be more appropriate for production use cases.

Install certbot from EPEL:

sudo wget -r --no-parent -A 'epel-release-*.rpm' http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/
sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm
sudo yum-config-manager --enable epel*
sudo yum repolist all

sudo yum install -y certbot python2-certbot-apache

Generate a certificate for our nip.io hostname (update your email address below):

export TELEPORT_PUBLIC_DNS_NAME="$(curl http://169.254.169.254/latest/meta-data/public-hostname | cut -d '.' -f1).nip.io"
echo $TELEPORT_PUBLIC_DNS_NAME
export EMAIL=[yourname@yourdomain.com]

sudo certbot certonly --standalone \
  --preferred-challenges http \
  -d $TELEPORT_PUBLIC_DNS_NAME \
  -n \
  --agree-tos \
  --email=$EMAIL

Let’s Encrypt certificates are valid for only 90 days, so a cron job should be used to renew them regularly (the Certbot developers recommend attempting renewal twice daily):

echo "39 1,13 * * *       root    certbot renew --no-self-upgrade" | sudo tee -a /etc/crontab
sudo systemctl restart crond

Add a renewal hook to reload teleport configuration upon certificate renewal:

echo "renew_hook = systemctl reload teleport" | sudo tee -a /etc/letsencrypt/renewal/${TELEPORT_PUBLIC_DNS_NAME}.conf

Create config file

If you haven’t already, export the public DNS name of the Teleport instance as an environment variable:

export TELEPORT_PUBLIC_DNS_NAME="$(curl http://169.254.169.254/latest/meta-data/public-hostname | cut -d '.' -f1).nip.io"
cat > teleport.yaml << EOF
# By default, this file should be stored in /etc/teleport.yaml

## IMPORTANT ##
# When editing YAML configuration, pay attention to how your editor handles whitespace; YAML indentation must use spaces, not tabs.
# This section of the configuration file applies to all teleport
# services.
teleport:
    # nodename allows you to assign an alternative name this node can be reached by;
    # by default it's equal to the hostname
    nodename: $TELEPORT_PUBLIC_DNS_NAME
    
    # Data directory where Teleport keeps its data, like keys/users for 
    # authentication (if using the default BoltDB back-end)
    data_dir: /var/lib/teleport

    # Teleport throttles all connections to avoid abuse. These settings allow
    # you to adjust the default limits
    connection_limits:
        max_connections: 1000
        max_users: 250

    # Logging configuration. Possible output values are 'stdout', 'stderr' and 
    # 'syslog'. Possible severity values are INFO, WARN and ERROR (default).
    log:
        output: stderr
        severity: ERROR

    # Type of storage used for keys. You need to configure this to use etcd
    # backend if you want to run Teleport in HA configuration.
    storage:
        type: dir

# This section configures the 'auth service':
auth_service:
    authentication: 
       type: github
    enabled: yes
    # IP and the port to bind to. Other Teleport nodes will be connecting to
    # this port (AKA "Auth API" or "Cluster API") to validate client 
    # certificates 
    listen_addr: 0.0.0.0:3025

# This section configures the 'node service':
ssh_service:
    enabled: yes
    # IP and the port for SSH service to bind to. 
    listen_addr: 0.0.0.0:3022
    # See explanation of labels in "Labeling Nodes" section below
    public_addr: $TELEPORT_PUBLIC_DNS_NAME:3022 
    labels:
        role: master
        type: postgres
    # List (YAML array) of commands to periodically execute and use
    # their output as labels. 
    # See explanation of how this works in "Labeling Nodes" section below
    commands:
    - name: arch
      command: [/usr/bin/uname, -p]
      period: 1h0m0s

# This section configures the 'proxy service':
proxy_service:
    enabled: yes
    # SSH forwarding/proxy address. Command line (CLI) clients always begin their
    # SSH sessions by connecting to this port
    listen_addr: 0.0.0.0:3023

    # Reverse tunnel listening address. An auth server (CA) can establish an 
    # outbound (from behind the firewall) connection to this address. 
    # This will allow users of the outside CA to connect to behind-the-firewall 
    # nodes.
    tunnel_listen_addr: 0.0.0.0:3024

    # List (array) of other clusters this CA trusts.
    # trusted_clusters:
      # - key_file: /path/to/main-cluster.ca
        # Comma-separated list of OS logins allowed to users of this 
        # trusted cluster
        # allow_logins: john,root
        # Establishes a reverse SSH tunnel from this cluster to the trusted
        # cluster, allowing the trusted cluster users to access nodes of this 
        # cluster
        #tunnel_addr: 80.10.0.12:3024

    # The HTTPS listen address to serve the Web UI and also to authenticate the 
    # command line (CLI) users via password+HOTP
    web_listen_addr: 0.0.0.0:3080

    # TLS certificate for the HTTPS connection. Configuring these properly is 
    # critical for Teleport security.
    https_key_file: /etc/letsencrypt/live/$TELEPORT_PUBLIC_DNS_NAME/privkey.pem
    https_cert_file: /etc/letsencrypt/live/$TELEPORT_PUBLIC_DNS_NAME/fullchain.pem
    kubernetes: 
      enabled: yes
      public_addr: $TELEPORT_PUBLIC_DNS_NAME:3026 
      listen_addr: 0.0.0.0:3026
      kubeconfig_file: /home/ec2-user/.kube/config
EOF

Copy this file to the etc directory:

sudo cp ./teleport.yaml /etc/teleport.yaml

Start Teleport:

sudo systemctl start teleport.service 
systemctl status teleport.service 
journalctl -u teleport.service

Log out of the EC2 instance.

Configuring GitHub

Follow GitHub’s instructions to create a new organization from scratch.

Create an OAuth app.

In the Homepage URL field, type the public nip.io DNS name of the Teleport instance followed by port 3080:

https://ec2-12-34-56-78.nip.io:3080

In the Authorization callback URL field, type the public DNS name of the Teleport instance followed by /v1/webapi/github/callback:

https://ec2-12-34-56-78.nip.io:3080/v1/webapi/github/callback

Once you’re done creating the OAuth app, copy the client ID and secret from the OAuth Apps page under your organization’s settings. You will need these values when configuring GitHub authentication.

Finish configuring Teleport

SSH into the Teleport instance.

Create github.yaml:

export TELEPORT_PUBLIC_DNS_NAME="$(curl http://169.254.169.254/latest/meta-data/public-hostname | cut -d '.' -f1).nip.io"

cat > github.yaml << EOF
kind: github
version: v3
metadata:
  # connector name that will be used with 'tsh --auth=github login'
  name: github
spec:
  # client ID of Github OAuth app
  client_id: [your client id]
  # client secret of Github OAuth app
  client_secret: [your client secret]
  # connector display name that will be shown on web UI login screen
  display: Github
  # callback URL that will be called after successful authentication
  redirect_url: https://$TELEPORT_PUBLIC_DNS_NAME:3080/v1/webapi/github/callback
  # mapping of org/team memberships onto allowed logins and roles
  teams_to_logins:
    - organization: [your github org name] # Github organization name
      team: [your github team name] # Github team name within that organization
      # allowed UNIX logins for members of this team
      logins:
        - ec2-user
      # list of Kubernetes groups this Github team is allowed to connect to
      kubernetes_groups: ["system:masters"]
EOF

Apply github.yaml:

sudo /usr/local/bin/tctl create -f ./github.yaml

Log out of the EC2 instance.

Configuring the Teleport client

Download Teleport (macOS):

curl -O https://get.gravitational.com/teleport-v4.0.2-darwin-amd64-bin.tar.gz 
tar -xzf teleport-v4.0.2-darwin-amd64-bin.tar.gz

Client binaries for other operating systems can be found at Teleport Community Edition.

Copy the tsh binary to your path:

cp ./teleport/tsh /usr/local/bin/tsh

Accessing the Kubernetes cluster

Before you run the following command, make a backup of your kubeconfig file, as it will be overwritten when you log in to Teleport. If you have KUBECONFIG exported to another file, run unset KUBECONFIG.
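For example:

# Back up the existing kubeconfig before logging in through Teleport
cp ~/.kube/config ~/.kube/config.backup

# If KUBECONFIG points at another file, clear it so tsh writes to the default location
unset KUBECONFIG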

In the tsh login command, use the public nip.io DNS name of the Teleport instance followed by port 3080:

tsh login --proxy=ec2-12-34-56-78.nip.io:3080

This will automatically open a new browser tab for you to sign in to GitHub. If you’re already signed in to GitHub, the page will say “Login successful.”

You should see output similar to the following in your terminal:

If browser window does not open automatically, open it by clicking on the link:
http://127.0.0.1:55811/7c22cd2d-c93b-41f4-ad70-a829e247fff9
> Profile URL: https://ec2-12-34-56-78.nip.io:3080

Logged in as: johnsmith
Cluster: ec2-12-34-56-78.nip.io
Roles: admin*
Logins: ec2-user
Valid until: 2019-07-09 05:07:13 +0800 CST [valid for 12h0m0s]
Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty

Now you have access to the Kubernetes API server with kubectl. Verify that your kubeconfig has been replaced and you are connected through the proxy by running kubectl config get-contexts.
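For example, after a successful login:

# The current context should now point at the Teleport proxy rather than directly at the EKS endpoint
kubectl config get-contexts

# Confirm that requests are reaching the cluster through the proxy
kubectl get nodes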

Wrapping up

By using Teleport as an authentication proxy, you can use GitHub identities, in addition to IAM roles and users, to authenticate users who require access to an EKS cluster. This can be particularly useful in environments where obtaining an IAM principal is challenging or time-consuming. It may also prove useful in environments where the team composition is in constant flux.

As always, we welcome feedback on this post. If you have a suggestion or question, please click the view comments button below.

Jeremy Cowan

Jeremy Cowan is a Specialist Solutions Architect for containers at AWS, although his family thinks he sells "cloud space". Prior to joining AWS, Jeremy worked for several large software vendors, including VMware, Microsoft, and IBM. When he's not working, you can usually find him on a trail in the wilderness, far away from technology.

Jon Jozwiak

Jon Jozwiak is a Sr. Solutions Architect based in Austin, Texas, with over 20 years of IT experience. He loves helping his customers with containers and cloud computing.