How can I automate the configuration of an HTTP proxy for Amazon EKS worker nodes with user data?

Last updated: 2020-04-09

I want to automate the configuration of an HTTP proxy for Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes with user data.

Resolution

1.    Find your cluster's IP CIDR block:

$ kubectl get service kubernetes -o jsonpath='{.spec.clusterIP}'; echo

The preceding command returns either 10.100.0.1 or 172.20.0.1, which means that your cluster IP CIDR block is 10.100.0.0/16 or 172.20.0.0/16, respectively.
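Because the command returns one of only two well-known values, you can map it to the CIDR block in a small shell helper. This is a sketch, not part of the official procedure; the cluster_cidr function name is an assumption:

```shell
# Hypothetical helper (not in the original procedure): map the clusterIP
# returned in step 1 to the matching cluster IP CIDR block.
cluster_cidr() {
  case "$1" in
    10.100.*) echo "10.100.0.0/16" ;;
    172.20.*) echo "172.20.0.0/16" ;;
    *)        echo "unknown" ;;
  esac
}

# Usage (assumes kubectl is configured for your cluster):
# cluster_cidr "$(kubectl get service kubernetes -o jsonpath='{.spec.clusterIP}')"
```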

2.    Create a ConfigMap file named proxy-env-vars-config.yaml based on the output from the command in step 1.

If the output has an IP from the range 172.20.x.x, then structure your ConfigMap file as follows:

apiVersion: v1
kind: ConfigMap
metadata:
   name: proxy-environment-variables
   namespace: kube-system
data:
   HTTP_PROXY: http://customer.proxy.host:proxy_port
   HTTPS_PROXY: http://customer.proxy.host:proxy_port
   NO_PROXY: 172.20.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com

Note: Replace VPC_CIDR_RANGE with the IPv4 CIDR block of your cluster's VPC.

If the output has an IP from the range 10.100.x.x, then structure your ConfigMap file as follows:

apiVersion: v1
kind: ConfigMap
metadata:
   name: proxy-environment-variables
   namespace: kube-system
data:
   HTTP_PROXY: http://customer.proxy.host:proxy_port
   HTTPS_PROXY: http://customer.proxy.host:proxy_port
   NO_PROXY: 10.100.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com

Note: Replace VPC_CIDR_RANGE with the IPv4 CIDR block of your cluster's VPC.
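If you prefer to script the substitution instead of editing the file by hand, a sed one-liner can fill in the placeholder. This is a sketch; the fill_vpc_cidr name is an assumption, and the CIDR value can come from, for example, aws ec2 describe-vpcs:

```shell
# Hypothetical sketch: substitute the VPC_CIDR_RANGE placeholder in the
# ConfigMap file. The CIDR can be retrieved with, for example:
#   aws ec2 describe-vpcs --vpc-ids <vpc-id> --query 'Vpcs[0].CidrBlock' --output text
fill_vpc_cidr() {
  # $1 = VPC CIDR block, $2 = path to the ConfigMap file
  sed -i "s|VPC_CIDR_RANGE|$1|g" "$2"
}
```

For example, fill_vpc_cidr 192.168.0.0/16 proxy-env-vars-config.yaml replaces every occurrence of the placeholder before you apply the ConfigMap.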

If you're building an Amazon EKS cluster with private API server endpoint access, private subnets, and no internet access, then you must create VPC endpoints for the following services:

Amazon Elastic Container Registry (Amazon ECR)
Amazon Simple Storage Service (Amazon S3)
Amazon Elastic Compute Cloud (Amazon EC2)
Amazon Virtual Private Cloud (Amazon VPC)

For example, you can use the following endpoints:

api.ecr.us-east-1.amazonaws.com
dkr.ecr.us-east-1.amazonaws.com
s3.amazonaws.com
s3.us-east-1.amazonaws.com
ec2.us-east-1.amazonaws.com

Important: You must add the public endpoint subdomain to the NO_PROXY variable. For example, add the .s3.us-east-1.amazonaws.com domain for Amazon S3 in the us-east-1 AWS Region.
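The VPC endpoint service names follow a predictable com.amazonaws.<region>.* pattern, so you can generate them per Region. This is a sketch; the endpoint_services name is an assumption:

```shell
# Hypothetical helper: build the VPC endpoint service names for a given Region.
# Each name can then be passed to: aws ec2 create-vpc-endpoint --service-name ...
endpoint_services() {
  region="$1"
  printf '%s\n' \
    "com.amazonaws.$region.ecr.api" \
    "com.amazonaws.$region.ecr.dkr" \
    "com.amazonaws.$region.s3" \
    "com.amazonaws.$region.ec2"
}
```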

3.    Verify that the NO_PROXY variable in configmap/proxy-environment-variables (used by the kube-proxy and aws-node pods) includes the Kubernetes cluster IP address space. For example, 10.100.0.0/16 is used in the preceding code sample for the ConfigMap file where the IP range is from 10.100.x.x.
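You can script this verification by treating NO_PROXY as a comma-separated list and checking for the cluster CIDR as an exact entry. This is a sketch; the no_proxy_has_cidr name is an assumption:

```shell
# Hypothetical check: confirm that a NO_PROXY value contains the cluster
# IP CIDR block as a comma-separated entry.
no_proxy_has_cidr() {
  # $1 = NO_PROXY value, $2 = cluster IP CIDR block
  case ",$1," in
    *",$2,"*) return 0 ;;
    *)        return 1 ;;
  esac
}

# Usage against the live ConfigMap (assumes kubectl access):
# no_proxy_has_cidr "$(kubectl get configmap proxy-environment-variables \
#   -n kube-system -o jsonpath='{.data.NO_PROXY}')" 10.100.0.0/16
```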

4.    Apply the ConfigMap:

$ kubectl apply -f /path/to/yaml/proxy-env-vars-config.yaml

5.    To configure the Docker daemon and kubelet, inject user data into your worker nodes. See the following example:

Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0
--==BOUNDARY==
Content-Type: text/cloud-boothook; charset="us-ascii"

#Set the proxy hostname and port
PROXY="proxy.local:3128"
MAC=$(curl -s http://169.254.169.254/latest/meta-data/mac/)
VPC_CIDR=$(curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/$MAC/vpc-ipv4-cidr-blocks | xargs | tr ' ' ',')

#Create the docker systemd directory
mkdir -p /etc/systemd/system/docker.service.d

#Configure yum to use the proxy
cloud-init-per instance yum_proxy_config cat << EOF >> /etc/yum.conf
proxy=http://$PROXY
EOF

#Set the proxy for future processes, and use as an include file
cloud-init-per instance proxy_config cat << EOF >> /etc/environment
http_proxy=http://$PROXY
https_proxy=http://$PROXY
HTTP_PROXY=http://$PROXY
HTTPS_PROXY=http://$PROXY
no_proxy=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com
NO_PROXY=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com
EOF

#Configure docker with the proxy
cloud-init-per instance docker_proxy_config tee <<EOF /etc/systemd/system/docker.service.d/proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF

#Configure the kubelet with the proxy
cloud-init-per instance kubelet_proxy_config tee <<EOF /etc/systemd/system/kubelet.service.d/proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF

#Reload the daemon and restart docker to reflect proxy configuration at launch of instance
cloud-init-per instance reload_daemon systemctl daemon-reload 
cloud-init-per instance enable_docker systemctl enable --now --no-block docker

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
set -o xtrace

#Set the proxy variables before running the bootstrap.sh script
set -a
source /etc/environment

/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}

# Use the cfn-signal only if the node is created through an AWS CloudFormation stack and needs to signal back to an AWS CloudFormation resource (CFN_RESOURCE_LOGICAL_NAME) that waits for a signal from this EC2 instance to progress through either:
# - CreationPolicy https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html
# - UpdatePolicy https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
# cfn-signal will signal back to AWS CloudFormation using https transport, so set the proxy for an HTTPS connection to AWS CloudFormation
/opt/aws/bin/cfn-signal \
    --exit-code $? \
    --stack  ${AWS::StackName} \
    --resource CFN_RESOURCE_LOGICAL_NAME  \
    --region ${AWS::Region} \
    --https-proxy $HTTPS_PROXY

--==BOUNDARY==--

Important: You must update or create yum, docker, and kubelet configuration files before starting the Docker daemon and kubelet.

For an example of user data injected into worker nodes using an AWS CloudFormation template that's launched from the AWS Management Console, see Launching Amazon EKS Linux Worker Nodes.
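If you pass the user data through an EC2 launch template instead of a CloudFormation template, the MIME multipart document must be base64-encoded first. This is a sketch; the encode_user_data function, the user-data.mime file name, and the launch template name are assumptions:

```shell
# Hypothetical sketch: base64-encode the MIME multipart user data for use in
# an EC2 launch template (UserData must be base64 in the API).
encode_user_data() {
  # $1 = path to the user data file; prints the base64 encoding on stdout
  base64 -w 0 "$1"
}

# Example (assumes the AWS CLI is configured):
# aws ec2 create-launch-template --launch-template-name eks-proxy-workers \
#   --launch-template-data "{\"UserData\":\"$(encode_user_data user-data.mime)\"}"
```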

6.    Update the aws-node and kube-proxy pods:

$ kubectl patch -n kube-system -p '{ "spec": {"template": { "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset aws-node
$ kubectl patch -n kube-system -p '{ "spec": {"template":{ "spec": { "containers": [ { "name": "kube-proxy", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset kube-proxy

If you change the ConfigMap, then apply the updates, and set the ConfigMap in the pods again. See the following example:

$ kubectl set env daemonset/kube-proxy --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'
$ kubectl set env daemonset/aws-node --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'

Important: If you upgrade the kube-proxy or aws-node objects, then you must reapply any YAML modifications that you made to them. To update the objects to their default configuration, use the eksctl utils update-kube-proxy or eksctl utils update-aws-node commands.

Tip: If the proxy loses connectivity to the API server, then the proxy becomes a single point of failure and your cluster's behavior can be unpredictable. To prevent your proxy from becoming a single point of failure, run your proxy behind a service discovery namespace or load balancer.

7.    Confirm that the proxy variables are used in the kube-proxy and aws-node pods:

$ kubectl describe pod kube-proxy-xxxx -n kube-system

The output should look similar to the following:

Environment:
      HTTPS_PROXY:  <set to the key 'HTTPS_PROXY' of config map 'proxy-environment-variables'>  Optional: false
      HTTP_PROXY:   <set to the key 'HTTP_PROXY' of config map 'proxy-environment-variables'>   Optional: false
      NO_PROXY:     <set to the key 'NO_PROXY' of config map 'proxy-environment-variables'>     Optional: false

If you're not using AWS PrivateLink, then verify access through your proxy server to the API endpoints for Amazon EC2, Amazon ECR, and Amazon S3.
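One way to verify access is to request each endpoint through the proxy and inspect the HTTP status code; curl honors the HTTPS_PROXY variable already present in the node's environment. This is a sketch to run from a worker node; the check_endpoint name is an assumption:

```shell
# Hypothetical check: request an AWS API endpoint and print the HTTP status
# code. curl uses the HTTPS_PROXY variable from the environment, so run this
# on a worker node with /etc/environment loaded.
check_endpoint() {
  # $1 = endpoint hostname; prints 000 when the endpoint is unreachable
  curl -s -o /dev/null -w '%{http_code}' --max-time 10 "https://$1"
}

# check_endpoint ec2.us-east-1.amazonaws.com
# check_endpoint api.ecr.us-east-1.amazonaws.com
```

Any status code other than 000 means the TLS connection through the proxy reached the endpoint.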

