How can I automate the configuration of HTTP proxy for Amazon EKS worker nodes with user data?

Last updated: 2019-10-03

How can I automate the configuration of HTTP proxy for Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes with user data?

Resolution

1.    To get the IP CIDR block of your cluster, run the following command:

kubectl get service kubernetes -o jsonpath='{.spec.clusterIP}'; echo

This command returns either 10.100.0.1 or 172.20.0.1, which means that your cluster IP CIDR block is 10.100.0.0/16 or 172.20.0.0/16, respectively.
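As a quick sketch, you can derive the CIDR block from the returned service IP. The /16 mask below is an assumption that holds for the two Amazon EKS defaults mentioned above; adjust it if your cluster uses a different service CIDR.

```shell
# Derive the cluster CIDR from the service IP returned in step 1.
# The /16 mask is an assumption that matches the two EKS defaults above.
CLUSTER_IP="10.100.0.1"   # example value; substitute your own output
CLUSTER_CIDR="$(echo "$CLUSTER_IP" | cut -d. -f1-2).0.0/16"
echo "$CLUSTER_CIDR"
```

For the example value above, this prints 10.100.0.0/16; use the result in the NO_PROXY values in the following steps.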

2.    Create a ConfigMap file named proxy-env-vars-config.yaml.

If the output from the command in step 1 has an IP from the range 172.20.x.x, then structure your ConfigMap file as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kube-system
data:
  HTTPS_PROXY: http://customer.proxy.host:proxy_port
  HTTP_PROXY: http://customer.proxy.host:proxy_port
  NO_PROXY: 172.20.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,.s3.amazonaws.com,.s3.us-east-1.amazonaws.com

If the output from the command in step 1 has an IP from the range 10.100.x.x, then structure your ConfigMap file as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kube-system
data:
  HTTPS_PROXY: http://customer.proxy.host:proxy_port
  HTTP_PROXY: http://customer.proxy.host:proxy_port
  NO_PROXY: 10.100.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,.s3.amazonaws.com,.s3.us-east-1.amazonaws.com

3.    To apply the ConfigMap, run the following command:

kubectl apply -f /path/to/yaml/proxy-env-vars-config.yaml

Consider the following:

  • If you use a VPC endpoint, add the endpoint's public subdomain to NO_PROXY (for example, .s3.us-east-1.amazonaws.com for an Amazon Simple Storage Service (Amazon S3) endpoint in us-east-1).
  • The kube-dns pod doesn't need a proxy configuration, because it communicates directly with the Kubernetes service.
  • Verify that the NO_PROXY variable in the proxy-environment-variables ConfigMap (used by the kube-proxy and aws-node pods) includes the Kubernetes cluster IP address space.
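A minimal local sanity check for the last point is a sketch like the following; the CIDR and NO_PROXY values are placeholders, so substitute the values you actually use in the ConfigMap.

```shell
# Fail fast if the NO_PROXY string is missing the cluster IP address space.
CLUSTER_CIDR="10.100.0.0/16"   # placeholder; use the value from step 1
NO_PROXY="10.100.0.0/16,localhost,127.0.0.1,169.254.169.254,.internal"
case ",$NO_PROXY," in
  *",$CLUSTER_CIDR,"*) echo "NO_PROXY includes the cluster CIDR" ;;
  *) echo "NO_PROXY is missing the cluster CIDR" >&2; exit 1 ;;
esac
```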

4.    Bootstrap worker nodes to configure the Docker daemon and kubelet by injecting user data into your worker nodes. See the following example.

Important: You must update or create yum, Docker, and kubelet configuration files before starting the Docker daemon and kubelet.

For an example of user data injected into worker nodes using an AWS CloudFormation template that's launched from the AWS Management Console, see Launching Amazon EKS Worker Nodes.

Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0
--==BOUNDARY==
Content-Type: text/cloud-boothook; charset="us-ascii"

#Set the proxy hostname and port
PROXY="proxy.local:3128"
VPC_CIDR=VPC_CIDR_RANGE

#Create the docker systemd directory
mkdir -p /etc/systemd/system/docker.service.d

#Configure yum to use the proxy
cat << EOF >> /etc/yum.conf
proxy=http://$PROXY
EOF

#Set the proxy for future processes, and use as an include file
cat << EOF >> /etc/environment
http_proxy=http://$PROXY
https_proxy=http://$PROXY
HTTP_PROXY=http://$PROXY
HTTPS_PROXY=http://$PROXY
no_proxy=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal
NO_PROXY=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal
EOF

#Configure docker with the proxy
tee <<EOF /etc/systemd/system/docker.service.d/proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF


#Create the kubelet systemd directory
mkdir -p /etc/systemd/system/kubelet.service.d

#Configure the kubelet with the proxy
tee <<EOF /etc/systemd/system/kubelet.service.d/proxy.conf >/dev/null
[Service]
EnvironmentFile=/etc/environment
EOF

--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
set -o xtrace

#Set the proxy variables before running the bootstrap.sh script
set -a
source /etc/environment

/etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
/opt/aws/bin/cfn-signal \
    --exit-code $? \
    --stack ${AWS::StackName} \
    --resource NodeGroup \
    --region ${AWS::Region}

--==BOUNDARY==--
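The drop-in files above work because the EnvironmentFile= directive makes systemd read /etc/environment and export each line into the Docker daemon and kubelet processes, while the set -a and source pair in the shell script part does the same for bootstrap.sh. The following local sketch of that mechanism uses a temporary file so the real /etc/environment is untouched:

```shell
# Simulate the boothook: write proxy variables to an env file, then
# export them the same way the x-shellscript part does (set -a + source).
ENV_FILE="$(mktemp)"
cat << EOF >> "$ENV_FILE"
HTTP_PROXY=http://proxy.local:3128
NO_PROXY=localhost,127.0.0.1,169.254.169.254,.internal
EOF
set -a
. "$ENV_FILE"
set +a
echo "$HTTP_PROXY"
rm -f "$ENV_FILE"
```

With set -a in effect, every variable assigned while sourcing the file is exported, so child processes such as bootstrap.sh inherit the proxy settings.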

5.    To update the aws-node and kube-proxy pods, run the following commands:

kubectl patch -n kube-system -p '{ "spec": {"template": { "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset aws-node
kubectl patch -n kube-system -p '{ "spec": {"template":{ "spec": { "containers": [ { "name": "kube-proxy", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset kube-proxy
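Because a malformed patch string is easy to produce when quoting by hand, it can help to validate the JSON locally before running kubectl patch. This sketch assumes python3 is available on your workstation:

```shell
# Validate the patch payload before sending it to the API server.
PATCH='{ "spec": {"template": { "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }'
echo "$PATCH" | python3 -m json.tool >/dev/null && echo "patch JSON is valid"
```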

6.    If you change the ConfigMap later, apply the updated file, and then set the environment variables from the ConfigMap on the DaemonSets again to roll out the change:

kubectl set env daemonset/kube-proxy --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'
kubectl set env daemonset/aws-node --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'

Important: You must reapply your YAML modifications to the Kubernetes objects kube-proxy or aws-node whenever these objects are upgraded, because upgrades replace their definitions. To reset a ConfigMap to its default value, use the eksctl utils update-kube-proxy or eksctl utils update-aws-node commands.

Important: If the proxy loses connectivity to the API server, then the proxy becomes a single point of failure and your cluster's behavior can become unpredictable. For this reason, it's a best practice to run your proxy behind a service discovery namespace or load balancer, and then scale as needed.

