AWS Open Source Blog

Bringing Cloud Provider Support to Kubernetes on DC/OS


In September 2017, Mesosphere announced it was bringing back support for Kubernetes on its popular DC/OS cluster management solution. The beta release supported creating a cluster; you could then SSH tunnel into the nodes and deploy standard Kubernetes primitives to it. As of early January, support was officially added for the --cloud-provider flag, meaning you can use standard AWS components such as the Classic Load Balancer (ELB) or Elastic Block Store (EBS). In this post, we’ll set up a DC/OS cluster, install Beta Kubernetes on top of it, and finally deploy a service that uses type: LoadBalancer to show how you can add an ELB ingress to your applications running on the DC/OS Kubernetes cluster. This tutorial takes about an hour to complete.

In an earlier post, Michael Ruiz explains what Mesosphere’s DC/OS is and the basics of what it means to run DC/OS on AWS. To summarize: DC/OS is built on Apache Mesos and was made to allow simple management of both large- and small-scale clusters. After productizing the deployment of Apache Mesos, Mesosphere made it easy to deploy “Mesos frameworks” using a push-button catalog called Universe. Within Universe you can find anything from GitLab to Jenkins to the newly added Beta Kubernetes supported by Mesosphere. Running Mesosphere DC/OS with Kubernetes allows you to co-locate productionized big data services like Cassandra and Spark with your Kubernetes-managed container workloads, integrating directly with the Mesos internal DNS provider.

Getting Started

To get started, we need to create a Mesosphere DC/OS cluster. This can be done using either the open source distribution or the commercial enterprise edition; this post uses the open source distribution. First, head over to the DC/OS CloudFormation installation page and select a region and a single or HA master setup; for demonstration purposes, I’m using the single master configuration. Once we’ve chosen a cluster configuration, we’re taken to the AWS CloudFormation console, where we can configure the stack.

After you have entered the stack details, you will be taken to the Options screen. On this page, add a tag with the key KubernetesCluster and the value DemoCluster. This tag is used by the Kubernetes Cloud Controller Manager to tell Kubernetes what to call the cluster when provisioning resources, as well as which cloud resources it has access to.
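If you prefer the command line, the same tag can be applied when launching the stack with the AWS CLI. Here’s a minimal sketch, assuming a stack name of dcos-demo and substituting {template-url} with the template URL from the DC/OS installation page (the template may require additional parameters beyond the keypair):

# CAPABILITY_IAM is required because the template creates IAM roles
aws cloudformation create-stack \
  --stack-name dcos-demo \
  --template-url {template-url} \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=KeyName,ParameterValue={keyname} \
  --tags Key=KubernetesCluster,Value=DemoCluster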

If you are familiar with Kubernetes internals, this tag is the ClusterID. Once the tag has been added, click Next, then Create on the next screen. After the CloudFormation stack has completed successfully, you will have a base DC/OS cluster. We now need to modify the worker instance role’s inline policy to allow Kubernetes on DC/OS to create ELBs, modify tags and instance attributes, and more. To do this, navigate to IAM Roles in the AWS console and search for SlaveRole; from there you can modify the instance policy. Copy the actions below and add them to your instance role’s policy.

"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateRoute",
"ec2:CreateSecurityGroup",
"ec2:DeleteSecurityGroup",
"ec2:DeleteRoute",
"ec2:DescribeRouteTables",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:ModifyInstanceAttribute",
"ec2:RevokeSecurityGroupIngress",
"elasticloadbalancing:AttachLoadBalancerToSubnets",
"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:CreateLoadBalancerPolicy",
"elasticloadbalancing:CreateLoadBalancerListeners",
"elasticloadbalancing:ConfigureHealthCheck",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:DeleteLoadBalancerListeners",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DetachLoadBalancerFromSubnets",
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"

With these actions merged in, the final policy statement should look roughly like the following sketch (the role’s pre-existing actions are not shown, and the Resource is assumed to be left broad, as in the role’s original inline policy):
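{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:CreateSecurityGroup",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteRoute",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:ModifyInstanceAttribute",
        "ec2:RevokeSecurityGroupIngress",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
      ],
      "Resource": "*"
    }
  ]
}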

With the permissions in place, you can log in to your cluster and install the CLI. To get the master endpoint, navigate to the CloudFormation stack and view the Outputs tab. Find the DnsAddress, then copy and paste it into a new browser tab. You will be prompted to set up your bootstrap user. This user is considered the cluster administrator and is used to add additional users.
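If you would rather grab the endpoint from the command line, you can query the stack outputs with the AWS CLI; a quick sketch, again assuming your stack is named dcos-demo:

aws cloudformation describe-stacks \
  --stack-name dcos-demo \
  --query "Stacks[0].Outputs[?OutputKey=='DnsAddress'].OutputValue" \
  --output text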

Once you have logged in, click the cluster name at the top of the left sidebar; this will be the name you entered into the CloudFormation template. In the drop-down, select Install CLI. From there you can copy and paste a code snippet that looks like this:

[ -d /usr/local/bin ] || sudo mkdir -p /usr/local/bin && 
curl https://downloads.dcos.io/binaries/cli/darwin/x86-64/dcos-1.10/dcos -o dcos && 
sudo mv dcos /usr/local/bin && 
sudo chmod +x /usr/local/bin/dcos && 
dcos cluster setup http://my-dcos-elb.us-west-1.elb.amazonaws.com && 
dcos

This snippet downloads the DC/OS CLI binary, moves it into /usr/local/bin, then sets up the CLI against the new DC/OS cluster. Before it completes, it will prompt you to open a URL in your browser (on macOS, it will try to open a browser for you). Log in using the same user you initially logged in with. After logging in, you will be presented with a JSON Web Token (JWT); copy and paste it into your command prompt to finish the CLI configuration.
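To confirm the CLI is talking to your cluster, a quick sanity check:

dcos cluster list   # shows the cluster you just attached to
dcos node           # lists the master and agent nodes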

Now that the cluster is created, the instance policy is in place, and the CLI is set up, you are ready to deploy Kubernetes. To do this, we will use the DC/OS service catalog: navigate to Catalog in the left sidebar, search for “Kubernetes,” and select beta-kubernetes.

On this screen, click Review & Run. You will be presented with the Review dialog; click Edit in the bottom right corner to convert the dialog into a form, then select Kubernetes in the left sidebar. On this form, enter aws (in lowercase) into the provider for cloud services input field.

After you have edited that field, click Review & Run, followed by Run Service. This will take you to the Marathon configuration for the Kubernetes service being deployed. After a couple of minutes, you should see multiple Kubernetes processes running.
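If you prefer to drive the install from the CLI instead of the UI, you can pass the same setting in an options file. This is a sketch, assuming the package exposes the cloud provider setting as kubernetes.cloud_provider; run dcos package describe beta-kubernetes --config to confirm the exact key:

# write the cloud provider option to a file
cat <<'EOF' > options.json
{
  "kubernetes": {
    "cloud_provider": "aws"
  }
}
EOF
# install the package with the custom options
dcos package install beta-kubernetes --options=options.json --yes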

Deploying Services

Now that Kubernetes on DC/OS is up and running, it’s time to deploy our services and make sure everything is set up correctly. To test this, we need a resource managed by the Cloud Controller Manager; a simple use case is provisioning AWS load balancers for Kubernetes Services. In this manifest file, I’ve created a basic microservice architecture which deploys three applications: a webapp and two backend services. We’re using a microservice pattern in which each service is written in its own language and communicates with the others using the internal kube-dns. Along with the applications, we have three Services used to expose them. If you look carefully, you’ll notice that one of the Services uses type: LoadBalancer, which tells Kubernetes to provision a cloud resource and bind it to the service.

apiVersion: v1
kind: Service
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: http-server
    name: http
  - port: 443
    targetPort: http-server
    name: ssl
  type: LoadBalancer

To deploy, we first need to install kubectl by following the directions here; we’ll configure it later. We then need to set up an SSH tunnel binding localhost:9000 to the master node so that we can access the Kubernetes apiserver. You can do this by running the commands shown below. Make sure to replace {keyname} and {ipaddress} with the keypair you used for cluster creation and the IP address of your master node, respectively.

The IP address of the master node can be found in the EC2 console by viewing the master load balancer’s attached instances.

ssh-add ~/.ssh/{keyname}.pem
export MASTER_IP={ipaddress}
ssh -4 -o "UserKnownHostsFile=/dev/null" \
         -o "StrictHostKeyChecking=no" \
         -o "ServerAliveInterval=120" \
         -N -L 9000:apiserver-insecure.kubernetes.l4lb.thisdcos.directory:9000 \
         core@$MASTER_IP

Next, configure your local kubectl by running:

kubectl config set-cluster dcos-k8s --server=http://localhost:9000
kubectl config set-context dcos-k8s --cluster=dcos-k8s --namespace=default
kubectl config use-context dcos-k8s
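With the tunnel running in a separate terminal, verify connectivity before deploying anything:

kubectl cluster-info   # should report the master at http://localhost:9000
kubectl get nodes      # should list the Kubernetes worker nodes running on DC/OS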

Now we can deploy the kube-dns add-on using this manifest file. To install it, use kubectl apply -f URL like so:

$ kubectl apply -f https://raw.githubusercontent.com/christopherhein/aws-kubernetes-on-dcos/master/kube-dns.yml
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment "kube-dns" created

Next, we deploy the applications and services, again using kubectl apply -f URL, and then watch the services, waiting for the Cloud Controller Manager to provision an ELB and associate it with the Kubernetes service.

$ kubectl apply -f https://raw.githubusercontent.com/christopherhein/aws-kubernetes-on-dcos/master/manifest.yml
service "tracks" created
deployment "tracks" created
service "laptimes" created
deployment "laptimes" created
service "webapp" created
deployment "webapp" created
$ kubectl get svc -o wide -w
NAME         CLUSTER-IP       EXTERNAL-IP                       PORT(S)                      AGE       SELECTOR
kubernetes   10.100.0.1       <none>                            443/TCP                      9h        <none>
laptimes     10.100.248.179   <none>                            5000/TCP                     6h        app=laptimes
tracks       10.100.19.121    <none>                            4567/TCP                     6h        app=tracks
webapp       10.100.159.107   xxx.us-west-1.elb.amazonaws.com   80:32027/TCP,443:30090/TCP   6h        app=webapp

After you see the EXTERNAL-IP change from <pending> to xxx.us-west-1.elb.amazonaws.com, the service is ready, and you can open a browser and navigate to the URL.
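Rather than watching the console, you can pull the ELB hostname straight from the service and test it with curl:

ELB_HOST=$(kubectl get svc webapp \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# the ELB DNS name can take a few minutes to resolve and pass health checks
curl -I "http://$ELB_HOST"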

Teardown

After you have finished with the demo and are ready to destroy the cluster, first remove the Kubernetes resources, which will also remove the cloud resources they created. (If this step is not completed, the CloudFormation stack will fail to delete, and you will need to manually remove the resources that were created.)

kubectl delete -f https://raw.githubusercontent.com/christopherhein/aws-kubernetes-on-dcos/master/manifest.yml
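The Cloud Controller Manager may take a moment to tear down the ELB; you can confirm the service and its external endpoint are gone before deleting the stack:

kubectl get svc   # the webapp service and its ELB hostname should no longer be listed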

Then we can navigate back to the CloudFormation console and delete the DC/OS CloudFormation stack.

Conclusion

Now that you’ve deployed your DC/OS cluster and a Kubernetes cluster on top of it, and learned how to expose services to the world, check out Shipping With Porpoise from re:Invent 2017, where I talk about automating and productionizing your CI/CD pipeline using Jenkins, Twistlock, and Weaveworks on DC/OS to submit applications to Kubernetes.


Chris Hein


Chris Hein is a Sr. Developer Advocate for Kubernetes/EKS at Amazon Web Services. Before Amazon, Chris worked for a number of large and small companies including GoPro, Sproutling, and Mattel. Read more from Chris at https://aws.amazon.com/blogs/opensource/author/heichris/ and follow him at @christopherhein.