AWS Open Source Blog
Provision AWS Services Through Kubernetes Using the AWS Service Broker
IMPORTANT NOTE – Oct 12, 2018
The steps described in this post are no longer accurate; please refer to the AWS Service Broker GitHub project for up-to-date installation instructions. We’ll be updating this post soon.
There’s no doubt that containers have changed how we build projects. One of the guiding principles of a containerized workflow is to give control back to developers, letting them choose their dependencies, how to consume them, and, most importantly, when they need them. Nowadays, no one can wait three weeks for an ops team to provision a database.
It’s no surprise, then, that the community needed to come up with a way to make sure that, no matter where your containers are run, you will always be able to control your external dependencies in a predictable and simple way. The solution: the Open Service Broker (OSB) API.
Today, I would like to introduce you to the AWS Service Broker, an implementation of the OSB API that will allow you to provision AWS services like RDS and EMR directly through any platform supporting the OSB API. Currently, that list includes Kubernetes, OpenShift, and Cloud Foundry.
We announced the AWS Service Broker at re:Invent 2017 with support for ten initial services. We added an additional eight services in April this year, and we continue to add support for more AWS services on a regular cadence.
The architecture behind the service broker approach in Kubernetes is pretty simple. Kubernetes has the Service Catalog project, which allows OSB-compliant service brokers to register their lists of available services with the catalog. Any user on the platform with the correct permissions can then request any of the available service plans from the Service Catalog. The broker will provision the service and bind the returned information to a set of secrets.
I’ve always felt that the best way to explain something is to show how it works. So, let’s jump straight in so you can go and try it yourself.
What you’ll need
There are a few things you will need in order to follow along with this blog post. I won’t be covering the installation or deployment of these dependencies, but there is a whole list of resources available online that will help you get these set up.
- AWS Account with the ability to create IAM permissions
- kops Cluster (Kubernetes v1.9.3)
- Helm v2.9.0-rc5
- AWS CLI v1.15.11
- Python 2.7.13+
Install the Kubernetes Service Catalog
The Kubernetes Service Catalog is the mechanism through which all services are advertised to the Kubernetes platform. It is the Service Catalog that communicates with the AWS Service Broker when managing AWS Services. There are a variety of ways to install the Service Catalog; I personally find using Helm to be the simplest. The Service Catalog has a CLI called svcat that makes this process even easier.
Download the svcat CLI
This step will download the svcat CLI for Linux but it has a release for every major OS. For full installation instructions, take a look at the documentation here. If you are using Linux, you can run these commands:
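A sketch following the svcat documentation of the time; the download URL may have moved since this was written:

```bash
# Download the svcat binary for Linux (URL from the Service Catalog docs; may have moved)
curl -sLO https://download.svcat.sh/cli/latest/linux/amd64/svcat
chmod +x ./svcat
sudo mv ./svcat /usr/local/bin/
# Verify that the client runs
svcat version --client
```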
Add the Service Catalog chart repository to Helm and install Service Catalog
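With Helm v2 (per the prerequisites above), this looks roughly like:

```bash
# Register the Service Catalog chart repository
helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
# Install Service Catalog into its own namespace (Helm v2 syntax)
helm install svc-cat/catalog --name catalog --namespace catalog
```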
To check whether the installation was successful, you can list the pods launched into the catalog namespace:
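```bash
kubectl get pods --namespace catalog
# You should see the Service Catalog API server and controller-manager pods Running
```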
Permissions, Permissions
Now that you have the Kubernetes Service Catalog deployed, we need to make sure that the AWS Service Broker has the correct permissions to launch AWS Services into your AWS Account.
The AWS Service Broker can take permissions in one of two ways:
- Statically configuring credentials in the config file (works well for on-prem deployments)
- Following the AWS SDK Credential Provider Chain (best practice when deployed on AWS)
The AWS Service Broker uses CloudFormation to manage the lifecycle of any resources created in your AWS account, so we need to create a role that CloudFormation will assume when a service is created.
Download the templates and definitions you will use during this walkthrough
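The commands below sketch that step. The tarball URL is a placeholder, and the policy-document file name is an assumption about the tarball’s contents:

```bash
# Placeholder URL -- substitute the actual templates link for this walkthrough
curl -sLO https://example.com/aws-service-broker-templates.tar.gz
tar -xzf aws-service-broker-templates.tar.gz

# Create the IAM policy CloudFormation will use; the file name below is assumed
# to match the policy document shipped in the tarball
aws iam create-policy \
    --policy-name aws-servicebroker-cfn-deploy \
    --policy-document file://cfn-iam-policy.json \
    --query 'Policy.Arn' \
    --output text
```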
Copy the value of the ARN; you will need it in a later step where I reference ${CFN_POLICY_ARN}.
Create a new role and attach the policies
In this section, we will create the CloudFormation role which will be assumed by the service broker and attach the newly created policy to it. We will also edit the kops config to add additional node roles.
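A sketch of the role creation follows; the role name aws-servicebroker-cfn-deploy is my own placeholder, and the trust policy simply lets CloudFormation assume the role:

```bash
# Trust policy allowing CloudFormation to assume the role
cat > cfn-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "cloudformation.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and print its ARN
aws iam create-role \
    --role-name aws-servicebroker-cfn-deploy \
    --assume-role-policy-document file://cfn-trust-policy.json \
    --query 'Role.Arn' \
    --output text
```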
Copy down the role ARN; you will need it later where I reference ${CFN_ROLE_ARN}.
Now, attach the policy we created earlier to the new role:
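```bash
# Role name matches the placeholder used when creating the role above
aws iam attach-role-policy \
    --role-name aws-servicebroker-cfn-deploy \
    --policy-arn ${CFN_POLICY_ARN}
```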
The CLI produces no output when this command succeeds, so an empty response means the policy was attached.
Edit kops cluster config with additional node permissions
We now need to edit the kops cluster configuration to add additional permissions to the kops deployed nodes. We do this using the kops CLI:
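```bash
# Assumes KOPS_STATE_STORE is exported; otherwise add --state s3://your-state-bucket
kops edit cluster ${CLUSTER_NAME}
```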
This will open the kops cluster manifest file in your $EDITOR. In this file, under .spec, we’re going to add the following.
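The block looks roughly like this; the CloudFormation actions shown are only an illustrative subset, not the full policy:

```yaml
additionalPolicies:
  node: |
    [
      {
        "Effect": "Allow",
        "Action": [
          "cloudformation:CreateStack",
          "cloudformation:DescribeStacks",
          "cloudformation:UpdateStack",
          "cloudformation:DeleteStack"
        ],
        "Resource": "*"
      }
    ]
```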
Inside the tarball you downloaded, there is an example of a complete config file saved as kops-config-example.yaml.
Save the file using the write-to-file command in your $EDITOR, then update the cluster:
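```bash
kops update cluster ${CLUSTER_NAME} --yes
```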
Once the update is done, confirm that the additional policy has been attached to the kops node role. You should now see a policy called additional.nodes.${CLUSTER_NAME}.
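One way to check is to list the inline policies on the kops node role (kops attaches additional policies inline):

```bash
aws iam list-role-policies --role-name nodes.${CLUSTER_NAME}
```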
Install the AWS Service Broker
To make it a little easier, we have created some scripts that will deploy the AWS Service Broker into your Kubernetes cluster. First, download the zip file:
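The URL below is a placeholder; substitute the actual download link for the broker scripts.

```bash
# Placeholder URL -- substitute the actual download link for the broker scripts
curl -sLO https://example.com/aws-service-broker.zip
unzip aws-service-broker.zip -d aws-service-broker
cd aws-service-broker
```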
Inside this new folder you will find a YAML file called k8s-variables. Open the file and edit the following config mappings:
- aws_cloudformation_role_arn: ${CFN_ROLE_ARN}
- region: YOUR_REGION
- vpc_id: VPC_IN_WHICH_KOPS_IS_RUNNING
Leave the rest of the config file as-is.
Now run the installer script:
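The script name below is an assumption; run whichever installer ships in the zip you extracted.

```bash
# Script name is an assumption -- run the installer shipped in the zip
./install_aws_service_broker.sh
```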
When the installer completes, check that the AWS Service Broker pods are running and that the service has been created:
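The namespace here is an assumption; check the installer output for the one it actually uses.

```bash
# Namespace is an assumption -- check the installer output for the actual one
kubectl get pods --namespace aws-service-broker
kubectl get svc --namespace aws-service-broker
```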
Confirm that the AWS Service Broker is registered with Service Catalog
Now that the AWS Service Broker is deployed and running, we can confirm that it has been registered with the Service Catalog and see a list of the services it makes available:
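```bash
# List registered brokers, then the service classes they advertise
svcat get brokers
svcat get classes
```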
Provision a new SQS queue
Let’s go ahead and provision a simple SQS queue to which we can later post messages. Create a file called provision-sqs.yaml with this content:
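(The instance name, class name, and plan name below are assumptions; confirm the real names with svcat get classes and svcat get plans.)

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: os-blog-sqs
  namespace: default
spec:
  # Class and plan names are assumptions -- confirm with `svcat get classes`
  clusterServiceClassExternalName: sqs
  clusterServicePlanExternalName: standard
```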
Now apply the changes using kubectl, and check whether the provisioning succeeded:
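```bash
kubectl apply -f provision-sqs.yaml
# The instance should eventually report a Ready status
svcat get instances
```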
You can also confirm that the SQS queue has been created by using the AWS CLI:
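```bash
# The newly provisioned queue should appear in the list
aws sqs list-queues
```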
Bind the provisioned service for use
Now that the service has been provisioned, we need to bind it so that we can get access to the queue. During the binding process, the broker will create a new set of secrets that you can consume in any pod in your cluster.
Create a file called sqs-demo-binding.yaml with this content:
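(This sketch assumes the instance name from the example above; secretName sets the name of the secret the broker will create.)

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: os-blog-sqs-binding
  namespace: default
spec:
  instanceRef:
    name: os-blog-sqs      # must match the ServiceInstance created earlier
  secretName: os-blog-sqs-binding
```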
Now apply the changes using kubectl:
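```bash
kubectl apply -f sqs-demo-binding.yaml
```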
Let’s confirm that the binding was successful:
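```bash
# The binding should report a Ready status once the broker has responded
svcat get bindings
```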
There should now be a newly-created secret that contains all the information required to consume this service.
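You can inspect the secret’s keys without printing their values:

```bash
kubectl describe secret os-blog-sqs-binding
```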
Attach the secret to any pod
Now that you have the bound secret, you can map it to any pod in your Kubernetes cluster like any other secret. The example below maps the QUEUE_URL and QUEUE_ARN environment variables inside a pod to the QueueURL and QueueARN keys in the os-blog-sqs-binding secret.
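The pod name and image in this sketch are placeholders; the env section is the part that matters:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sqs-demo
spec:
  containers:
  - name: demo
    image: amazonlinux:2   # placeholder image -- use your application's image
    command: ["sleep", "3600"]
    env:
    - name: QUEUE_URL
      valueFrom:
        secretKeyRef:
          name: os-blog-sqs-binding
          key: QueueURL
    - name: QUEUE_ARN
      valueFrom:
        secretKeyRef:
          name: os-blog-sqs-binding
          key: QueueARN
```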
For more information on how the mapping of secrets works in Kubernetes, I suggest reading the official Kubernetes documentation.
And that’s all, folks!
Hopefully, you now understand the workflow of provisioning a new AWS Service through Kubernetes using the AWS Service broker and how to consume it inside your application.
Keep an eye on our Open Source Blog; we will be sharing some patterns we see our customers adopting, complete with sample applications and walkthroughs.