AWS Open Source Blog

Using Kubernetes Service Catalog and AWS Service Broker on EKS


In a previous post we discussed using the AWS Service Broker on Kubernetes. A lot has changed since then: EKS is now available, and the Service Broker has evolved significantly, so we thought it would be good to revisit the topic with a focus on EKS.

If you are a Kubernetes user, you may have found, as we did, that managing the lifecycle and credentials for disparate services such as databases can be challenging. Aside from having to work out how to provide your application with access to the credentials it needs, it often involves linking multiple disparate tool chains, which adds undifferentiated heavy lifting. As a result, best practices such as least-privilege models become hard to enforce, and are often left up to individual development teams to implement. In this post, we will discuss some patterns that help address these challenges. We’ll take a look at Kubernetes Service Catalog, an extension API that enables applications running in Kubernetes clusters to easily use externally-managed software offerings by consuming service brokers that implement the Open Service Broker API specification. The AWS Service Broker is an open source project that (in combination with Kubernetes Service Catalog) provides a catalog of AWS services that can be managed and connected to your Kubernetes applications using familiar Kubernetes APIs and tooling. We’ll also walk through what using the AWS Service Broker looks like in practice, by provisioning an S3 bucket and connecting it to your application.

Core Service Broker Concepts

First, we want to go over some of the key concepts that we’ll be using in this post. For more details, have a look at the Open Service Broker API Specification and Kubernetes Service Catalog design documentation.

ClusterServiceClass – A Kubernetes resource that Service Catalog generates. Service Catalog fetches the catalog of each installed broker and merges the new catalog entries into ClusterServiceClasses in Kubernetes. In the case of the AWS broker, these classes represent AWS services like S3, SQS, etc.

ClusterServicePlan – Each ClusterServiceClass contains one or more plans. A plan represents a configuration of the service; in the AWS broker, many services have opinionated plans for production and development use cases, as well as custom plans that allow the user to configure all of the available options for the service.

ServiceInstance – An instance of a ClusterServicePlan. With the AWS broker, this will represent an instance of the AWS service created when the provision API is called.

Provision – The provision API is what gets called when creating a new ServiceInstance using kubectl, the api or svcat (the Kubernetes Service Catalog CLI tool). In the case of the AWS Service Broker, the provision call accepts various parameters depending on the plan, which are then used to create the requested AWS services.

Bind – Binding is the API call that requests credentials and metadata from the broker for a given ServiceInstance. The Service Catalog then creates a Kubernetes secret and stores the credentials in it. Applications can then map the secret to gain access to the credentials. For example, with an Amazon RDS database, the bind call returns the endpoint URL, a username, and a password. For services that require IAM at run time, a least-privilege policy is created and attached to an IAM user/role.
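To make the bind concept concrete, here is a hand-written sketch of the kind of secret a bind against an S3 instance produces. The key names match the S3 plan used later in this post, but the values are invented placeholders, base64-encoded as Kubernetes requires:

```yaml
# Hypothetical secret created by Service Catalog after a bind call
# against an S3 ServiceInstance; values are invented placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: s3-binding
  namespace: s3-demo
type: Opaque
data:
  S3_AWS_ACCESS_KEY_ID: QUtJQUVYQU1QTEU=        # "AKIAEXAMPLE"
  S3_AWS_SECRET_ACCESS_KEY: ZXhhbXBsZXNlY3JldA==  # "examplesecret"
  BUCKET_NAME: bXktZGVtby1idWNrZXQ=             # "my-demo-bucket"
```

An application consumes these values by mapping the secret into environment variables or a volume, which is exactly what we will do in the walkthrough below.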


Prerequisites

  • An EKS cluster with at least one node configured; three or more nodes are recommended for HA/production use. For details, see the EKS Getting Started Guide. The steps in this guide should work for any Kubernetes cluster (v1.10 and up), but your mileage may vary.
  • kubectl and aws-iam-authenticator installed/configured to connect to the above cluster. For details see the reference.
  • awscli installed and configured with AWS credentials. For details, see the AWS CLI documentation.
  • jq, used to inspect the output of the sample application; it can be obtained from your operating system’s package manager.

In this post we’ve chosen to use native Kubernetes tools wherever possible, to highlight what the Service Catalog types look like and how they can be used natively. If you regularly interact with the catalog via the CLI, you may want to look at the svcat CLI tool, which simplifies describing and managing Service Catalog types. See the Service Catalog documentation on GitHub for more details.
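For reference, rough svcat equivalents of the kubectl-based workflow in this post look like the following. This is a sketch based on the svcat documentation; the class, plan, and resource names match the walkthrough below:

```
# List available service classes and their plans
svcat get classes
svcat get plans
# Provision an instance and bind it
svcat provision s3-bucket --class s3 --plan production --namespace s3-demo
svcat bind s3-bucket --namespace s3-demo
```

Everything svcat does can also be done with kubectl against the Service Catalog types, which is the approach we take for the rest of this post.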

Installing Service Catalog

We’ll use Helm to handle the installation of the needed components.

# Download and run the Helm install script
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
# Create a service account for tiller and grant it cluster-admin
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
# Install helm and tiller into the cluster
helm init --service-account tiller
# Wait until tiller is ready before moving on
until kubectl get pods -n kube-system -l name=tiller | grep 1/1; do sleep 1; done

Once completed, you should see a running tiller pod in the kube-system namespace.

NOTE: This type of Tiller installation is not recommended for public-facing or production clusters. To learn more about installing Tiller in production clusters, see Securing your Helm Installation.

Now that we have Helm and Tiller set up, we can move on to using Helm to install Kubernetes Service Catalog.

helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
helm install svc-cat/catalog \
    --name catalog --namespace catalog --wait


Once the install is complete, we should see the api-server and controller-manager pods running in the catalog namespace.

Installing the AWS Service Broker

The first step is to set up the prerequisites. This can be done easily using a CloudFormation template that creates the required IAM User and DynamoDB table. The following code block uses the AWS CLI to launch the template and gather the needed outputs. Be sure to set the REGION variable to the AWS region where you would like to have the broker provision resources.

# Download the template
wget https://raw.githubusercontent.com/awslabs/aws-servicebroker/master/setup/prerequisites.yaml
# Create stack
STACK_ID=$(aws cloudformation create-stack \
             --capabilities CAPABILITY_IAM \
             --template-body file://prerequisites.yaml \
             --stack-name  aws-service-broker-prerequisites \
             --output text --query "StackId" \
             --region ${REGION})
# Wait for stack to complete
until \
    ST=$(aws cloudformation describe-stacks \
        --region ${REGION} \
        --stack-name ${STACK_ID} \
        --query "Stacks[0].StackStatus" \
        --output text); \
        echo $ST; echo $ST | grep "CREATE_COMPLETE"
    do sleep 5
done
# Get the username from the stack outputs
USERNAME=$(aws cloudformation describe-stacks \
             --region ${REGION} \
             --stack-name ${STACK_ID} \
             --query "Stacks[0].Outputs[0].OutputValue" \
             --output text)
# Create IAM access key. Note down the output, we'll need it when setting up the broker
aws iam create-access-key \
    --user-name ${USERNAME} \
    --output json \
    --query 'AccessKey.{KEY_ID:AccessKeyId,SECRET_ACCESS_KEY:SecretAccessKey}'
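If you would rather capture the credentials into shell variables than copy them by hand, the JSON shape produced by the --query above can be parsed with jq (already one of our prerequisites). The snippet below runs against a stand-in document; the credential values are fake placeholders, not real AWS keys:

```shell
# Stand-in for the JSON printed by `aws iam create-access-key` with the
# --query used above; these credential values are fake placeholders.
CREDS='{"KEY_ID": "AKIAEXAMPLE", "SECRET_ACCESS_KEY": "examplesecret"}'
# Extract each field into a shell variable
ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.KEY_ID')
SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.SECRET_ACCESS_KEY')
echo "$ACCESS_KEY_ID"
```

In real use you would replace the CREDS assignment with a command substitution around the aws iam create-access-key call itself.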

Now we’re ready to install the broker, first adding the repository to Helm:

helm repo add aws-sb https://awsservicebroker.s3.amazonaws.com/charts


The broker has several configurable properties. To list them, run:

helm inspect aws-sb/aws-servicebroker --version 1.0.0-beta.3


In this post we will cover a minimal installation. For details about advanced install options, see the AWS Service Broker documentation. Replace <ACCESS_KEY_ID> and <SECRET_ACCESS_KEY> below with the values saved from the output of the aws iam create-access-key command:

helm install aws-sb/aws-servicebroker \
    --wait \
    --name aws-servicebroker \
    --namespace aws-sb \
    --version 1.0.0-beta.3 \
    --set aws.region=${REGION} \
    --set aws.accesskeyid=<ACCESS_KEY_ID> \
    --set aws.secretkey=<SECRET_ACCESS_KEY>


NOTE: If setting aws.targetaccountid on the Helm CLI, do not use --set; use --set-string instead. Helm parses --set values as YAML, so a long numeric account ID can be coerced to a number and mangled, while --set-string keeps it as a literal string. See the related Helm issue for more info.

Now verify that the broker pod is running:

helm ls --namespace aws-sb
kubectl get ClusterServiceBrokers


Now you can list the available services:

kubectl get ClusterServiceClasses \
    -o=custom-columns=NAME:.spec.externalName,DESCRIPTION:.spec.description


If you ran into any issues, you can troubleshoot by having a look at the Service Broker’s logs:

kubectl logs $(kubectl get pods -n aws-sb -o name) -n aws-sb

Provisioning and Binding to a Sample Application

Let’s take the broker for a spin by creating a sample application, provisioning an S3 bucket, and binding the bucket to our application.

First, create the sample application. We’ve provided a simple application that validates that the connection to S3 is functional:

cat <<EOF > sample.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: s3-demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: s3-demo
  namespace: s3-demo
spec:
  selector:
    matchLabels:
      app: s3-demo
  replicas: 1
  template:
    metadata:
      labels:
        app: s3-demo
    spec:
      containers:
      - name: s3-demo
        image: awsservicebroker/s3-demo:latest
        ports:
        - containerPort: 8080
        env:
EOF
kubectl apply -f sample.yaml
# Wait for deployment to complete
until kubectl get pods -n s3-demo | grep 1/1; do sleep 1; done

Now we can check what the application returns when we curl it. Note that, for brevity, we haven’t set up a Kubernetes Service or Ingress, so we’ll run curl inside the pod using kubectl exec:

kubectl exec \
    $(kubectl get pods -o name --namespace s3-demo | awk -F '/' '{print $2}') \
    --namespace s3-demo -- \
    curl -s http://localhost:8080 | jq .


The above output shows that some environment variables are missing. Let’s resolve that by creating our S3 ServiceInstance and binding it to the application.

cat <<EOF > s3-instance.yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: s3-bucket
  namespace: s3-demo
spec:
  clusterServiceClassExternalName: s3
  clusterServicePlanExternalName: production
EOF
kubectl apply -f s3-instance.yaml

That will start the provisioning process. You should see a CloudFormation stack being created in your account:

aws cloudformation list-stacks \
--region ${REGION} \
--query 'StackSummaries[?starts_with(StackName,`aws-service-broker-s3-`)]'


Shortly after the stack has completed, the ServiceInstance will be ready to bind. You can check the status by describing the ServiceInstance and inspecting .status.conditions[0].message:

kubectl get ServiceInstance/s3-bucket -n s3-demo -o yaml
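If you prefer to pull out just that field, the same check can be scripted with -o json and jq. The payload below is a hand-made stand-in shaped like the .status block Service Catalog writes, not output captured from a real cluster:

```shell
# Hypothetical ServiceInstance status payload, shaped like what
# `kubectl get serviceinstance s3-bucket -n s3-demo -o json` returns
# once provisioning has finished.
cat <<'EOF' > instance-status.json
{
  "status": {
    "conditions": [
      {
        "type": "Ready",
        "status": "True",
        "message": "The instance was provisioned successfully"
      }
    ]
  }
}
EOF
# Pull out just the human-readable status message
jq -r '.status.conditions[0].message' instance-status.json
```

Against a live cluster you would pipe the kubectl output straight into the same jq filter instead of using a file.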

Now that our ServiceInstance has been created, we can bind to it so that our sample application can access the credentials from the Kubernetes environment itself. Here’s what we need to do to create a binding resource:

cat <<EOF > s3-binding.yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: s3-binding
  namespace: s3-demo
spec:
  instanceRef:
    name: s3-bucket
EOF
kubectl apply -f s3-binding.yaml

Now that we have created a ServiceBinding object, we can verify that its status is “Ready”:

kubectl describe ServiceBinding s3-binding -n s3-demo


The binding action creates a Kubernetes secret containing the credentials needed to access the S3 bucket in the “s3-demo” namespace. To verify that the secret was created, run this command:

kubectl describe secrets/s3-binding -n s3-demo

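Each value in the secret is base64-encoded. In the cluster, you could extract a single value with kubectl get secret s3-binding -n s3-demo -o jsonpath='{.data.BUCKET_NAME}' and decode it; the decode step looks like this (the encoded string here is a made-up stand-in, not real broker output):

```shell
# Made-up base64 value standing in for the jsonpath output described above
ENCODED="bXktZGVtby1idWNrZXQ="
# Decode to recover the plain-text value
echo "$ENCODED" | base64 --decode
```

We won’t need to decode anything by hand in this walkthrough, since mapping the secret into the pod (below) handles that for us.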

All that’s left to do now is to attach the secrets to our pod:

cat <<EOF >> sample.yaml
        - name: S3_AWS_ACCESS_KEY_ID
          valueFrom: { secretKeyRef: { name: s3-binding, key: S3_AWS_ACCESS_KEY_ID  } }
        - name: S3_AWS_SECRET_ACCESS_KEY
          valueFrom: { secretKeyRef: { name: s3-binding, key: S3_AWS_SECRET_ACCESS_KEY } }
        - name: S3_REGION
          valueFrom: { secretKeyRef: { name: s3-binding, key: S3_REGION } }
        - name: BUCKET_ARN
          valueFrom: { secretKeyRef: { name: s3-binding, key: BUCKET_ARN } }
        - name: BUCKET_NAME
          valueFrom: { secretKeyRef: { name: s3-binding, key: BUCKET_NAME } }
        - name: LOGGING_BUCKET_NAME
          valueFrom: { secretKeyRef: { name: s3-binding, key: LOGGING_BUCKET_NAME } }
EOF
kubectl apply -f sample.yaml
# wait for deployment to complete
until kubectl get pods -n s3-demo | grep -c s3-demo | grep 1; do sleep 1; done

Once the deployment has completed, we should see that the application is now able to access the bucket:

kubectl exec \
    $(kubectl get pods -o name --namespace s3-demo | awk -F '/' '{print $2}') \
    --namespace s3-demo -- \
    curl -s http://localhost:8080 | jq .


Cleanup

Let’s delete the sample application, binding, and instance. (Notice that deleting the instance results in its CloudFormation stack being deleted.)

kubectl delete -f s3-binding.yaml
kubectl delete -f s3-instance.yaml
kubectl delete -f sample.yaml

You’re welcome to play around some more, but if you would prefer to also remove the broker and catalog from your cluster, you can run Helm to do so:

helm delete --purge aws-servicebroker
helm delete --purge catalog
# Remove cloudformation stack containing prerequisites
aws cloudformation delete-stack \
--stack-name $(echo $STACK_ID | awk -F '/' '{print $2}') --region $REGION


Conclusion

In this post, we’ve highlighted the power of using Kubernetes Service Catalog and the AWS Service Broker to manage your AWS services, providing separation of code and configuration, as well as baked-in best practices like least-privilege IAM. This is just the tip of the iceberg in terms of the functionality provided by the broker! For more advanced scenarios such as multi-account provisioning, building your own custom catalog with CloudFormation templates, and using bind to attach to pod-level IAM roles, see the AWS Service Broker documentation. If you’re interested in exploring other ways to integrate AWS services with Kubernetes, we recommend that you also take a look at Chris Hein’s post on the AWS Service Operator.

Jay McConnell

Jay McConnell is a Solutions Architect on the AWS Partner Network team, where he specializes in building best-practice deployments for partner products on AWS. In his spare time he maintains as many open source projects as time allows.

Srinivas Reddy Cheruku

Srinivas Reddy Cheruku has been a Cloud Support Engineer at AWS since 2018. He specializes in containers (ECS and EKS), and is passionate about helping customers implement DevOps best practices.
