Self-service AWS native service adoption in OpenShift using ACK

AWS Controllers for Kubernetes (ACK) is an open-source project that allows you to define and create AWS resources directly from within OpenShift. Using ACK, you can take advantage of AWS-managed services to complement the application workloads running in OpenShift without needing to define resources outside of the cluster or run services that provide supporting capabilities like databases or message queues.

Customers running OpenShift on AWS can choose from deploying self-managed Red Hat OpenShift Container Platform or managed OpenShift in the form of the Red Hat OpenShift Service on AWS (ROSA).

ROSA provides an integrated experience to use OpenShift, making it easier for you to focus on deploying applications and accelerating innovation by moving the cluster lifecycle management to Red Hat and AWS. With ROSA, you can run containerized applications with your existing OpenShift workflows and reduce the complexity of management.

AWS Controllers for Kubernetes has now been integrated into OpenShift, providing a broad collection of AWS native service operators available on the OpenShift OperatorHub.


ACK Operators available on Red Hat OpenShift console

In this post, I will describe how to connect an application or pod running on ROSA to an Amazon Relational Database Service (Amazon RDS) for MySQL database provisioned and configured using ACK service controllers. In this use case, both ROSA and the RDS instance are in their own dedicated VPC.

For this use case, I will use AWS Controllers for Kubernetes – Amazon EC2 (ACK EC2) and AWS Controllers for Kubernetes – Amazon RDS (ACK RDS).


Prerequisites

  • AWS account
  • ROSA enabled in the AWS account
  • A ROSA cluster created
  • Access to the Red Hat OpenShift console


Here are the steps I followed to demonstrate connecting to an Amazon RDS for MySQL database from an application running on a ROSA cluster:

  1. Create a ROSA cluster. The ROSA cluster installation will create the cluster's VPC as well.
  2. Install AWS Controllers for Kubernetes.
    1. Install AWS Controllers for Kubernetes – Amazon EC2 (ACK EC2) for creating the VPC, subnet, and VPC security groups for the RDS instance.
    2. Install AWS Controllers for Kubernetes – Amazon RDS (ACK RDS) to create the RDS instance.
  3. Provision Amazon RDS for MySQL.
  4. Connect the ROSA cluster VPC and RDS VPC using VPC peering.
  5. Validate the connection to RDS.

Create a ROSA cluster

The most common deployment pattern is to deploy ROSA with the AWS Security Token Service (STS), by passing the --sts flag. The official user guide for provisioning a ROSA cluster using the STS workflow is here.

❯ export ROSA_CLUSTER_NAME=<cluster-name>
❯ rosa create account-roles --mode auto --yes --prefix <your_prefix>
❯ rosa create cluster --cluster-name $ROSA_CLUSTER_NAME --sts --mode auto --yes
❯ rosa list clusters
ID                                NAME            STATE
1pq5i7vujlhm5doc6neoci7eu6qaqcgo  <cluster-name>  ready
❯ rosa create admin --cluster=$ROSA_CLUSTER_NAME -p <password>
W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information.
I: Admin account has been added to cluster '<cluster-name>'.
I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user.
I: To login, run the following command:
   oc login https://api.<cluster-name> --username cluster-admin --password <password>
❯ oc login https://api.<cluster-name> --username cluster-admin --password <password>
Login successful.
You have access to 92 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".

Get the Red Hat OpenShift console URL, and use the cluster-admin username and password to sign in to the console.

❯ rosa describe cluster -c $ROSA_CLUSTER_NAME | grep Console
Console URL: https://console-openshift-console.apps.<cluster-name>

Install AWS Controllers for Kubernetes

Before installing the ACK service controllers, some preinstall steps are necessary: an AWS Identity and Access Management (IAM) user needs to be created and policies attached. Please refer to the ACK documentation for details.

Create the installation namespace

This is the namespace for installing the ACK service controllers.

❯ oc new-project ack-system

Bind an AWS IAM principal to a service user account

Using the aws CLI, create a user named ack-service-controller. This user will be used by both ACK EC2 and ACK RDS.

❯ aws iam create-user --user-name ack-service-controller

Enable programmatic access for the user you just created:

❯ aws iam create-access-key --user-name ack-service-controller
{
    "AccessKey": {
        "UserName": "ack-service-controller",
        "AccessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "Status": "Active",
        "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
        "CreateDate": "2022-01-10T16:39:51+00:00"
    }
}
Save AccessKeyId and SecretAccessKey to be used later.
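Rather than copying the key pair by hand, you can parse it out of the JSON response. The extract_keys function below is a hypothetical helper of mine, not part of the original workflow; it assumes jq is installed (the same tool used elsewhere in this post).

```shell
# Hypothetical helper: parse the JSON returned by `aws iam create-access-key`
# and print export statements for the two credential values.
extract_keys() {
  jq -r '.AccessKey
    | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)",
      "export AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)"'
}

# Example with the sample output shown above:
echo '{"AccessKey":{"AccessKeyId":"AKIAIOSFODNN7EXAMPLE","SecretAccessKey":"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"}}' | extract_keys
```

In practice you would pipe the live command into it: `aws iam create-access-key --user-name ack-service-controller | extract_keys`.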

Create ack-config and ack-user-secrets for authentication

Create a file named config.txt with the following variables, leaving ACK_WATCH_NAMESPACE blank so the controller can properly watch all namespaces, and change any other values to suit your needs:

❯ cat <<EOF > ./config.txt
ACK_ENABLE_DEVELOPMENT_LOGGING=false
ACK_LOG_LEVEL=info
ACK_WATCH_NAMESPACE=
AWS_REGION=eu-west-2
AWS_ENDPOINT_URL=
ACK_RESOURCE_TAGS=rosa-ack
EOF

Use the config.txt to create a ConfigMap in your ROSA cluster:

❯ oc create configmap \
--namespace ack-system \
--from-env-file=config.txt ack-user-config
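Before building the ConfigMap, it can help to confirm the env file actually defines every key you expect. This check_env_file function is a minimal sketch of mine (not part of ACK or the original post):

```shell
# check_env_file: hypothetical helper that verifies an env-format file
# defines each named key before `oc create configmap` consumes it.
check_env_file() {
  local file=$1; shift
  local missing=0
  for key in "$@"; do
    grep -q "^${key}=" "$file" || { echo "missing: ${key}"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "all keys present"
  return $missing
}

# Example:
printf 'ACK_WATCH_NAMESPACE=\nAWS_REGION=eu-west-2\n' > /tmp/config.txt
check_env_file /tmp/config.txt ACK_WATCH_NAMESPACE AWS_REGION
```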

Create another file called secrets.txt with the following authentication values.

❯ cat <<EOF > secrets.txt
AWS_ACCESS_KEY_ID=<AccessKeyId>
AWS_SECRET_ACCESS_KEY=<SecretAccessKey>
EOF

I am going to store these credentials within the OpenShift secrets store.

❯ oc create secret generic \
--namespace ack-system \
--from-env-file=secrets.txt ack-user-secrets

Delete config.txt and secrets.txt.

Attach ack-service-controller user to the IAM policies

You need to attach the required IAM policies to the user. Because you will use ACK EC2 and ACK RDS, you will attach the following IAM policies.

❯ aws iam attach-user-policy \
--user-name ack-service-controller \
--policy-arn 'arn:aws:iam::aws:policy/AmazonEC2FullAccess'

❯ aws iam attach-user-policy \
--user-name ack-service-controller \
--policy-arn 'arn:aws:iam::aws:policy/AmazonRDSFullAccess'

Note: You can instead attach custom or more narrowly scoped policies to the user, as required by your needs.

Once the above has been completed, the AWS native service operators can be consumed from the OpenShift OperatorHub. Open the Red Hat OpenShift console using the cluster-admin username and go to the OperatorHub. Filter items using the aws keyword. A list with all the available ACK service controllers will appear on the screen.

Screenshot of all available ACK service controllers

As I mentioned at the beginning of the post, from this list, you will install AWS Controllers for Kubernetes – Amazon EC2 (ACK EC2) and AWS Controllers for Kubernetes – Amazon RDS (ACK RDS).

Install AWS Controllers for Kubernetes – Amazon EC2 (ACK EC2)

Leave all the parameters as default and select Install.

After a short while, you should notice the operator is installed.

Screenshot confirming the operator is installed

Do the same for installing AWS Controllers for Kubernetes – Amazon RDS (ACK RDS).

Install AWS Controllers for Kubernetes – Amazon RDS (ACK RDS)

Choose the OperatorHub on the left navigation pane, filter operators list by aws keyword, and choose AWS Controllers for Kubernetes – Amazon RDS.

Select Install, leaving all the parameters as default.

After a short while, the ACK RDS operator should be ready.

If you choose Installed Operators on the left navigation pane, you should be able to confirm both operators are installed and ready.

Or you can verify this using the oc CLI.

❯ oc get pod -n ack-system
NAME                                   READY  STATUS    RESTARTS  AGE
ack-ec2-controller-5c647cc8c5-wwgck    1/1    Running   0         13m
ack-rds-controller-5b5dd4745-7lj5f     1/1    Running   0         6m50s

Set up Amazon RDS for MySQL

Before starting to create AWS resources using Kubernetes manifests, create a namespace for deploying them.

❯ oc new-project ack-workspace

Create the VPC

I am going to create a separate Amazon VPC for the RDS instance, with a CIDR block that does not overlap with the ROSA cluster VPC.

❯ cat <<EOF > ./rds-vpc.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: VPC
metadata:
  name: rosa-rds-vpc
  namespace: ack-workspace
  labels:
    vpc: rosa-rds
spec:
  cidrBlock: <rds-vpc-cidr>
  enableDNSHostnames: true
  enableDNSSupport: true
EOF
❯ oc apply -f ./rds-vpc.yaml
vpc.ec2.services.k8s.aws/rosa-rds-vpc created
❯ export RDS_VPC_ID=$(oc get vpc rosa-rds-vpc -o json | jq -r .status.vpcID)

You can validate the VPC creation using the aws CLI.

❯ aws ec2 describe-vpcs --vpc-ids $RDS_VPC_ID
{
    "Vpcs": [
        {
            "CidrBlock": "<rds-vpc-cidr>",
            "DhcpOptionsId": "dopt-0efa9af123cd44fa9",
            "State": "available",
            "VpcId": "vpc-09f8cdf865c44d5b0",
            "OwnerId": "637075021655",
            "InstanceTenancy": "default",
            "CidrBlockAssociationSet": [
                {
                    "AssociationId": "vpc-cidr-assoc-06303b36248cf51ec",
                    "CidrBlock": "<rds-vpc-cidr>",
                    "CidrBlockState": {
                        "State": "associated"
                    }
                }
            ],
            "IsDefault": false
        }
    ]
}

Create the subnets

Next, you will create subnets in two Availability Zones so that you can take advantage of the resilience provided by multi-AZ RDS. These subnets will be used when creating a DB subnet group. A DB subnet group is a collection of subnets (typically private) that you create in a VPC and that you then designate for your DB instances. A DB subnet group allows you to specify a particular VPC when creating DB instances using the CLI or API.

❯ cat <<EOF > ./rds-subnet-a.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Subnet
metadata:
  name: rds-subnet-eu-west-2a
  namespace: ack-workspace
spec:
  cidrBlock: <subnet-a-cidr>
  vpcID: $RDS_VPC_ID
  availabilityZone: eu-west-2a
EOF
❯ oc apply -f ./rds-subnet-a.yaml
subnet.ec2.services.k8s.aws/rds-subnet-eu-west-2a created
❯ cat <<EOF > ./rds-subnet-b.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Subnet
metadata:
  name: rds-subnet-eu-west-2b
  namespace: ack-workspace
spec:
  cidrBlock: <subnet-b-cidr>
  vpcID: $RDS_VPC_ID
  availabilityZone: eu-west-2b
EOF
❯ oc apply -f ./rds-subnet-b.yaml
subnet.ec2.services.k8s.aws/rds-subnet-eu-west-2b created
❯ export RDS_SUBNET_A_ID=$(oc get subnet rds-subnet-eu-west-2a -o json | jq -r .status.subnetID)
❯ export RDS_SUBNET_B_ID=$(oc get subnet rds-subnet-eu-west-2b -o json | jq -r .status.subnetID)

Create DB subnet group

Here, you will create the DB subnet group.

❯ cat <<EOF > ./db-subnet-group.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBSubnetGroup
metadata:
  name: rdsdbsubnetgroup
  namespace: ack-workspace
spec:
  description: RDS DB subnet group
  name: rdsdbsubnetgroup
  subnetIDs:
    - $RDS_SUBNET_A_ID
    - $RDS_SUBNET_B_ID
EOF
❯ oc apply -f ./db-subnet-group.yaml
dbsubnetgroup.rds.services.k8s.aws/rdsdbsubnetgroup created

Create VPC security group

Before creating the DB instance, you must create a VPC security group to associate it with the DB instance.

❯ cat <<EOF > ./vpc-security-group.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: SecurityGroup
metadata:
  name: rdssecuritygroup
  namespace: ack-workspace
spec:
  description: RDS VPC security group
  name: rdssecuritygroup
  vpcID: $RDS_VPC_ID
EOF
❯ oc apply -f vpc-security-group.yaml
securitygroup.ec2.services.k8s.aws/rdssecuritygroup created
❯ export RDS_VPC_SECURITY_GROUP_ID=$(oc get securitygroup rdssecuritygroup -o json | jq -r .status.id)

Create the RDS DB instance

Create a Secret to store the master admin password for the RDS DB instance. The Secret name (db-admin-pass) and key (password) are referenced by the DBInstance manifest.

❯ oc create secret generic db-admin-pass \
--namespace ack-workspace \
--from-literal=password=<password>

Create the DB instance

❯ cat <<EOF > ./rds-db-instance.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: rdsmysqldb
  namespace: ack-workspace
spec:
  masterUserPassword:
    key: password
    name: db-admin-pass
    namespace: ack-workspace
  engine: mysql
  dbInstanceClass: db.t2.micro
  dbInstanceIdentifier: rdsmysqldbinstance
  port: 3306
  multiAZ: true
  dbName: rdsmysqldb
  dbSubnetGroupName: rdsdbsubnetgroup
  vpcSecurityGroupIDs:
    - $RDS_VPC_SECURITY_GROUP_ID
  allocatedStorage: 5
  engineVersion: 5.7.36
  masterUsername: mydbadmin
  maxAllocatedStorage: 10
EOF
❯ oc apply -f ./rds-db-instance.yaml
dbinstance.rds.services.k8s.aws/rdsmysqldb created

You can check that the DB instance was created in the UI or using aws CLI.

❯ aws rds describe-db-instances --db-instance-identifier rdsmysqldbinstance
{
    "DBInstances": [
        {
            "DBInstanceIdentifier": "rdsmysqldbinstance",
            "DBInstanceClass": "db.t2.micro",
            "Engine": "mysql",
            "DBInstanceStatus": "available",
            "MasterUsername": "mydbadmin",
            "DBName": "rdsmysqldb",
            "Endpoint": {
                "Address": "",
                "Port": 3306,
                "HostedZoneId": "Z1TTGA775OQIYO"
            },
            ...some output truncated...

Connect the ROSA cluster VPC and RDS VPC using VPC peering

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs or with a VPC in another AWS account. The VPCs can be in different Regions (also known as an inter-Region VPC peering connection).

Create and accept a VPC peering connection between ROSA VPC and RDS VPC

Get the VpcId of the ROSA VPC.

❯ ROSA_VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values="$ROSA_CLUSTER_NAME-*" --query "Vpcs[].VpcId" --output text)

Create the peering connection between ROSA VPC and RDS VPC.

❯ aws ec2 create-vpc-peering-connection --vpc-id $ROSA_VPC_ID --peer-vpc-id $RDS_VPC_ID
❯ VPC_PEER_ID=$(aws ec2 describe-vpc-peering-connections --query "VpcPeeringConnections[].VpcPeeringConnectionId" --output text)

Accept the VPC peering connection.

❯ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id $VPC_PEER_ID

Update ROSA VPC route table

Get the route table associated with the three public subnets of the ROSA VPC, and add a route so that all traffic to the RDS VPC CIDR block goes via the VPC peering connection.

❯ ROSA_ROUTE_TABLE_ID=$(aws ec2 describe-route-tables --filters Name=tag:Name,Values="$ROSA_CLUSTER_NAME-*-public" --query 'RouteTables[].RouteTableId' --output text)
❯ aws ec2 create-route --route-table-id ${ROSA_ROUTE_TABLE_ID} --destination-cidr-block <rds-vpc-cidr> --vpc-peering-connection-id ${VPC_PEER_ID}

Update RDS instance security group

Update the security group to allow ingress traffic from the ROSA cluster to the RDS instance on port 3306.

❯ aws ec2 authorize-security-group-ingress --group-id ${RDS_VPC_SECURITY_GROUP_ID} --protocol tcp --port 3306 --cidr <rosa-vpc-cidr>

Validate the connection to RDS

Now you are ready to validate the connection to the RDS MySQL database from a pod running on the ROSA cluster.

Create a Kubernetes service named mysql-service of type ExternalName, aliasing the RDS endpoint.

❯ DB_INST_ENDPOINT=$(aws rds describe-db-instances --db-instance-identifier rdsmysqldbinstance --query "DBInstances[].Endpoint[].Address" --output text)
❯ cat <<EOF > ./mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql-service
  name: mysql-service
  namespace: ack-workspace
spec:
  externalName: $DB_INST_ENDPOINT
  selector:
    app: mysql-service
  type: ExternalName
status:
  loadBalancer: {}
EOF
❯ oc apply -f ./mysql-service.yaml
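Before launching a MySQL client, you can optionally sanity-check TCP reachability of the service from inside a pod. This tcp_check function is a sketch of mine, not part of the original walkthrough; it uses bash's built-in /dev/tcp, so it needs no extra tooling in the container image:

```shell
# tcp_check: probe TCP reachability of host:port using bash's /dev/tcp,
# which works even in minimal images without curl or nc installed.
tcp_check() {
  local host=$1 port=$2
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "port ${port} on ${host}: reachable"
  else
    echo "port ${port} on ${host}: NOT reachable"
  fi
}

# Inside a debug pod on the cluster you would run: tcp_check mysql-service 3306
```

If the port is not reachable, revisit the peering routes and the security group ingress rule before debugging MySQL credentials.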

Connect to the RDS MySQL database from a pod using the mysql-57-rhel7 image.

❯ oc run -i --tty --rm debug --image=registry.access.redhat.com/rhscl/mysql-57-rhel7 --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
sh-4.2$ which mysql
sh-4.2$ mysql -h mysql-service -u mydbadmin -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 232
Server version: 5.7.36-log Source distribution

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases
    -> ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| innodb             |
| mysql              |
| performance_schema |
| rdsmysqldb         |
| sys                |
+--------------------+
6 rows in set (0.00 sec)

mysql> quit
sh-4.2$ exit
pod "debug" deleted


Conclusion

Customers looking to further modernize their application stacks by using AWS native services to complement their application workloads running in OpenShift can now use AWS service operators powered by AWS Controllers for Kubernetes. This provides a prescriptive, self-service approach where application owners do not need to leave the familiar interface and context of Kubernetes and OpenShift.

Ovidiu Valeanu

Ovidiu Valeanu is a Senior Specialist Solutions Architect, Containers focused on Kubernetes, Data Analytics and Machine Learning at Amazon Web Services. He enjoys collaborating on Open-Source projects and helping teams design, build, and scale distributed systems.

Ryan Niksch

Ryan Niksch is a Partner Solutions Architect focusing on application platforms, hybrid application solutions, and modernization. Ryan has worn many hats in his life and has a passion for tinkering and a desire to leave everything he touches a little better than when he found it.