AWS Open Source Blog

Managing hybrid storage in an increasingly agile time with OpenShift Container Storage on AWS

This article is a guest post from Mayur Shetty, a Senior Solution Architect within Red Hat’s Global Partners and Alliances organization.

According to the 2019 CNCF Survey, 84% of customers surveyed run container workloads in production, a dramatic increase from 18% in 2018. This growth is driven by customers' need to be more efficient and agile, and to operate in a hybrid context to meet the industry's growing demands. The survey also shows 41% of customers with a hybrid use case, up from 21% in 2018. Additionally, 27% of surveyed customers make use of daily release cycles.

As customers adopt these processes, managing storage in a consistent manner becomes a greater challenge.

Customers across varied industry verticals have been making use of Red Hat OpenShift on Amazon Web Services (AWS) to meet their hybrid and agility needs. Red Hat now further enables customers through a new open source solution: OpenShift Container Storage (OCS) operator.

OpenShift Container Storage is persistent software-defined storage integrated with and optimized for OpenShift. OpenShift Container Storage runs anywhere OpenShift does, which works well for customers with a hybrid use case and a desire for consistency when implementing and managing storage across varied environments. Built on Red Hat Ceph Storage and NooBaa Multi-Cloud Object Gateway (MCG) technology, OpenShift Container Storage is engineered, tested, and qualified to work with Red Hat OpenShift Container Platform on AWS as well as other environments within a customer’s hybrid architecture.

In this article, we will explain how to:

  • Configure and deploy containerized Ceph and NooBaa within OpenShift on AWS.
  • Validate deployment of containerized Ceph and NooBaa.
  • Use the Multi-Cloud Object Gateway to create Amazon Simple Storage Service (Amazon S3)-backed storage and use it in an application.

We will be using OpenShift Container Platform (OCP) 4.x and the OCS operator to deploy Ceph and the Multi-Cloud Object Gateway (MCG) as a persistent storage solution for OCP workloads. You can deploy OpenShift 4 from http://try.openshift.com/ and then follow the instructions for AWS Installer-Provisioned Infrastructure (IPI).

Deploy the storage backend using the OCS operator

Scale the OCP cluster and add three new nodes

With the increased production use of containers and hybrid solutions, it is a good idea to confirm that your OpenShift implementation on AWS spans multiple Availability Zones for resilience.

First, let's validate that the OCP environment has three master and three worker nodes before growing the cluster by three additional worker nodes for OCS resources. The NAME values of your OCP nodes will differ from the example output.

oc get nodes

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ oc get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-134-127.ec2.internal Ready master 23m v1.16.2
ip-10-0-137-130.ec2.internal Ready worker 14m v1.16.2
ip-10-0-144-238.ec2.internal Ready master 23m v1.16.2
ip-10-0-151-202.ec2.internal Ready worker 14m v1.16.2
ip-10-0-164-30.ec2.internal Ready master 23m v1.16.2
ip-10-0-174-79.ec2.internal Ready worker 14m v1.16.2

OpenShift 4 allows customers to scale clusters in the same manner they are used to on the cloud by means of MachineSets. MachineSets provide min, max, desired values, and scaling metrics—which AWS customers are already familiar with—within OpenShift itself.
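
As a quick illustration of the mechanism, a MachineSet is scaled like any other replica-backed resource. The snippet below only prints the command (a dry run) and uses the example MachineSet name from this walkthrough:

```shell
# Dry run: echo prints the scale command instead of executing it.
# The MachineSet name is the example one from this cluster.
cmd="oc scale machineset mytest-nvnjt-worker-us-east-1a --replicas=2 -n openshift-machine-api"
echo "$cmd"
```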

Next we will add three more OCP compute nodes to the cluster using MachineSets.

oc get machinesets -n openshift-machine-api | grep -v infra

This will show us the existing MachineSets used to create the three worker nodes in the cluster already. There is a MachineSet for each AWS Availability Zone (us-east-1a, us-east-1b, us-east-1c). Your MachineSet’s NAME will be different from that below:

Mayurs-MacBook-Pro:4.3 mshetty$ oc get machinesets -n openshift-machine-api | grep -v infra
NAME DESIRED CURRENT READY AVAILABLE AGE
mytest-nvnjt-worker-us-east-1a 1 1 1 1 27m
mytest-nvnjt-worker-us-east-1b 1 1 1 1 27m
mytest-nvnjt-worker-us-east-1c 1 1 1 1 27m
mytest-nvnjt-worker-us-east-1d 0 0 27m
mytest-nvnjt-worker-us-east-1e 0 0 27m
mytest-nvnjt-worker-us-east-1f 0 0 27m

Be sure to do the next step to find and use the CLUSTERID:

CLUSTERID=$(oc get machineset -n openshift-machine-api -o jsonpath='{.items[0].metadata.labels.machine\.openshift\.io/cluster-api-cluster}')
echo $CLUSTERID
curl -s https://raw.githubusercontent.com/red-hat-storage/ocs-training/master/ocp4ocs4/cluster-workerocs.yaml | sed "s/CLUSTERID/$CLUSTERID/g" | oc apply -f -

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ curl -s https://raw.githubusercontent.com/red-hat-storage/ocs-training/master/ocp4ocs4/cluster-workerocs.yaml | sed "s/CLUSTERID/$CLUSTERID/g" | oc apply -f -
machineset.machine.openshift.io/mytest-nvnjt-workerocs-us-east-1a created
machineset.machine.openshift.io/mytest-nvnjt-workerocs-us-east-1b created
machineset.machine.openshift.io/mytest-nvnjt-workerocs-us-east-1c created
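
The curl | sed | oc apply pipeline works by substituting every CLUSTERID placeholder in the remote template with your cluster's ID. A minimal sketch of that substitution, using a two-line stand-in for the remote YAML:

```shell
# In the real flow CLUSTERID comes from the oc jsonpath query above;
# the template below is a stand-in for the remote cluster-workerocs.yaml.
CLUSTERID="mytest-nvnjt"
template='  machine.openshift.io/cluster-api-cluster: CLUSTERID
  name: CLUSTERID-workerocs-us-east-1a'
# sed replaces every occurrence of the placeholder on every line.
echo "$template" | sed "s/CLUSTERID/$CLUSTERID/g"
```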

Check to confirm new machines were created:

oc get machines -n openshift-machine-api | egrep 'NAME|workerocs'

They may remain in a Pending phase for some time, so repeat the command above until they reach the Running phase. The NAME of your machines will differ from what is shown in the example.

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ oc get machines -n openshift-machine-api | egrep 'NAME|workerocs'
NAME                                      PHASE         TYPE         REGION      ZONE         AGE
mytest-nvnjt-workerocs-us-east-1a-7sklp   Provisioned   m4.4xlarge   us-east-1   us-east-1a   2m6s
mytest-nvnjt-workerocs-us-east-1b-jpphq   Provisioned   m4.4xlarge   us-east-1   us-east-1b   2m5s
mytest-nvnjt-workerocs-us-east-1c-pclz7   Provisioned   m4.4xlarge   us-east-1   us-east-1c   2m5s
Mayurs-MacBook-Pro:4.3 mshetty$ oc get machines -n openshift-machine-api | egrep 'NAME|workerocs'
NAME                                      PHASE     TYPE         REGION      ZONE         AGE
mytest-nvnjt-workerocs-us-east-1a-7sklp   Running   m4.4xlarge   us-east-1   us-east-1a   3m46s
mytest-nvnjt-workerocs-us-east-1b-jpphq   Running   m4.4xlarge   us-east-1   us-east-1b   3m45s
mytest-nvnjt-workerocs-us-east-1c-pclz7   Running   m4.4xlarge   us-east-1   us-east-1c   3m45s

The OCS worker machines are using the Amazon Elastic Compute Cloud (Amazon EC2) instance type m4.4xlarge. The m4.4xlarge instance type follows our recommended instance sizing for OCS: 16 vCPU and 64GB RAM.

Now we want to check that our new machines are added to the OCP cluster.

watch "oc get machinesets -n openshift-machine-api | egrep 'NAME|workerocs'"

This step could take more than five minutes. The result of this command should match the example below before you proceed: all new OCS worker MachineSets should show 1 in every row under the READY and AVAILABLE columns. The NAME of your MachineSets will differ from what is in the example.

Every 2.0s: /Users/mshetty/AWS/OpenShift4/4.3/oc get machinesets -n openshift-machine-api | egrep 'NAME|workerocs'                                                                 Mayurs-MacBook-Pro: Fri Feb  7 10:53:45 2020
 
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
mytest-nvnjt-workerocs-us-east-1a   1         1         1       1           6m21s
mytest-nvnjt-workerocs-us-east-1b   1         1         1       1           6m20s
mytest-nvnjt-workerocs-us-east-1c   1         1         1       1           6m20s

Exit by pressing Ctrl+C.

Next we'll check whether we have three new OCP worker nodes. The NAME of your OCP nodes will differ from what is shown below.

oc get nodes -l node-role.kubernetes.io/worker

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ oc get nodes -l node-role.kubernetes.io/worker
NAME                          STATUS   ROLES AGE    VERSION
ip-10-0-137-130.ec2.internal   Ready worker   30m   v1.16.2
ip-10-0-140-7.ec2.internal    Ready  worker   4m22s   v1.16.2
ip-10-0-151-202.ec2.internal   Ready worker   30m   v1.16.2
ip-10-0-153-95.ec2.internal   Ready  worker   4m22s   v1.16.2
ip-10-0-169-227.ec2.internal   Ready worker   4m24s   v1.16.2
ip-10-0-174-79.ec2.internal   Ready  worker   30m   v1.16.2

Installing the OCS operator

In this section you will be using three of the worker OCP 4 nodes to deploy OCS 4 using the OCS Operator in OperatorHub. The Red Hat documentation for how to install and set up your OpenShift Container Storage environment might help with troubleshooting problems encountered with your setup.

Prerequisites

You must create a namespace called openshift-storage as follows:

  1. Click Administration | Namespaces in the left pane of the OpenShift web console.
  2. Click Create Namespace.
  3. In the Create Namespace dialog box, enter openshift-storage for Name and openshift.io/cluster-monitoring=true for Labels. This label is required to get the dashboards.
  4. Select No Restrictions option for Default Network Policy.
  5. Click Create Namespace.
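
If you prefer the CLI to the console, the steps above can be sketched as a Namespace manifest carrying the monitoring label. On a live cluster you would pipe it to oc apply -f -; here it is only printed:

```shell
# Namespace manifest equivalent to the console steps above; the
# cluster-monitoring label is what enables the storage dashboards.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    openshift.io/cluster-monitoring: "true"
EOF
)
echo "$manifest"
```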

Now let’s install OpenShift Container Storage using the Red Hat OpenShift Container Platform Operator Hub on AWS:

  1. Log into the Red Hat OpenShift Container Platform web console as user kubeadmin.
  2. Click Operators | OperatorHub.
  3. Search for OpenShift Container Storage Operator from the list of operators and select it.
  4. On the OpenShift Container Storage Operator page, click Install.
  5. On the Create Operator Subscription page, the Installation Mode, Update Channel, and Approval Strategy options are available.

    • Select a specific namespace on the cluster for the Installation Mode option.
    • Select openshift-storage namespace from the drop-down menu.
    • stable-4.2 channel is selected by default for the Update Channel option.
    • Select an Approval Strategy: Automatic if you want OCP to upgrade OpenShift Container Storage automatically, and Manual if you want to upgrade manually.
  6. Click Subscribe.

The Installed Operators page is displayed with the status of the operator.

The screenshot shows that the AWS S3 Operator gets installed along with the OpenShift Container Storage.

Creating an OCS service

Click on OpenShift Container Storage Operator to get to the OCS configuration screen.

At the top of the OCS configuration screen, scroll to Storage Cluster and click Create OCS Cluster Services. If you do not see Create OCS Cluster Services, refresh your browser window.

Select at least three worker nodes from the available list of nodes for the use of OpenShift Container Storage service.
To select the appropriate worker nodes of your OCP 4 cluster, you can find them by searching for the node label role=storage-node.

oc get nodes --show-labels | grep storage-node |cut -d' ' -f1

Example Output:

Mayurs-MacBook-Pro:4.3 mshetty$ oc get nodes --show-labels | grep storage-node |cut -d' ' -f1
ip-10-0-140-7.ec2.internal
ip-10-0-153-95.ec2.internal
ip-10-0-169-227.ec2.internal
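
What the grep/cut pipeline does, demonstrated on a single captured line of oc get nodes --show-labels output (label set abbreviated):

```shell
# A sample row from `oc get nodes --show-labels`: grep keeps rows whose
# label set contains storage-node, and cut takes the first
# space-separated field, which is the NAME column.
line='ip-10-0-140-7.ec2.internal   Ready    worker   4m22s   v1.16.2   beta.kubernetes.io/os=linux,role=storage-node'
echo "$line" | grep storage-node | cut -d' ' -f1
```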

Select the three nodes in the search results, then click on the Create button below the dialog box where you selected the three workers with a checkmark.

While the cluster service is being created, you can watch the pods come up in the openshift-storage namespace:

Mayurs-MacBook-Pro:4.3 mshetty$ oc -n openshift-storage get pods
NAME                                         READY   STATUS                RESTARTS   AGE
aws-s3-provisioner-8d9478d4b-pmhpt           1/1    Running               0       4m52s
csi-cephfsplugin-2vr24                       0/3    ContainerCreating   0         36s
csi-cephfsplugin-96ckc                       0/3    ContainerCreating   0         36s
csi-cephfsplugin-9lrsw                       3/3    Running               0       36s
csi-cephfsplugin-d5qt9                       3/3    Running               0       36s
csi-cephfsplugin-provisioner-5cdcfcc86b-kk42x   4/4  Running               0       35s
csi-cephfsplugin-provisioner-5cdcfcc86b-vnvcs   4/4  Running               0       36s
csi-cephfsplugin-rbfc6                       3/3    Running               0       36s
csi-cephfsplugin-rsqfn                       0/3    ContainerCreating   0         35s
csi-rbdplugin-2cf68                          0/3    ContainerCreating   0         36s
csi-rbdplugin-4bmrp                          3/3    Running               0       36s
csi-rbdplugin-g97tn                          3/3    Running               0       36s
csi-rbdplugin-h5826                          0/3    ContainerCreating   0         36s
csi-rbdplugin-pm94b                          0/3    ContainerCreating   0         36s
csi-rbdplugin-provisioner-8fdc8f955-fmxxp    4/4    Running               0       36s
csi-rbdplugin-provisioner-8fdc8f955-lpktf    4/4    Running               0       36s
csi-rbdplugin-wsh5q                          3/3    Running               0       36s
noobaa-operator-64d88fdc77-zvd4m             1/1    Running               0       55m
ocs-operator-7f56b58d96-ttw76                0/1    Running               0       55m
rook-ceph-mon-a-canary-57cdf75945-7jqbk      0/1    ContainerCreating   0         16s
rook-ceph-mon-b-canary-7744574664-976qz      0/1    Pending               0       5s
rook-ceph-operator-c8785644-dqk6q            1/1    Running               0       55m
Mayurs-MacBook-Pro:4.3 mshetty$

We can create application pods on either OpenShift Container Storage nodes or non-OpenShift Container Storage nodes. However, we recommend applying a taint to the storage nodes, marking them for exclusive OpenShift Container Storage use, and not running application pods on them. Because the tainted OpenShift nodes are dedicated to storage pods, they only require an OpenShift Container Storage subscription, not an OpenShift subscription.

To add a taint to a node, use the following command:

Mayurs-MacBook-Pro:4.3 mshetty$ oc adm taint nodes ip-10-0-140-7.ec2.internal node.ocs.openshift.io/storage=true:NoSchedule
node/ip-10-0-140-7.ec2.internal tainted
Mayurs-MacBook-Pro:4.3 mshetty$ oc adm taint nodes ip-10-0-153-95.ec2.internal node.ocs.openshift.io/storage=true:NoSchedule
node/ip-10-0-153-95.ec2.internal tainted
Mayurs-MacBook-Pro:4.3 mshetty$ oc adm taint nodes ip-10-0-169-227.ec2.internal node.ocs.openshift.io/storage=true:NoSchedule
node/ip-10-0-169-227.ec2.internal tainted
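
The same taint can be applied in a loop. This is a dry run that only prints each command; drop the echo to actually taint the example nodes:

```shell
# Dry run: prints one `oc adm taint` command per storage node.
# Node names are the example storage nodes from this walkthrough.
for node in ip-10-0-140-7 ip-10-0-153-95 ip-10-0-169-227; do
  echo "oc adm taint nodes ${node}.ec2.internal node.ocs.openshift.io/storage=true:NoSchedule"
done
```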

Getting to know the storage dashboard

You can now check the status of your storage cluster with the OCS-specific dashboards included in your OpenShift web console. To do so, click on Home in the navigation bar, select Dashboards in the left menu, and select Persistent Storage in the top navigation bar of the content page.

Once this is all healthy, you can use the three new Storage Classes created during the OCS 4 install: ocs-storagecluster-ceph-rbd, ocs-storagecluster-cephfs, openshift-storage.noobaa.io. To see these three Storage Classes from the OpenShift web console, expand the Storage menu in the left navigation bar and select Storage Classes. You can also run the command oc -n openshift-storage get sc.

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ oc -n openshift-storage get sc
NAME                          PROVISIONER                          AGE
gp2 (default)                 kubernetes.io/aws-ebs                161m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com   13m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   13m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc      8m33s

Be sure the three storage classes are available in your cluster before proceeding.
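
Once the storage classes are available, any workload can claim Ceph-backed storage. A minimal PVC sketch (the name my-rbd-pvc is hypothetical) against the new RBD class; pipe it to oc apply -f - on a live cluster, here it is only printed:

```shell
# Hypothetical PersistentVolumeClaim against the OCS RBD storage class.
pvc=$(cat <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
)
echo "$pvc"
```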

Also, note that the NooBaa pod used the ocs-storagecluster-ceph-rbd storage class for creating a PVC for mounting to its database container.

Using the Multi-Cloud Object Gateway

Now let's examine the Multi-Cloud Object Gateway (MCG). Currently the best way to configure the MCG is through the command-line interface (CLI).

To install the CLI, refer to the Install the NooBaa CLI client section in Red Hat’s documentation. According to the documentation, the Mac steps are:

brew install noobaa/noobaa/noobaa

Mac steps without Homebrew:

curl -s https://api.github.com/repos/noobaa/noobaa-operator/releases/latest | grep "mac" | cut -d : -f 2,3 | tr -d \" | wget -qi - ; mv noobaa-mac-* noobaa ; chmod +x noobaa; sudo mv noobaa /usr/local/bin/

Linux steps:

curl -s https://api.github.com/repos/noobaa/noobaa-operator/releases/latest | grep "linux" | cut -d : -f 2,3 | tr -d \" | wget -qi - ; mv noobaa-linux-* noobaa ; chmod +x noobaa; sudo mv noobaa /usr/bin/

Check that your NooBaa CLI installation was successful with the command noobaa version.

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ noobaa version
INFO[0000] CLI version: 2.0.9                        
INFO[0000] noobaa-image: noobaa/noobaa-core:5.2.11   
INFO[0000] operator-image: noobaa/noobaa-operator:2.0.9

Checking on the MCG status

The MCG status can be checked with the NooBaa CLI. Make sure you are in the openshift-storage project when you execute this command:

noobaa status -n openshift-storage

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ noobaa status -n openshift-storage
INFO[0000] CLI version: 2.0.9 
INFO[0000] noobaa-image: noobaa/noobaa-core:5.2.11 
INFO[0000] operator-image: noobaa/noobaa-operator:2.0.9
INFO[0000] Namespace: openshift-storage 
INFO[0000] 
INFO[0000] CRD Status: 
INFO[0000] ✅ Exists: CustomResourceDefinition "noobaas.noobaa.io"
INFO[0000] ✅ Exists: CustomResourceDefinition "backingstores.noobaa.io"
INFO[0001] ✅ Exists: CustomResourceDefinition "bucketclasses.noobaa.io"
INFO[0001] ✅ Exists: CustomResourceDefinition "objectbucketclaims.objectbucket.io"
INFO[0001] ✅ Exists: CustomResourceDefinition "objectbuckets.objectbucket.io"
INFO[0001] 
INFO[0001] Operator Status: 
INFO[0001] ✅ Exists: Namespace "openshift-storage" 
INFO[0001] ✅ Exists: ServiceAccount "noobaa" 
INFO[0001] ✅ Exists: Role "ocs-operator.v4.2.1-5l6r4" 
INFO[0001] ✅ Exists: RoleBinding "ocs-operator.v4.2.1-5l6r4-noobaa-zx7zx"
INFO[0001] ✅ Exists: ClusterRole "ocs-operator.v4.2.1-cbqcd"
INFO[0001] ✅ Exists: ClusterRoleBinding "ocs-operator.v4.2.1-cbqcd-noobaa-7mwdn"
INFO[0001] ✅ Exists: Deployment "noobaa-operator" 
INFO[0001] 
INFO[0001] System Status: 
INFO[0002] ✅ Exists: NooBaa "noobaa" 
INFO[0002] ✅ Exists: StatefulSet "noobaa-core" 
INFO[0002] ✅ Exists: Service "noobaa-mgmt" 
INFO[0002] ✅ Exists: Service "s3" 
INFO[0002] ✅ Exists: Secret "noobaa-server" 
INFO[0002] ✅ Exists: Secret "noobaa-operator" 
INFO[0002] ✅ Exists: Secret "noobaa-admin" 
INFO[0002] ✅ Exists: StorageClass "openshift-storage.noobaa.io"
INFO[0002] ✅ Exists: BucketClass "noobaa-default-bucket-class"
INFO[0002] ✅ (Optional) Exists: BackingStore "noobaa-default-backing-store"
INFO[0002] ✅ (Optional) Exists: CredentialsRequest "noobaa-cloud-creds"
INFO[0002] ✅ (Optional) Exists: PrometheusRule "noobaa-prometheus-rules"
INFO[0002] ✅ (Optional) Exists: ServiceMonitor "noobaa-service-monitor"
INFO[0003] ✅ (Optional) Exists: Route "noobaa-mgmt" 
INFO[0003] ✅ (Optional) Exists: Route "s3" 
INFO[0003] ✅ Exists: PersistentVolumeClaim "db-noobaa-core-0"
INFO[0003] ✅ System Phase is "Ready" 
INFO[0003] ✅ Exists: "noobaa-admin" 

#------------------#
#- Mgmt Addresses -#
#------------------#

ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mytest.ocp4-test-mshetty.com https://ac3b91407d9cf47dab0e6b905bdf3373-1488350814.us-east-1.elb.amazonaws.com:443]
ExternalIP : []
NodePorts : [https://10.0.140.7:32292]
InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443]
InternalIP : [https://172.30.229.82:443]
PodPorts : [https://10.131.2.14:8443]

#--------------------#
#- Mgmt Credentials -#
#--------------------#

email : admin@noobaa.io
password : YP9xxxxxxxxAX2JXGRw==

#----------------#
#- S3 Addresses -#
#----------------#

ExternalDNS : [https://s3-openshift-storage.apps.mytest.ocp4-test-mshetty.com https://a9ab4c2a18ac943fc99f5bc38214c7e1-2111878007.us-east-1.elb.amazonaws.com:443]
ExternalIP : []
NodePorts : [https://10.0.140.7:30347]
InternalDNS : [https://s3.openshift-storage.svc:443]
InternalIP : [https://172.30.60.121:443]
PodPorts : [https://10.131.2.14:6443]

#------------------#
#- S3 Credentials -#
#------------------#

AWS_ACCESS_KEY_ID : Rg70xxxxxxxxxxx8G0Pl
AWS_SECRET_ACCESS_KEY : fBnxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxRkp7Y

#------------------#
#- Backing Stores -#
#------------------#

NAME TYPE TARGET-BUCKET PHASE AGE
noobaa-default-backing-store aws-s3 noobaa-backing-store-edb0aa52-1c81-41b5-80da-c20f22227bf2 Ready 35m19s

#------------------#
#- Bucket Classes -#
#------------------#

NAME PLACEMENT PHASE AGE
noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 35m20s

#-----------------#
#- Bucket Claims -#
#-----------------#

No OBC's found.

Mayurs-MacBook-Pro:4.3 mshetty$

The NooBaa CLI first checks on the environment and then prints all the information about it. Besides the status of the MCG, the most interesting information for us is the list of available S3 addresses that we can use to connect to our MCG buckets. We can choose between using the external DNS, which incurs DNS traffic cost, or routing internally inside of our OpenShift cluster.

We can get a more basic overview of the MCG status using the Object Storage dashboard. To access this, log into the OpenShift web console, click Home, and select Dashboards. In the main view, select Object Service in the top navigation bar. This dashboard does not provide connection information for the S3 endpoint, but does offer graphs and runtime information about the S3 backend usage.

Creating an Object Bucket Claim

An Object Bucket Claim (OBC) can be used to request an S3-compatible bucket backend for workloads. When creating an OBC, we get a ConfigMap (CM) and a Secret that together contain all the information our application needs to use the object storage service.

Creating an OBC is as simple as using the NooBaa CLI: noobaa obc create test21obc -n openshift-storage.

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ noobaa obc create test21obc -n openshift-storage
INFO[0000] ✅ Created: ObjectBucketClaim "test21obc" 
INFO[0000] 
INFO[0000] NOTE: 
INFO[0000] - This command has finished applying changes to the cluster.
INFO[0000] - From now on, it only loops and reads the status, to monitor the operator work.
INFO[0000] - You may Ctrl-C at any time to stop the loop and watch it manually.
INFO[0000] 
INFO[0000] OBC Wait Ready: 
INFO[0000] ⏳ OBC "test21obc" Phase is "Pending" 
INFO[0004] ✅ OBC "test21obc" Phase is Bound 
INFO[0004] 
INFO[0004] 
INFO[0004] ✅ Exists: ObjectBucketClaim "test21obc" 
INFO[0004] ✅ Exists: ObjectBucket "obc-openshift-storage-test21obc"
INFO[0004] ✅ Exists: ConfigMap "test21obc" 
INFO[0004] ✅ Exists: Secret "test21obc" 
INFO[0004] ✅ Exists: StorageClass "openshift-storage.noobaa.io"
INFO[0004] ✅ Exists: BucketClass "noobaa-default-bucket-class"
INFO[0004] ✅ Exists: NooBaa "noobaa" 
INFO[0004] ✅ Exists: Service "noobaa-mgmt" 
INFO[0004] ✅ Exists: Secret "noobaa-operator" 
INFO[0005] ✈️ RPC: bucket.read_bucket() Request: {Name:test21obc-9493d838-1fd5-4b0a-8984-faa86b25921f}
INFO[0005] ✅ RPC: bucket.read_bucket() Response OK: took 14.8ms


ObjectBucketClaim info:
  Phase               : Bound
  ObjectBucketClaim   : kubectl get -n openshift-storage objectbucketclaim test21obc
  ConfigMap           : kubectl get -n openshift-storage configmap test21obc
  Secret              : kubectl get -n openshift-storage secret test21obc
  ObjectBucket        : kubectl get objectbucket obc-openshift-storage-test21obc
  StorageClass        : kubectl get storageclass openshift-storage.noobaa.io
  BucketClass         : kubectl get -n openshift-storage bucketclass noobaa-default-bucket-class
 
Connection info:
  BUCKET_HOST         : 10.0.140.7
  BUCKET_NAME         : test21obc-9493d838-1fd5-4b0a-8984-faa86b25921f
  BUCKET_PORT         : 30347
  AWS_ACCESS_KEY_ID   : C82xxxxxxxxxxxxxxxx6X
  AWS_SECRET_ACCESS_KEY  : 5tZKxxxxxxxxxxxxxxxxxxxxxDbdkStz
 
Shell commands:
  AWS S3 Alias        : alias s3='AWS_ACCESS_KEY_ID=C82xxxxxxxxxxxxxxxx6X AWS_SECRET_ACCESS_KEY=5tZKxxxxxxxxxxxxxxxxxxxxxDbdkStz aws s3 --no-verify-ssl --endpoint-url https://10.0.140.7:30347'
 
Bucket status:
  Name                : test21obc-9493d838-1fd5-4b0a-8984-faa86b25921f
  Type                : REGULAR
  Mode                : OPTIMAL
  ResiliencyStatus    : OPTIMAL
  QuotaStatus         : QUOTA_NOT_SET
  Num Objects         : 0
  Data Size           : 0.000 B
  Data Size Reduced   : 0.000 B
  Data Space Avail    : 1.000 PB
 

Mayurs-MacBook-Pro:4.3 mshetty$
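
The Connection info section above is everything an S3 client needs. A small sketch that assembles the endpoint URL from the BUCKET_HOST and BUCKET_PORT values reported in this walkthrough:

```shell
# Values as reported under "Connection info" above.
BUCKET_HOST=10.0.140.7
BUCKET_PORT=30347
ENDPOINT="https://${BUCKET_HOST}:${BUCKET_PORT}"
echo "$ENDPOINT"
# With the credentials exported, an S3 client can then target it, e.g.:
#   aws s3 ls --no-verify-ssl --endpoint-url "$ENDPOINT"
```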

The NooBaa CLI has created the necessary configuration inside NooBaa and has informed OpenShift about the new OBC:

oc get obc -n openshift-storage

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ oc get obc -n openshift-storage
NAME           AGE
test21obc   2m8s

Next, view the OBC details:

oc get obc test21obc -o yaml -n openshift-storage

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ oc get obc test21obc -o yaml -n openshift-storage
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  creationTimestamp: "2020-02-07T21:36:10Z"
  finalizers:
  - objectbucket.io/finalizer
  generation: 2
  labels:
       app: noobaa
       bucket-provisioner: openshift-storage.noobaa.io-obc
       noobaa-domain: openshift-storage.noobaa.io
  name: test21obc
  namespace: openshift-storage
  resourceVersion: "84522"
  selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc
  uid: 133ab2c0-6fa0-458d-a5fd-60087a3ac18f
spec:
  ObjectBucketName: obc-openshift-storage-test21obc
  bucketName: test21obc-9493d838-1fd5-4b0a-8984-faa86b25921f
  generateBucketName: test21obc
  storageClassName: openshift-storage.noobaa.io
status:
  phase: Bound
Mayurs-MacBook-Pro:4.3 mshetty$

Inside of your openshift-storage namespace, you will now find the ConfigMap and the Secret to use this OBC. The ConfigMap and the Secret have the same name as the OBC:

Mayurs-MacBook-Pro:4.3 mshetty$ oc get -n openshift-storage secret test21obc -o yaml
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: QzgydxxxxxxxxxxxxxxxxxmNlg=
  AWS_SECRET_ACCESS_KEY: NXRxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxka1N0eg==
kind: Secret
metadata:
  creationTimestamp: "2020-02-07T21:36:10Z"
  finalizers:
  - objectbucket.io/finalizer
  labels:
       app: noobaa
       bucket-provisioner: openshift-storage.noobaa.io-obc
       noobaa-domain: openshift-storage.noobaa.io
  name: test21obc
  namespace: openshift-storage
  ownerReferences:
  - apiVersion: objectbucket.io/v1alpha1
       blockOwnerDeletion: true
       controller: true
       kind: ObjectBucketClaim
       name: test21obc
       uid: 133ab2c0-6fa0-458d-a5fd-60087a3ac18f
  resourceVersion: "84517"
  selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc
  uid: 45dfb1c1-1ef9-4c47-b6ea-968ddc13dc7f
type: Opaque
Mayurs-MacBook-Pro:4.3 mshetty$ oc get -n openshift-storage cm test21obc -o yaml
 
apiVersion: v1
data:
  BUCKET_HOST: 10.0.140.7
  BUCKET_NAME: test21obc-9493d838-1fd5-4b0a-8984-faa86b25921f
  BUCKET_PORT: "30347"
  BUCKET_REGION: ""
  BUCKET_SUBREGION: ""
kind: ConfigMap
metadata:
  creationTimestamp: "2020-02-07T21:36:10Z"
  finalizers:
  - objectbucket.io/finalizer
  labels:
       app: noobaa
       bucket-provisioner: openshift-storage.noobaa.io-obc
       noobaa-domain: openshift-storage.noobaa.io
  name: test21obc
  namespace: openshift-storage
  ownerReferences:
  - apiVersion: objectbucket.io/v1alpha1
       blockOwnerDeletion: true
       controller: true
       kind: ObjectBucketClaim
       name: test21obc
       uid: 133ab2c0-6fa0-458d-a5fd-60087a3ac18f
  resourceVersion: "84518"
  selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc
  uid: 377e26bc-4216-4fb8-89b3-6c4f4557fc53

The Secret provides the S3 access credentials, while the ConfigMap contains the S3 endpoint information for our application.
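
The Secret's data fields are base64-encoded, so they must be decoded before use. A sketch using an illustrative stand-in value (not a real key); the commented oc query shows where the encoded value would come from:

```shell
# In the cluster you would pull the encoded value with, for example:
#   oc get -n openshift-storage secret test21obc -o jsonpath='{.data.AWS_ACCESS_KEY_ID}'
# Here we decode a stand-in string instead of a real key.
encoded='QUtJQUVYQU1QTEU='
echo "$encoded" | base64 -d
```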

Using an OBC inside a container

In this section we will see how to create an OBC using a YAML file and use the provided S3 configuration in an example application.

To deploy the OBC and the example application we apply this YAML file:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: obc-test
spec:
  generateBucketName: "obc-test-noobaa"
  storageClassName: openshift-storage.noobaa.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: obc-test
  labels:
    app: obc-test
spec:
  template:
    metadata:
      labels:
        app: obc-test
    spec:
      restartPolicy: OnFailure
      containers:
        - image: mesosphere/aws-cli:latest
          command: ["sh"]
          args:
            - '-c'
            - 'set -x && s3cmd --no-check-certificate --host $BUCKET_HOST:$BUCKET_PORT --host-bucket $BUCKET_HOST:$BUCKET_PORT du'
          name: obc-test
          env:
            - name: BUCKET_NAME
              valueFrom:
                configMapKeyRef:
                  name: obc-test
                  key: BUCKET_NAME
            - name: BUCKET_HOST
              valueFrom:
                configMapKeyRef:
                  name: obc-test
                  key: BUCKET_HOST
            - name: BUCKET_PORT
              valueFrom:
                configMapKeyRef:
                  name: obc-test
                  key: BUCKET_PORT
            - name: AWS_DEFAULT_REGION
              valueFrom:
                configMapKeyRef:
                  name: obc-test
                  key: BUCKET_REGION
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: obc-test
                  key: AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: obc-test
                  key: AWS_SECRET_ACCESS_KEY

The first part creates an OBC, which in turn creates a ConfigMap and a Secret with the same name as the OBC (obc-test). The second part of the file (after the `---` separator) creates a Job that deploys a container with s3cmd pre-installed. The Job runs s3cmd with the appropriate command-line arguments; in this case, s3cmd reports the current disk usage of our S3 endpoint and then exits, which marks our Pod as Completed. Let's try this out.
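As an aside, instead of wiring up each key individually, Kubernetes can inject every key from the ConfigMap and the Secret at once with envFrom. A sketch of what the container spec could look like with that approach (note that envFrom keeps the original key names, so the BUCKET_REGION-to-AWS_DEFAULT_REGION remapping used above would no longer happen automatically):

```yaml
# Sketch: inject all keys from the OBC's ConfigMap and Secret at once.
# Keys keep their original names (BUCKET_HOST, AWS_ACCESS_KEY_ID, ...),
# so any remapping must be handled by the application itself.
containers:
  - name: obc-test
    image: mesosphere/aws-cli:latest
    envFrom:
      - configMapRef:
          name: obc-test
      - secretRef:
          name: obc-test
```

This is more concise, at the cost of less explicit control over which variables the container sees.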

Deploy the manifest:

Mayurs-MacBook-Pro:4.3 mshetty$ curl -s https://raw.githubusercontent.com/red-hat-storage/ocs-training/master/ocp4ocs4/obc_app_example.yaml | oc apply -f -
namespace/obc-test created
objectbucketclaim.objectbucket.io/obc-test created
job.batch/obc-test created

Afterward, watch the Pod be created, run, and finally be marked Completed; note that your Pod name will differ. Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ oc get pods -n obc-test -l app=obc-test
NAME             READY   STATUS      RESTARTS   AGE
obc-test-j7pvk   0/1     Completed   0          69s

Then you can check the obc-test Pod logs for the contents of the S3 bucket using the command below; in this case there are zero objects in the bucket.

Note that fetching the obc-test logs via the oc command does not work correctly here; use the kubectl command instead.

kubectl logs -n obc-test -l app=obc-test

Example output:

+ s3cmd --no-check-certificate --host 10.0.140.19:30052 --host-bucket 10.0.140.19:30052 du
0        0 objects s3://obc-test-noobaa-784461cb-1e77-4ccf-b62d-007a6ae3ef15/
--------
0        Total

The output shows that we can access one bucket, which is currently empty. This proves that the access credentials from the OBC work and are set up correctly inside the container.

Most applications natively support reading the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, but you will have to figure out how to set the host and bucket name for each application. In the example above, we used s3cmd's command-line flags for this.
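For example, with Python's boto3 the endpoint is passed explicitly when the client is created. A minimal sketch, assuming the same BUCKET_HOST, BUCKET_PORT, and BUCKET_NAME environment variables that the OBC's ConfigMap injects (the fallback values here are the sample values from the earlier output, for illustration only):

```python
import os

# In a Pod these come from the OBC's ConfigMap/Secret; the defaults are
# the sample values from the output above, for illustration only.
os.environ.setdefault("BUCKET_HOST", "10.0.140.7")
os.environ.setdefault("BUCKET_PORT", "30347")
os.environ.setdefault("BUCKET_NAME", "test21obc-9493d838-1fd5-4b0a-8984-faa86b25921f")

# s3cmd was pointed at host:port; boto3 expects a full endpoint URL instead.
endpoint_url = "https://{}:{}".format(os.environ["BUCKET_HOST"], os.environ["BUCKET_PORT"])
print(endpoint_url)

# With boto3 installed, the client would be created like this
# (verify=False mirrors s3cmd's --no-check-certificate flag):
#   import boto3
#   s3 = boto3.client("s3", endpoint_url=endpoint_url, verify=False)
#   s3.list_objects_v2(Bucket=os.environ["BUCKET_NAME"])
```

boto3 picks up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment automatically, so only the endpoint and bucket name need explicit wiring.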


Conclusion

In this article, we learned how to validate OpenShift cluster readiness, and how to deploy and configure OpenShift Container Storage for hybrid workloads running on AWS. The OpenShift Container Storage operator is an open source project; we encourage participation through issues and pull requests on GitHub.

Mayur Shetty


Mayur Shetty is a Senior Solution Architect within Red Hat’s Global Partners and Alliances organization. He has been with Red Hat for four years, where he was also part of the OpenStack Tiger Team. He previously worked as a Senior Solutions Architect at Seagate Technology driving solutions with OpenStack Swift, Ceph, and other Object Storage software. Mayur also led ISV Engineering at IBM creating solutions around Oracle database, and IBM Systems and Storage. He has been in the industry for almost 20 years, and has worked on Sun Cluster software, and the ISV engineering teams at Sun Microsystems.

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

Ryan Niksch


Ryan Niksch is a Partner Solutions Architect focusing on application platforms, hybrid application solutions, and modernization. Ryan has worn many hats in his life and has a passion for tinkering and a desire to leave everything he touches a little better than when he found it.