AWS Architecture Blog

Deploying IBM Cloud Pak for Integration on Red Hat OpenShift Service on AWS

Customers across many industries use IBM integration software, such as IBM MQ, DataPower, API Connect, and App Connect, as the backbone that integrates and orchestrates their business-critical workloads.

These customers often tell Amazon Web Services (AWS) that they want to migrate their applications to the AWS Cloud as part of their business strategy: to lower costs, gain agility, and innovate faster.

In this blog, we explore how customers looking to run IBM software on AWS can use Red Hat OpenShift Service on AWS (ROSA) to deploy IBM Cloud Pak for Integration (CP4I) with modernized versions of IBM integration products.

ROSA is a fully managed OpenShift service, jointly supported by AWS and Red Hat and operated by Red Hat site reliability engineers, so customers do not have to manage the lifecycle of Red Hat OpenShift Container Platform (OCP) clusters.

This post explains the steps to:

  • Create a ROSA cluster
  • Configure persistent storage
  • Install CP4I and the IBM MQ operator

Cloud Pak for Integration architecture

In this blog, we implement a highly available ROSA cluster spanning three Availability Zones (AZs), with three master nodes, three infrastructure nodes, and three worker nodes.

Review the AWS documentation for Regions and AZs, and the Regions where ROSA is available, to choose the best Region for your deployment.

Figure 1 demonstrates the solution’s architecture.

Figure 1. IBM Cloud Pak for Integration on ROSA architecture

In our scenario, we are building a public ROSA cluster, with an internet-facing Classic Load Balancer providing access to Ports 80 and 443. Consider using a ROSA private cluster when you are deploying CP4I in your AWS account.
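
If you opt for a private cluster, the ROSA CLI can create one over AWS PrivateLink. Here is a minimal sketch, assuming you already have private subnets in an existing VPC (the subnet IDs below are placeholders):

  # Sketch only: creates a PrivateLink ROSA cluster in existing private subnets
  rosa create cluster --cluster-name <your_cluster_name> --sts \
    --private-link \
    --subnet-ids <your_private_subnet_ids> \
    --multi-az \
    --mode auto --yes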

We are using Amazon Elastic File System (Amazon EFS) and Amazon Elastic Block Store (Amazon EBS) for our cluster’s persistent storage. Review the IBM CP4I documentation for information about supported AWS storage options.

Before deploying CP4I for production workloads, review the AWS prerequisites for ROSA and the AWS security best practices in the IAM documentation to protect your AWS account and resources.

Cost

You are responsible for the cost of the AWS services used when deploying CP4I in your AWS account. For cost estimates, see the pricing pages for each AWS service you use.

Prerequisites

Before getting started, review the following prerequisites:

  • An AWS account with permissions to enable ROSA (step 1)
  • A Red Hat account, to retrieve your OpenShift Cluster Manager API token (step 13)
  • An IBM entitlement key from the IBM container software library (step 36)

Installation steps

To deploy CP4I on ROSA, complete the following steps:

        1. From the AWS ROSA console, click Enable ROSA to activate the service on your AWS account (Figure 2).

          Figure 2. Enable ROSA on your AWS account

        2. Create an AWS Cloud9 environment to run your CP4I installation. We used a t3.small instance type with Amazon Linux 2023.
        3. When the environment comes up, close the Welcome tab and open a new Terminal tab to install the required packages:
          # Install the latest AWS CLI v2
          curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
          unzip awscliv2.zip
          sudo ./aws/install
          
          # Install the ROSA CLI
          wget https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/rosa-linux.tar.gz
          sudo tar -xvzf rosa-linux.tar.gz -C /usr/local/bin/
          
          # Download and install the OpenShift CLI (oc) matching your ROSA version
          rosa download oc
          sudo tar -xvzf openshift-client-linux.tar.gz -C /usr/local/bin/
          
          # Utilities used in later steps
          sudo yum -y install jq gettext
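          Optionally, confirm the tools are available before continuing:
          rosa version
          oc version --client
          aws --version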
        4. Ensure the ELB service-linked role exists in your AWS account:
          aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || \
            aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
        5. Create an IAM policy named cp4i-installer-permissions with the following permissions:
          {
              "Version": "2012-10-17",
              "Statement": [
                  {
                      "Effect": "Allow",
                      "Action": [
                          "autoscaling:*",
                          "cloudformation:*",
                          "cloudwatch:*",
                          "ec2:*",
                          "elasticfilesystem:*",
                          "elasticloadbalancing:*",
                          "events:*",
                          "iam:*",
                          "kms:*",
                          "logs:*",
                          "route53:*",
                          "s3:*",
                          "servicequotas:GetRequestedServiceQuotaChange",
                          "servicequotas:GetServiceQuota",
                          "servicequotas:ListServices",
                          "servicequotas:ListServiceQuotas",
                          "servicequotas:RequestServiceQuotaIncrease",
                          "sts:*",
                          "support:*",
                          "tag:*"
                      ],
                      "Resource": "*"
                  }
              ]
          }
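          If you prefer the AWS CLI, the same policy can be created from a file (the file name below is just an example):
          aws iam create-policy --policy-name cp4i-installer-permissions \
            --policy-document file://cp4i-installer-permissions.json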
        6. Create an IAM role for the AWS Cloud9 instance:
          • Select AWS service and EC2, then click Next: Permissions.
          • Select the cp4i-installer-permissions policy, and click Next.
          • Name it cp4i-installer, and click Create role.
        7. From your AWS Cloud9 IDE, click the grey circle button on the top right, and select Manage EC2 Instance (Figure 3).

          Figure 3. Manage the AWS Cloud9 EC2 instance

        8. On the Amazon EC2 console, select the AWS Cloud9 instance, then choose Actions / Security / Modify IAM Role.
        9. Choose cp4i-installer from the IAM Role drop-down, and click Update IAM role (Figure 4).

          Figure 4. Attach the IAM role to your workspace

        10. Update the IAM settings for your AWS Cloud9 workspace:
          # C9_PID is the ID of your AWS Cloud9 environment (shown in the Cloud9 console URL)
          aws cloud9 update-environment --environment-id $C9_PID --managed-credentials-action DISABLE
          rm -vf ${HOME}/.aws/credentials
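          Confirm that the workspace now uses the cp4i-installer role; the returned ARN should reference that role:
          aws sts get-caller-identity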
        11. Configure the following environment variables:
          export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
          # Retrieve the Region from the EC2 instance metadata service (IMDSv2)
          TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
          export AWS_REGION=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/placement/region)
          export CLUSTER_NAME=cp4irosa
          export ROSA_VERSION=4.14.27
        12. Configure the AWS CLI default Region:
          aws configure set default.region ${AWS_REGION}
        13. Navigate to the Red Hat Hybrid Cloud Console, and copy your OpenShift Cluster Manager API Token.
        14. Use the token and log in to your Red Hat account:
          rosa login --token=<your_openshift_api_token>
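          You can confirm that the login succeeded with:
          rosa whoami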
        15. Verify that your AWS account satisfies the quotas to deploy your cluster:
          rosa verify quota
        16. When deploying ROSA for the first time, create the account-wide roles:
          rosa create account-roles --mode auto --yes
        17. Create your ROSA cluster:
          rosa create cluster --cluster-name $CLUSTER_NAME --sts \
            --multi-az \
            --region $AWS_REGION \
            --version $ROSA_VERSION \
            --compute-machine-type m6i.4xlarge \
            --replicas 3 \
            --availability-zones ${AWS_REGION}a,${AWS_REGION}b,${AWS_REGION}c \
            --operator-roles-prefix $CLUSTER_NAME \
            --mode auto --yes \
            --watch
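          Cluster creation takes approximately 40 minutes. If the --watch output is interrupted, you can re-attach to the installation logs at any time:
          rosa logs install -c $CLUSTER_NAME --watch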
        18. Once your cluster is ready, create a cluster-admin user (it takes approximately 5 minutes):
          rosa create admin --cluster=$CLUSTER_NAME
        19. Log in to your cluster using the cluster-admin credentials. You can copy the command from the output of the previous step. For example:
          oc login https://<your_cluster_api_address>:6443 \
            --username cluster-admin \
            --password <your_cluster-admin_password>
        20. Create an IAM policy allowing ROSA to use Amazon EFS:
          cat << EOF > $PWD/efs-policy.json
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Action": [
                  "elasticfilesystem:DescribeAccessPoints",
                  "elasticfilesystem:DescribeFileSystems",
                  "elasticfilesystem:DescribeMountTargets",
                  "elasticfilesystem:TagResource",
                  "ec2:DescribeAvailabilityZones"
                ],
                "Resource": "*"
              },
              {
                "Effect": "Allow",
                "Action": [
                  "elasticfilesystem:CreateAccessPoint"
                ],
                "Resource": "*",
                "Condition": {
                  "StringLike": {
                    "aws:RequestTag/efs.csi.aws.com/cluster": "true"
                  }
                }
              },
              {
                "Effect": "Allow",
                "Action": "elasticfilesystem:DeleteAccessPoint",
                "Resource": "*",
                "Condition": {
                  "StringEquals": {
                    "aws:ResourceTag/efs.csi.aws.com/cluster": "true"
                  }
                }
              }
            ]
          }
          EOF
           
          POLICY=$(aws iam create-policy --policy-name "${CLUSTER_NAME}-rosa-efs-csi" \
             --policy-document file://$PWD/efs-policy.json \
             --query 'Policy.Arn' --output text) || \
             POLICY=$(aws iam list-policies \
             --query "Policies[?PolicyName=='${CLUSTER_NAME}-rosa-efs-csi'].Arn" \
             --output text)
        21. Create an IAM trust policy:
          export OIDC_PROVIDER=$(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer| sed -e "s/^https:\/\///")
          cat <<EOF > $PWD/TrustPolicy.json
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                  "StringEquals": {
                    "${OIDC_PROVIDER}:sub": [
                      "system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-operator",
                      "system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa"
                    ]
                  }
                }
              }
            ]
          }
          EOF
        22. Create an IAM role with the previously created policies:
          ROLE=$(aws iam create-role \
            --role-name "${CLUSTER_NAME}-aws-efs-csi-operator" \
            --assume-role-policy-document file://$PWD/TrustPolicy.json \
            --query "Role.Arn" --output text)
          aws iam attach-role-policy \
            --role-name "${CLUSTER_NAME}-aws-efs-csi-operator" \
            --policy-arn $POLICY
        23. Create an OpenShift secret that provides the Amazon EFS CSI driver with the IAM role it should assume:
          cat <<EOF | oc apply -f -
          apiVersion: v1
          kind: Secret
          metadata:
            name: aws-efs-cloud-credentials
            namespace: openshift-cluster-csi-drivers
          stringData:
            credentials: |-
              [default]
              role_arn = $ROLE
              web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
          EOF
        24. Install the Amazon EFS CSI driver operator:
          cat <<EOF | oc create -f -
          apiVersion: operators.coreos.com/v1
          kind: OperatorGroup
          metadata:
            generateName: openshift-cluster-csi-drivers-
            namespace: openshift-cluster-csi-drivers
          ---
          apiVersion: operators.coreos.com/v1alpha1
          kind: Subscription
          metadata:
            labels:
              operators.coreos.com/aws-efs-csi-driver-operator.openshift-cluster-csi-drivers: ""
            name: aws-efs-csi-driver-operator
            namespace: openshift-cluster-csi-drivers
          spec:
            channel: stable
            installPlanApproval: Automatic
            name: aws-efs-csi-driver-operator
            source: redhat-operators
            sourceNamespace: openshift-marketplace
          EOF
        25. Track the operator installation:
          watch oc get deployment aws-efs-csi-driver-operator \
           -n openshift-cluster-csi-drivers
        26. Install the AWS EFS CSI driver:
          cat <<EOF | oc apply -f -
          apiVersion: operator.openshift.io/v1
          kind: ClusterCSIDriver
          metadata:
            name: efs.csi.aws.com
          spec:
            managementState: Managed
          EOF
        27. Wait until the CSI driver is running:
          watch oc get daemonset aws-efs-csi-driver-node \
           -n openshift-cluster-csi-drivers
        28. Create a rule allowing inbound NFS traffic from your cluster’s VPC Classless Inter-Domain Routing (CIDR) range:
          NODE=$(oc get nodes --selector=node-role.kubernetes.io/worker \
            -o jsonpath='{.items[0].metadata.name}')
           
          VPC=$(aws ec2 describe-instances \
            --filters "Name=private-dns-name,Values=$NODE" \
            --query 'Reservations[*].Instances[*].{VpcId:VpcId}' \
            --region $AWS_REGION \
            | jq -r '.[0][0].VpcId')
           
          CIDR=$(aws ec2 describe-vpcs \
            --filters "Name=vpc-id,Values=$VPC" \
            --query 'Vpcs[*].CidrBlock' \
            --region $AWS_REGION \
            | jq -r '.[0]')
           
          SG=$(aws ec2 describe-instances --filters \
            "Name=private-dns-name,Values=$NODE" \
            --query 'Reservations[*].Instances[*].{SecurityGroups:SecurityGroups}' \
            --region $AWS_REGION \
            | jq -r '.[0][0].SecurityGroups[0].GroupId')
           
          aws ec2 authorize-security-group-ingress \
          --group-id $SG \
          --protocol tcp \
          --port 2049 \
          --cidr $CIDR | jq .
        29. Create an Amazon EFS file system:
          EFS=$(aws efs create-file-system --creation-token efs-token-1 \
             --region ${AWS_REGION} \
             --encrypted | jq -r '.FileSystemId')
           
          SUBNETS=$( aws ec2 describe-subnets \
            --filters Name=vpc-id,Values=$VPC Name='tag:kubernetes.io/role/internal-elb',Values='*' \
            --query 'Subnets[*].{SubnetId:SubnetId}' \
            --region $AWS_REGION \
            | jq -r '.[].SubnetId')
           
          for subnet in ${SUBNETS[@]}; do
            aws efs create-mount-target --file-system-id $EFS \
              --subnet-id $subnet --security-groups $SG \
              --region $AWS_REGION
          done
        30. Create an Amazon EFS storage class:
          cat <<EOF | oc apply -f -
          kind: StorageClass
          apiVersion: storage.k8s.io/v1
          metadata:
            name: efs-sc
          provisioner: efs.csi.aws.com
          parameters:
            provisioningMode: efs-ap
            fileSystemId: $EFS
            directoryPerms: "777"
            gidRangeStart: "1000"
            gidRangeEnd: "7000000"
            basePath: "/cp4irosa"
          EOF
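          Optionally, validate the new storage class with a throwaway PersistentVolumeClaim before installing CP4I (the claim name below is just an example):
          cat <<EOF | oc apply -f -
          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: efs-sc-test
            namespace: default
          spec:
            accessModes:
              - ReadWriteMany
            storageClassName: efs-sc
            resources:
              requests:
                storage: 1Gi
          EOF
          
          # The claim should reach the Bound state; delete it after checking
          oc get pvc efs-sc-test -n default
          oc delete pvc efs-sc-test -n default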
        31. Create the following IAM Policy to install the cert-manager Operator for Red Hat OpenShift:
          cat <<EOF > $PWD/cert-manager-policy.json
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Action": "route53:GetChange",
                "Resource": "arn:aws:route53:::change/*"
              },
              {
                "Effect": "Allow",
                "Action": [
                  "route53:ChangeResourceRecordSets",
                  "route53:ListResourceRecordSets"
                ],
                "Resource": "arn:aws:route53:::hostedzone/*"
              },
              {
                "Effect": "Allow",
                "Action": "route53:ListHostedZonesByName",
                "Resource": "*"
              }
            ]
          }
          EOF
           
          CERTMANAGER_POLICY=$(aws iam create-policy --policy-name "${CLUSTER_NAME}-cert-manager-policy" \
             --policy-document file://$PWD/cert-manager-policy.json \
             --query 'Policy.Arn' --output text)
        32. Create a trust policy:
          cat <<EOF > $PWD/CertManagerTrustPolicy.json
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                  "StringEquals": {
                    "${OIDC_PROVIDER}:sub": [
                      "system:serviceaccount:cert-manager:cert-manager"
                    ]
                  }
                }
              }
            ]
          }
          EOF
        33. Create an IAM role for the cert-manager Operator and attach the permissions policy created in step 31. Take note of the role ARN; you will use it in step 43.
          CERTMANAGER_ROLE=$(aws iam create-role \
            --role-name "${CLUSTER_NAME}-cert-manager-operator" \
            --assume-role-policy-document file://$PWD/CertManagerTrustPolicy.json \
            --query "Role.Arn" --output text)
          
          echo $CERTMANAGER_ROLE
          
          aws iam attach-role-policy \
             --role-name "${CLUSTER_NAME}-cert-manager-operator" \
             --policy-arn $CERTMANAGER_POLICY
        34. Add the IBM catalog sources to OpenShift:
          cat <<EOF | oc apply -f -
          apiVersion: operators.coreos.com/v1alpha1
          kind: CatalogSource
          metadata:
            name: ibm-operator-catalog
            namespace: openshift-marketplace
            annotations:
              olm.catalogImageTemplate: "icr.io/cpopen/ibm-operator-catalog:v{kube_major_version}.{kube_minor_version}"
          spec:
            displayName: IBM Operator Catalog
            publisher: IBM
            sourceType: grpc
            image: icr.io/cpopen/ibm-operator-catalog:latest
            updateStrategy:
              registryPoll:
                interval: 45m 
          EOF
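          Before continuing, you can confirm that the IBM catalog source is ready; the command below should eventually print READY:
          oc get catalogsource ibm-operator-catalog \
            -n openshift-marketplace \
            -o jsonpath='{.status.connectionState.lastObservedState}{"\n"}'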
        35. Get the console URL of your ROSA cluster:
          rosa describe cluster --cluster=$CLUSTER_NAME | grep Console
        36. Copy your entitlement key from the IBM container software library.
        37. Log in to your ROSA web console and navigate to Workloads > Secrets.
        38. Set the project to openshift-config; locate and click pull-secret (Figure 5).

          Figure 5. Edit the pull-secret entry

        39. Expand Actions and click Edit Secret.
        40. Scroll to the end of the page, and click Add credentials (Figure 6):
          • Registry server address: cp.icr.io
          • Username: cp
          • Password: your_ibm_entitlement_key

            Figure 6. Configure your IBM entitlement key secret

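          Alternatively, the credentials can be added from your terminal; a sketch, assuming your entitlement key is exported as IBM_ENTITLEMENT_KEY (a variable name chosen for this example):
          # Extract the current pull secret, merge in the cp.icr.io credentials, and upload it back
          oc get secret/pull-secret -n openshift-config \
            -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d > pull-secret.json
          jq --arg auth "$(echo -n "cp:${IBM_ENTITLEMENT_KEY}" | base64 -w0)" \
            '.auths["cp.icr.io"] = {"auth": $auth}' pull-secret.json > pull-secret-new.json
          oc set data secret/pull-secret -n openshift-config \
            --from-file=.dockerconfigjson=pull-secret-new.json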
        41. Next, navigate to Operators > OperatorHub. On the OperatorHub page, use the search filter to locate the tile for the cert-manager Operator for Red Hat OpenShift.
        42. Choose cert-manager Operator for Red Hat OpenShift and choose Install.
        43. In the role ARN field, enter the ARN of the role created in step 33. Set the Update approval option to Automatic, leave all other values as the default, and choose Install.
        44. Next, install the operators in the following list on all namespaces, leaving all default options:
          • IBM Cloud Pak foundational services – select Channel v4.6
          • IBM Cloud Pak for Integration – select Channel v7.2
          • IBM MQ – select Channel v3.1

          Figure 7 shows the installation of the IBM Cloud Pak for Integration Operator.

          Figure 7. Install CP4I operators

        45. Create the following namespaces for CP4I:
          oc new-project ibm-common-services
          oc new-project integration
          oc new-project ibm-mq
        46. Review the IBM documentation to select the appropriate license for your deployment.
        47. Deploy the platform UI:
          cat <<EOF | oc apply -f -
          apiVersion: integration.ibm.com/v1beta1
          kind: PlatformNavigator
          metadata:
            name: integration-quickstart
            labels:
              backup.integration.ibm.com/component: platformnavigator
            namespace: integration
          spec:
            license:
              accept: true
              license: L-VTPK-22YZPK
            replicas: 1
            version: 2023.4.1
          EOF
        48. Track the deployment status, which takes approximately 40 minutes:
          watch oc get platformnavigator -n integration
        49. Create an IBM MQ queue manager instance:
          cat <<EOF | oc apply -f -
          apiVersion: mq.ibm.com/v1beta1
          kind: QueueManager
          metadata:
            name: qmgr-inst01
            namespace: ibm-mq
          spec:
            license:
              accept: true
              license: L-RJON-CD3JKX
              use: NonProduction
            web:
              enabled: true
            template:
              pod:
                containers:
                  - env:
                      - name: MQSNOAUT
                        value: 'yes'
                    name: qmgr
            queueManager:
              resources:
                limits:
                  cpu: 500m
                requests:
                  cpu: 500m
              availability:
                type: SingleInstance
              storage:
                queueManager:
                  type: persistent-claim
                  class: gp3
                  deleteClaim: true
                  size: 2Gi
                defaultClass: gp3
              name: CP4IQMGR
            version: 9.3.0.1-r1
          EOF
        50. Check the status of the queue manager:
          oc describe queuemanager qmgr-inst01 -n ibm-mq
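          The queue manager is ready when its status phase reports Running; for a one-line check:
          oc get queuemanager qmgr-inst01 -n ibm-mq \
            -o jsonpath='{.status.phase}{"\n"}'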

Validation steps

Let’s verify our installation!

      1. Run the following commands to retrieve the CP4I console URL and administrator password:
        oc describe platformnavigator integration-quickstart \
          -n integration | grep "^.*UI Endpoint" | xargs | cut -d ' ' -f3
        oc get secret platform-auth-idp-credentials \
          -n ibm-common-services -o jsonpath='{.data.admin_password}' \
          | base64 -d && echo
      2. Using the information from the previous step, access your CP4I web console.
      3. Select the option to authenticate with the IBM provided credentials (admin only), and log in with your admin password.
      4. From the CP4I console, you can manage users and groups allowed to access the platform, install new operators, and view the components that are installed.
      5. Click qmgr-inst01 in the Messaging widget to bring up your IBM MQ setup (Figure 8).

        CP4I console features

        Figure 8. CP4I console features

      6. In the Welcome to IBM MQ panel, click the CP4IQMGR queue manager. This shows the queue manager’s state and resources, and allows you to configure your instance (Figure 9).

        Queue manager details

        Figure 9. Queue manager details

Congratulations! You have successfully deployed IBM CP4I on Red Hat OpenShift Service on AWS.

Post installation

Review the following topics when installing CP4I in production environments:

Cleanup

Connect to your AWS Cloud9 workspace, and run the following steps to delete the CP4I installation, including the ROSA cluster. This avoids incurring future charges on your AWS account:

      EFS_EF_ID=$(aws efs describe-file-systems \
        --creation-token efs-token-1 \
        --query 'FileSystems[0].FileSystemId' \
        --output text)
      MOUNT_TARGETS=$(aws efs describe-mount-targets --file-system-id $EFS_EF_ID --query 'MountTargets[*].MountTargetId' --output text)
      for mt in ${MOUNT_TARGETS[@]}; do
        aws efs delete-mount-target --mount-target-id $mt
      done
      aws efs delete-file-system --file-system-id $EFS_EF_ID
      
      rosa delete cluster -c $CLUSTER_NAME --yes --region $AWS_REGION

To monitor your cluster uninstallation logs, run:

      rosa logs uninstall -c $CLUSTER_NAME --watch

Once the cluster is uninstalled, remove the operator roles and OIDC provider, as indicated in the output of the rosa delete cluster command. For example:

      rosa delete operator-roles -c 1vepskr2ms88ki76k870uflun2tjpvfs --mode auto --yes
      rosa delete oidc-provider -c 1vepskr2ms88ki76k870uflun2tjpvfs --mode auto --yes
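
The IAM roles and policies created during the installation (for the Amazon EFS CSI driver and the cert-manager Operator) are not deleted by the commands above. A cleanup sketch, assuming CLUSTER_NAME is still set and the roles and policies use the names from the installation steps:

      for role in "${CLUSTER_NAME}-aws-efs-csi-operator" "${CLUSTER_NAME}-cert-manager-operator"; do
        # Detach and delete every policy attached to the role, then the role itself
        for arn in $(aws iam list-attached-role-policies --role-name $role \
          --query 'AttachedPolicies[*].PolicyArn' --output text); do
          aws iam detach-role-policy --role-name $role --policy-arn $arn
          aws iam delete-policy --policy-arn $arn
        done
        aws iam delete-role --role-name $role
      done

The cp4i-installer role and its cp4i-installer-permissions policy can be removed the same way once you delete the AWS Cloud9 environment.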

Conclusion

This post explored how to deploy CP4I on ROSA. We also demonstrated how customers can take full advantage of a managed OpenShift service and further modernize their application stacks by using AWS managed services, like ROSA, for their application deployments.

If you are interested in learning more about ROSA, take part in the AWS ROSA Immersion Workshop.

Check out the blog post Running IBM MQ on AWS using High-performance Amazon FSx for NetApp ONTAP to learn how to use Amazon FSx for NetApp ONTAP for distributed storage and high availability with IBM MQ.

For more information about getting started with IBM Cloud Pak deployments, visit the AWS Marketplace for new offerings.

Further reading

Eduardo Monich Fronza

Eduardo Monich Fronza is a Partner Solutions Architect at AWS. His experience includes cloud, solutions architecture, application platforms, containers, workload modernization, and hybrid solutions. In his current role, Eduardo helps AWS partners and customers in their cloud adoption journey.

Brian Olmeda

Brian Olmeda is a Senior Customer Success Manager Architect at IBM. He is experienced in architecting, designing, and implementing hybrid cloud platforms through automation. In his current role, he helps IBM customers as a technical expert, ensuring their business goals are met with IBM software and services.

Sankar Cherukuri

Sankar Cherukuri is an AWS Partner Solutions Architect based in Singapore. His experience includes solution architecture, cloud migration, and modernization. In his current role, he helps AWS partners with technical strategy, enablement, and architectural guidance that bring value to end customers.