Measuring portability time objective as a metric for migrating to Red Hat OpenShift Service on AWS (ROSA)

This post was co-written by Mark Taylor, OpenShift Systems Engineer, IBM Consulting; Ian Packer, Chief AWS Architect, IBM Consulting; and Arnaud Lauer, Partner Solutions Architect, AWS.

Customers are putting greater focus on portability as part of their cloud adoption strategies. The Red Hat OpenShift Service on AWS (ROSA) platform provides an agile, flexible platform that enables application portability. It is important to test portability; our consulting partner IBM Consulting refers to this metric as the portability time objective (PTO).

The portability time objective (PTO) is defined as the maximum amount of time that is acceptable to move an application and its data. The PTO influences how applications are designed, the platform capabilities required, and the operational setup. Depending on the service, the PTO and related service-level KPIs may be measured in seconds, hours, days, weeks, or even months. The speed at which a service can be migrated between hosting platforms depends on application capabilities and on functional and nonfunctional requirements.
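To make the metric concrete, a PTO can be treated like an RTO-style target: a declared upper bound that a measured migration time is checked against. The following is a minimal illustrative sketch; the function name and values are assumptions for illustration, not figures from the engagement:

```python
from datetime import timedelta

def meets_pto(measured: timedelta, pto: timedelta) -> bool:
    """Return True when the measured migration time is within the declared PTO."""
    return measured <= pto

# Hypothetical example: a service with a one-month PTO and a
# measured migration time of 50 minutes.
pto = timedelta(days=30)
measured = timedelta(minutes=50)
print(meets_pto(measured, pto))  # True
```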

In this post, we discuss a scenario that describes the need for a PTO of less than a month to address the following requirement: “The business has declared that they want a sustainable solution that has the flexibility to run in multiple locations on multiple cloud providers and therefore needs portability.” Later in this blog post, we highlight other scenarios that would satisfy this requirement.

Our consulting partner, IBM Consulting, delivered a proof of concept to demonstrate the portability of the target containerized components between an on-premises Red Hat OpenShift platform and the Red Hat OpenShift Service on AWS (ROSA). The objective of the proof of concept was to prove that such a solution a) was technically feasible and b) demonstrated the capability required to deliver it.

Overview

A dependency and prerequisite for this proof of concept was the migration of microservices from an existing on-premises virtual machine platform-as-a-service (PaaS) infrastructure to an on-premises Red Hat OpenShift Kubernetes platform. Therefore, it was necessary to stabilize the application on the on-premises Red Hat OpenShift platform in readiness for the migration of the microservices to AWS Cloud.

The ultimate objective was to move the microservices to AWS and prove that they function and operate correctly in Red Hat OpenShift Service on AWS (ROSA).

The proof of concept showed the application functioning and operating correctly on ROSA, and it demonstrated the speed with which the application could be moved. It took approximately 45 minutes to set up the ROSA platform and less than five minutes to deploy 40 microservices from the on-premises environment to the AWS platform.
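For illustration, the measured times above can be summed into a single portability figure and compared against the month-long PTO from the scenario. The phase names below are ours; the durations are the figures from the proof of concept:

```python
from datetime import timedelta

# Phases and durations observed in the proof of concept (from the post).
phases = {
    "provision ROSA cluster": timedelta(minutes=45),
    "deploy 40 microservices via GitOps": timedelta(minutes=5),
}

total = sum(phases.values(), timedelta())
print(total)                         # 0:50:00
print(total <= timedelta(days=30))   # True: well inside a one-month PTO
```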

It was also important to be able to produce evidence of this capability. Companies that operate in regulated industries such as financial services, or that run critical infrastructure programs, need to demonstrate to regulators the ability to recover from failures caused by technology, people, and processes. Such solutions therefore need to show that they can operate in a multi-cloud deployment to avoid reliance on any single cloud platform.

As part of this proof of concept, the team successfully conducted a demo that showed how microservices and their operational data can be ported and run across two clouds.

Key design decisions

As the focus of the proof of concept was on portability and functionality, the following key design decisions were made:

  • Only one Availability Zone for AWS ROSA would be employed.
  • The proof of concept environment had to maintain safe and secure separation from production domains. Therefore, a separate build/config environment was set up in a PoC GitLab, while container images could be pulled read-only from production repositories via a VPN to a private subnet in AWS.
  • The deployment of the AWS ROSA was built within a dedicated AWS account to ensure the proof of concept could proceed at an ideal pace without dependency concerns.
  • Testing was kept within the limits of the functionality available in the on-premises Red Hat OpenShift platform to ensure like-for-like functionality.

Architecture

The following diagram shows a high-level architecture of the solution based on the above key design decisions.

Figure: High-level architecture of the solution based on the key design decisions

The following diagram shows the key components of the AWS ROSA cluster and a replica of the Bitbucket repository that contains the microservices configurations. The main inputs are the cloud container repository that holds the container images and the GitOps repository that builds out and deploys the microservices in OpenShift. These repositories are the same for both the on-premises cluster and the AWS ROSA cluster, so any configuration changes are applied consistently.

Figure: Key components of the AWS ROSA cluster and a replica of the Bitbucket repository containing the microservices configurations
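OpenShift GitOps is based on Argo CD, so each microservice deployment is typically declared by an Argo CD Application resource that points at the GitOps repository. The sketch below builds such a manifest as a Python dict; the repository URL, paths, and names are hypothetical, not taken from the PoC:

```python
import json

def gitops_application(name: str, repo_url: str, path: str, namespace: str) -> dict:
    """Build a minimal Argo CD Application manifest as a Python dict.

    The same declarative manifest can target the on-premises cluster or
    the ROSA cluster, which is what keeps the two deployments consistent.
    """
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": name, "namespace": "openshift-gitops"},
        "spec": {
            "project": "default",
            "source": {"repoURL": repo_url, "path": path, "targetRevision": "main"},
            "destination": {
                "server": "https://kubernetes.default.svc",
                "namespace": namespace,
            },
            "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
        },
    }

# Hypothetical service and repository names.
app = gitops_application(
    "orders-service", "https://git.example.com/gitops.git", "apps/orders", "orders"
)
print(json.dumps(app, indent=2))
```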

Evidence of portability

As with any proof of concept, there needs to be a clear set of objectives, and evidence gathered against them, that will define success or failure and, in this case, validate (or refute) the hypothesis around portability that was originally set. The objectives were:

  1. Start the live demo with an empty cluster and execute the steps required to build the microservices within 10–15 minutes.
  2. Infrastructure as code is used as a declarative approach for creating the microservices.
    • Evidence: Git repositories are used as the source of truth for defining the desired application state.
  3. Microservices can be built on the ROSA cluster using OpenShift GitOps as a declarative way to implement continuous deployment.
    • Evidence: Inspection of the cluster resources to verify that they are correctly defined and that the expected pods have started and are in a running state.
  4. A test client request can be sent to a microservice, and the microservice responds either with a full end-to-end execution path (which may require stubbing to mimic connectivity with backend systems) or with a graceful error response.
    • Evidence: Response sent in reply to the client request and inspection of application logs for messages.
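The pod-level evidence in item 3 can be gathered mechanically. One hypothetical approach is to parse the JSON that `oc get pods -o json` emits and confirm that every pod has reached the Running phase; the sample payload below is invented for illustration:

```python
import json

def all_pods_running(pods_json: str) -> bool:
    """Return True when every pod in an `oc get pods -o json` payload is Running."""
    pods = json.loads(pods_json)
    return all(item["status"]["phase"] == "Running" for item in pods["items"])

# Invented sample payload mimicking the shape of the real output.
sample = json.dumps({
    "items": [
        {"metadata": {"name": "orders-1"}, "status": {"phase": "Running"}},
        {"metadata": {"name": "billing-1"}, "status": {"phase": "Running"}},
    ]
})
print(all_pods_running(sample))  # True
```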

Key takeaways

The key observations from completing the proof of concept were:

  • Containerization: using Red Hat OpenShift and the AWS ROSA service, the application is portable across cloud service providers and on-premises environments.
  • Infrastructure as code: using a declarative approach for defining the desired application state is an enabler to achieving the portability time objective.
  • OpenShift GitOps: using a declarative approach for continuous deployment enabled secure automation with repeatable outcomes across multi-cloud environments.
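The OpenShift GitOps takeaway can be illustrated with a toy reconciliation step: Git declares the desired state, the controller diffs it against the observed state, and only the difference is applied (including pruning). The service names and versions below are hypothetical:

```python
def reconcile(desired: dict, observed: dict) -> dict:
    """Toy GitOps reconciliation: compute the actions needed to make
    the observed state match the desired (Git-declared) state."""
    create = {k: v for k, v in desired.items() if k not in observed}
    update = {k: v for k, v in desired.items() if k in observed and observed[k] != v}
    delete = [k for k in observed if k not in desired]  # pruning
    return {"create": create, "update": update, "delete": delete}

desired = {"orders": "v2", "billing": "v1"}   # declared in Git
observed = {"orders": "v1", "legacy": "v1"}   # running in the cluster
print(reconcile(desired, observed))
# {'create': {'billing': 'v1'}, 'update': {'orders': 'v2'}, 'delete': ['legacy']}
```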

The following two screenshots show the dashboard views after completion of a final demo proving the portability of microservices from an on-premises Red Hat OpenShift platform to AWS ROSA.

Figure: Dashboard view following completion of the final demo (first screenshot)

Figure: Dashboard view following completion of the final demo (second screenshot)

Other scenarios

In this post, we described one of the scenarios where PTO is important: “The business has declared that they want a sustainable solution that has the flexibility to run in multiple locations on multiple cloud providers and therefore needs portability.” Of course, there are other scenarios where the concept of portability time objective would matter:

  • Disaster recovery/business continuity: Disaster recovery and business continuity solutions are based on an RTO and RPO that should be aligned to each service’s business need rather than having one set of metrics applied to all services. As applications are modernized to use microservice architectures, the value of having a portable solution increases. Individual services will have varying RTO/RPO targets assigned, and having a portable solution that accommodates these variations increases agility and flexibility.
  • Burst capability: The ability to burst into an AWS ROSA environment:
    • The on-premises solution has reached or is near full capacity, and there is a need to increase capacity for an exceptional reason.
    • Capacity on-premises needs to be increased immediately, but this is impossible due to equipment or resource availability. Services can be expanded immediately into ROSA and then ported back as on-premises capacity comes online.
    • There is a need to run ad-hoc microservices as part of business reporting operations. These ad-hoc services could force on-premises sizing for large worst-case spikes of demand or impact critical continuous services.
  • Dev/test on cloud and production on-premises: Using the AWS ROSA platform to provide rapid deployment of UAT, testing, development, or other nonproduction environments in the cloud saves the cost of procuring hardware that is not needed to run 24×7. These environments are near-identical to production, which aids test quality and staging to production. This approach also frees capital to run additional projects.
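The burst scenario implies a placement decision. A deliberately simplified, hypothetical policy might route new workloads to ROSA once on-premises utilization crosses a threshold; the threshold value and function names below are assumptions, not part of the PoC:

```python
def should_burst(onprem_utilization: float, threshold: float = 0.85) -> bool:
    """Hypothetical burst policy: move new workloads to ROSA once
    on-premises capacity utilization crosses a threshold."""
    return onprem_utilization >= threshold

def placement(onprem_utilization: float) -> str:
    """Decide where a new workload should run under the toy policy."""
    return "rosa" if should_burst(onprem_utilization) else "on-premises"

print(placement(0.60))  # on-premises
print(placement(0.92))  # rosa
```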

Conclusion

This post covered the concept of portability time objective, an important metric when architecting solutions in a multi-cloud environment, and insights on how this can be realized using OpenShift and AWS ROSA. The exercise showed that microservices running on an OpenShift environment can be moved between platforms in timescales normally associated with DR activities. The approach demonstrated can help organizations:

  • Port applications in DR timescales
  • Facilitate major event DR
  • Ensure consistent operating environments
  • Enable cloud bursting
  • Open public cloud benefits to critical workloads

For information on how to get started with creating a ROSA cluster, check out the blog post What’s new with Red Hat OpenShift Service on AWS.

Mark Taylor, OpenShift Systems Engineer, IBM Consulting

Mark Taylor is a Systems Engineer and Red Hat Certified Specialist in OpenShift Application Development. Mark is a time-served technology professional who manages to learn something new almost every day. He has re-invented himself, comfortably blending his mainframe experience with the modern hybrid multi-cloud. He works in IBM’s OpenShift practice. Clients tell him that he gets great results and leaves them feeling valued. His grandmother says that he should stand up straighter.

Ian Packer, Chief AWS Architect, IBM Consulting

With over 20 years of IT industry experience, Ian Packer, a Chief AWS Architect at IBM, has worked across a broad range of sectors. Ian’s experience in team leading, delivery, and technical and solution architecture has helped customers solve their technical challenges, whether in data center environments or, more recently, in the public cloud.

Arnaud Lauer

Arnaud Lauer is a Senior Partner Solutions Architect in the Public Sector team at AWS. He enables partners and customers to understand how best to use AWS technologies to translate business needs into solutions. He brings more than 16 years of experience in delivering and architecting digital transformation projects across a range of industries, including the public sector, energy, and consumer goods. Artificial intelligence and machine learning are some of his passions. Arnaud holds 12 AWS certifications, including the ML Specialty Certification.