AWS Cloud Enterprise Strategy Blog

Yes, You Should Modernize Your Mainframe with the Cloud

Many of our large enterprise customers have this worry hanging over their heads … what are they going to do about that mainframe when they migrate? Workloads designed specifically for the cloud tend to focus on horizontal scalability – that is, as they need more and more processing power, they can add compute instances to meet the demand. But many legacy systems were designed for vertical scaling – bigger, more powerful machines when needed – and ultimately some of the enterprise’s most mission-critical workloads found their home on mainframes. And there they succeeded: those mainframes have often delivered outstanding performance and reliability.

In this blog post, Phil de Valence and Erik Farr explain the many choices available for enterprises interested in migrating their mainframe workloads to the cloud.

Mark


Yes, You Should Modernize Your Mainframe with AWS Using Patterns and Best Practices

By Phil de Valence, Solutions Architect for Mainframe Modernization, Amazon Web Services, and Erik Farr, Solutions Architect Manager for GSI Partners, Amazon Web Services

This post is an update to “Yes, You Can Migrate Your Mainframe to the Cloud,” published by Stephen Orban and Erik Farr in January 2017.

Customers have compelling business reasons to modernize and migrate mainframe workloads to the AWS Cloud. However, mainframe modernization projects often require patience, strong leadership, and a robust approach to achieve the intended ROI. Fortunately, based on our experience with successful customer modernization projects on AWS, we have identified patterns, lessons learned, and best practices that facilitate new mainframe-to-AWS initiatives.

We have found that customers want to modernize their mainframe workloads with AWS for many reasons. First, cost reduction is a strong benefit of moving workloads from mainframes to AWS: eliminating capital expenditures, reducing millions of instructions per second (MIPS) costs, shrinking independent software vendor (ISV) license costs, and leveraging elastic pricing models. Second, customers gain agility through shorter development cycles with continuous integration and continuous delivery (CI/CD), and through virtually unlimited infrastructure resources consumed on demand. Third, customers gain the advantage of leveraging mainframe data, which can contain decades of business transactions and can feed data analytics or machine learning initiatives seeking competitive differentiators. Fourth and finally, modernizing with AWS often resolves the skills gap left by a retiring mainframe workforce and attracts new talent to modernize core business workloads.

There is no one-size-fits-all approach for mainframe modernization with the AWS Cloud. Depending on the business and IT strategy, and on the mainframe’s specific technical constraints, customers select the pattern most suitable for them. If the mainframe is large enough to process multiple workloads, each workload’s characteristics can favor a different pattern. Workloads are more easily identified when their programs and data are independent of one another. Within one mainframe, a stabilized application can follow one pattern while an evolving application pursues another. This ability to choose a strategy specific to each workload is how customers are most successful with the four drivers discussed above, most notably business agility and the skills gap.

We now introduce popular and successful patterns implemented by our customers.

Pattern #1: Short-Term Migration with Automated Refactoring

Automated Refactoring automates both the reverse engineering and the forward engineering needed to transform a legacy stack (such as COBOL-based) into a newer stack (such as Java-based or .NET-based). For efficiency and quality, the transformation is automated as much as possible, with no manual rewriting of code. Typically, the resulting application follows twelve-factor app best practices, similar to cloud-native applications, providing elasticity, horizontal scalability, and easier integration with a wide range of AWS services.

Figure 1: Short-Term Migration with Automated Refactoring

This pattern is not to be confused with language translation tools, which do basic line-by-line code conversion; for example, from procedural COBOL to procedural-like Java (sometimes called JOBOL), which is difficult to maintain and integrate. Automated Refactoring tools take a comprehensive approach by analyzing and transforming the complete legacy stack, including code, data access, data stores, frameworks, transaction subsystems, and dependency calls. It results in the automated creation of a coherent and functionally equivalent target stack, which is service-oriented, service-enabled, and has packaged optimizations for AWS services. It facilitates service decomposition toward the creation of microservices.
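
To make the contrast concrete, here is a minimal, hypothetical sketch. The COBOL paragraph and the Java class below are illustrative only, not the output of any specific tool; a line-by-line translator would instead emit procedural Java that mirrors the COBOL statements and working-storage variables.

```java
// Original COBOL (illustrative):
//
//   ADD-INTEREST.
//       COMPUTE WS-BALANCE = WS-BALANCE * (1 + WS-RATE).
//       MOVE WS-BALANCE TO ACCT-BALANCE.
//
// An Automated Refactoring tool aims for idiomatic, maintainable Java
// along these lines, rather than a statement-for-statement transliteration:

import java.math.BigDecimal;

public class Account {

    private BigDecimal balance;

    public Account(BigDecimal openingBalance) {
        this.balance = openingBalance;
    }

    // Replaces the ADD-INTEREST paragraph: applies one period of interest.
    public void addInterest(BigDecimal rate) {
        balance = balance.multiply(BigDecimal.ONE.add(rate));
    }

    public BigDecimal getBalance() {
        return balance;
    }
}
```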

The Automated Refactoring tools’ value and differentiators lie mainly in their automated forward-engineering capabilities. Many tools have reverse-engineering capabilities, but few have strong and extensive automation for forward engineering.

As an example, a United States Department of Defense customer automatically refactored a complete mainframe COBOL and C logistics system (millions of lines of code) to Java on AWS, removing the legacy code’s technical debt. Our APN Blog covers some Automated Refactoring tools in “How to Migrate Mainframe Batch to Cloud Microservices with Blu Age and AWS” and “High-Performance Mainframe Workloads on AWS with Cloud-Native Heirloom PaaS.”

Pattern #2: Short-Term Migration with Middleware Emulation

This pattern re-platforms the workload onto an emulator that runs on the AWS Cloud. With this approach, the legacy application code is moved to the emulator with as few code changes as possible, retaining and requiring the same application maintenance and development skills. The migration is seamless from an end-user perspective, keeping the application’s interfaces and look and feel the same.

Figure 2: Short-Term Migration with Middleware Emulation

Typically, supported source code is recompiled, while code in unsupported languages is first converted to a supported language and then recompiled. Code changes or refactoring become necessary to integrate with differing third-party utility interfaces, or when modernizing the data store and data access along the way. For this pattern, the tooling includes the emulator and compiler, as well as all the utilities required to automate the program and data migration.

This pattern is often an intermediate step within a larger modernization journey, or a target state for stabilized applications. As an example, a multinational beverage company used an emulator on AWS to migrate its batch and online transaction processing capabilities while offering the same mainframe green-screen experience. Our APN Blog covers some emulator tools in “Re-Hosting Mainframe Applications to AWS with NTT DATA Services” and “Migrating a Mainframe to AWS in 5 Steps.”

Pattern #3: Augmentation with Data Analytics

This pattern is not about migrating workloads but about augmenting mainframes with agile data analytics services on AWS. Mainframe data, which can include decades of historical business transactions for massive numbers of users, is a strong business advantage. Therefore, customers use big data analytics to unleash mainframe data’s business value. Compared to mainframe alternatives, AWS big data services give customers faster analytics capabilities and the ability to create a data lake mixing structured and unstructured data, for a much more comprehensive view of the company’s data assets.

Figure 3: Augmentation with Data Analytics

AWS provides services for the full data life cycle, from ingestion through processing, storage, analysis, visualization, and automation. Replication tools copy mainframe data in real time from the mainframe’s relational, hierarchical, or legacy file-based data stores to agile AWS data lakes, data warehouses, or data stores. This real-time replication keeps the data fresh, allowing up-to-date analytics and dashboards while keeping the mainframe as the source of record.
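
As a deliberately simplified illustration of the ingestion side of this pattern, the sketch below assumes a replication tool publishes one JSON-encoded change record per event onto an Amazon Kinesis stream, and a Lambda function lands each record in an S3 bucket backing the data lake. The bucket name, key scheme, and payload format are assumptions; real pipelines would typically rely on the replication tool’s own target connectors or on Kinesis Data Firehose.

```java
import java.nio.ByteBuffer;
import java.util.UUID;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

// Minimal sketch: lands change-data-capture (CDC) records from a Kinesis
// stream into an S3 data lake. Names and formats are illustrative.
public class CdcToDataLakeHandler implements RequestHandler<KinesisEvent, Void> {

    private static final String DATA_LAKE_BUCKET = "example-mainframe-data-lake"; // hypothetical

    private final S3Client s3 = S3Client.create();

    @Override
    public Void handleRequest(KinesisEvent event, Context context) {
        for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
            // Each Kinesis record is assumed to carry one JSON change row
            // emitted by the replication tool.
            ByteBuffer data = record.getKinesis().getData();
            byte[] payload = new byte[data.remaining()];
            data.get(payload);

            // Write each change as its own object; downstream analytics
            // (data warehouse, dashboards) read from this bucket.
            s3.putObject(
                    PutObjectRequest.builder()
                            .bucket(DATA_LAKE_BUCKET)
                            .key("cdc/" + UUID.randomUUID() + ".json")
                            .build(),
                    RequestBody.fromBytes(payload));
        }
        return null;
    }
}
```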

As an example, a United States railroad passenger corporation unlocked its mainframe data with real-time dashboards and reporting for sales, marketing, revenue, and fraud analytics, following the pattern described in this section. Our APN Blog covers a real-time data replication tool in “How to Unleash Mainframe Data with AWS and Attunity Replicate.”

Pattern #4: Augmentation with New Channels

Because mainframe development cycles with legacy languages are slow and rigid, customers use AWS to build new services quickly while accessing real-time mainframe data held in local AWS data stores. This is a variation of Pattern #3, in which the local copy of the mainframe data is used not for analytics but for new communication channels and new end-user functionality. The new, agile AWS functions augment the legacy mainframe applications. New channels can include mobile or voice-based applications, as well as innovations based on microservices or machine learning.

Figure 4: Augmentation with New Channels

This pattern avoids increasing expensive mainframe MIPS consumption by deploying the new channels on AWS. Because data is duplicated, the data architect needs to be careful about potential data consistency and integrity concerns across the mainframe and AWS data stores.

As an example, a large United States commercial bank developed new Lambda-based serverless microservices on AWS that access replicated mainframe data in Amazon DynamoDB, and made these new services available to mobile users via Amazon API Gateway. The tools for this pattern are similar to Pattern #3 tools, performing real-time data replication from legacy data stores or from mainframe messaging systems.
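
A minimal sketch of what such a microservice can look like follows, assuming the replicated account data lives in a DynamoDB table. The table, key, and attribute names are hypothetical; a production service would add input validation, error handling, and proper JSON serialization via a library.

```java
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;
import software.amazon.awssdk.services.dynamodb.model.GetItemResponse;

// Minimal sketch of a new-channel microservice: API Gateway invokes this
// Lambda, which reads mainframe data replicated into DynamoDB.
// Table and attribute names are hypothetical.
public class AccountBalanceHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final String TABLE = "ReplicatedAccounts"; // hypothetical

    private final DynamoDbClient ddb = DynamoDbClient.create();

    @Override
    public APIGatewayProxyResponseEvent handleRequest(
            APIGatewayProxyRequestEvent request, Context context) {

        // Assumes API Gateway routes GET /accounts/{accountId} to this function.
        String accountId = request.getPathParameters().get("accountId");

        GetItemResponse item = ddb.getItem(GetItemRequest.builder()
                .tableName(TABLE)
                .key(Map.of("AccountId", AttributeValue.builder().s(accountId).build()))
                .build());

        if (!item.hasItem()) {
            return new APIGatewayProxyResponseEvent().withStatusCode(404).withBody("{}");
        }

        String body = "{\"accountId\":\"" + accountId + "\",\"balance\":\""
                + item.item().get("Balance").n() + "\"}";
        return new APIGatewayProxyResponseEvent().withStatusCode(200).withBody(body);
    }
}
```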

Best Practices

Learning from the experience of past projects, customers, and partners, we continually develop and improve our mainframe-to-AWS best practices.

  1. Complex proof of concept (POC)—Projects can fail when the selected tools cannot address the most technically complex aspects of a mainframe workload. To reduce this risk, customers should request a complex POC that evaluates the tool against their most challenging scenarios. This does not mean the POC scope needs to be large, but it does mean the few POC test cases need to be of the highest complexity. Depending on the customer’s mainframe workload, complexity can reside in batch duration, numerous program and data dependencies, complicated logic, uncommon legacy technologies or versions, latency requirements, high throughput or transactions per second, or large quantities of code or data. A complex POC validates the tool’s abilities, shows the quality of the tool’s output, and reassures both the tool vendor and the customer that the collaboration will succeed.
  2. Maximum automation—Mainframes typically host millions of lines of code and petabytes of data. Human intervention increases errors, risks, tests, duration, and cost. Consequently, short-term modernization projects use proven software stacks along with as much automation as possible and no manual rewriting: automation for applying migration rules, for code refactoring, for data modernization, and for test execution (a CI/CD pipeline).
  3. Decide pattern, then tool, then architecture, then activities—The pattern frames the overall approach, with different sets of tools for each pattern. The tool is a critical success factor for mainframe modernization and must be technically tested with a complex POC as early as possible. Once the tool is validated, the overall architecture is created based on the tool and AWS best practices. The technical architecture then drives the modernization implementation activities.
  4. Vendor-neutral pattern selection—There is no one-size-fits-all and no one-tool-fits-all for mainframe modernization. Tool vendors tend to focus on only one pattern. There are also multiple vendors and multiple tools for each pattern. Consequently, pattern selection should be vendor agnostic, driven by the customer’s business and IT strategy priorities, and driven by the customer’s mainframe stack technical constraints.
  5. System integrator selection—Consulting and system integration firms can help to various degrees during all phases of a mainframe modernization project on AWS. Some system integrators specialize in only one pattern and one preferred tool, while others cover multiple patterns and multiple tools per pattern. Before a modernization tool is selected, consulting professional services should be pattern neutral and vendor neutral, advising on patterns and tools based on the customer’s best interest rather than on a system integrator’s specialty in one specific tool. On the other hand, once the customer has selected a tool, system integrator professional services should have expertise in both the selected mainframe modernization tool and AWS. Because of the different skill sets involved (consulting, mainframe, AWS, modernization, tools, integration, tests), it is common to see a combination of teams or professional services companies engaged.
  6. Modernize legacy data stores—Keeping legacy data stores, such as hierarchical databases or indexed data files, on AWS also keeps their legacy technical debt and constraints, such as single points of failure, bottlenecks, and archaic data models and interfaces. For any pattern, modernizing the data store and the data access layer is typically a smaller investment that provides larger benefits in scalability, availability, agility, operations, skills, cost reduction, data integration, and new-function development.
  7. Workload-based modernization—For large mainframes hosting multiple independent workloads, each workload can follow a different modernization pattern.
  8. Serialize technical, then business-level, modernizations—Tools are typically optimized for quick technical modernization. Business-level changes or refactoring require manual intervention, involve business teams, and prevent some functional-equivalence testing between the mainframe and AWS. Mixing business and technical modernization at the same time therefore increases complexity, duration, risk, and cost.
  9. Define tool evaluation factors—For example: legacy technical stack support; complex POC success; IT strategy alignment; project speed (such as the number of lines of code migrated per month); target application stack agility; target data store agility; target code maintainability; migration cost per line of code; target stack license costs; return on investment (ROI) speed.
  10. Estimate modernization and runtime costs—Modernization costs include both the licensing costs for the tools used during modernization and the professional services costs required to deliver the modernization activities. The target architecture’s recurring runtime costs are also key, as they directly impact the speed of the ROI. Runtime costs include both tool licensing costs (if any) and AWS service costs. For example, if a customer achieves an 80% reduction in annual recurring runtime costs, the modernization cost could be recouped after three years, generating significant savings from then on for new investments (see the worked example after this list).
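
As a quick, purely illustrative instance of that payback arithmetic (all dollar figures below are assumptions, not benchmarks):

```java
// Worked payback example under assumed figures: $2.5M annual mainframe
// runtime cost, an 80% runtime cost reduction on AWS, and a $6M one-time
// modernization cost (tools plus professional services).
public class PaybackEstimate {
    public static void main(String[] args) {
        double annualRuntimeBefore = 2_500_000;                          // current annual runtime cost
        double annualRuntimeAfter = annualRuntimeBefore * 0.20;          // after an 80% reduction
        double annualSavings = annualRuntimeBefore - annualRuntimeAfter; // $2.0M per year
        double modernizationCost = 6_000_000;                            // one-time investment

        double paybackYears = modernizationCost / annualSavings;         // 6.0M / 2.0M = 3.0 years
        System.out.printf("Annual savings: $%,.0f; payback: %.1f years%n",
                annualSavings, paybackYears);
    }
}
```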

Next Steps

Customers modernize mainframes by leveraging AWS, and our patterns, best practices, and partners facilitate these initiatives. We are glad to assist with the many aspects of a mainframe modernization initiative. To get started, we suggest taking the following actions:

  1. Collect the mainframe technical architecture, along with the business and IT priorities.
  2. Understand possible AWS modernization patterns and decide which is the preferred one.
  3. Identify tools and vendors best supporting the selected pattern and the mainframe characteristics.
  4. Evaluate the tool’s value propositions and confirm the selection with a complex proof of concept.

To learn more about mainframe-to-AWS capabilities, feel free to reach out to us, and review our partner value propositions in the APN Blog mainframe section. We also have similar patterns and best practices for non-mainframe legacy systems such as the AS/400 (also known as iSeries or IBM i).

 

Mark Schwartz

Mark Schwartz is an Enterprise Strategist at Amazon Web Services and the author of The Art of Business Value and A Seat at the Table: IT Leadership in the Age of Agility. Before joining AWS, he was CIO of US Citizenship and Immigration Services (part of the Department of Homeland Security), CIO of Intrax, and CEO of Auctiva. He has an MBA from Wharton, a BS in Computer Science from Yale, and an MA in Philosophy from Yale.