AWS Partner Network (APN) Blog
Migrating a Mainframe to AWS in 5 Steps with Astadia
By Craig Marble, Vice President, Legacy Modernization Services at Astadia
If you have a Mainframe, you have invested in building a reliable platform and application portfolio that has served as the backbone of your business. But today's technology landscape demands more flexibility and agility, at a lower cost, than Mainframes can provide.
At Astadia, an AWS Partner Network (APN) Standard Consulting Partner, we have found that customers are turning to Amazon Web Services (AWS) as a modern and flexible option for running Mainframe application workloads, and they are leveraging past investments in Mainframe applications and data.
When carefully planned, managed, and executed, the rewards of moving Mainframe workloads to AWS are numerous. Besides the cost savings of the pay-as-you-go model, once your Mainframe application set has been fully deployed on AWS, you will have the freedom to integrate proven business logic with modern technologies for data analytics or mobile enablement, expanding your business to new markets, customers, and partners. With that in mind, migrating Mainframe applications to the cloud seems more like a necessity than a luxury.
In this post, I will walk through a five-step methodology we have found helpful for moving Mainframe applications to AWS.
We recommend reusing the original application source code and data, and migrating them to modern AWS services. Mainframe migration enablement tools can keep existing code intact, but you should also expect to replace some components and rethink data storage.
A least-change approach like this reduces project cost and risk compared to manual rewrites or package replacements, while still letting you integrate with new technologies, reach new markets, and leverage a 20- or 30-year investment. Once migrated, the application will resemble its old self closely enough for existing staff to maintain its modern incarnation; they have years of valuable knowledge they can use and pass on to new developers.
Step 1: Discover
The first thing you need to do is catalog and analyze all applications, languages, databases, networks, platforms, and processes in your environment. Document the interrelationships between applications and all external integration points. Use as much automated analysis as possible, and feed everything into a central repository.
Astadia employs a combination of commercial analysis tools, like Micro Focus Enterprise Analyzer, and our own specially developed parsers to analyze legacy code quickly and efficiently. The analysis output is used to establish migration rules that are fed into Astadia's Code Transformation Engine. These rules get updated and refined throughout the project.
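To make the discovery idea concrete, here is a minimal sketch of what automated dependency scanning can look like. It is not Astadia's parser or Enterprise Analyzer; the COBOL CALL pattern, directory layout, and inventory.json output are illustrative assumptions.

```python
import json
import re
from pathlib import Path

# Hypothetical, simplified inventory scanner -- real analysis tools are far
# more thorough. This sketch only finds static CALL dependencies in COBOL.
CALL_PATTERN = re.compile(r"\bCALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)

def build_inventory(source_dir):
    """Catalog each program and the programs it statically calls."""
    inventory = {}
    for path in Path(source_dir).rglob("*.cbl"):
        text = path.read_text(errors="ignore")
        inventory[path.stem] = sorted(set(CALL_PATTERN.findall(text)))
    return inventory

if __name__ == "__main__":
    # Feed the results into a central repository (here, a JSON file).
    with open("inventory.json", "w") as f:
        json.dump(build_inventory("./mainframe-src"), f, indent=2)
```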
Step 2: Design
After analyzing all of the source code, data structures, and end-state requirements, it is time to design and architect the solution. The design should include the following details:
- AWS instance details: In most cases, general purpose M instances are suitable for the production, pre-production, and performance environments, while general purpose T instances fit the development, test, or integration environments (see the sketch after this list).
- Transaction loads: Non-functional requirements, such as high transactions per second or fast response times, are often critical for Mainframe workloads. Meeting them requires careful design and sizing of the underlying network, storage, and compute.
- Batch requirements: Almost every Mainframe runs Batch applications which are typically I/O intensive and require very low latency from storage or data stores. Because this can be a challenge for distributed systems, Batch infrastructure needs to be designed and tested early.
- Programming language conversions and replacements: Languages that are not supported or available on the target platform can be converted with tools or replaced by newer functions.
- Integration with external systems: Mainframes are commonly the back-end or system of record for satellite or partner systems, and integration must be preserved after migration. This includes protocols, interfaces, latency, bandwidth, and more.
- Third-party software requirements: Each Independent Software Vendor (ISV) may or may not offer functionally equivalent software on AWS, so each product needs its own migration path.
- Planning for future requirements: Business and IT strategies and priorities dictate architecture decisions, especially around addressing future performance and integration capabilities.
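As one concrete illustration of the instance guidance in the first bullet, the sketch below uses boto3 to launch an instance sized by environment tier. The type mapping, AMI ID, and subnet are placeholder assumptions; real sizing comes out of the Design-phase analysis.

```python
import boto3

# Illustrative environment-to-instance-type mapping based on the guidance
# above; the actual sizes come from your own workload analysis.
INSTANCE_TYPES = {
    "production": "m5.2xlarge",   # general purpose M family
    "performance": "m5.2xlarge",
    "development": "t3.large",    # general purpose T family
    "test": "t3.large",
}

ec2 = boto3.client("ec2")

def launch_environment(env, ami_id, subnet_id):
    """Launch one instance sized for the given environment tier."""
    return ec2.run_instances(
        ImageId=ami_id,                  # placeholder AMI
        InstanceType=INSTANCE_TYPES[env],
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet_id,              # placeholder subnet
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Environment", "Value": env}],
        }],
    )
```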
Source code may include MAPPER, LINC, COBOL, or Batch Control Language. Data stores may include networked, hierarchical, relational, or file-based data stores.
The core component of the architecture in Figure 2 is the Mainframe Cloud Framework, which uses a suite of emulators and utilities to execute the legacy code. OpenMCS is Astadia’s Message Control System that provides the necessary transaction processing features of COMS to support migrated code. This Mainframe Cloud Framework runs on Amazon Elastic Compute Cloud (Amazon EC2) for compute resources.
In most cases, Mainframe hierarchical and flat file data structures will be migrated to Relational Database Management Systems (RDBMS) solutions within Amazon Relational Database Service (Amazon RDS). Elasticity of the solution is facilitated by Elastic Load Balancing (ELB) with the Network Load Balancer (NLB) along with Auto Scaling Groups.
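A minimal sketch of that elasticity layer, assuming a launch template already exists for the framework instances, might look like the following; the names, listener port, VPC, and subnets are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Target group for the Network Load Balancer; NLBs balance at layer 4.
tg = elbv2.create_target_group(
    Name="mainframe-framework-tg",
    Protocol="TCP",
    Port=9000,                          # hypothetical emulator listener port
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Auto Scaling group that registers its instances with the target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="mainframe-framework-asg",
    LaunchTemplate={"LaunchTemplateName": "mainframe-framework",
                    "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    TargetGroupARNs=[tg_arn],
    VPCZoneIdentifier="subnet-aaaa,subnet-bbbb",  # placeholder subnets
)
```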
You’ll want to select which Mainframe migration tools to use; we recommend choosing tools that require the least change, since this greatly reduces project cost and risk. For example, Astadia normally uses Micro Focus Visual COBOL for development and Astadia’s OpenMCS for emulating transaction monitors. This combination allows migrating COBOL applications to Windows and Linux with minimal change to the original source.
However, you will need to design custom solutions for requirements that emulation tools do not meet. COBOL is almost always migrated, but programs written in languages like Algol and MASM will need to be rewritten because they are not supported by the target emulation environment.
Some program functions may be replaced by the target operating system or other target-platform components, so do a little analysis to find the gaps. Some legacy Assembler sort functions, for example, may be replaced by RDBMS SQL clauses. This is also where you will need to define your data migration strategy. You can keep flat files in their legacy form, but it’s best to convert them to relational in order to facilitate integration with modern SQL-based tools and scalability on a proven RDBMS. Hierarchical data should be converted to relational data using conversion tools or extract-transform-load (ETL) programs.
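To illustrate the flat-file-to-relational conversion, here is a deliberately small ETL sketch. The CUSTOMER layout and SQLite target are hypothetical stand-ins; a real migration would use layouts captured in the Discover phase and an Amazon RDS engine.

```python
import sqlite3

# Hypothetical fixed-width layout: (field, start, end) offsets per record.
LAYOUT = [("cust_id", 0, 6), ("name", 6, 36), ("balance", 36, 45)]

def parse_record(line):
    """Slice one fixed-width record into typed fields."""
    cust_id, name, balance = (line[s:e] for _, s, e in LAYOUT)
    # Assumes an explicit decimal point; packed-decimal fields need decoding.
    return int(cust_id), name.strip(), float(balance)

conn = sqlite3.connect("customers.db")
conn.execute("CREATE TABLE IF NOT EXISTS customer "
             "(cust_id INTEGER PRIMARY KEY, name TEXT, balance REAL)")

with open("CUSTOMER.DAT") as f:   # legacy flat file, already transcoded
    conn.executemany(
        "INSERT INTO customer VALUES (?, ?, ?)",
        (parse_record(line.rstrip("\n")) for line in f),
    )
conn.commit()
```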
Step 3: Modernize
This is an iterative, automated process utilizing Astadia Code Transformation Engine to make mass changes to source code. If the modified code compiles, it’s ready for unit testing. If it doesn’t, developers should review the errors, find a fix, update the migration rules, and run the program(s) through the engine again. Many times, error fixes in one program may be applied en masse to fix the same errors in other programs, giving you the ability to leverage economies of scale.
As the modernization process advances and the migration rules improve, the Code Transformation Engine becomes faster and more accurate at migrating follow-on source code. This is because source code files tend to repeat the same coding patterns, which require the same transformation rules. This is also when developers write new source code to replace those legacy components that will not migrate to AWS.
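The toy sketch below shows the general shape of such a rule-driven transformation. It is not Astadia's Code Transformation Engine; the two sample rules are hypothetical stand-ins for the dialect-specific rules a real project accumulates.

```python
import re

# Each migration rule maps a legacy pattern to its target form. Rules are
# added as compile errors are diagnosed, then re-applied to the whole batch.
MIGRATION_RULES = [
    (re.compile(r"\bACCEPT\s+(\S+)\s+FROM\s+CONSOLE\b"), r"ACCEPT \1 FROM SYSIN"),
    (re.compile(r"\bSTOP\s+RUN\b"), "GOBACK"),
]

def transform(source):
    """Apply every migration rule to one program's source."""
    for pattern, replacement in MIGRATION_RULES:
        source = pattern.sub(replacement, source)
    return source

def transform_batch(paths):
    """Mass-apply the rules; programs that still fail to compile drive
    new rules, and the whole batch is run through the engine again."""
    for path in paths:
        with open(path) as f:
            migrated = transform(f.read())
        with open(path, "w") as f:
            f.write(migrated)
```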
This step also includes building out and validating the new databases. To make this easier, Astadia has developed a DDL conversion tool that analyzes legacy data file layouts and database schemas, and then generates flat file and relational schemas for the target databases, as well as ETL programs, to migrate the data. Once the target file and database environment has been validated, static data can be migrated in parallel with code migration and development activities.
Dynamic data—data that changes frequently—will be migrated during cutover to production.
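Here is a minimal sketch of the DDL-generation idea: deriving a relational schema from a legacy record layout. The type mapping and field list are hypothetical simplifications of what a real conversion tool derives from actual file layouts.

```python
# Hypothetical mapping from legacy picture clauses to SQL types.
TYPE_MAP = {"PIC 9": "INTEGER", "PIC X": "VARCHAR", "PIC S9V99": "DECIMAL(9,2)"}

def generate_ddl(table, fields):
    """Emit a CREATE TABLE statement from (name, legacy_type, length) triples."""
    cols = []
    for name, legacy_type, length in fields:
        sql_type = TYPE_MAP[legacy_type]
        if sql_type == "VARCHAR":
            sql_type = f"VARCHAR({length})"
        cols.append(f"    {name} {sql_type}")
    return f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);"

print(generate_ddl("customer", [
    ("cust_id", "PIC 9", 6),
    ("name", "PIC X", 30),
    ("balance", "PIC S9V99", 9),
]))
```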
Step 4: Test
The good news about testing is that you mostly need to focus on what has changed. You may decide not to unit test every line of code, since most of it is untouched; instead, testing should focus on:
- Integration
- Data accesses
- Sorting routines that may be affected by ASCII vs. EBCDIC collation (see the sketch after this list)
- Code modifications to accommodate data type changes
- Newly developed code
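The collation point deserves a concrete example: EBCDIC orders lowercase before uppercase before digits, while ASCII orders digits before uppercase before lowercase, so a sort that was correct on the Mainframe can return rows in a different order after migration. The snippet below demonstrates the difference using cp037, one common EBCDIC code page (substitute your shop's code page).

```python
values = ["item1", "ITEM2", "Item3"]

ascii_order = sorted(values)                                    # bytewise ASCII collation
ebcdic_order = sorted(values, key=lambda s: s.encode("cp037"))  # bytewise EBCDIC collation

print(ascii_order)   # ['ITEM2', 'Item3', 'item1'] -- uppercase sorts first in ASCII
print(ebcdic_order)  # ['item1', 'Item3', 'ITEM2'] -- lowercase sorts first in EBCDIC
```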
Any Continuous Integration/Continuous Deployment (CI/CD) pipeline test that executes from a non-mainframe platform (such as a T27 client) can be kept unchanged, following DevOps best practices.
Because many legacy applications have few, if any, test scripts and documentation, you will likely need to spend time and resources to develop test scripts. We recommend investing the time in developing the proper test procedures to make your applications more robust on AWS. You will also need to perform load and stress tests to ensure your applications are prepared to handle high volumes.
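Dedicated load-testing tools are the right choice in practice; purely to illustrate the idea, here is a bare-bones concurrency sketch using only the Python standard library. The endpoint, worker count, and request volume are placeholders.

```python
import concurrent.futures
import time
import urllib.request

URL = "http://app.example.com/transaction"   # placeholder migrated endpoint

def one_request(_):
    """Time a single request end to end."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

# 50 concurrent workers issuing 1,000 requests in total.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    latencies = sorted(pool.map(one_request, range(1000)))

print(f"p95 latency: {latencies[int(0.95 * len(latencies))]:.3f}s")
```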
Step 5: Implement
When migrated applications have been tested, verified, and optimized, the process of deploying those applications can begin. In reality, many deployment activities are initiated in parallel with earlier phases—things like creating and configuring AWS instances, installing and configuring Mainframe emulation software (e.g. Astadia OpenMCS), migrating static data, and other infrastructure or framework activities.
In some cases, environments may be replicated to achieve this, or existing environments may be repurposed. Such replications are typically facilitated by automation tools such as AWS CloudFormation or AWS OpsWorks. The specifics of this may depend upon application and data characteristics and any company standards or preferences you might have. After dynamic data is migrated and validated, cutover to production mode can be completed.
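As a sketch of that template-driven replication, the following uses boto3 to stamp out one CloudFormation stack per environment from a single template; the template file and parameter names are assumptions for illustration.

```python
import boto3

cfn = boto3.client("cloudformation")

# One shared template describes the whole environment; parameters vary it.
with open("mainframe-framework.template.yaml") as f:
    template_body = f.read()

for env in ["test", "preprod", "production"]:
    cfn.create_stack(
        StackName=f"mainframe-framework-{env}",
        TemplateBody=template_body,
        Parameters=[{"ParameterKey": "Environment", "ParameterValue": env}],
        Capabilities=["CAPABILITY_IAM"],   # only if the template creates IAM
    )
```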
Additional Resources for Mainframe Migration
Every Mainframe system is unique with specific languages, subsystems, versions, and data stores. Moreover, every shop has unique functional and non-functional requirements and standards. Astadia can tailor and refine the above steps to your specific needs and leverage our unique proprietary toolset to make your Mainframe migration to AWS successful.
Learn more about Astadia’s unique capabilities and the Mainframe to AWS Reference Architectures here and here.
For more information on legacy modernization, visit astadia.com/insights.
To help customers begin their mainframe modernization journey, AWS launched a series of webinars covering common mainframe migration patterns and best practices. Visit the Mainframe Modernization & Migration page to hear lessons learned based on real-world customer modernization projects with AWS.