Migration & Modernization
Mainframe Replatform: Best Practices with Rocket Software on AWS
Legacy mainframe applications are the backbone of critical business operations across enterprises, yet these systems present significant modernization challenges. While proven reliable over decades, mainframes accumulate substantial technical debt, demand increasingly scarce specialized expertise, and create barriers to rapid innovation and feature deployment.

AWS Mainframe Modernization with Rocket Software offers organizations a strategic solution that preserves the intellectual property embedded in the original source code of these legacy applications. This solution enables companies to reduce operational costs, accelerate development cycles, and harness modern development tools and methodologies while preserving valuable existing business logic and maintaining operational continuity. By modernizing with AWS, enterprises can transform their legacy infrastructure into agile, cloud-native systems that support future growth and innovation.

This guide focuses on implementing a replatform with Rocket Software on AWS based on your business objectives, risk assessment, and ROI timelines. We’ll cover seven best practices for planning, migrating, and validating your workloads from start to finish, including standardized environments, Field-Developed Solutions (FDS) identification, code compatibility fixes, data migration strategies, and comprehensive testing procedures on AWS.
Best Practice 1: Define Target State Architecture
The System Analysis and Design Document (SA&DD) is a critical project deliverable used to manage scope creep, reduce testing effort, and improve speed to market. The SA&DD defines the comprehensive target solution by mapping:
- Hardware specifications and requirements
- System services and configurations
- Programming languages and development tools
- Third-party applications and integrations (examples include):
- Database management systems (both self-managed and Amazon RDS: DB2 LUW, PostgreSQL)
- Job scheduling tools
- Report generation software
- File transfer utilities (SFTP, FTP)
- Email systems (SMTP)
- Development and enterprise server configurations
- Data migration strategy and plans
- Compliance and regulatory requirements
- Security controls and monitoring approach
- Source code inventory and availability assessment
- Integration points with external systems
- Performance requirements and Service Level Agreements (SLAs)
The SA&DD serves as the blueprint for the migration project, so all stakeholders have a clear understanding of the target architecture and technical requirements. It helps identify potential risks and technical gaps early in the project lifecycle, allowing for proactive mitigation strategies.

As part of producing the SA&DD, run a complete static analysis and inventory all components. Validate and document compiler and directive compatibility, and document any risks. Assign a dedicated team to address findings and apply fixes systematically across the codebase. These steps ensure code standard compliance, reduce maintenance needs, and prevent downstream deployment issues.
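To complement the formal static-analysis reports, a lightweight script can produce a first-pass component inventory from the exported source tree. The sketch below is illustrative only: the directory path and the extension-to-component mapping are assumptions that will differ by shop.

```python
import os
from collections import Counter

# Hypothetical extension-to-component mapping; adjust to your shop's naming conventions.
COMPONENT_TYPES = {
    ".cbl": "COBOL program",
    ".cpy": "Copybook",
    ".jcl": "JCL job",
    ".prc": "JCL procedure",
    ".asm": "Assembler",
}

def inventory(source_root: str) -> None:
    """Walk the exported source tree and print file counts and LOC per component type."""
    counts, loc = Counter(), Counter()
    for dirpath, _, filenames in os.walk(source_root):
        for name in filenames:
            kind = COMPONENT_TYPES.get(os.path.splitext(name)[1].lower())
            if kind is None:
                continue
            counts[kind] += 1
            with open(os.path.join(dirpath, name), errors="replace") as handle:
                loc[kind] += sum(1 for _ in handle)
    for kind in sorted(counts):
        print(f"{kind:15} files={counts[kind]:6}  LOC={loc[kind]:9}")

if __name__ == "__main__":
    inventory("./mainframe-source")  # path to the downloaded source export (illustrative)
```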
Best Practice 2: Build a standardized development and enterprise server environment
The goal of a standardized development environment is to provide a consistent template for all developers. The standardized environment defines both the directory structure and project properties, including compiler directives, so all developers work in identical conditions and code is compiled consistently.
Standardized setup for Rocket Enterprise Developer (ED) and Enterprise Server (ES)
Create a version-controlled golden setup for Rocket Enterprise Developer (ED) and Enterprise Server (ES) on AWS. Specify versions and build configurations in Amazon EC2 images (AMIs).
Keep the compiler directives (for example, DIALECT, ARITH, TRUNC, and code pages), link options, and build steps in source control. This provides consistent toolchain settings, while CI detects configuration drift.
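One way for CI to detect drift is to diff each project's directive settings against the version-controlled golden profile and fail the build on any difference. The sketch below assumes directives are stored one per line in plain text files; the file names and format are illustrative, not a Rocket convention.

```python
import sys

def load_directives(path: str) -> dict:
    """Parse a simple one-directive-per-line file, e.g. DIALECT(ENTCOBOL) or TRUNC(ANSI)."""
    directives = {}
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith("*"):
                continue  # skip blanks and comment lines
            key = line.split("(", 1)[0].upper()
            directives[key] = line
    return directives

def check_drift(golden_path: str, project_path: str) -> int:
    """Return non-zero when the project's directives differ from the golden profile."""
    golden = load_directives(golden_path)
    project = load_directives(project_path)
    drift = {key: (golden.get(key), project.get(key))
             for key in golden.keys() | project.keys()
             if golden.get(key) != project.get(key)}
    for key, (expected, actual) in sorted(drift.items()):
        print(f"DRIFT {key}: expected={expected!r} actual={actual!r}")
    return 1 if drift else 0

if __name__ == "__main__":
    # Usage (illustrative): python check_drift.py path/to/project.dir
    sys.exit(check_drift("build/golden.dir", sys.argv[1]))  # non-zero exit fails the CI job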
Create a master ES region
Create a master ES region with Rocket Directory Server and Enterprise Server Common Web Administration as Infrastructure as Code, defining all CICS/IMS/JES resources, datasets, security, and middleware. Clone this template to QA/UAT environments, using AWS Systems Manager and Secrets Manager for environment-specific configurations. Implement Network Load Balancer for traffic management. Use Amazon CloudWatch with AWS Systems Manager for maintenance to ensure consistent environments and simplified updates.
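A minimal sketch of pulling environment-specific settings at region provisioning time, assuming hypothetical parameter and secret names under a per-environment prefix; align these names with your own standard.

```python
import json
import boto3

def load_region_config(environment: str) -> dict:
    """Pull environment-specific ES region settings from Parameter Store and Secrets Manager."""
    ssm = boto3.client("ssm")
    secrets = boto3.client("secretsmanager")

    # Hypothetical parameter name holding the listener port for this environment.
    listener_port = ssm.get_parameter(
        Name=f"/es/{environment}/listener-port"
    )["Parameter"]["Value"]

    # Hypothetical secret holding database credentials as a JSON document.
    db_credentials = json.loads(
        secrets.get_secret_value(
            SecretId=f"es/{environment}/db-credentials"
        )["SecretString"]
    )

    return {
        "environment": environment,
        "listener_port": int(listener_port),
        "db_user": db_credentials["username"],
        "db_password": db_credentials["password"],
    }

if __name__ == "__main__":
    config = load_region_config("qa")
    print(f"QA region listens on port {config['listener_port']}")
```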
Best Practice 3: Identify requirements for Field Developed Solutions (FDS)
FDS (Field Developed Solutions) serve as bridge solutions that connect Rocket Software Enterprise Server with third-party products by addressing technology gaps between them. During analysis, teams identify which specific FDS are needed to enable proper integration between these platforms. Once identified, these FDS are purchased, installed, and thoroughly tested in the target environment to ensure proper functionality. In special situations where standard FDS don’t exist, such as for custom printer solutions, Rocket Software architects may collaborate directly with the delivery team to explore and evaluate options for developing new, custom FDS that meet the unique requirements.
Mechanisms to Identify FDS during discovery phase
Inventory and Discovery (use the standardized Enterprise Analyzer (EA) reports)
- Start with Executive Reports such as Application Inventory and Application Summary for a portfolio view of program volume, technologies used, and complexity. Unusual technologies or outlier metrics can indicate homegrown components.
- Review Inventory Reports to compare verified file counts and lines of code (LOC) across languages like COBOL, PL/I, and Job Control Language (JCL). Discrepancies between what JCL references and what EA can verify often point to custom utilities or copybooks outside the baseline.
- Use Verification and Reference Reports, especially Missing Files and Unresolved References, to list copybooks, called programs, and JCL procedures (PROCs) that do not resolve. Treat frequently referenced unresolved items as the highest priority.
- Enable JCL Support to parse procedures, control cards, and utility aliases. Scan the reports for non-standard utility names, custom procedure naming patterns, and dataset identifiers that signal in-house jobs or libraries.
- Run the Portability Assessment in EA to flag findings and points of interest such as dialect-specific features or non-standard constructs. These findings can correlate with custom routines that require special handling and are strong indicators of FDS.
Pattern-Based Search using Enterprise Analyzer Code Search
- Use Enterprise Analyzer Code Search to scan all verified source files for patterns that may require an FDS. Start with predefined categories and then add custom queries.
- Run a Code Search from the Analyzer workspace, or create a Custom Code Search Report that bundles multiple queries for repeatable discovery across portfolios. Check “Application Assessment (Rocket) – Code Search” for additional details. For example, determine which SORT parameters are in use; this becomes important if you have IFTHEN or JOINKEYS in your SORT control statements (a supplementary scan sketch follows this list).
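Outside of EA, a simple repository scan can provide a quick cross-check for SORT features that typically require an FDS. The sketch below is a supplementary command-line search, not a replacement for the EA Code Search; the file extensions and patterns are assumptions to adapt to your portfolio.

```python
import os
import re

# Illustrative patterns that often signal the need for a SORT FDS; extend with shop-specific indicators.
PATTERNS = {
    "IFTHEN": re.compile(r"\bIFTHEN\b"),
    "JOINKEYS": re.compile(r"\bJOINKEYS\b"),
    "OUTREC OVERLAY": re.compile(r"\bOUTREC\s+OVERLAY\b"),
}

def scan_for_sort_features(source_root: str) -> None:
    """Scan JCL and control-card members for DFSORT/SYNCSORT features of interest."""
    for dirpath, _, filenames in os.walk(source_root):
        for name in filenames:
            if not name.lower().endswith((".jcl", ".prc", ".ctl")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as handle:
                for lineno, line in enumerate(handle, start=1):
                    for feature, pattern in PATTERNS.items():
                        if pattern.search(line.upper()):
                            print(f"{path}:{lineno}: {feature}")

if __name__ == "__main__":
    scan_for_sort_features("./mainframe-source")  # illustrative path to the source export
```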
Stakeholder Input
- Interview developers and operations teams for in-house tools.
- Identify where Rocket Enterprise Developer tooling and updates can supplement or replace in-house tools.
- Check change management history for custom utility references.
Examples of common FDS:
- SORT: Provides extensions to the Rocket Software external SORT to support additional DFSORT and SYNCSORT options.
- Simple Mail Transfer Protocol (SMTP): Provides SMTP integration with JCL to provide email sending functionality.
- Secure FTP (SFTP): Provides the necessary support to map mainframe JCL FTP steps to SFTP with little or no change to the JCL.
- Rocket Software offers several FDS that integrate third-party languages such as SAS. Review the full list of available FDS from Rocket Software.
Best Practice 4: Prepare Source Code for Migration Success
Code deployment is the process of taking the application source code from the mainframe and building, deploying, and testing it in the target Rocket Software runtime environment. This includes recompiling the COBOL code and converting any code in languages that Enterprise Server does not support, such as Assembler.
Check for source code completeness
Before recompilation, validate complete source code availability, including application modules, copybooks, JCL/PROCs, macros, and utilities.
Embedded hexadecimal (hex) in source
Teams use hex literals to set binary flags and bit masks (COMP, COMP-5, BINARY), initialize packed/COMP-3 fields, inject non-display control characters for record separation, and represent code page-specific display values. Migration challenges arise from code-page differences between the mainframe (EBCDIC) and the target environment (ASCII). For example, the digit 0 is X'F0' in EBCDIC but X'30' in ASCII, so the COBOL statement MOVE X'F0F1' TO ALPHA-FIELD displays 01 on the mainframe but requires translation for correct ASCII display. Similarly, the EBCDIC record separator X'15' differs from the ASCII line feed X'0A'. Document the encoding standard in the SA&DD for all environments.
Solution
- Decide and document the target encoding strategy. If you retain EBCDIC, keep datasets and literals in EBCDIC and set file handlers or CCSIDs explicitly. If you adopt ASCII or Unicode, translate only at well-defined I/O boundaries and rewrite hex that represents display text into portable literals (for example, 01) or encoding-aware constants in shared copybooks (a minimal translation sketch follows this list).
- Restrict hex to true binary and packed data (flags, bit masks, COMP-3), and document the intent in code comments and design artifacts. Use Q CLI to identify likely binary fields, generate or update unit tests around them, and validate that numeric behavior on the Rocket Enterprise Server runtime matches the mainframe baseline.
- Automate hex discovery and classification. Use repository-wide searches driven by Q CLI to locate patterns such as X'..', then have the agent group occurrences by usage pattern (display, packed, control characters) and propose either direct translation (for text) or preservation (for binary). Use a small golden input file to prove the translation path end to end, confirming that display fields are translated exactly once while binary fields remain byte-identical across platforms.
- Test translation paths as repeatable workflows. Validate dataset and code-page settings, copy utilities, and middleware to ensure they neither double-translate nor skip translation. Q CLI agents can orchestrate these checks by invoking your existing utilities and comparison tools from the command line, capturing logs and diffs, and packaging them into repeatable recipes that can be rerun during regression and cutover rehearsals.
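The following minimal sketch illustrates the translation rule described above, using Python's cp037 codec as a stand-in for your installation's EBCDIC code page: display text such as X'F0F1' is decoded exactly once, while packed data is copied byte-for-byte.

```python
EBCDIC = "cp037"  # assumed US EBCDIC code page; substitute your installation's CCSID

def translate_display_bytes(raw: bytes) -> str:
    """Translate EBCDIC display text to a Python string (ASCII-compatible on write)."""
    return raw.decode(EBCDIC)

if __name__ == "__main__":
    # X'F0F1' is the EBCDIC encoding of the display characters "01".
    ebcdic_bytes = bytes.fromhex("F0F1")
    text = translate_display_bytes(ebcdic_bytes)
    print(text)                         # -> 01
    print(text.encode("ascii").hex())   # -> 3031, the ASCII encoding of "01"

    # Binary and packed fields must NOT pass through this translation:
    # a COMP-3 value such as X'12345C' is copied byte-for-byte, never decoded as text.
    packed = bytes.fromhex("12345C")
    print(packed.hex().upper())          # -> 12345C, byte-identical on both platforms
```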
Less stringent coding standards in the source code
- Legacy applications often depend on permissive mainframe-compiler behavior (or shop-specific extensions) that a stricter Rocket compiler will reject. Common compilation issues include missing scope terminators, fixed-format mistakes, inconsistent sign handling, non-standard verbs, MOVE CORRESPONDING, unresolved COPY REPLACING, and TRUNC/ARITH differences.
- To mitigate these issues, establish and enforce a compiler-directive profile that emulates mainframe behavior. Add linter and Continuous Integration (CI) rules for scope, format, and PIC checks (a minimal lint sketch follows this list). Replace non-standard constructs with portable equivalents. Implement unit tests for numeric and packed fields to detect regressions early.
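As a starting point for those linter and CI rules, a small script can flag constructs that need review before recompilation. The checks below are illustrative, not exhaustive; a production linter would parse the source properly rather than rely on regular expressions.

```python
import re
import sys

MOVE_CORR = re.compile(r"\bMOVE\s+CORR(ESPONDING)?\b", re.IGNORECASE)

def lint_cobol(path: str) -> int:
    """Flag a few constructs that commonly need review when recompiling on the target."""
    findings = 0
    with open(path, errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            text = line.rstrip("\n")
            if MOVE_CORR.search(text):
                print(f"{path}:{lineno}: MOVE CORRESPONDING - verify field-by-field mapping on target")
                findings += 1
            if len(text) > 72 and text[72:80].strip():
                print(f"{path}:{lineno}: content in columns 73-80 - confirm source-format directive")
                findings += 1
    return findings

if __name__ == "__main__":
    total = sum(lint_cobol(path) for path in sys.argv[1:])
    sys.exit(1 if total else 0)  # non-zero exit fails the CI stage
```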
Evolution of COBOL standards
- COBOL language standards have evolved over time, continually adding new syntax and reserved words. For example, ‘FUNCTION’ became a reserved word with ANSI85’s introduction of intrinsic functions. In some cases, these differences can be mitigated by adjusting the compiler directives. Rocket Software’s “REMOVE” directive allows reserved words to be used as data items – for instance, “REMOVE(FUNCTION)” enables compilation while preserving expected functionality.
Accelerating activities with Amazon Q Developer on the command line (Q CLI)
- Leverage Amazon Q CLI to accelerate mainframe modernization by orchestrating automated code retrieval, conversion, and deployment workflows directly from the command line. Q CLI streamlines recompilation tasks, surfaces incompatibilities (such as unsupported Assembler calls) early, and provides consistent targeting of Rocket Software runtimes, ensuring each migration sprint begins with ready-to-deploy source code. See this article on Accelerating compilation with Q CLI for additional information.
Best Practice 5: Extensive testing is key to success
Develop an AWS environment that mirrors the production environment by replicating exact topology, security configurations, data encodings, and middleware components while executing comprehensive integration and system tests.
Establish a mainframe baseline:
Establish a comprehensive baseline measurement of the current system’s behavior and performance metrics, encompassing both steady-state and peak conditions. Document precise measurements of throughput, latency, and data integrity checkpoints across all components. This baseline should include detailed metrics from mainframe systems, specifically capturing System Management Facilities (SMF) records, control totals, critical-path job timings, I/O volumes, and interface latencies. This information will serve as the definitive reference point for validating AWS migration success.

For baseline testing, implement end-to-end validation covering the complete operational spectrum. Execute the full batch schedule including all Job Control Language (JCL) procedures, test all online screens and services, verify both inbound and outbound interfaces, validate third-party component integration (including printing services, email systems, message queues, and schedulers), confirm Enterprise Server (ES) security controls, and perform comprehensive data pulls to establish timing baselines. Document all results with statistical significance, including peak load conditions and edge cases, to ensure a complete performance profile for comparison with the AWS implementation. Use automated testing frameworks to ensure consistency and repeatability of measurements, and maintain detailed logs of all test conditions and environmental factors that could impact performance metrics.
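A simple way to make the baseline machine-comparable is to record each job's elapsed time and control totals in a manifest that later AWS runs are diffed against. The sketch below is illustrative; the job name, timings, and totals would come from SMF extracts or job logs rather than hard-coded values.

```python
import json
from datetime import datetime

def record_baseline(job_name: str, start: str, end: str, control_totals: dict,
                    baseline_file: str = "mainframe_baseline.json") -> None:
    """Append one job's elapsed time and control totals to the baseline manifest."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    elapsed = (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()
    try:
        with open(baseline_file) as handle:
            baseline = json.load(handle)
    except FileNotFoundError:
        baseline = {}
    baseline[job_name] = {"elapsed_seconds": elapsed, "control_totals": control_totals}
    with open(baseline_file, "w") as handle:
        json.dump(baseline, handle, indent=2)

if __name__ == "__main__":
    # Illustrative values; in practice they come from SMF records or job accounting logs.
    record_baseline(
        job_name="PAYROLL-DAILY",
        start="2024-06-01T02:00:00",
        end="2024-06-01T02:47:30",
        control_totals={"records_read": 1250000, "records_written": 1249998},
    )
```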
Integration testing
Execute tests in a shared integration Enterprise Server (ES) region or local developer ES environment. Ensure precise replication of production configurations including encoding selections (EBCDIC/ASCII), dataset catalogs, and connection settings. When addressing defects, implement corrections through standardized COBOL compiler-directive profiles where possible. This ensures subsequent builds maintain consistent behavior across critical elements like numeric semantics and sign rules. Other key elements to standardize include CCSID expectations, file organization, and record-delimiter handling. This standardized approach prevents environment-specific issues and ensures reproducible behavior across all environments.
Examples of typical challenges
- Data Format: A mainframe can send a Variable Block (VB) file directly to or from another mainframe. When a mainframe sends a VB file to a distributed platform, it will not arrive in a VB form that Rocket Software can process. This requires a Rocket Field-Developed Solution (FDS) for record-format conversion or a custom Extract, Transform, Load (ETL) program (a minimal parsing sketch follows this list).
- Character-set changes: When a distributed platform sends data to a mainframe the character set can be automatically converted from ASCII to EBCDIC by the FTP server on the mainframe. When that same file is sent to the ES environment this character set conversion capability is not present. Rocket FTP FDS handles outbound data transfers with built-in character set conversion capabilities. However, inbound transfers from external platforms require additional processing steps for character set conversion.
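If a custom ETL program is chosen over an FDS, the core of it is splitting the transferred byte stream on the Record Descriptor Words. The sketch below assumes the binary transfer preserved the RDWs and that records contain pure display text; records with binary or packed fields need layout-aware conversion instead.

```python
import struct

def split_vb_records(path: str):
    """Yield logical records from a binary transfer of a VB dataset that kept the RDWs.

    Each record is prefixed by a 4-byte Record Descriptor Word: a big-endian 2-byte
    length (which includes the RDW itself) followed by 2 reserved bytes.
    """
    with open(path, "rb") as handle:
        while True:
            rdw = handle.read(4)
            if len(rdw) < 4:
                break
            (length,) = struct.unpack(">H", rdw[:2])
            yield handle.read(length - 4)

if __name__ == "__main__":
    # Illustrative file names; translate each record's text payload from EBCDIC (cp037 assumed)
    # and write line-sequential output. Only valid if the records are pure display text.
    with open("converted.txt", "w", encoding="ascii", errors="replace") as out:
        for record in split_vb_records("vbfile.bin"):
            out.write(record.decode("cp037") + "\n")
```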
Functional equivalence testing
System testing must validate functional equivalence at scale after integration testing is complete. Execute the as-is batch portfolio on AWS using production-level data volumes. Compare all outputs against baseline metrics until achieving functional equivalence and meeting SLAs. Create detailed planning documents that specify calendars, dependencies, and time-zone rules. Include restart procedures, rerun protocols, and documented cutover steps. Running as-is tests helps isolate environment differences, encoding issues, and I/O variations before optimization begins.

Address scheduler migration early; if possible, use legacy scheduler bridges or plugins to orchestrate AWS workloads during initial testing. Consider earlier migration if the legacy platform lacks secure connectivity, calendar features, or restart capabilities. Validate scheduler parity through shadow or parallel runs. Practice monitoring, alerting, and recovery procedures before cutover. Once operations stabilize, streamline job flows to remove redundancies while keeping functional changes separate from platform migration.
Performance testing
Performance testing must validate that migrated applications meet or exceed the source system’s baseline metrics. Focus testing on three key areas: throughput rates, response latency, and resource usage. Execute all tests using production-scale data volumes and workload patterns to ensure accurate performance comparisons.
Baseline Metrics:
- Establish performance baselines using mainframe SMF/RMF records and job accounting metrics
- Document critical-path timings and I/O volumes as reference benchmarks
- Capture peak workload performance characteristics and transaction response times
Storage Performance:
- Configure Amazon EFS throughput modes (Elastic/Provisioned) based on workload patterns
- Utilize Amazon EBS io2/io1 volumes for I/O-intensive batch processing
- Monitor EFS metrics: TotalIOBytes, PermittedThroughput, BurstCreditBalance (see the metric-pull sketch after this list)
- Track EBS metrics: IOPS, Throughput, Queue Length, Volume Status
- For more detailed information check Selecting Filesystems for Mainframe Modernization
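A minimal sketch of pulling these EFS metrics with the CloudWatch API for comparison against the batch-window baseline; the file system ID and look-back period are placeholders.

```python
from datetime import datetime, timedelta
import boto3

def pull_efs_metrics(file_system_id: str, hours: int = 24) -> None:
    """Print hourly EFS metrics used to compare batch-window I/O against the baseline."""
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.utcnow()
    start = end - timedelta(hours=hours)
    for metric, statistic in [("TotalIOBytes", "Sum"),
                              ("PermittedThroughput", "Average"),
                              ("BurstCreditBalance", "Minimum")]:
        response = cloudwatch.get_metric_statistics(
            Namespace="AWS/EFS",
            MetricName=metric,
            Dimensions=[{"Name": "FileSystemId", "Value": file_system_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=[statistic],
        )
        for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
            print(f"{metric:22} {point['Timestamp']:%Y-%m-%d %H:%M} {point[statistic]:.0f}")

if __name__ == "__main__":
    pull_efs_metrics("fs-0123456789abcdef0")  # replace with your file system ID
```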
Compute Performance:
- Monitor and capture EC2 metrics: CPU Utilization, Network I/O, Memory Usage
- Optimize instance types based on workload characteristics (compute/memory/IO intensive)
- Track EBS-optimized instance performance for storage-intensive operations
- For more information check Scaling for performance with AWS Mainframe Modernization
Validation Process:
- Execute comprehensive batch schedule testing with full JCL procedures
- Monitor and validate performance against established SLAs
- Track job completion times, resource utilization, and throughput rates
- Implement automated performance regression testing for consistent evaluation (see the comparison sketch below)
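A regression gate can be as simple as comparing each job's elapsed time on AWS against the recorded mainframe baseline and failing the run when a tolerance is exceeded. The file names, JSON layout, and 10% tolerance below are assumptions to adjust to your SLAs; the baseline file matches the manifest sketched in Best Practice 5.

```python
import json
import sys

TOLERANCE = 0.10  # allow up to 10% elapsed-time regression versus the mainframe baseline

def compare_to_baseline(baseline_file: str, results_file: str) -> int:
    """Compare AWS run timings against the recorded mainframe baseline; return failure count."""
    with open(baseline_file) as handle:
        baseline = json.load(handle)
    with open(results_file) as handle:
        results = json.load(handle)

    failures = 0
    for job, expected in baseline.items():
        actual = results.get(job)
        if actual is None:
            print(f"MISSING    {job}: no result recorded on AWS")
            failures += 1
            continue
        limit = expected["elapsed_seconds"] * (1 + TOLERANCE)
        status = "OK" if actual["elapsed_seconds"] <= limit else "REGRESSION"
        if status == "REGRESSION":
            failures += 1
        print(f"{status:10} {job}: baseline={expected['elapsed_seconds']:.0f}s "
              f"aws={actual['elapsed_seconds']:.0f}s limit={limit:.0f}s")
    return failures

if __name__ == "__main__":
    sys.exit(1 if compare_to_baseline("mainframe_baseline.json", "aws_results.json") else 0)
```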
Subject Matter Expert (SME) sign-off
SMEs should validate critical outputs and exception paths; in addition, automation should supply reconciliation reports, timings, and diffs so sign-off is fast, transparent, and auditable.
Best Practice 6: Leverage common tools to accelerate migration
Mainframe Access (MFA)
- MFA, from Rocket Software, provides both a user-friendly drag-and-drop interface and a batch processing system that enables users to pull and push multiple data sets between the mainframe and other systems. This eliminates the need to run FTP jobs to push files from the mainframe to the target platform. For example, MFA can be used to access Partitioned Data Set (PDS) libraries, JCL, and control cards to obtain the inventory. In addition, it can be used to download datasets for testing.
- Since MFA operates bidirectionally, it enables test output files to be transmitted back to the mainframe, to be compared against mainframe-generated output. Project teams can continue using familiar mainframe COMPARE/X utilities for immediate validation needs while gradually learning new tools in the target environment.
Data File Converter (DFCONV)
- DFCONV, a batch data-file converter from Rocket Software, is used to transform files between different formats, including conversion between EBCDIC and ASCII. This prepares migrated datasets for use in ES. The workflow uses MFA’s batch or command-line interface to download source datasets from the mainframe, then converts them to the required format using DFCONV.
Integrated Development Environment (IDE)
- Eclipse provides the primary platform for development and testing, particularly for Linux projects. Developers may also choose Microsoft Visual Studio or VS Code if that is the corporate standard. Both Enterprise Developer and Visual COBOL integrate seamlessly with Eclipse and VS Code. These developer tools support editing, compiling, and debugging across Windows and Linux environments.
Best Practice 7: Develop a comprehensive data migration strategy
Identify integration & system-test data (make it testable, repeatable)
All the data required to support integration and end-to-end system testing needs to be identified as early as possible. This often becomes a repeatable process that is executed several times to resynchronize the system-test environment with the mainframe and make test comparisons easier.
Decide EBCDIC vs ASCII early (and stick to it)
The choice between EBCDIC and ASCII impacts data conversion processes, application compatibility, and potential performance overhead. While maintaining EBCDIC can reduce migration complexity and preserve exact data representation, adopting ASCII can facilitate easier integration with modern platforms and tools. Organizations must carefully weigh these factors, considering their specific application requirements, data characteristics, and long-term modernization goals when making this decision.
Data replication plan & cutover timeline (what “windows” mean)
The replication plan is a moving target and is highly dependent on when the cutover occurs. Once the data has been identified, establish an estimate of the time needed to migrate it. This estimate may exceed the acceptable downtime available for the cutover. If so, determine static versus dynamic data based on the start date of the data cutover and the actual go-live date. Static data can be migrated in the days or weeks leading up to go-live with no risk of it changing on the mainframe. The dynamic data is the remainder that will have to be migrated on the go-live date. Perform at least two timed end-to-end drills and adjust the plan until you have a 20–30% buffer versus the window (a simple feasibility calculation is sketched below).
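The window math from the timed drills can be captured in a small feasibility check, sketched below with illustrative figures for dynamic-data volume, sustained transfer throughput, and the cutover window.

```python
def cutover_feasibility(dynamic_gb: float, throughput_gb_per_hour: float,
                        window_hours: float, target_buffer: float = 0.25) -> None:
    """Estimate whether dynamic-data migration fits the cutover window with a 20-30% buffer."""
    migration_hours = dynamic_gb / throughput_gb_per_hour
    buffer = (window_hours - migration_hours) / window_hours
    print(f"Estimated migration time: {migration_hours:.1f} h of a {window_hours:.1f} h window")
    print(f"Remaining buffer: {buffer:.0%} (target >= {target_buffer:.0%})")
    if buffer < target_buffer:
        print("Plan adjustment needed: move more data to the static (pre-cutover) wave "
              "or increase transfer throughput.")

if __name__ == "__main__":
    # Illustrative figures from a timed drill: 900 GB of dynamic data at 150 GB/hour in a 10-hour window.
    cutover_feasibility(dynamic_gb=900, throughput_gb_per_hour=150, window_hours=10)
```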
Historical data storage & access (pre vs post—how to choose)
Many customers have regulatory or legal requirements to maintain many years of historical data. This historical data needs to be migrated to the target environment. If the volume of historical data is large, consider moving the historical data either before or after the go live.
Physical file migration (formats, handlers, tooling)—what you should learn
Files containing non-text data (files with COMP or COMP-3 fields, for example) require what is known as a structure file. A structure file defines how to interpret and display the data in a data file and is created using the structure-file editor from Rocket Software. Structure files are built from copybooks and a compiled program, but have added complexity for physical files that contain multiple conditional layouts. The structures are used during the EBCDIC-to-ASCII conversion.
Database migration (a strategic choice—minimize risk to cutover)
When selecting a target database engine for migration, carefully evaluate options based on parity needs, cost, team skills, and long-term flexibility. It’s essential to thoroughly document feature mappings, data-type conversions, and code pages, and to develop validation queries to ensure migration integrity.

DB2 LUW (Linux, Unix, Windows) provides behavior closest to mainframe DB2, requiring fewer code changes and reducing migration risk, though it comes with vendor dependency and reduced flexibility. Alternatively, PostgreSQL offers significant advantages including zero license fees, strong SQL standards compliance, modern features like JSON support, and excellent cloud flexibility. However, it requires mapping DB2-specific features, potential team upskilling, and application modifications where behaviors differ. Other viable migration paths include Microsoft SQL Server or Oracle with HCOSS for DB2 schema/data migration and mainframe behavior emulation. Check with Rocket Software for all other available database options.
Validation and governance
Effective validation and governance require control checks throughout the migration process, including encoding-aware file comparisons, row and control-total reconciliation, targeted business queries, and careful analysis of timing deltas. Organizations should store comprehensive manifests, checksums, and detailed reports while requiring multiple migration rehearsals before implementation. Most importantly, establish clear criteria that will block go-live if any validation gates remain unmet, ensuring data integrity and system reliability.
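A minimal reconciliation sketch for flat-file extracts, comparing row counts, a control total, and a content checksum between the mainframe and AWS sides. The file names and amount column are illustrative, and both extracts must be produced with the same sort order, column order, and encoding for the checksum comparison to be meaningful.

```python
import csv
import hashlib

def reconcile(source_extract: str, target_extract: str, amount_column: str) -> bool:
    """Compare row counts, a control total, and a content checksum between two extracts."""
    def summarize(path: str):
        rows, total = 0, 0.0
        digest = hashlib.sha256()
        with open(path, newline="") as handle:
            for row in csv.DictReader(handle):
                rows += 1
                total += float(row[amount_column])
                digest.update("|".join(row[col] for col in sorted(row)).encode())
        return rows, round(total, 2), digest.hexdigest()

    src, tgt = summarize(source_extract), summarize(target_extract)
    for label, s, t in zip(("row count", "control total", "checksum"), src, tgt):
        print(f"{label:13}: source={s} target={t} {'MATCH' if s == t else 'MISMATCH'}")
    return src == tgt

if __name__ == "__main__":
    if not reconcile("mainframe_extract.csv", "aws_extract.csv", amount_column="AMOUNT"):
        raise SystemExit("Reconciliation gate failed - block go-live until resolved")
```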
Conclusion
Mainframe replatforming to AWS represents a strategic evolution rather than just a technical migration. By following these seven best practices systematically, organizations can successfully transform their mainframe infrastructure while minimizing risks and ensuring business continuity.