IBM & Red Hat on AWS
Modernize Mainframe Applications for Hybrid Cloud with IBM and AWS
As digital expectations and AI-powered decision making accelerate, organizations are modernizing mainframe applications to move faster, operate smarter, and fully realize the value of their core platforms.
On the modernization journey, we see customers choose a hybrid cloud strategy that offers a single, integrated operating model with common agile practices and interoperability between applications on AWS and on IBM mainframes running on premises.
A hybrid strategy that includes IBM Z and AWS allows for rapid AI-led innovation, makes it easier to access applications and data without significant changes, and optimizes the cost of running or extending applications. Together, this approach maximizes business agility and return on investment (ROI).
Collaborating with IBM, an AWS Premier Tier Services Partner and Managed Cloud Services Provider (MSP), AWS is extending the available application modernization options to enable customers to select the right modernization path for their business.
We have identified five patterns in support of a hybrid cloud approach.
Pattern #1: Real-Time Sharing Between z/OS Applications and AWS
Delivering personalized, differentiated customer experiences increasingly requires real-time data exchange between core mainframe applications and AWS. IBM’s acquisition of Confluent, built on Apache Kafka, establishes a foundation for enterprise data in motion, positioning IBM Z as an event-driven platform for continuous, real-time data flow across hybrid environments. This pattern enables faster, more flexible access to core business data for cloud applications running on AWS, without disrupting mission-critical systems or SLAs.
Event Streaming Architecture with Confluent
The pattern leverages Confluent Platform to capture and stream business events in real time:
- Native z/OS Integration: Confluent Platform runs natively on IBM Z with high-throughput processing, low CPU overhead, and low event latency
- Comprehensive Event Capture: Stream events from CICS transactions, IMS message processing, Db2 data changes (CDC), VSAM file updates, and MQ message flows
- Intelligent Event Processing: Use Kafka Streams and ksqlDB on IBM Z for real-time event transformation, aggregation, pattern detection, and complex event processing
- Hybrid Cloud Event Mesh: Seamlessly distribute events between IBM Z, Confluent Cloud, and AWS services, enabling event-driven architectures across hybrid environments
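As a minimal sketch of the event-capture side, the following shapes a Db2 for z/OS CDC change record into a Kafka-style event envelope before it is handed to a producer. The field names (`source`, `op`, `ts`, `payload`) and the `db2zos.*` naming are illustrative choices, not a Confluent-defined schema:

```python
import json
from datetime import datetime, timezone

def cdc_to_event(table: str, op: str, row: dict) -> dict:
    """Wrap a Db2 for z/OS change record in an event envelope.

    The envelope fields here are illustrative, not a Confluent schema.
    """
    return {
        "source": f"db2zos.{table}",   # hypothetical topic/source naming
        "op": op,                      # e.g. "INSERT", "UPDATE", "DELETE"
        "ts": datetime.now(timezone.utc).isoformat(),
        "payload": row,
    }

# Serialize for a Kafka producer (key = table name, value = JSON event).
event = cdc_to_event("ACCOUNTS", "UPDATE", {"acct_id": "A-100", "balance": 2500})
record_value = json.dumps(event).encode("utf-8")
```

In a real deployment this record would be published by a Confluent producer running on z/OS; the envelope is what downstream AWS consumers would see on the topic.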
AWS Integration Patterns
Events flowing from IBM Z through Confluent can be consumed by AWS services:
- Amazon Managed Service for Apache Flink: Process event streams for real-time analytics
- Amazon EventBridge: Route events to AWS Lambda, SQS, SNS, and other services
- Amazon SageMaker AI: Feed real-time features for ML model inference
- Amazon Simple Storage Service (Amazon S3): Archive event streams for data lake and compliance requirements
- AWS Glue: Transform and catalog event data for analytics workloads
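On the consuming side, a bridge process might map each mainframe event to an Amazon EventBridge entry. The sketch below only builds the `PutEvents` entry structure; the `Source` and detail-type values are hypothetical, and an actual send would call `boto3.client("events").put_events(Entries=[entry])`:

```python
import json

def to_eventbridge_entry(event: dict, bus: str = "default") -> dict:
    """Map a mainframe event consumed from Kafka to an EventBridge
    PutEvents entry. Source and DetailType values are illustrative."""
    return {
        "Source": "zos.confluent.bridge",       # hypothetical source name
        "DetailType": event.get("op", "UNKNOWN"),
        "Detail": json.dumps(event),            # EventBridge expects a JSON string
        "EventBusName": bus,
    }

entry = to_eventbridge_entry({"op": "UPDATE", "payload": {"acct_id": "A-100"}})
# A real bridge would then call:
#   boto3.client("events").put_events(Entries=[entry])
```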
Pattern #2: Synchronizing Mainframe Data with AWS
As organizations increasingly rely on analytics and AI, there is a growing need to synchronize data between IBM Z applications and AWS. Continuous, granular data synchronization enables cloud-based analytics and AI models to access timely, trusted mainframe data while minimizing duplication, reducing risk, and preserving the performance of core IBM Z applications.
One approach uses IBM Data Gate on Red Hat OpenShift Service on AWS (ROSA) with Amazon Athena for serverless SQL analytics and Amazon QuickSight for interactive dashboards and rapid insights. Data Gate replicates Db2 for z/OS data into Db2 databases on AWS; Amazon Athena and Amazon QuickSight users can then query this data via the Amazon Athena IBM Db2 connector.
IBM Data Gate continuously copies Db2 for z/OS data to the cloud by synchronizing changes with Db2 databases running on Cloud Pak for Data. This enables analytics and machine learning in AWS while keeping IBM Z systems secure and reliable. Data remains sourced on IBM Z, while ML, analytics, and visualization services run on AWS, without additional software on the mainframe, using Db2 for z/OS log-based mechanisms and Red Hat OpenShift on AWS as the standardized data and integration layer.
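To sketch how an AWS-side consumer might query the replicated data, the following builds the parameters for an Athena `StartQueryExecution` call. The catalog name (`db2_datagate`), database, table, and results bucket are hypothetical placeholders for a data source registered through the Athena IBM Db2 connector:

```python
def athena_query_params(catalog: str, database: str, sql: str, output_s3: str) -> dict:
    """Parameters for Athena StartQueryExecution against a federated
    Db2 data source. A real call would be:
        boto3.client("athena").start_query_execution(**params)
    """
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Catalog": catalog, "Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

params = athena_query_params(
    "db2_datagate",                       # hypothetical connector catalog name
    "BANKDB",                             # hypothetical Db2 database
    "SELECT acct_id, balance FROM accounts WHERE balance > 10000",
    "s3://my-athena-results/queries/",    # hypothetical results bucket
)
```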
Pattern #3: Simple and Secure API Access to New Channels
This pattern extends access from mainframe applications to the AWS Cloud by making business functions and data available using open APIs for enhancing a customer experience via AWS services. For example, some customers are choosing to extend their customer experience using AI assistants that need secure access to customer information.
IBM z/OS Connect securely exposes z/OS applications and data as modern, API-driven services, enabling seamless integration between core mainframe systems and cloud-native applications on AWS. IBM z/OS Connect has evolved beyond simple API creation to become an intelligent application integration platform that integrates with enterprise API management solutions like Amazon API Gateway.
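A cloud application would call such a service like any other REST API fronted by Amazon API Gateway. The sketch below only constructs the HTTP request; the endpoint path, API-key header usage, and payload shape are illustrative assumptions, since the real contract is defined by the z/OS Connect API project:

```python
import json
import urllib.request

def build_api_request(endpoint: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Build a POST request to a z/OS Connect API behind Amazon API Gateway.
    Endpoint, headers, and payload shape are illustrative."""
    return urllib.request.Request(
        url=endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

req = build_api_request(
    "https://example.execute-api.us-east-1.amazonaws.com/prod/customers/lookup",
    "demo-key",                     # hypothetical API Gateway API key
    {"customerId": "C-42"},
)
# urllib.request.urlopen(req) would invoke the API; omitted here.
```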
z/OS Connect now supports the Model Context Protocol (MCP), enabling AI agents and large language models (LLMs) to directly interact with mainframe applications and data:
- AI Agent Access: LLMs can discover, understand, and invoke mainframe services through standardized MCP interfaces, enabling natural language interaction with core business systems
- Application Workflow Automation: AI agents can orchestrate complex multi-step business processes spanning z/OS and AWS, making intelligent decisions based on real-time data and business rules
- Intelligent Service Discovery: MCP provides semantic descriptions of mainframe services, allowing AI systems to understand capabilities and automatically compose workflows
- Secure AI Access: Fine-grained access control ensures AI agents operate within defined security boundaries, with full audit trails of all AI-initiated transactions
Pattern #4: Hybrid Storage with AWS Storage Services
z/OS clients are integrating cloud object storage into their classic disk and tape environments to create a hybrid storage architecture. This hybrid architecture enables clients to leverage the strengths of on-premises disk and tape storage while adding the intrinsic strengths of cloud solutions like AWS Storage Services for backup, archive, and unstructured data.
z/OS offers many solutions that transparently leverage AWS Storage Services as another tier of storage.
Host Transparent Solutions
A simple way to create a hybrid storage environment is with the IBM TS7700 cloud tier. This feature enables data to be first written to the TS7700 virtual tape cache, which ensures SLAs are maintained, and there are no application changes needed. The TS7700 can then transparently migrate colder virtual tape data to AWS Storage Services for low-cost, cyber-resilient residency.
For customers who desire to utilize AWS Storage Services directly for their tape data, IBM Cloud Tape Connector provides full tape virtualization through software-only emulation that utilizes host-based compression and encryption, enabling direct use of Amazon S3 object storage for mainframe tape data.
IBM z/OS Products that leverage AWS Storage Services
IBM DS8000 provides transparent cloud tiering (TCT), which enables automated, policy-based data backup and archive directly from DS8000 disk to both the TS7700 object store and AWS storage, with none of the data passing through z/OS. This solution enables a significant CPU reduction for data management. DFSMShsm and DFSMSdss leverage this capability for unique cyber resiliency and lifecycle management cloud storage solutions for z/OS clients.
z/OS DFSMSdfp CDA (Cloud Data Access) provides a simple way to manage AWS storage credentials and secure authentication when moving z/OS data. It also provides a simple API for the various AWS cloud storage commands. Additionally, the no-charge GDKUTIL utility is a simple way to move single data sets between disk and tape and AWS cloud storage. It supports S3 capabilities such as object lock, versioning, compression, and encryption for securely transferring and storing data.
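For readers unfamiliar with the S3 side of these capabilities, the sketch below builds the parameters for an S3 `PutObject` call that stores a data set copy under compliance-mode object lock with KMS encryption. The bucket and key names are hypothetical, and an actual upload would pass these to `boto3.client("s3").put_object(Body=data, **params)`:

```python
from datetime import datetime, timedelta, timezone

def put_params(bucket: str, key: str, retain_days: int) -> dict:
    """S3 PutObject parameters pinning an archived data set copy under
    compliance-mode object lock. Bucket/key names are illustrative."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ObjectLockMode": "COMPLIANCE",  # immutable until the retain date
        "ObjectLockRetainUntilDate": datetime.now(timezone.utc)
                                     + timedelta(days=retain_days),
        "ServerSideEncryption": "aws:kms",
    }

params = put_params("zos-archive-bucket", "backups/PROD.ACCTS.BACKUP", 365)
```

Object lock in compliance mode is what gives the cyber-resiliency property: even a privileged credential cannot delete or overwrite the copy before the retention date.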
In addition to TCT, DFSMShsm and DFSMSdss can also be configured to leverage CDA to connect directly to AWS Storage Services for their resiliency and lifecycle management capabilities. When connected directly to AWS Storage Services, these capabilities do not gain the benefit of the CPU reduction of TCT but do gain the flexibility of leveraging additional functionality within AWS Storage Services.
For moving data directly between z/OS disk and tape and AWS Storage Services at scale, IBM offers DFSMScdm (Cloud Data Manager). CDM uses data set wildcarding as input to move many data sets in parallel to AWS Storage Services. This data can remain in its original format or be converted to any code page. This capability can be used to share data across z/OS sysplexes or across an entire enterprise.
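The code page conversion CDM performs is conceptually the same as the standard EBCDIC-to-UTF-8 transcoding shown below (this is an illustration of the transformation, not CDM itself; Python ships the IBM-037 codec as `cp037`):

```python
# z/OS data sets are commonly encoded in EBCDIC (e.g. code page IBM-037).
# Converting to UTF-8 for cloud consumers is a two-step decode/encode:
ebcdic_bytes = b"\xc8\xc5\xd3\xd3\xd6"   # "HELLO" in EBCDIC cp037
text = ebcdic_bytes.decode("cp037")      # EBCDIC bytes -> str
utf8_bytes = text.encode("utf-8")        # str -> UTF-8 bytes for S3 objects
```

Keeping data in its original format instead skips this step, which matters when the consumer is another z/OS system rather than a cloud analytics service.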
For unstructured data, DFSMSdfp OAM utilizes AWS Storage Services as another tier of storage for data such as PDFs, or audio and video files served on z/OS. OAM can write data directly to AWS Storage Services and transition data from disk and tape to cloud storage.
Enhancing Client z/OS Applications to Leverage Storage on AWS Directly
z/OS application owners are looking to leverage the intrinsic capabilities of AWS Storage Services (immutability, shareability, versioning, object lock, etc.) directly for their application data. In addition to the other methods mentioned in this document, they can leverage CDA as a simple way to manage both authentication and the invocation of the various AWS Storage Services commands directly within their z/OS applications.
Pattern #5: Enterprise Automation Across z/OS and AWS
Automation is foundational to any successful hybrid cloud architecture, enabling organizations to operate with speed, consistency, and confidence. Automation standardizes deployment, integration, security, and lifecycle management across environments, allowing teams to scale operations, respond faster to business needs, and maintain reliability while reducing cost and complexity.
With IBM’s acquisition of HashiCorp, organizations now have access to a unified infrastructure lifecycle management stack spanning AWS and IBM Z. HashiCorp Terraform handles infrastructure provisioning, defining what infrastructure exists and in what shape, while Red Hat Ansible Automation Platform handles configuration management and Day 2 operations on top of that provisioned infrastructure.
HashiCorp Terraform Enterprise can be deployed within the customer’s AWS environment as a containerized application on Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Kubernetes Service (Amazon EKS), serving as the central infrastructure control plane for both cloud and mainframe resources. It leverages AWS-native services including Amazon S3 for object storage and Amazon RDS for PostgreSQL for its database, and provides workspace management, policy enforcement through HashiCorp Sentinel, and a collaborative UI for plan and apply workflows.
IBM Terraform Self-Managed for Z extends Terraform to IBM Z infrastructure. The IBM Z Terraform provider enables declarative management of LPARs, FICON connectivity, and OSA networking, integrating with the HMC and HCD through standard Terraform workflows. An included IODF transformer converts existing I/O Definition File content into Terraform configuration, enabling teams to bring validated configurations under Terraform management. The On-Demand Environments provider manages on-demand z/OS development and test environments through IBM Test Accelerator for Z.
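To illustrate what declarative LPAR management could look like, the sketch below emits Terraform's JSON configuration syntax (`*.tf.json`) for a single LPAR resource. The provider source, resource type, and attribute names are placeholders invented for this sketch, not the published IBM Z provider schema; consult the provider documentation for the real names:

```python
import json

# Illustrative Terraform JSON-syntax config; "ibmz" provider and
# "ibmz_lpar" resource are hypothetical placeholders.
config = {
    "terraform": {"required_providers": {"ibmz": {"source": "ibm/ibmz"}}},
    "resource": {
        "ibmz_lpar": {
            "dev_lpar": {
                "name": "DEVLPAR1",
                "memory_gb": 64,
                "processors": 4,
            }
        }
    },
}
tf_json = json.dumps(config, indent=2)  # body of a *.tf.json file
```

The point of the pattern is that the same plan/apply workflow and Sentinel policies that govern AWS resources would also govern this mainframe resource.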
Red Hat Ansible Automation Platform can run on AWS and provides the tools needed to implement enterprise-wide automation across AWS and IBM Z. The Red Hat Ansible Certified Content for IBM Z includes collections for z/OS core operations such as job submission and dataset management, CICS and IMS middleware automation, and z/OSMF software update management. Once Terraform provisions infrastructure, Event-Driven Ansible can detect the state change and automatically trigger configuration playbooks to configure security, middleware, and application environments without manual intervention.
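A minimal sketch of the hand-off from provisioning to configuration: the playbook below submits a setup job on the new z/OS system. The `ibm.ibm_zos_core.zos_job_submit` module name follows the certified collection, but the parameters and JCL member are illustrative assumptions; since YAML is a superset of JSON, the structure is built in Python and dumped as a valid playbook body:

```python
import json

# Sketch of an Ansible playbook for post-provisioning z/OS configuration.
# Module parameters and the JCL member name are illustrative -- check the
# ibm_zos_core collection docs before use.
playbook = [{
    "name": "Configure newly provisioned z/OS environment",
    "hosts": "zos_dev",                     # hypothetical inventory group
    "tasks": [{
        "name": "Submit environment setup job",
        "ibm.ibm_zos_core.zos_job_submit": {
            "src": "USER.JCL(SETUP)",       # hypothetical JCL member
            "location": "data_set",
        },
    }],
}]
playbook_text = json.dumps(playbook, indent=2)  # valid YAML playbook body
```

In the event-driven flow described above, a rulebook would trigger this playbook automatically when Terraform reports the new environment as provisioned.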
Conclusion
To help customers increase agility, maximize the value of their investments, and innovate faster, IBM and AWS are collaborating on hybrid application modernization options to enable customers to select the right modernization path for their business. To learn more:
- Visit the AWS IBM Services Partnership page
- Engage through the IBM Mainframe Application Modernization Solutions
- Visit the Integration architectures between mainframe and AWS for coexistence