This Guidance demonstrates how to build a simple scheduler for batch jobs migrated to AWS Mainframe Modernization, using AWS services such as Amazon EventBridge Scheduler and AWS Step Functions. Batch processing is an important component of enterprise applications running on mainframes, and as these batch processes are migrated from the mainframe to AWS, they require similar integration between batch processing and scheduling functions. This scheduler pattern can be used both for re-platform migrations, with Common Business Oriented Language (COBOL) or PL/I applications, and for re-factor migrations, with the COBOL or PL/I code converted to Java.
This Guidance includes two architectures that show how Amazon EventBridge initiates the AWS Step Functions workflow: one for a single batch job, and one for an orchestration of multiple batch jobs.
Please note: see the Disclaimer section at the end of this page.
Architecture Diagram
Single Run
This architecture shows the AWS Step Functions workflow for BatchJobExecution for a single batch job.
Step 1
Amazon EventBridge Scheduler invokes AWS Step Functions to implement the Job Poller pattern, either as a single execution or on a recurring schedule.
Step 2
Call the StartBatchJob API of AWS Mainframe Modernization to start the specific batch job, passing the ApplicationId and BatchJobIdentifier.
Step 3
Wait a specified amount of time before checking the batch job execution status.
Step 4
AWS Mainframe Modernization posts the sysout and other batch logs to Amazon CloudWatch.
Step 5
Call API GetBatchJobExecution to check the status of the batch job.
Step 6
Check the returned status of the job. If the response shows “Succeeded,” mark the state as Success.
Step 7
If the response shows “Failed,” retrieve the batch job logs for users to triage the issue, and mark the state as Fail. Retrieved logs are available in the Output section of Step Functions in the AWS console.
Step 8
In case of job failure, send a notification to the user.
Step 9
AWS Identity and Access Management (IAM) controls user and service access.
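The Job Poller steps above (start the job, wait, check the status, then mark success or failure) can be sketched in Python with the boto3 AWS Mainframe Modernization (m2) client. This is a minimal illustration of the polling logic rather than the Step Functions implementation itself; the application ID, job script name, and wait settings are hypothetical placeholders.

```python
import time

def run_batch_job(m2, application_id, job_name, wait_seconds=30, max_polls=20):
    # Step 2: StartBatchJob with the ApplicationId and BatchJobIdentifier.
    execution_id = m2.start_batch_job(
        applicationId=application_id,
        batchJobIdentifier={"scriptBatchJobIdentifier": {"scriptName": job_name}},
    )["executionId"]

    for _ in range(max_polls):
        # Step 3: wait a specified amount of time before checking the status.
        time.sleep(wait_seconds)
        # Step 5: GetBatchJobExecution returns the current execution status.
        status = m2.get_batch_job_execution(
            applicationId=application_id, executionId=execution_id
        )["status"]
        # Steps 6-7: mark Success or Fail based on the returned status.
        if status == "Succeeded":
            return "Success"
        if status == "Failed":
            return "Fail"
    raise TimeoutError(f"Job {job_name} did not finish within the polling window")
```

In the actual architecture, this loop is expressed as Step Functions Wait, Task, and Choice states rather than Python code, but the control flow is the same.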
Batch Run
This architecture shows the AWS Step Functions workflow for JobOrchestration for multiple batch jobs.
Step 1
The Step Functions BatchJobExecution StateMachine performs the AWS Mainframe Modernization batch job execution using the Job Poller pattern. This includes actions such as StartBatchJob, GetBatchJobExecution, and GetLogEvents. Input parameters to the StateMachine include AWS Mainframe Modernization identifiers such as ApplicationId and JobNameIdentifier.
Step 2
The JobOrchestration StateMachine performs StartExecution actions. Each StartExecution action runs the BatchJobExecution StateMachine, with ApplicationId and JobNameIdentifier set in the input parameters.
Step 3
StartExecution actions can run serially.
Step 4
Use “Parallel state” to define the jobs that can execute concurrently.
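As a sketch of Steps 2 through 4, the JobOrchestration definition below runs one job serially and then two jobs concurrently inside a Parallel state, built here as a Python dict in Amazon States Language. The state machine ARN, account ID, and job names are hypothetical placeholders; each task uses the Step Functions startExecution.sync service integration to run the child BatchJobExecution StateMachine and wait for it to finish.

```python
import json

CHILD_ARN = "arn:aws:states:us-east-1:111122223333:stateMachine:BatchJobExecution"

def job_branch(job_name):
    # One Parallel branch: StartExecution of BatchJobExecution for a single job.
    return {
        "StartAt": f"Run-{job_name}",
        "States": {
            f"Run-{job_name}": {
                "Type": "Task",
                "Resource": "arn:aws:states:::states:startExecution.sync:2",
                "Parameters": {
                    "StateMachineArn": CHILD_ARN,
                    "Input": {"ApplicationId": "app-1", "JobNameIdentifier": job_name},
                },
                "End": True,
            }
        },
    }

definition = {
    "StartAt": "RunJob1",
    "States": {
        # Step 3: JOB1 runs serially, before the parallel group.
        "RunJob1": {
            "Type": "Task",
            "Resource": "arn:aws:states:::states:startExecution.sync:2",
            "Parameters": {
                "StateMachineArn": CHILD_ARN,
                "Input": {"ApplicationId": "app-1", "JobNameIdentifier": "JOB1"},
            },
            "Next": "RunJobs2And3",
        },
        # Step 4: a Parallel state runs JOB2 and JOB3 concurrently.
        "RunJobs2And3": {
            "Type": "Parallel",
            "Branches": [job_branch("JOB2"), job_branch("JOB3")],
            "End": True,
        },
    },
}

# The dict serializes directly to the JSON definition Step Functions accepts.
asl_json = json.dumps(definition, indent=2)
```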
Step 5
BatchJobExecution actions are triggered individually or as part of the JobOrchestration flow. Running the actions individually allows you to re-start or re-run any job if needed.
To re-start a flow after a failure, re-run the failed job separately, then, as a one-time process, copy and modify the job flow so that it starts from the next step.
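Running a job individually, as described in Step 5, amounts to starting the BatchJobExecution StateMachine directly with that job's input. A minimal sketch using the boto3 Step Functions client, with a hypothetical state machine ARN:

```python
import json

def rerun_job(sfn, state_machine_arn, application_id, job_name):
    # Start a standalone execution of BatchJobExecution for one job,
    # for example to re-run a job that failed inside the orchestration flow.
    response = sfn.start_execution(
        stateMachineArn=state_machine_arn,
        input=json.dumps({"ApplicationId": application_id,
                          "JobNameIdentifier": job_name}),
    )
    return response["executionArn"]
```

A caller would pass `boto3.client("stepfunctions")` as `sfn`; the injectable client also makes the function easy to test without AWS access.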
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
EventBridge and Step Functions integrate with CloudWatch alarms. In case of an AWS Mainframe Modernization batch job failure or any other type of failure, Amazon Simple Notification Service (Amazon SNS) can send alerts to the user.
Security
IAM policies manage user access, providing minimum permissions for users to create new schedules or modify existing schedules. The IAM execution role controls service access for services that invoke a batch job on AWS Mainframe Modernization.
Reliability
Step Functions comes with retry and catch mechanisms, which allow you to implement automated retries of a particular batch job.
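A Task state's Retry and Catch fields express this automation directly. The fragment below is a sketch in Amazon States Language, built as a Python dict; the AWS SDK task resource, backoff settings, and the "NotifyFailure" and "WaitForJob" state names are illustrative assumptions, not taken from this Guidance's code.

```python
# Hypothetical StartBatchJob task state with automated retries and a
# catch-all failure handler (e.g. an SNS notification state).
start_batch_job_state = {
    "Type": "Task",
    "Resource": "arn:aws:states:::aws-sdk:m2:startBatchJob",
    "Retry": [
        {
            "ErrorEquals": ["States.TaskFailed"],
            "IntervalSeconds": 30,   # wait before the first retry
            "MaxAttempts": 3,        # retry the batch job up to 3 times
            "BackoffRate": 2.0,      # double the interval on each retry
        }
    ],
    "Catch": [
        {
            # After retries are exhausted, route to a failure handler.
            "ErrorEquals": ["States.ALL"],
            "Next": "NotifyFailure",
        }
    ],
    "Next": "WaitForJob",
}
```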
Performance Efficiency
Batch job flow schedules are mostly pre-defined. This Guidance is built on serverless technologies such as EventBridge and Step Functions, so there is no need to provision new hardware or software to set up additional jobs. You can schedule new jobs by defining additional Step Functions state machines and setting up triggers from EventBridge.
Cost Optimization
Step Functions cost is based only on the number of state transitions. Because Step Functions is serverless, costs are based on usage instead of on scheduler instances that run continuously. There is no separate cost for hardware or software. Additionally, the AWS Free Tier offers 4,000 free Step Functions state transitions per month.
Sustainability
EventBridge Scheduler invokes Step Functions only when it is scheduled to run. Further, the underlying hardware and software components are provisioned only when Step Functions is invoked. This helps you keep resource utilization to a minimum.
Implementation Resources
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.