
Overview
This solution addresses the crew re-rostering problem that arises from flight time variability and flight delays. It minimizes changes to the existing crew schedule to produce an optimal revised schedule for each crew member.
Highlights
- This is a heuristics-based method for crew rostering (rescheduling of crew duties) that accommodates flight delays and deviations from the existing schedule. The solution rapidly modifies the user-provided schedule subject to crew-scheduling constraints, minimizing the number of crew swaps while respecting constraints such as the number of crew on each flight and the minimum and maximum flying hours defined in the input (see the sketch after this list).
- This solution is primarily focused on airlines but can be repurposed for other use cases such as trucking and railroads. It can help companies improve crew utilization, reduce operating costs, and improve employee satisfaction.
- Mphasis Optimize.AI is an AI-centric process analysis and optimization tool that uses AI/ML techniques to mine event logs and deliver business insights. Need customized Machine Learning and Deep Learning solutions? Get in touch!
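
The sketch below only illustrates the swap-minimizing idea described above; it is not the vendor's actual heuristic. The constraint thresholds (a 100-hour flying cap, 10-hour minimum rest) and all names are assumptions for the example.

```python
# Illustrative greedy repair heuristic for crew re-rostering.
# NOT the vendor's algorithm; it sketches the idea of keeping the original
# crew pair when feasible and swapping only when constraints are violated.
from dataclasses import dataclass

MAX_FLYING_HOURS = 100.0   # assumed flying-hour cap per pair
MIN_REST_HOURS = 10.0      # assumed minimum rest before the next duty

@dataclass
class CrewPair:
    crew_id: str
    base: str
    flying_hours: float    # hours already rostered for the pair
    last_release: float    # time (in hours) at which the pair is free again

def reassign(flight, current_pair, candidates):
    """Keep the original pair if the delayed flight still fits its constraints;
    otherwise swap to the feasible pair with the most remaining slack."""
    def feasible(pair):
        return (pair.base == flight["base"]
                and pair.flying_hours + flight["block_hours"] <= MAX_FLYING_HOURS
                and flight["new_departure"] - pair.last_release >= MIN_REST_HOURS)

    if feasible(current_pair):
        return current_pair          # zero swaps is always the preferred outcome
    options = [p for p in candidates if feasible(p)]
    if not options:
        return None                  # flight stays uncovered; escalate to planners
    return min(options, key=lambda p: p.flying_hours)   # spread workload evenly
```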
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch), Recommended | Model inference on the ml.m5.large instance type, batch mode | $20.00 |
| ml.m5.xlarge Inference (Real-Time), Recommended | Model inference on the ml.m5.xlarge instance type, real-time mode | $10.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $20.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $20.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $20.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $20.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $20.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $20.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $20.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $20.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
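
As a rough illustration, a subscribed model package can be deployed for real-time or batch inference with the SageMaker Python SDK as sketched below. The model package ARN, execution role, and instance choices are placeholders; substitute the values from your own subscription.

```python
# Hedged sketch: deploying the subscribed model package on Amazon SageMaker.
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = sagemaker.get_execution_role()   # assumes a SageMaker execution role

model = ModelPackage(
    role=role,
    # Placeholder ARN; use the one shown on your subscription page.
    model_package_arn="arn:aws:sagemaker:<region>:<account>:model-package/<listing-id>",
    sagemaker_session=session,
)

# Real-time endpoint on the recommended instance type from the pricing table.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# Alternatively, run batch inference on a zip payload staged in S3.
transformer = model.transformer(instance_count=1, instance_type="ml.m5.large")
transformer.transform(
    data="s3://<your-bucket>/input/crew_rostering_input.zip",  # placeholder path
    content_type="application/zip",
)
```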
Version release notes
Bug fixes and performance enhancements
Additional details
Inputs
- Summary
  The input zip file consists of:
  - BASE1, BASE2, BASE3: crew schedule for each crew base, with crew number as the index and flight number as the column names.
  - delay_df: delay data frame containing the expected timings of delayed flights and their bases.
  - df: data frame containing the actual start and end time of each flight, its base, and the crew pair ID.
  - input parameter: user-defined rest-time parameter for crew pairs.
  A sample invocation with this zip payload is sketched after this list.
- Input MIME type
- text/plain, application/zip
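
The snippet below is a minimal sketch of sending the input zip to a deployed real-time endpoint with the listed MIME type. The local file name, endpoint name, and output handling are assumptions, not part of the listing.

```python
# Hedged sketch: invoking a deployed endpoint with the application/zip input.
import boto3

runtime = boto3.client("sagemaker-runtime")

# Archive containing BASE1/BASE2/BASE3, delay_df, df, and the input parameter file.
with open("crew_rostering_input.zip", "rb") as f:
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName="crew-rostering-endpoint",   # placeholder endpoint name
    ContentType="application/zip",            # matches the listed input MIME type
    Body=payload,
)
print(response["Body"].read().decode("utf-8"))   # revised crew schedule
```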
Resources
Vendor resources
Support
Vendor support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.