
Overview
The MTTR (Mean Time to Resolution) predictor is an AI/ML-based solution that predicts the time a service agent will take to resolve a specific ticket or incident request. The solution learns efficiency, experience, and workload-management metrics across the ticket types handled by service agents to arrive at its predictions. It helps businesses allocate tickets optimally, leading to lower MTTR, shorter wait times, fewer open incidents, improved efficiency, and better SLA (Service Level Agreement) adherence.
Highlights
- The solution uses a multi-factor approach, considering factors such as efficiency, experience, and workload management across ticket types for all incident managers to predict the MTTR for an incident.
- The solution learns from ticket-management data in existing systems and predicts MTTR for an incident in real time, improving ticket-management metrics such as fewer open incidents, shorter processing and wait times, and better SLA adherence.
- Mphasis Optimize.AI is an AI-centric process analysis and optimization tool that uses AI/ML techniques to mine the event logs to deliver business insights. Need customized Machine Learning and Deep Learning solutions? Get in touch!
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.2xlarge Inference (Batch) (Recommended) | Model inference on the ml.m5.2xlarge instance type, batch mode | $5.00 |
| ml.m5.large Inference (Real-Time) (Recommended) | Model inference on the ml.m5.large instance type, real-time mode | $5.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $5.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $5.00 |
| ml.m5.12xlarge Inference (Batch) | Model inference on the ml.m5.12xlarge instance type, batch mode | $5.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $5.00 |
| ml.p2.16xlarge Inference (Batch) | Model inference on the ml.p2.16xlarge instance type, batch mode | $5.00 |
| ml.c4.4xlarge Inference (Batch) | Model inference on the ml.c4.4xlarge instance type, batch mode | $5.00 |
| ml.m5.xlarge Inference (Batch) | Model inference on the ml.m5.xlarge instance type, batch mode | $5.00 |
| ml.c5.9xlarge Inference (Batch) | Model inference on the ml.c5.9xlarge instance type, batch mode | $5.00 |
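Since every listed dimension carries the same $5.00/host/hour software fee, estimating the software cost of a job is straightforward. A minimal sketch, assuming the table's rate covers the software fee only (SageMaker infrastructure charges are billed separately by AWS):

```python
# Rough software-cost estimate for running this model package.
# Assumption: $5.00/host/hour is the software fee from the pricing table;
# underlying SageMaker instance costs are extra and not modeled here.

SOFTWARE_RATE_PER_HOST_HOUR = 5.00  # USD, identical for every listed dimension

def estimate_software_cost(hosts: int, hours: float,
                           rate: float = SOFTWARE_RATE_PER_HOST_HOUR) -> float:
    """Return the software fee for `hosts` instances running for `hours`."""
    return round(hosts * hours * rate, 2)

# e.g. a nightly batch job on 2 ml.m5.2xlarge hosts for 1.5 hours:
print(estimate_software_cost(hosts=2, hours=1.5))  # 15.0
```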
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
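Deploying a marketplace model package starts with creating a SageMaker model that references the package's ARN instead of a container image. A minimal sketch of assembling that request, assuming hypothetical placeholder ARNs and names (the actual model package ARN comes from your AWS Marketplace subscription):

```python
# Sketch: build the request for creating a SageMaker model from a
# marketplace model package. All ARNs and names below are hypothetical
# placeholders, not values from this listing.

def build_create_model_request(model_name: str, package_arn: str,
                               role_arn: str) -> dict:
    """Assemble keyword arguments for sagemaker.create_model() when the
    model comes from a model package (no container image is specified)."""
    return {
        "ModelName": model_name,
        "PrimaryContainer": {"ModelPackageName": package_arn},
        "ExecutionRoleArn": role_arn,
        # Marketplace model packages run with network isolation enabled.
        "EnableNetworkIsolation": True,
    }

request = build_create_model_request(
    model_name="mttr-predictor",
    package_arn="arn:aws:sagemaker:us-east-1:123456789012:model-package/example",  # placeholder
    role_arn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
)
# With credentials configured, the request would be submitted as:
# import boto3
# boto3.client("sagemaker").create_model(**request)
print(request["ModelName"])  # mttr-predictor
```

From the created model you can then launch a real-time endpoint or a batch transform job, matching the two inference modes priced above.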
Version release notes
This is the third version of the algorithm.
Additional details
Inputs
- Summary
- Request ID
- Request Resolved By
- Request Submitted Date and Time
- Request Priority
- Request Resolved Date and Time
- Request Category
- Request Status
- Limitations for input type: None
- Input MIME type: text/csv, application/zip
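Since the accepted MIME type is text/csv, an inference request body is a CSV file whose columns are the inputs listed above. A minimal sketch of serializing one incident record, assuming the header row uses these column names in this order (the exact order the model expects is an assumption here):

```python
# Sketch: render incident records as a text/csv payload using the input
# fields listed in this section. Column ordering is an assumption.
import csv
import io

COLUMNS = [
    "Summary", "Request ID", "Request Resolved By",
    "Request Submitted Date and Time", "Request Priority",
    "Request Resolved Date and Time", "Request Category", "Request Status",
]

def to_csv_payload(records: list) -> str:
    """Serialize a list of record dicts into a CSV body (MIME text/csv)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

payload = to_csv_payload([{
    "Summary": "VPN connection drops intermittently",
    "Request ID": "INC-1001",
    "Request Resolved By": "",            # unknown at prediction time
    "Request Submitted Date and Time": "2024-01-05 09:30",
    "Request Priority": "P2",
    "Request Resolved Date and Time": "",  # unknown at prediction time
    "Request Category": "Network",
    "Request Status": "Open",
}])
print(payload.splitlines()[0])  # the header row
```

The resulting string can be sent as the request body to a real-time endpoint, or written to a file in S3 for a batch transform job.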
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 by experienced technical support engineers. The service helps customers of all sizes and technical abilities successfully utilize the products and features provided by Amazon Web Services.