
Overview
A high frequency of issues can generate an overwhelming number of service desk tickets, which are often delegated to the wrong teams. This drives up MTTR (mean time to resolution) and lowers FCR (first call resolution). The solution mitigates these issues by training a multi-factor ML model that uses ticket impact, urgency, priority, issue description, and other features to predict the most relevant group to resolve a ticket. A pool of candidate models is evaluated on the data, and the most generalizable model is selected for the ticket classification task.
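The listing does not describe the vendor's model pool or selection criterion in detail. The following is a minimal sketch of the general idea, assuming scikit-learn, a local Train.csv, and an illustrative subset of the columns named later in this listing; it is not the vendor's implementation.

```python
# Minimal sketch (not the vendor's code): evaluate a pool of candidate
# classifiers with cross-validation and keep the most generalizable one.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("Train.csv")                                    # historical tickets
X = pd.get_dummies(df[["Priority", "Impact", "Incident_Type"]])  # illustrative feature subset
y = df["Target"]                                                 # resolution group label

pool = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "gradient_boosting": GradientBoostingClassifier(),
}

# Mean 5-fold cross-validated accuracy as a proxy for generalizability.
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in pool.items()}
best = max(scores, key=scores.get)
print(f"Selected model: {best} (CV accuracy {scores[best]:.3f})")
```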
Highlights
- The solution uses NLP to process service desk ticket descriptions stored as free text, generating ticket-specific features that capture the information content at a finer granularity. This allows for better differentiation between ticket types and their mapping to resolution teams (a brief sketch follows this list).
- The solution supports customization of input fields by the user to address variability in the ticketing information captured by different businesses; optional fields are provided to handle such customization.
- Mphasis Optimize.AI is an AI-centric process analysis and optimization tool that uses AI/ML techniques to mine the event logs to deliver business insights. Need customized Machine Learning and Deep Learning solutions? Get in touch!
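The NLP highlight above can be illustrated with a simple TF-IDF featurization of free-text descriptions. This is a sketch only; the example tickets are invented and the vendor's actual text-processing pipeline is not published.

```python
# Minimal sketch: convert free-text ticket descriptions into numeric features.
from sklearn.feature_extraction.text import TfidfVectorizer

descriptions = [
    "User cannot log in to VPN after password reset",      # invented examples
    "Email sync failing on mobile device",
    "ERP report export times out for finance team",
]

vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
text_features = vectorizer.fit_transform(descriptions)     # one sparse row per ticket
print(text_features.shape)                                 # (3, vocabulary size)
```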
Details
Pricing
| Dimension | Description | Cost |
|---|---|---|
| ml.m5.2xlarge Inference (Batch), recommended | Model inference on the ml.m5.2xlarge instance type, batch mode | $10.00/host/hour |
| ml.m5.2xlarge Training, recommended | Algorithm training on the ml.m5.2xlarge instance type | $10.00/host/hour |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $10.00/host/hour |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $10.00/host/hour |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $10.00/host/hour |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $10.00/host/hour |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $10.00/host/hour |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $10.00/host/hour |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $10.00/host/hour |
| ml.c4.2xlarge Inference (Batch) | Model inference on the ml.c4.2xlarge instance type, batch mode | $10.00/host/hour |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker algorithm
An Amazon SageMaker algorithm is a packaged training procedure that you run on your own training data to produce a model. Use the included training algorithm to generate your unique model artifact, then deploy the model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
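A minimal sketch of the usual workflow for a Marketplace algorithm with the SageMaker Python SDK (v2): train with AlgorithmEstimator, then run a batch transform on the recommended instance type. The algorithm ARN, IAM role, S3 paths, and the "training" channel name are placeholders and assumptions; follow the product's usage instructions for the exact values.

```python
# Minimal sketch: train the subscribed algorithm and run batch inference.
import sagemaker
from sagemaker.algorithm import AlgorithmEstimator

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/MySageMakerRole"                 # placeholder
algo_arn = "arn:aws:sagemaker:us-east-1:aws-marketplace:algorithm/..."  # from your subscription

estimator = AlgorithmEstimator(
    algorithm_arn=algo_arn,
    role=role,
    instance_count=1,
    instance_type="ml.m5.2xlarge",       # recommended training instance in this listing
    sagemaker_session=session,
)
estimator.fit({"training": "s3://my-bucket/ticket-classifier/Train.csv"})  # channel name assumed

transformer = estimator.transformer(instance_count=1, instance_type="ml.m5.2xlarge")
transformer.transform("s3://my-bucket/ticket-classifier/batch-input/", content_type="text/csv")
transformer.wait()
```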
Version release notes
This is the third version of the ticket-classifier algorithm.
Additional details
Inputs
- Summary: Train.csv with the following fields:
  - Reported_Day
  - prod_cat
  - Country
  - Detailed_Description
  - Priority
  - Impact
  - Incident_Type
  - Reported_Source
- Output: Target
- Input MIME type: text/csv, text/plain
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| text/csv | ID, Reported_Day, prod_cat, Country, Detailed_Description, Priority, Impact, Incident_Type, Reported_Source, Target | Type: FreeText; Default value: o | No |
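A minimal sketch of assembling a text/csv payload with the fields listed above. The values are invented, and leaving Target empty at inference time is an assumption; check the product's usage instructions for the exact expectations.

```python
# Minimal sketch: build a batch-transform input file with the listed fields.
import pandas as pd

tickets = pd.DataFrame([{
    "ID": "INC0001",                     # invented sample record
    "Reported_Day": "Monday",
    "prod_cat": "Email",
    "Country": "US",
    "Detailed_Description": "Outlook cannot connect to the mail server",
    "Priority": "P2",
    "Impact": "High",
    "Incident_Type": "Incident",
    "Reported_Source": "Phone",
    "Target": "",                        # assumed empty at inference time
}])
tickets.to_csv("batch-input.csv", index=False)
```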
Resources
Vendor resources
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.