
Overview
Drive real-time help desk suggestions to reduce the bandwidth required from agents. Identify key customer pain points through help desk chat transcripts and resolution notes. Get early visibility into change indicators, allowing targeted and timely intervention. Improve training material by surfacing common questions and answers. Advanced analytical techniques provide visibility into key user-intent areas across the lifecycle and generate actionable insights.

To preview our machine learning models, please Continue to Subscribe. To preview our sample Output Data, you will be prompted to add the suggested Input Data. Sample Data is representative of the Output Data but does not actually consider the Input Data. Our machine learning models return actual Output Data and are available through a private offer. Please contact info@electrifai.net for subscription service pricing.

SKU: INTEN-PS-PCM-AWS-001
Highlights
- Drive real-time help desk suggestions to reduce bandwidth required from agents by identifying key customer pain points through help desk chat transcripts and resolution notes.
- Technical highlights include Unsupervised Models and NLP-based Sentence Parsing applied to the data to extract structured information (customer intent and resolution) from text data on online operational platforms.
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.p2.16xlarge Inference (Real-Time), Recommended | Model inference on the ml.p2.16xlarge instance type, real-time mode | $0.00 |
| ml.m5.2xlarge Inference (Batch), Recommended | Model inference on the ml.m5.2xlarge instance type, batch mode | $0.00 |
| ml.p2.xlarge Inference (Real-Time) | Model inference on the ml.p2.xlarge instance type, real-time mode | $0.00 |
| ml.p3.16xlarge Inference (Real-Time) | Model inference on the ml.p3.16xlarge instance type, real-time mode | $0.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $0.00 |
| ml.m5.large Inference (Batch) | Model inference on the ml.m5.large instance type, batch mode | $0.00 |
Vendor refund policy
This product is offered free of charge. If you have any questions, please contact us for clarification.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
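As a concrete illustration of the batch-processing path, the sketch below builds the request for a SageMaker batch transform job against a subscribed model package, using boto3's `create_transform_job` parameter names. The model name, S3 URIs, and content type are placeholder assumptions, not values from this listing; substitute your own after subscribing.

```python
# Hedged sketch: assembling kwargs for boto3's
# sagemaker_client.create_transform_job(**req). All names and S3 paths
# below are placeholders.

def batch_transform_request(model_name, input_s3_uri, output_s3_uri,
                            instance_type="ml.m5.2xlarge"):
    """Return the request dict for a SageMaker batch transform job."""
    return {
        "TransformJobName": f"{model_name}-batch",
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": input_s3_uri,
                }
            },
            # Assumed content type for the CSV reference input; confirm
            # against the vendor's documentation.
            "ContentType": "text/csv",
        },
        "TransformOutput": {"S3OutputPath": output_s3_uri},
        "TransformResources": {
            "InstanceType": instance_type,  # one of the listed batch types
            "InstanceCount": 1,
        },
    }

req = batch_transform_request(
    "intent-model",
    "s3://my-bucket/in/request.csv",
    "s3://my-bucket/out/",
)
```

With AWS credentials configured, `boto3.client("sagemaker").create_transform_job(**req)` would submit the job; the `ml.m5.2xlarge` default matches the recommended batch dimension in the pricing table.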
Version release notes
Vulnerability CVE-2021-3177 (https://nvd.nist.gov/vuln/detail/CVE-2021-3177) has been resolved in version 1.0.1.
Additional details
Inputs
- Summary: Input is 1 comma-separated (CSV) file. Reference file: request.csv
- Input MIME type: application/json
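Note that the listing declares an `application/json` MIME type while the reference input is a CSV file, so the exact real-time payload schema is an assumption here; the sketch below serializes one case record as a JSON payload and shows (commented) how it would be sent to a deployed endpoint. Confirm the schema against the vendor's `request.csv`.

```python
# Hedged sketch: a JSON payload for real-time inference. The record
# fields follow the required input columns; the list-of-records shape
# is an assumption.
import json

record = {
    "case_id": "C-1001",
    "subject": "Cannot reset password",
    "description": "User locked out after three failed attempts.",
    "chat_body": "Agent: Hello. Customer: I can't log in.",
}
payload = json.dumps([record])

# With AWS credentials configured, a real-time call would look like:
#   client = boto3.client("sagemaker-runtime")
#   client.invoke_endpoint(EndpointName="my-endpoint",
#                          ContentType="application/json",
#                          Body=payload)
```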
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| 1 comma-separated (CSV) file. Reference file: request.csv | Required columns:<br>case_id: case identifier<br>subject: case subject<br>description: case description<br>chat_body: chat transcripts from the help desk | Type: FreeText | Yes |
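The required columns above can be written to a `request.csv` file with the standard library; the example row below is invented for illustration.

```python
# Sketch: assembling the input file (request.csv) with the columns this
# model package expects. The sample case is fabricated.
import csv

fieldnames = ["case_id", "subject", "description", "chat_body"]
rows = [
    {
        "case_id": "C-1001",
        "subject": "Cannot reset password",
        "description": "User locked out after three failed attempts.",
        "chat_body": "Agent: Hello, how can I help? Customer: I can't log in.",
    },
]

with open("request.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
```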
Support
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.