
Overview
The Topic Modeling solution clusters words and phrases into abstract topics, helping you understand document content, categories, and document similarity. Given a set of training documents, this module trains a model and maps a new document to its five most relevant abstract topics, ranked by content coherence. The solution uses text analysis, natural language processing, and topic modeling.
Highlights
- Documents contain clusters of topics, each representing a distribution of coherent words. This solution uses natural language processing and topic modeling to list the most relevant topics present in a given document. It can be customized for a customer-specific use case by training the model on the customer's own documents.
- Applications of topic modeling include document indexing, understanding data distribution, document summarization, document similarity, enterprise content search, Search Engine Optimization (SEO), and Real-Time Analysis (RTA) of journals, reports, news, social media posts, customer reviews, emails, and surveys.
- Mphasis HyperGraf is an omni-channel customer 360 analytics solution. Need customized Deep Learning/NLP solutions? Get in touch!
Details

Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch), recommended | Model inference on the ml.m5.large instance type, batch mode | $10.00 |
| ml.m5.large Inference (Real-Time), recommended | Model inference on the ml.m5.large instance type, real-time mode | $5.00 |
| ml.m5.large Training, recommended | Algorithm training on the ml.m5.large instance type | $10.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $10.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $10.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $10.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $10.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $10.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $10.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $10.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker algorithm
An Amazon SageMaker algorithm is a machine learning model that requires your training data to make predictions. Use the included training algorithm to generate your unique model artifact. Then deploy the model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
Bug fixes and performance improvements
Additional details
Inputs
- Summary
Usage methodology for the algorithm:
- The input zip file should contain only '.txt' files.
- Each text file should be 5-10 KB in size.
- The test zip file can contain at most 1,000 text files.
- The test input zip file should not exceed 2 MB.
- Two separate zip files are required: one for training and one for testing.
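The constraints above can be checked before upload. Below is a minimal sketch (the limit values are taken from this listing; the function name is our own) that validates an in-memory zip archive against them:

```python
import io
import zipfile

# Limits stated in the listing: only .txt members, each 5-10 KB,
# at most 1,000 files, archive no larger than 2 MB.
MAX_FILES = 1000
MAX_ZIP_BYTES = 2 * 1024 * 1024
MIN_TXT_BYTES = 5 * 1024
MAX_TXT_BYTES = 10 * 1024

def validate_input_zip(zip_bytes: bytes) -> list[str]:
    """Return a list of constraint violations (empty list means valid)."""
    problems = []
    if len(zip_bytes) > MAX_ZIP_BYTES:
        problems.append("archive exceeds 2 MB")
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        members = [m for m in zf.infolist() if not m.is_dir()]
        if len(members) > MAX_FILES:
            problems.append("more than 1,000 files")
        for m in members:
            if not m.filename.lower().endswith(".txt"):
                problems.append(f"{m.filename}: not a .txt file")
            elif not (MIN_TXT_BYTES <= m.file_size <= MAX_TXT_BYTES):
                problems.append(f"{m.filename}: size outside 5-10 KB")
    return problems
```

Running this check locally avoids failed training or inference jobs caused by malformed input archives.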
General instructions for consuming the service on SageMaker:
- Access to AWS SageMaker and the model package
- An S3 bucket to specify input/output
- Role for AWS SageMaker to access input/output from S3
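With those prerequisites in place, training can be launched through the SageMaker `CreateTrainingJob` API. Below is a minimal sketch that builds the request; the algorithm ARN, role ARN, bucket URIs, and function name are placeholders, and the instance type matches the recommended `ml.m5.large` from the pricing table:

```python
# Pass the result to boto3.client("sagemaker").create_training_job(**request).
def build_training_request(job_name: str, algorithm_arn: str, role_arn: str,
                           train_s3_uri: str, output_s3_uri: str) -> dict:
    """Assemble a CreateTrainingJob request for this Marketplace algorithm."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "AlgorithmName": algorithm_arn,   # Marketplace algorithm ARN
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,                  # role with S3 read/write access
        "InputDataConfig": [{
            "ChannelName": "training",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": train_s3_uri,        # S3 location of the training zip
            }},
            "ContentType": "application/zip",
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3_uri},
        "ResourceConfig": {
            "InstanceType": "ml.m5.large",    # recommended type from the listing
            "InstanceCount": 1,
            "VolumeSizeInGB": 5,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }
```

The same configuration can also be expressed through the higher-level SageMaker Python SDK (`sagemaker.algorithm.AlgorithmEstimator`) if you prefer.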
Input
Supported content types: application/zip
Sample input zip file contents:
- Computer Graphics.txt
- Ice Hockey.txt
- Death Penalty / Gulf War- opinion.txt
- How to Become an Astronaut.txt
- ...
Output
Content type: text/csv
Sample output for a document (Ice Hockey):

| Name | Key phrases | Ice Hockey (Document_Name) |
|---|---|---|
| Topic1 | line, mail, make, model, buy | 0 |
| Topic2 | drug, line, parent, student, kid | 0 |
| Topic3 | people, live, man, human, life | 12.77% |
| Topic4 | game, year, team, play, win | 44.51% |
| ... | ... | ... |
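The percentages in the sample output form a per-document topic distribution. A minimal sketch of picking the most relevant topics, assuming the scores have already been parsed from the output CSV into a dict (the function name is our own):

```python
def top_topics(scores: dict[str, float], k: int = 5) -> list[str]:
    """Return the k topic names with the highest relevance percentage."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Using the sample scores from the output above:
sample = {"Topic1": 0.0, "Topic2": 0.0, "Topic3": 12.77, "Topic4": 44.51}
print(top_topics(sample, k=2))  # ['Topic4', 'Topic3']
```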
Invoking endpoint
AWS CLI Command
You can invoke the endpoint using the AWS CLI:

aws sagemaker-runtime invoke-endpoint --endpoint-name $model_name --body fileb://$file_name --content-type 'application/zip' --region us-east-2 output.csv

Substitute the following parameters:
- "endpoint-name" - the name of the inference endpoint where the model is deployed
- "$file_name" - the input zip file to run inference on
- "application/zip" - the content type of the input data
- "output.csv" - the file where the inference results are written
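The same call can be made from Python via the `sagemaker-runtime` boto3 client. A minimal sketch, with the endpoint name, region, and file names as placeholders (the helper function is our own):

```python
def build_invoke_kwargs(endpoint_name: str, payload: bytes) -> dict:
    """Keyword arguments for sagemaker-runtime's invoke_endpoint call."""
    return {
        "EndpointName": endpoint_name,
        "Body": payload,                    # raw bytes of the input zip
        "ContentType": "application/zip",   # matches the supported input type
    }

# Usage (requires boto3 and a deployed endpoint):
# import boto3
# runtime = boto3.client("sagemaker-runtime", region_name="us-east-2")
# with open("input.zip", "rb") as f:
#     resp = runtime.invoke_endpoint(**build_invoke_kwargs("my-endpoint", f.read()))
# with open("output.csv", "wb") as out:
#     out.write(resp["Body"].read())
```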
Resources
- Sample notebook: https://tinyurl.com/tabruyz
- Sample input: https://tinyurl.com/t3wurcp
- Sample output: https://tinyurl.com/yx4ucok3
- Input MIME type: text/csv, text/plain
Resources
Vendor resources
Support
Vendor support
For any assistance reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities successfully use the products and features provided by Amazon Web Services.