
Overview
The Text Summarizer solution tackles information overload by condensing long documents into a few sentences. Neural-network-based models can automatically learn distributed representations for sentences and documents. This summarizer is built using transfer learning and Transformer-based models that use self-attention. The input can contain a maximum of 512 words, and the output is 3 sentences (approximately 30 words).
Highlights
- Use of state-of-the-art Transformer-based models that capture context and help in the classification decision for each sentence.
- Extractive summarization model that automatically selects and concatenates the relevant sentences from a document to create a summary that preserves the original information content. The underlying model understands the document and distills the important information into approximately 3 lines, or 30 words. It has varied applications in marketing, content generation, Search Engine Optimization, and document management.
- Mphasis DeepInsights is a cloud-based cognitive computing platform that offers data extraction and predictive analytics capabilities. Need customized deep learning and machine learning solutions? Get in touch!
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.2xlarge Inference (Batch), Recommended | Model inference on the ml.m5.2xlarge instance type, batch mode | $20.00 |
| ml.m5.2xlarge Inference (Real-Time), Recommended | Model inference on the ml.m5.2xlarge instance type, real-time mode | $10.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $20.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $20.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $20.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $20.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $20.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $20.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $20.00 |
| ml.c4.2xlarge Inference (Batch) | Model inference on the ml.c4.2xlarge instance type, batch mode | $20.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
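As a sketch of that deployment step, creating and deploying the subscribed model package with the SageMaker Python SDK might look like the following; the model package ARN, IAM role, and endpoint name are placeholders, not values from this listing:

```python
import sagemaker
from sagemaker import ModelPackage

# Sketch: deploy the subscribed model package for real-time inference.
# The ARN, role, and endpoint name are placeholders -- substitute your own.
session = sagemaker.Session()
model = ModelPackage(
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    model_package_arn="arn:aws:sagemaker:us-east-1:111122223333:model-package/text-summarizer",
    sagemaker_session=session,
)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.2xlarge",  # a Recommended instance type from the pricing table
    endpoint_name="text-summarizer-endpoint",
)
```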
Version release notes
Bug fixes and performance improvements
Additional details
Inputs
- Summary
Input
Usage Methodology for the algorithm:
- The input has to be a '.txt' file with UTF-8 encoding. PLEASE NOTE: if your input .txt file is not UTF-8 encoded, the model will not perform as expected (a pre-flight check sketch follows this list)
- To make sure your input file is UTF-8 encoded, use 'Save As' and select 'UTF-8' as the encoding
- The input can have a maximum of 512 words (SageMaker restriction)
- Input should have at least 3 sentences (model limitation)
- Supported content types: text/plain
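The constraints above can be checked before invoking the model. A minimal pre-flight sketch in Python follows; the file path and the naive sentence-splitting heuristic are assumptions for illustration, not part of the vendor's interface:

```python
# Pre-flight check for the input constraints listed above (a sketch;
# the sentence-splitting heuristic is an assumption for illustration).
def validate_input(path):
    with open(path, "rb") as f:
        raw = f.read()
    try:
        text = raw.decode("utf-8")  # the model expects UTF-8 encoded text
    except UnicodeDecodeError:
        raise ValueError("Input file is not UTF-8 encoded; re-save it as UTF-8")
    if len(text.split()) > 512:
        raise ValueError("Input exceeds the 512-word SageMaker restriction")
    # Naive sentence count on terminal punctuation; the model needs >= 3 sentences
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    if len(sentences) < 3:
        raise ValueError("Input should have at least 3 sentences")
    return text

validate_input("input.txt")  # hypothetical local file path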
Output
Content type: text/plain
Invoking endpoint
AWS CLI Command
If you are using real-time inference, please create the endpoint first and then use the following command to invoke it:

```
aws sagemaker-runtime invoke-endpoint --endpoint-name "endpoint-name" --body fileb://input.txt --content-type text/plain --accept text/plain result.txt
```

Substitute the following parameters:
- "endpoint-name" - name of the inference endpoint where the model is deployed
- input.txt - input file
- text/plain - MIME type of the given input file (above)
- result.txt - file where the inference results are written
Python
Batch Transform snippet (a more detailed example can be found in the sample notebook; note that the code below runs Batch Transform, not real-time inference):

```python
sample_txt = 'location of input text file'  # S3 URI of the input .txt file
transformer = model.transformer(1, 'ml.m5.xlarge')
transformer.transform(sample_txt, content_type='text/plain')
transformer.wait()
print('Batch Transform output saved to ' + transformer.output_path)
```
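For real-time inference from Python, a minimal sketch using boto3's SageMaker runtime client is shown below; the endpoint name is a placeholder for the endpoint you created:

```python
import boto3

# Sketch: invoke a deployed real-time endpoint with boto3.
# "text-summarizer-endpoint" is a placeholder endpoint name.
runtime = boto3.client("sagemaker-runtime")
with open("input.txt", "rb") as f:  # UTF-8 encoded input, max 512 words
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName="text-summarizer-endpoint",
    ContentType="text/plain",
    Accept="text/plain",
    Body=payload,
)
summary = response["Body"].read().decode("utf-8")
print(summary)
```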
Sample Notebook: https://tinyurl.com/yyu32g32
Sample Input: https://tinyurl.com/tx94grp
Sample Output: https://tinyurl.com/wnzfy9c
- Input MIME type
- text/plain
Resources
Vendor resources
Support
Vendor support
For any assistance, please reach out to:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.