
Overview
Natural Language Question Generator generates questions from free-text content in scenarios such as educational content, conversational systems (chatbots, virtual assistants), and FAQ creation. The solution leverages attention-based models to generate appropriate questions from a given paragraph. A deep-neural-network transformer model has been trained to power the generator, which produces coherent, relevant questions based on the most important aspects of the paragraph.
Highlights
- This solution is an open-domain question generator. It uses state-of-the-art transformer-based models that capture context and frame relevant questions from given text content.
- The solution can be leveraged in industries such as EdTech, healthcare, banking, insurance, retail, and e-commerce to power systems like intelligent chatbots, virtual assistants, FAQ generation, and knowledge games.
- Mphasis DeepInsights is a cloud-based cognitive computing platform that offers data extraction & predictive analytics capabilities. Need customized Machine Learning and Deep Learning solutions? Get in touch!
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.12xlarge Inference (Batch) (recommended) | Model inference on the ml.m5.12xlarge instance type, batch mode | $20.00 |
| ml.m5.12xlarge Inference (Real-Time) (recommended) | Model inference on the ml.m5.12xlarge instance type, real-time mode | $10.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $20.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $20.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $20.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $20.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $20.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $20.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $20.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $20.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
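As a sketch of that flow, the helper below creates a real-time endpoint from a subscribed model package using the SageMaker Python SDK. The function name, the default endpoint name, and the ARNs you pass in are illustrative, not part of the product; the actual package ARN comes from your AWS Marketplace subscription.

```python
def deploy_from_package(package_arn, role_arn,
                        instance_type="ml.m5.12xlarge",
                        endpoint_name="question-generator"):
    """Create a SageMaker model from a subscribed model package and host it
    on a real-time endpoint. Requires the sagemaker SDK and AWS credentials;
    the names and ARNs used here are placeholders."""
    from sagemaker import Session
    from sagemaker.model import ModelPackage

    model = ModelPackage(role=role_arn,
                         model_package_arn=package_arn,
                         sagemaker_session=Session())
    # deploy() creates the endpoint config and the endpoint, then returns
    # a Predictor bound to endpoint_name for real-time inference.
    return model.deploy(initial_instance_count=1,
                        instance_type=instance_type,
                        endpoint_name=endpoint_name)
```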
Version release notes
Bug fixes and performance improvements.
Additional details
Inputs
Input
- Supported content types: text/plain
- Sample input file: https://tinyurl.com/yyv7uapx
- The input file must be a .txt file with ASCII encoding
- The input file should contain the paragraph for which questions are to be generated
- The input file size must be less than 2 KB
- Higher-configuration instance types are recommended for larger paragraphs
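A minimal pre-flight check for these input constraints might look like the following sketch; `validate_input` is a hypothetical helper, not part of the product.

```python
import os

MAX_INPUT_BYTES = 2 * 1024  # "less than 2 KB" per the listing


def validate_input(path):
    """Raise ValueError if the file violates the listed input constraints:
    a .txt file, ASCII-encoded, smaller than 2 KB."""
    if not path.endswith(".txt"):
        raise ValueError("input must be a .txt file")
    if os.path.getsize(path) >= MAX_INPUT_BYTES:
        raise ValueError("input must be smaller than 2 KB")
    with open(path, "rb") as f:
        data = f.read()
    try:
        data.decode("ascii")
    except UnicodeDecodeError:
        raise ValueError("input must be ASCII-encoded")
    return True
```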
Output
- Content type: text/plain
- Sample output file: https://tinyurl.com/y3szyyps
- The output file is a .txt file
- The output file contains the questions generated from the input paragraph
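In batch mode, outputs are written to S3 rather than returned inline. A sketch of starting a transform job with boto3, assuming a model has already been created from the package (the function name, job-name suffix, region, and S3 URIs are placeholders):

```python
def run_batch_transform(model_name, s3_input_uri, s3_output_uri,
                        instance_type="ml.m5.12xlarge"):
    """Start a SageMaker batch transform job over text/plain inputs.
    Requires boto3 and AWS credentials; all names and URIs are placeholders."""
    import boto3  # deferred: only needed when actually calling AWS
    sm = boto3.client("sagemaker", region_name="us-east-2")
    sm.create_transform_job(
        TransformJobName=model_name + "-batch",
        ModelName=model_name,
        TransformInput={
            "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix",
                                            "S3Uri": s3_input_uri}},
            "ContentType": "text/plain",
        },
        # One question file is written to S3 per input file.
        TransformOutput={"S3OutputPath": s3_output_uri},
        TransformResources={"InstanceType": instance_type,
                            "InstanceCount": 1},
    )
```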
Invoking endpoint
AWS CLI Command
If you are using real-time inference, create the endpoint first and then invoke it with the following command:
aws sagemaker-runtime invoke-endpoint --endpoint-name $model_name --body fileb://$file_name --content-type 'text/plain' --region us-east-2 result.txt
Substitute the following parameters:
- model_name - name of the inference endpoint where the model is deployed
- file_name - input file name
- text/plain - type of the given input
- result.txt - filename where the inference results are written to.
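The same call can be made from Python via boto3. This is a sketch: the helper name is illustrative, the import is deferred so the function is inert without AWS access, and the endpoint name and region must match your deployment.

```python
def generate_questions(endpoint_name, input_file, region="us-east-2"):
    """Invoke the real-time endpoint with a plain-text paragraph and
    return the generated questions as a string."""
    import boto3  # deferred: only needed when actually calling AWS
    runtime = boto3.client("sagemaker-runtime", region_name=region)
    with open(input_file, "rb") as f:
        body = f.read()
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/plain",
        Body=body,
    )
    # The response body is a streaming object; read and decode it.
    return response["Body"].read().decode("utf-8")
```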
Resources
- Input MIME type: text/plain, text/csv
Resources
Vendor resources
Support
Vendor support
For any assistance, please reach out at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.