
Overview
A quantum-simulator-based content clustering solution that groups coherent news articles into the same cluster. The simulator runs a simulated quantum annealing (SQA) algorithm to solve the underlying optimization problem. Clustering is an unsupervised ML problem in which all coherent data points belong to the same cluster and each data point can belong to only one cluster. We formulate clustering as a constraint satisfaction optimization problem and solve it using quantum annealers.
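The constraint-satisfaction formulation above can be illustrated as a QUBO (quadratic unconstrained binary optimization) problem. The sketch below is not the vendor's implementation: the function names are hypothetical, and classical simulated annealing stands in for the SQA solver. One binary variable per (point, cluster) pair encodes assignment, with a one-hot penalty ensuring each point joins exactly one cluster.

```python
import itertools
import math
import random

def clustering_qubo(similarity, k=3, penalty=2.0):
    """Build a QUBO for one-hot cluster assignment.

    Binary variable (i, c) = 1 iff point i belongs to cluster c.
    Sharing a cluster with a similar point lowers the energy; a
    quadratic penalty enforces that each point picks exactly one cluster.
    """
    n = len(similarity)
    Q = {}
    # Reward: -similarity[i][j] whenever points i and j share a cluster.
    for i, j in itertools.combinations(range(n), 2):
        for c in range(k):
            Q[((i, c), (j, c))] = -similarity[i][j]
    # One-hot constraint: penalty * (sum_c x_ic - 1)^2, expanded into
    # linear (-penalty) and pairwise (+2 * penalty) QUBO terms.
    for i in range(n):
        for c in range(k):
            Q[((i, c), (i, c))] = -penalty
            for d in range(c + 1, k):
                Q[((i, c), (i, d))] = 2.0 * penalty
    return Q

def qubo_energy(Q, x):
    """Energy of a 0/1 assignment x (dict: variable -> bit) under Q."""
    return sum(coeff * x[a] * x[b] for (a, b), coeff in Q.items())

def anneal(Q, variables, steps=2000, t0=2.0, seed=0):
    """Classical simulated annealing over the QUBO (a stand-in for SQA)."""
    rng = random.Random(seed)
    x = {v: rng.randint(0, 1) for v in variables}
    energy = qubo_energy(Q, x)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-3  # linear cooling schedule
        v = rng.choice(variables)
        x[v] ^= 1  # propose flipping one bit
        new_energy = qubo_energy(Q, x)
        if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / t):
            energy = new_energy  # accept the flip
        else:
            x[v] ^= 1  # reject: flip back
    return x, energy
```

For the three clusters this solution produces, k=3 and the variable list is `[(i, c) for i in range(n) for c in range(3)]`; lower-energy assignments correspond to better clusterings.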
Highlights
- Documents contain clusters of topics, each representing a distribution of coherent words. This solution clusters a given set of documents by their most relevant topics using NLP. Quantum annealers reduce the time and space required to solve clustering problems and provide better-quality results.
- Applications of clustering include document indexing, understanding the distribution of data, generating document abstracts, document similarity, enterprise content search, Search Engine Optimization (SEO), and Real-Time Analysis (RTA) on journals, reports, news, social media posts, customer reviews, emails, and surveys.
- Need customized Quantum Computing solutions? Get in touch!
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.xlarge Inference (Batch) (Recommended) | Model inference on the ml.m5.xlarge instance type, batch mode | $40.00 |
| ml.t2.large Inference (Real-Time) (Recommended) | Model inference on the ml.t2.large instance type, real-time mode | $20.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $40.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $40.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $40.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $40.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $40.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $40.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $40.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $40.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
Bug fixes and performance improvements
Additional details
Inputs
- Summary
Input:
- Supported content type: text/csv
- Input file should be a CSV file with no more than 500 data points.
- File size should not exceed 300 KB.
- The CSV file should contain a column named "sentence" holding all the sentences to be clustered.
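As a sketch of the input format described above (the file name and sentences are illustrative), the required CSV can be written with the standard library:

```python
import csv

# Prepare the input file: a CSV with a single "sentence" column,
# at most 500 rows, and under 300 KB total.
sentences = [
    "Stocks rallied after the central bank held rates steady.",
    "The home team clinched the championship in overtime.",
    "New research maps the protein folding pathway.",
]

with open("input.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["sentence"])  # required column header
    writer.writerows([s] for s in sentences)
```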
Output:
Instructions for score interpretation:
- Content type: application/json
- The final result is in JSON format and contains 3 keys, 'r', 'g', and 'b', denoting the 3 clusters; each key holds all the sentences assigned to that cluster.
- Currently our quantum simulator detects 3 clusters.
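A sketch of reading that output shape (the payload below is illustrative, not actual model output):

```python
import json

# The endpoint returns JSON with keys "r", "g", "b", each holding
# the sentences assigned to that cluster.
result = json.loads('''
{
  "r": ["Stocks rallied after the central bank held rates steady."],
  "g": ["The home team clinched the championship in overtime."],
  "b": ["New research maps the protein folding pathway."]
}
''')

for cluster, members in result.items():
    print(f"cluster {cluster}: {len(members)} sentence(s)")
```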
Invoking endpoint
AWS CLI Command
If you are using real-time inferencing, create the endpoint first and then use the following command to invoke it:

!aws sagemaker-runtime invoke-endpoint --endpoint-name $model_name --body fileb://$file_name --content-type 'text/csv' --region us-east-2 output.json

Substitute the following parameters:
- model_name - name of the inference endpoint where the model is deployed
- file_name - input CSV file name
- text/csv - content type of the given input
- output.json - filename where the inference results are written
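The same call can be made from Python via the AWS SDK. This is a hedged sketch, not vendor code: the function name is hypothetical, and the endpoint name, file path, and region are placeholders you must substitute.

```python
def invoke_clustering_endpoint(endpoint_name, csv_path, region="us-east-2"):
    """Send a CSV of sentences to the deployed endpoint; return the raw JSON text."""
    # AWS SDK for Python; imported lazily so the sketch parses without it installed.
    import boto3

    runtime = boto3.client("sagemaker-runtime", region_name=region)
    with open(csv_path, "rb") as f:
        response = runtime.invoke_endpoint(
            EndpointName=endpoint_name,
            ContentType="text/csv",
            Body=f.read(),
        )
    return response["Body"].read().decode("utf-8")
```

The returned string can then be parsed with `json.loads` into the 'r'/'g'/'b' cluster dictionary described above.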
Resources:
- Input MIME type
- text/csv, text/plain
Resources
Vendor resources
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.