
Overview
This is a hybrid quantum machine learning solution that detects damaged shipment images. The algorithm runs on a quantum computing emulator and combines a classical pretrained deep learning model with quantum circuit layers. The quantum circuit layers carry trained parameters dedicated to shipment image classification.
Highlights
- Businesses in logistics, retail, manufacturing, and automotive face the risk of cargo damage, which costs time and money and leads to unhappy customers. To identify the root cause of damage, it is important to continuously monitor shipment images throughout the supply chain. This solution helps users by analyzing images of shipments and predicting whether they are damaged.
- Quantum machine learning is a computational learning methodology; leveraging quantum capabilities enhances the encoding of input data, allowing the algorithm to learn more complex images. The damaged shipment classifier harnesses the power of both classical and quantum computing by constructing a hybrid model to classify damaged shipment images.
- Need customized image analytics solutions? Get in touch!
Details

Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch) Recommended | Model inference on the ml.m5.large instance type, batch mode | $40.00 |
| ml.m5.large Inference (Real-Time) Recommended | Model inference on the ml.m5.large instance type, real-time mode | $20.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $40.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $40.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $40.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $40.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $40.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $40.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $40.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $40.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
Bug fixes and performance improvements
Additional details
Inputs
- Summary
Input:
- Supported content type: application/zip
- The input zip file should contain no more than 50 images.
- Each image should not exceed 300 KB.
- At least 90 percent of each image must be occupied by the damaged or undamaged shipment.
- Less noisy images give better results, where noise includes human hands, vehicles, etc.
- Each image must contain only one shipment (either damaged or not damaged).
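The input constraints above can be checked before upload. Below is a minimal sketch that packages images into a compliant zip; the function name `build_input_zip` and the file paths are illustrative, not part of the product.

```python
# Sketch: package shipment images into a zip that meets the listing's input
# constraints (at most 50 images, each no larger than 300 KB). Names and
# paths here are assumptions for illustration.
import os
import zipfile

MAX_IMAGES = 50
MAX_BYTES = 300 * 1024  # 300 KB per image

def build_input_zip(image_paths, zip_path):
    """Validate the images against the stated limits and zip them."""
    if len(image_paths) > MAX_IMAGES:
        raise ValueError(f"Input zip may contain at most {MAX_IMAGES} images")
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in image_paths:
            size = os.path.getsize(path)
            if size > MAX_BYTES:
                raise ValueError(f"{path} is {size} bytes; limit is {MAX_BYTES}")
            # Store each image at the archive root under its base name.
            zf.write(path, arcname=os.path.basename(path))
    return zip_path
```

The resulting file can then be passed as the `application/zip` request body.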
Output:
Instructions for score interpretation:
- Content type: text/csv
- Two columns: 'filename' and 'prediction'
- The 'filename' column contains the file name, with its prediction class in the 'prediction' column of the same row.
- Prediction classes '0' and '1' indicate damaged and not damaged shipment images, respectively.
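A short sketch of how the text/csv output can be interpreted, using only the column names and the 0/1 label convention stated above; the helper name `read_predictions` is an assumption for illustration.

```python
# Sketch: parse the model's text/csv output, which has two columns,
# 'filename' and 'prediction', where class '0' means damaged and
# '1' means not damaged (per the listing).
import csv
import io

LABELS = {"0": "damaged", "1": "not damaged"}

def read_predictions(csv_text):
    """Return a list of (filename, label) pairs from the output CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["filename"], LABELS[row["prediction"].strip()])
            for row in reader]
```

For example, passing the contents of `output.csv` to `read_predictions` yields one `(filename, label)` pair per input image.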
Invoking endpoint
AWS CLI Command
If you are using real-time inference, create the endpoint first and then use the following command to invoke it:

```shell
aws sagemaker-runtime invoke-endpoint --endpoint-name $endpoint_name --body fileb://$file_name --content-type 'application/zip' --region us-east-2 output.csv
```

Substitute the following parameters:
- endpoint_name - name of the inference endpoint where the model is deployed
- file_name - input zip file name
- application/zip - content type of the given input
- output.csv - filename where the inference results are written
Resources:
- Input MIME types: text/csv, text/plain, application/zip
Resources
Vendor resources
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
