
Overview
This route planning solution identifies the optimal routes for transferring freight through a hub-and-spoke model, minimizing operational cost and servicing time while maximizing vehicle capacity utilization. The solution uses a state-of-the-art quantum computing simulator for optimization, making it scalable and robust.
Highlights
- The solution enables efficient resource planning and utilization for transporting freight through a hub-and-spoke network. It allocates the optimal route for each freight shipment to reduce overall transportation operating cost, and it leverages quantum computing to solve the problem quickly and optimally.
- The solution can be applied to logistics, postal delivery systems, airline transportation, supply chain, e-commerce, freight forwarding, and last-mile delivery. The quantum computing based optimization is significantly faster than conventional optimization approaches.
- Need customized Quantum Computing solutions? Get in touch!
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch), Recommended | Model inference on the ml.m5.large instance type, batch mode | $40.00 |
| ml.m5.xlarge Inference (Real-Time), Recommended | Model inference on the ml.m5.xlarge instance type, real-time mode | $20.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $40.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $40.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $40.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $40.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $40.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $40.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $40.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $40.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
This is the first version of this solution.
Additional details
Inputs
- Summary
Input:
- Supported content type: application/zip
- Sample Input (https://tinyurl.com/y4gkqxjj)
- The input should be a zip archive containing three CSV files. The file names must match those in the sample data, which are as follows:
- leg_info.csv: Contains the source-destination pair, capacity, and cost of every available leg. The column names must be identical to, and in the same order as, those in the sample data set.
- resource_data.csv: Contains the path, package_size, and package_id columns describing the flow details of each package. package_size gives the capacity requirement of each package, package_id gives its ID number, and path lists the available path numbers (which must be strings). The column names must be identical to, and in the same order as, those in the sample data set; the leg-detail columns can change according to the number of available legs.
- resources_constraint.csv: Contains the available paths for each package; see the sample data for the format.
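The input archive described above can be assembled programmatically. A minimal sketch in Python; the row contents and the resources_constraint.csv columns shown here are hypothetical placeholders, so the exact column names and order must be taken from the sample data set:

```python
import csv
import io
import zipfile

# Hypothetical rows modeled on the input description; verify column names
# and order against the sample data before use.
leg_info = [
    ["source-destination", "capacity", "cost"],
    ["A-H1", 100, 10.0],
    ["H1-B", 100, 12.5],
]
resource_data = [
    ["path", "package_size", "package_id"],
    ["1", 20, "P1"],
    ["2", 35, "P2"],
]
resources_constraint = [
    ["package_id", "available_paths"],  # placeholder columns; see sample data
    ["P1", "1;2"],
    ["P2", "2"],
]

def rows_to_csv(rows):
    """Serialize a list of rows to CSV text."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

# The three file names inside the archive must match the sample data exactly.
with zipfile.ZipFile("input.zip", "w") as zf:
    zf.writestr("leg_info.csv", rows_to_csv(leg_info))
    zf.writestr("resource_data.csv", rows_to_csv(resource_data))
    zf.writestr("resources_constraint.csv", rows_to_csv(resources_constraint))
```

The resulting input.zip is then passed as the request body with content type application/zip.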
Output:
- Supported content type: application/json
- The output file returns the result as a dictionary, where each key-value pair identifies the optimum path selected for a package
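Reading the output is a standard JSON parse. A sketch with a hypothetical payload, since the exact key and value names in the returned dictionary may differ from this illustration:

```python
import json

# Hypothetical output payload: a dictionary mapping each package ID to the
# optimum path selected for it.
raw = '{"P1": "path_2", "P2": "path_1"}'

routes = json.loads(raw)
for package_id, path in routes.items():
    print(f"package {package_id}: optimum path {path}")
```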
Invoking endpoint
AWS CLI Command
If you are using real-time inferencing, create the endpoint first and then use the following command to invoke it:

```shell
aws sagemaker-runtime invoke-endpoint --endpoint-name $model_name --body fileb://$file_name --content-type 'application/zip' --region us-east-2 output.json
```

Substitute the following parameters:
- model_name - name of the inference endpoint where the model is deployed
- file_name - input zip file name
- application/zip - content type of the given input file
- output.json - file name where the output results are written
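The same call can be made programmatically with boto3. A minimal sketch: the endpoint name is hypothetical, and the helper only assembles the keyword arguments for sagemaker_runtime.invoke_endpoint(**params), which requires a deployed endpoint and live AWS credentials to actually execute:

```python
import zipfile

def build_invoke_params(endpoint_name, zip_path):
    """Assemble the arguments for boto3's
    SageMakerRuntime.invoke_endpoint(**params); mirrors the CLI flags above."""
    with open(zip_path, "rb") as f:
        body = f.read()
    return {
        "EndpointName": endpoint_name,    # --endpoint-name
        "Body": body,                     # --body fileb://...
        "ContentType": "application/zip", # --content-type
    }

# Create a minimal zip file so the helper can be demonstrated locally.
with zipfile.ZipFile("demo_input.zip", "w") as zf:
    zf.writestr("leg_info.csv", "source-destination,capacity,cost\n")

# "route-planning-endpoint" is a hypothetical endpoint name.
params = build_invoke_params("route-planning-endpoint", "demo_input.zip")
# With credentials configured you would then run:
#   boto3.client("sagemaker-runtime").invoke_endpoint(**params)
```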
Resources:
- Input MIME type
- text/csv, text/plain, application/zip
Resources
Vendor resources
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.