
Overview
Capacitated Vehicle Routing Optimizer (CVRO) is a dispatch automation and route optimization solution built to reduce the cost of last mile delivery operations. The last mile is the final leg of a package's journey, from the source station to the destination. Owing to high fuel spend, last mile delivery is a major cost center for logistics companies, and reducing the overall distance travelled by trucks can improve an organization's profitability. This solution uses truck capacity-based package clustering and Simulated Quantum Annealing (SQA) to solve the problem. Compared with classical optimization systems, CVRO designs a shorter route in less time; SQA provides the parallelization needed to explore many possible routes simultaneously. Aggregated over large delivery fleets spread across geographies, this translates into significant cost savings and improved profitability.
Highlights
- Capacitated Vehicle Route Optimizer plans routes for vehicles to serve a given set of customers as efficiently as possible while satisfying the capacity constraint of each vehicle. The solution uses quantum simulators to find an optimal plan for such problems with less computational effort and time than classical approaches.
- This solution is applicable across industries such as logistics, supply chain, retail, e-commerce, and transportation. The last mile delivery problem is a natural scenario for applying CVRO.
- Need customized Quantum Computing solutions? Get in touch!
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.large Inference (Batch), Recommended | Model inference on the ml.m5.large instance type, batch mode | $40.00 |
| ml.m5.large Inference (Real-Time), Recommended | Model inference on the ml.m5.large instance type, real-time mode | $20.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $40.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $40.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $40.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $40.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $40.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $40.00 |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $40.00 |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $40.00 |
Vendor refund policy
Currently we do not support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
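As an illustration, the sketch below deploys the model package as a real-time endpoint with the SageMaker Python SDK. The model package ARN, IAM role ARN, and endpoint name are placeholders to substitute with your own values; the instance type follows the recommended dimension from the pricing table.

```python
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()

# Placeholders: the ARN of the model package you subscribed to and an IAM
# role that Amazon SageMaker can assume.
model = ModelPackage(
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    model_package_arn="arn:aws:sagemaker:us-east-2:111122223333:model-package/cvro-example",
    sagemaker_session=session,
)

# Deploy a real-time inference endpoint on the recommended instance type.
model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="cvro-endpoint",
)
```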
Version release notes
This is version 3
Additional details
Inputs
- Summary
Usage Methodology for the algorithm:
- The input has to be a .csv file with columns titled 'customer id', 'x co-ordinates', 'y co-ordinates', and 'demand' (a sketch for generating such a file follows this list).
- The file should use 'utf-8' encoding.
- The input can have a maximum of 375 demand points.
- The first row should describe the depot, with id 0 and the demand column set to the capacity of each truck.
- Column definitions: customer id is the ID of the customer; x co-ordinates and y co-ordinates give the customer's location; demand is the demand of each customer.
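As a minimal sketch of the format described above, the snippet below writes a conforming input file using only the Python standard library; the coordinates and demands mirror the sample input shown later and are illustrative only.

```python
import csv

# First row is the depot: id 0, depot coordinates, and the per-truck
# capacity in the 'demand' column; the remaining rows are customers.
rows = [
    {"customer id": 0, "x co-ordinates": 35, "y co-ordinates": 35, "demand": 200},
    {"customer id": 1, "x co-ordinates": 41, "y co-ordinates": 49, "demand": 10},
    {"customer id": 2, "x co-ordinates": 35, "y co-ordinates": 17, "demand": 7},
    {"customer id": 3, "x co-ordinates": 55, "y co-ordinates": 45, "demand": 13},
]

with open("input.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["customer id", "x co-ordinates", "y co-ordinates", "demand"]
    )
    writer.writeheader()
    writer.writerows(rows)
```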
General instructions for consuming the service on Amazon SageMaker (a batch transform sketch follows this list):
- Access to Amazon SageMaker and the model package
- An S3 bucket to specify input/output
- An IAM role that allows Amazon SageMaker to access the input/output in S3
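If you prefer batch mode over a real-time endpoint, a batch transform sketch is shown below; the S3 paths, role, and model package ARN are placeholders, and the ModelPackage construction is the same as in the deployment sketch above.

```python
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()

# Placeholders: same model package and role as in the deployment sketch.
model = ModelPackage(
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    model_package_arn="arn:aws:sagemaker:us-east-2:111122223333:model-package/cvro-example",
    sagemaker_session=session,
)

# Batch transform: reads the input CSV from S3 and writes results back to S3.
transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.large",
    output_path="s3://my-bucket/cvro/output/",
)
transformer.transform(
    data="s3://my-bucket/cvro/input/input.csv",
    content_type="text/csv",
)
transformer.wait()
```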
Input
Supported content types: text/csv
Sample input:

| Customer id | X co-ordinates | Y co-ordinates | Demands |
|---|---|---|---|
| 0 | 35 | 35 | 200 |
| 1 | 41 | 49 | 10 |
| 2 | 35 | 17 | 7 |
| 3 | 55 | 45 | 13 |
| … | | | |
Output
Content type: text/csv
Sample output:

| cluster id | route | route_cost |
|---|---|---|
| 1 | [0, 103, 161, 135, 65, 71, 136, 35, 9, 120, 164, 0] | 130.62 |
| 2 | [0, 175, 11, 107, 64, 49, 168, 47, 143, 19, 123, 0] | 132.68 |
| 3 | [0, 4, 197, 56, 186, 187, 139, 170, 67, 25, 165, .., 0] | 145.59 |
| … | | |
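To post-process the response programmatically, a parsing sketch is shown below. It assumes the output CSV carries the three columns from the sample above ('cluster id', 'route', 'route_cost') and that the route column holds a bracketed list of node IDs; adjust the column names if the actual header differs.

```python
import ast
import csv
import io

def parse_routes(output_text):
    """Parse the optimizer's CSV output into (cluster_id, route, cost) tuples."""
    routes = []
    for row in csv.DictReader(io.StringIO(output_text)):
        cluster_id = int(row["cluster id"])
        route = ast.literal_eval(row["route"])  # e.g. "[0, 103, 161, 0]" -> [0, 103, 161, 0]
        cost = float(row["route_cost"])
        routes.append((cluster_id, route, cost))
    return routes

# Example usage:
# with open("output.csv", encoding="utf-8") as f:
#     routes = parse_routes(f.read())
# total_cost = sum(cost for _, _, cost in routes)
```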
Invoking endpoint
AWS CLI Command
You can invoke the endpoint using the AWS CLI:

aws sagemaker-runtime invoke-endpoint --endpoint-name $model_name --body fileb://$file_name --content-type 'text/csv' --region us-east-2 output.csv

Substitute the following parameters (a Python equivalent follows the parameter list):
- "endpoint-name" - name of the inference endpoint where the model is deployed
- input.csv - input file to do the inference on
- text/csv - Type of input data
- output.csv - filename where the inference results are written to
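The same call can be made from Python with boto3, as in the sketch below; the endpoint name, region, and file names are placeholders matching the CLI example above.

```python
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-2")

# Read the CSV payload and invoke the deployed endpoint.
with open("input.csv", "rb") as f:
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName="cvro-endpoint",   # placeholder: your endpoint name
    ContentType="text/csv",
    Body=payload,
)

# Write the returned routes to a local file.
with open("output.csv", "w", encoding="utf-8") as out:
    out.write(response["Body"].read().decode("utf-8"))
```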
Resources
- Sample Notebook: https://tinyurl.com/y29hue6q
- Sample Input: https://tinyurl.com/y33n8qgp
- Sample Output: https://tinyurl.com/yy5o8u6u
- Input MIME type
- text/csv, text/plain, application/zip
Resources
Vendor resources
Support
Vendor support
For any assistance reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.
