
Overview
The Trailer Capacity Prediction model takes images of open containers being loaded or unloaded as input and classifies them into three categories: quarter, half, or fully packed. It can be used to plan container loading/unloading and to optimize or automate warehouse operations, which in turn reduces truck waiting time at a hub or warehouse. It is built using state-of-the-art deep learning modelling techniques to classify images precisely.
Highlights
- The logistics industry faces significant risks around cargo damage, on-time delivery, and optimal space utilization of shipping containers, resulting in lost time and money. To improve on-time delivery and space utilization, it is important to continuously monitor images of containers being loaded and unloaded.
- The Trailer Capacity Prediction model classifies these images of open trailers into three categories: quarter, half, or fully packed. The predictions help monitor unutilized space in containers, plan container filling, and optimize warehousing operations.
- Mphasis DeepInsights is a cloud-based cognitive computing platform that offers data extraction & predictive analytics capabilities. Need customized image analytics solutions? Get in touch!
Details

Pricing
| Dimension | Description | Cost |
|---|---|---|
| ml.c5.xlarge Inference (Batch), Recommended | Model inference on the ml.c5.xlarge instance type, batch mode | $16.00/host/hour |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $16.00/host/hour |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $16.00/host/hour |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $16.00/host/hour |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $16.00/host/hour |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $16.00/host/hour |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $16.00/host/hour |
| ml.c5.2xlarge Inference (Batch) | Model inference on the ml.c5.2xlarge instance type, batch mode | $16.00/host/hour |
| ml.p3.2xlarge Inference (Batch) | Model inference on the ml.p3.2xlarge instance type, batch mode | $16.00/host/hour |
| ml.c4.2xlarge Inference (Batch) | Model inference on the ml.c4.2xlarge instance type, batch mode | $16.00/host/hour |
Vendor refund policy
We do not currently support refunds, but you can cancel your subscription to the service at any time.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
- Bug fixes
- Changes to accommodate AWS-related SageMaker updates
Additional details
Inputs
- Summary
Prerequisites for consuming the service:
- Access to Model Package, SageMaker and S3 storage bucket.
- Open Trailer Container Images. (Refer to Sample Input linked below)
- Execution Role for the SageMaker session.
- Python Packages as listed in the Instructions Notebook linked below.
Input
Supported Content Type: 'application/json' (image serialized to JSON as shown below in Python)

```python
from PIL import Image
import json
import numpy as np

# Load the trailer image, force RGB, and resize to the expected 300x300 input
img = Image.open('images/sample1.jpg').convert(mode='RGB')
img = img.resize((300, 300))
img = np.array(img).tolist()
img_json = json.dumps({'instances': [{'input_image': img}]})

# If required, the payload can be written to a file
# (also available in the Sample link below)
with open('img.json', 'w') as f:
    f.write(img_json)
```
Output
Content Type: 'application/json'
Sample Output & Interpretation:
```json
{"predictions": [[0.04, 0.55, 0.41]]}
```
- Element 1 of the list represents the probability of Full Capacity (i.e., ~above 75%)
- Element 2 of the list represents the probability of Half Capacity (i.e., ~above 25% and below 50%)
- Element 3 of the list represents the probability of Quarter Capacity (i.e., ~below 25%)
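The response above can be decoded with a small helper. A minimal sketch using only the standard `json` module; `LABELS` and `interpret` are illustrative names (not part of the model package), and the label order follows the interpretation above:

```python
import json

# Label order follows the interpretation above:
# element 1 = full, element 2 = half, element 3 = quarter.
# LABELS and interpret() are illustrative names, not part of the package.
LABELS = ['full', 'half', 'quarter']

def interpret(response_body):
    """Return the most likely capacity label and its probability."""
    probs = json.loads(response_body)['predictions'][0]
    best = max(range(len(probs)), key=lambda i: probs[i])
    return LABELS[best], probs[best]

label, prob = interpret('{"predictions": [[0.04, 0.55, 0.41]]}')
# For the sample output above, this selects 'half' with probability 0.55
```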
Invoking Endpoint
If you are using real time inferencing, please create the endpoint first.
```python
# Find detailed instructions in the Instructions Notebook linked below
predictor = sage.RealTimePredictor(
    endpoint='endpoint name',
    content_type='application/json',
    sagemaker_session=sagemaker_session,
)
prediction = predictor.predict(img_json)
```
AWS CLI command:

```shell
aws sagemaker-runtime invoke-endpoint \
    --endpoint-name "endpoint-name" \
    --body fileb://img.json \
    --content-type application/json \
    --accept application/json \
    out.json
```
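The CLI call saves the response body to out.json, which can be decoded the same way as the real-time response. A minimal, self-contained sketch (a sample response is written first so the snippet runs standalone; in practice out.json is produced by the invoke-endpoint call above):

```python
import json

# Stand-in for the invoke-endpoint call: write a sample response so the
# snippet is self-contained. In practice out.json comes from the CLI.
with open('out.json', 'w') as f:
    json.dump({'predictions': [[0.04, 0.55, 0.41]]}, f)

# Decode the response body saved by the CLI
with open('out.json') as f:
    probs = json.load(f)['predictions'][0]

# Index of the highest probability: 0 = full, 1 = half, 2 = quarter
best_index = probs.index(max(probs))
```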
Notebook Instructions:
- Download the Notebook from the link below onto a SageMaker Notebook Instance OR Install necessary packages on the desired compute resource.
- Bring in the input images for classification onto the SageMaker Notebook Instance OR on the desired compute resource and follow the instructions in the Notebook.
Resources
- Input MIME type: application/json
Support
Vendor support
For any assistance, please reach out to:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.