Overview
The solution provides a streamlined, end-to-end workflow for mass deployment of deep learning models to the edge. It leverages a serverless compilation pipeline that is automatically triggered by ONNX model uploads, followed by deployment of NPU packages (drivers, runtime, and models) to edge devices via AWS IoT Greengrass. This enables efficient management and remote triggering of AI/ML demos on IoT devices.
Highlights
- Effortless Deployment: Our solution provides a streamlined workflow for the mass deployment of deep learning models directly to edge devices, simplifying the process from compilation to delivery.
- Accelerate Innovation: Empower your business to quickly and efficiently deploy AI models at the edge, reducing operational overhead and accelerating time-to-value.
- Automated and Scalable: Leverage a serverless pipeline that automatically deploys deep learning models to your IoT devices, enabling efficient management and remote demos at scale.
Details
Pricing
Vendor refund policy
N/A
Delivery details
DX Compiler Serverless Orchestrated
- Launches a temporary EC2 instance based on the supplied DX Compiler AMI.
- Uses AWS Systems Manager (SSM) to download the model, run the compiler, and upload the compiled DXNN artifact back to S3 (suffix _compiled.dxnn).
- The EC2 instance self-terminates after successful compilation; if any failure occurs, the workflow forcibly terminates the instance for cost control.
- No inbound network ports are opened (outbound-only security group).
- IAM roles follow least-privilege principles: EC2 only has S3 read/write for the designated bucket plus SSM, and Step Functions has scoped EC2/SSM orchestration permissions.
Features:
- Event-driven compilation (no always-on servers)
- Deterministic shutdown and cost efficiency
- Retry and failure handling (EC2 run & SSM command)
- Clean separation of trigger, orchestration, and execution
- Marketplace-compliant parameterization (ImageId, VPC/Subnet supplied by buyer)
Use Cases: Automated model build pipelines, batch conversion of ONNX assets, reproducible compiler runs.
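As a rough sketch of the trigger path described above: the S3 event handler only needs to derive the output key and start the state machine. The handler and the `STATE_MACHINE_ARN` environment variable below are illustrative names, not the shipped implementation:

```python
import json
import os


def compiled_key(onnx_key: str) -> str:
    """Map an uploaded ONNX key to the artifact the pipeline produces."""
    # mymodel.onnx -> mymodel_compiled.dxnn
    stem = onnx_key[: -len(".onnx")] if onnx_key.endswith(".onnx") else onnx_key
    return stem + "_compiled.dxnn"


def handler(event, context):
    """S3 ObjectCreated events for *.onnx each start one compilation execution."""
    import boto3  # available in the AWS Lambda Python runtime

    sfn = boto3.client("stepfunctions")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if not key.endswith(".onnx"):
            continue  # suffix filter safety net
        sfn.start_execution(
            stateMachineArn=os.environ["STATE_MACHINE_ARN"],
            input=json.dumps(
                {"bucket": bucket, "model_key": key, "output_key": compiled_key(key)}
            ),
        )
```

The key-derivation helper is kept pure so the naming convention (`.onnx` in, `_compiled.dxnn` out) can be verified without touching AWS.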
CloudFormation Template (CFT)
AWS CloudFormation templates are JSON or YAML-formatted text files that simplify provisioning and management on AWS. The templates describe the service or application architecture you want to deploy, and AWS CloudFormation uses those templates to provision and configure the required services (such as Amazon EC2 instances or Amazon RDS DB instances). The deployed application and associated resources are called a "stack."
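For orientation, the buyer-supplied parameters described in the usage instructions might look like the following template fragment. Parameter names mirror those instructions; treat this as an illustrative sketch, not the shipped template:

```yaml
Parameters:
  ImageId:
    Type: AWS::EC2::Image::Id    # DX Compiler AMI, prefilled by Marketplace
  VpcId:
    Type: AWS::EC2::VPC::Id
  SubnetId:
    Type: AWS::EC2::Subnet::Id   # needs outbound reach to S3 and SSM
  ModelBucketName:
    Type: String                 # globally unique bucket for .onnx uploads
  InstanceType:
    Type: String
    Default: t3.xlarge           # increase for large models
```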
Version release notes
- Added Step Functions-orchestrated workflow (EC2 + SSM + self-termination)
- Added S3 event-driven compilation trigger (.onnx → _compiled.dxnn)
- Enforced least-privilege IAM and outbound-only security group (no SSH/RDP ingress)
- Added automatic instance self-termination plus fallback forced termination
- Marketplace-compliant CloudFormation (ImageId parameter, no hardcoded AMI mappings)
- Improved error handling and retries (EC2 run / SSM command polling)
Additional details
Usage instructions
- Deploy the stack, providing: ImageId (prefilled by Marketplace), VpcId and SubnetId (public, or with NAT egress to S3 + SSM), a unique S3 bucket name (ModelBucketName), and InstanceType (default t3.xlarge; increase for large models).
- After CREATE_COMPLETE, navigate to the output value ModelBucketName.
- Upload an ONNX file (e.g. mymodel.onnx) to the bucket root.
- A Step Functions execution (name prefix dx compile) starts automatically.
- Wait for the compiled artifact mymodel_compiled.dxnn to appear in the same bucket.
- Monitor progress: AWS Console → Step Functions → the state machine ARN (from Outputs).
- All EC2 instances terminate automatically (either self terminate on success or forced termination on failure).
- To rerun, just upload more .onnx files.
Security Notes:
- No inbound ports are exposed.
- Instance has only scoped S3 + SSM + terminate permissions.
- Remove files or version them per your data governance policies.
Troubleshooting:
- If no execution starts: confirm S3 event (object created) & Lambda CloudWatch logs.
- If stuck in InProgress: check SSM command logs (AWS Systems Manager → Run Command → Command history).
- If no output file: check compiler step logs; ensure ONNX model is valid.
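The upload-and-wait steps above can be scripted. A minimal sketch using boto3 follows; the bucket name and timing values are placeholders, and the existence check is injectable so the wait logic can be exercised without AWS:

```python
import time


def wait_for_artifact(key_exists, output_key, timeout_s=900, poll_s=15):
    """Poll until the compiled artifact appears, or give up after timeout_s."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if key_exists(output_key):
            return True
        time.sleep(poll_s)
    return False


def s3_key_exists(bucket):
    """Build a checker backed by S3 head_object (an error means not there yet)."""
    import boto3
    import botocore

    s3 = boto3.client("s3")

    def check(key):
        try:
            s3.head_object(Bucket=bucket, Key=key)
            return True
        except botocore.exceptions.ClientError:
            return False

    return check


# Example usage (bucket name is a placeholder):
#   checker = s3_key_exists("my-model-bucket")
#   wait_for_artifact(checker, "mymodel_compiled.dxnn")
```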
Support
Vendor support
Email: info@deepx.ai
Support URL: https://deepx.ai/contact-us/
We are committed to helping our customers succeed with their AI projects. Our support includes:
- Dedicated Technical Assistance: Get expert guidance on a wide range of topics, including product integration, developer questions, and model optimization.
- Comprehensive Resources: We provide access to a developer portal, quick-start guides, and online video courses to help you get up and running quickly.
- Scalable Support Plans: Whether you're a developer prototyping with the DX TechBridge Kit or a business transitioning to mass production, we offer tailored support plans to meet your specific needs and ensure seamless deployment.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.