Overview
Tailored for tech artists and developers shaping 3D pipelines, RapidPipeline 3D Processor stands out for its flexibility, scalability, and customization. Our automation software grows seamlessly with your expanding use cases, delivering top-tier quality at minimal cost and carbon footprint. Elevate your 3D pipelines with unmatched efficiency.
RapidPipeline 3D Processor offers many capabilities: topology optimization (decimation, remeshing), draw-call reduction through scene graph optimization, texture baking, UV unwrapping, rendering preview images of your 3D assets with a CPU-based raycaster, and much more. Our proprietary optimization algorithms were developed by top-tier engineers and 3D graphics PhD researchers with over a decade of experience in the field. We combine this technology and know-how with the latest compression technologies to deliver optimized results with unmatched speed and efficiency.
All these features are neatly packaged into our software and can be driven through configuration settings, enabling automatic batch processing of whole 3D asset libraries with one-click delivery for any use case. RapidPipeline 3D Processor is battle-tested on millions of production assets, making it a proven solution for scalable 3D pipelines.
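As an illustration of such a batch setup, the loop below processes a whole folder of assets in one go. This is a minimal sketch, assuming a local installation whose CLI is invoked as rpdx with the -i (input) and -e (export) flags used in the deployment instructions further down; all paths and the .glb filter are placeholders.
```
# Hypothetical batch run: optimize every glTF binary in ./assets
# and write the results to ./optimized (paths are examples only).
mkdir -p optimized
for f in assets/*.glb; do
    rpdx -i "$f" -e "optimized/$(basename "$f")"
done
```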
Highlights
- Optimize your 3D scenes in various ways with RapidPipeline 3D Processor's flexible feature set, depending on your input data and end goals. Whether you need to reduce the poly count, redo the topology, bake textures, cull invisible geometry, flatten the scene to reduce draw calls, or preserve the hierarchy tree for use with product configurators, our tools have you covered.
- Achieve better real-time performance and hardware utilization, as well as significantly faster loading times, by leveraging the asset simplification algorithms provided by RapidPipeline 3D Processor.
- RapidPipeline 3D Processor supports a wide range of 3D data formats, such as .fbx, .obj, .gltf, .usd, .usdz, and .vrm. It also supports .ktx and .webp compression for texture maps.
Details
Pricing
| Dimension | Description | Cost/unit |
|---|---|---|
| Process | A RapidPipeline 3D Processor operation | $2.00 |
Vendor refund policy
Please refer to the EULA.
Delivery details
Docker Image
- Amazon ECS
Container image
Containers are lightweight, portable execution environments that wrap server application software in a filesystem that includes everything it needs to run. Container applications run on supported container runtimes and orchestration services, such as Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). Both eliminate the need for you to install and operate your own container orchestration software by managing and scheduling containers on a scalable cluster of virtual machines.
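If you want to inspect the delivered image locally before setting up ECS, you can pull it from the AWS Marketplace registry. A minimal sketch, assuming your AWS credentials belong to the subscribed account and Docker is installed:
```
# Authenticate Docker against the Marketplace ECR registry
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com
# Pull the image referenced in the usage instructions below
docker pull 709825985650.dkr.ecr.us-east-1.amazonaws.com/darmstadt-graphics-group-gmbh/rapidcompact-renewal:0.0.4
```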
Version release notes
Initial Release
Additional details
Usage instructions
You can deploy either via CDK (recommended) or manually.
Find our example CDK stack here: https://github.com/DGG3D/marketplace-deployment-example
If you want to deploy manually, follow these steps:
1. Subscribe to RapidCompact: https://aws.amazon.com/marketplace/pp/prodview-zdg4blxeviyyi
2. Create S3 Buckets: Create one S3 bucket to store your input models and another S3 bucket to store your output models. Resource on how to create S3 buckets: https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html
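   If you prefer the AWS CLI over the console, a minimal sketch (bucket names are placeholders and must be globally unique):
   ```
   # Create the input and output buckets (names are examples only)
   aws s3 mb s3://my-rapidpipeline-input --region us-east-1
   aws s3 mb s3://my-rapidpipeline-output --region us-east-1
   ```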
3. Create Fargate Cluster: Open the ECS console and create a new AWS Fargate (serverless) cluster.
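   The CLI equivalent would look roughly like this (the cluster name is a placeholder):
   ```
   # Create an ECS cluster that can run Fargate tasks
   aws ecs create-cluster \
       --cluster-name rapidpipeline-cluster \
       --capacity-providers FARGATE
   ```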
4. Create a new Policy: Go to IAM and create a new policy (JSON) with the following content (make sure to adjust INPUT-BUCKET and OUTPUT-BUCKET to the respective bucket names): https://raw.githubusercontent.com/DGG3D/marketplace-deployment-example/main/policy.json
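   A possible CLI route, assuming the linked template literally contains the INPUT-BUCKET and OUTPUT-BUCKET placeholders; the policy name is an example:
   ```
   # Fetch the vendor's policy template and substitute your bucket names
   curl -sO https://raw.githubusercontent.com/DGG3D/marketplace-deployment-example/main/policy.json
   sed -i 's/INPUT-BUCKET/my-rapidpipeline-input/g; s/OUTPUT-BUCKET/my-rapidpipeline-output/g' policy.json
   # Create the IAM policy from the adjusted document
   aws iam create-policy \
       --policy-name RapidPipelineS3Access \
       --policy-document file://policy.json
   ```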
5. Create Task Role: Create a new IAM role and select the previously created policy.
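   Sketched with the CLI (the role and policy names are examples; replace the account ID in the policy ARN with your own):
   ```
   # Trust policy that lets ECS tasks assume the role
   cat > trust.json <<'EOF'
   {
     "Version": "2012-10-17",
     "Statement": [{
       "Effect": "Allow",
       "Principal": { "Service": "ecs-tasks.amazonaws.com" },
       "Action": "sts:AssumeRole"
     }]
   }
   EOF
   aws iam create-role \
       --role-name RapidPipelineTaskRole \
       --assume-role-policy-document file://trust.json
   # Attach the policy created in step 4
   aws iam attach-role-policy \
       --role-name RapidPipelineTaskRole \
       --policy-arn arn:aws:iam::123456789012:policy/RapidPipelineS3Access
   ```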
6. Task Definition: Create a new task definition (a CLI sketch follows the sub-steps):
   - Provide a name
   - Select launch type: AWS Fargate
   - Select OS: Linux/X86_64
   - Choose the task size based on your model requirements: for simple models, 1 vCPU + 3 GB RAM is enough; for more complex models you might want to increase this.
   - Task Role: Select the IAM role you created in step 5
   - Container: Use 709825985650.dkr.ecr.us-east-1.amazonaws.com/darmstadt-graphics-group-gmbh/rapidcompact-renewal:0.0.4 as the Image URI
   - Enable "Use log collection" and select Amazon CloudWatch
   - Navigate to your newly created Task Definition and note the container name for running the task later
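   A rough CLI equivalent of the sub-steps above; the family name, container name, role ARNs, and log settings are assumptions to adapt (the console normally creates the ecsTaskExecutionRole for you):
   ```
   cat > taskdef.json <<'EOF'
   {
     "family": "rapidpipeline-task",
     "requiresCompatibilities": ["FARGATE"],
     "networkMode": "awsvpc",
     "cpu": "1024",
     "memory": "3072",
     "runtimePlatform": { "operatingSystemFamily": "LINUX", "cpuArchitecture": "X86_64" },
     "taskRoleArn": "arn:aws:iam::123456789012:role/RapidPipelineTaskRole",
     "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
     "containerDefinitions": [{
       "name": "rapidpipeline",
       "image": "709825985650.dkr.ecr.us-east-1.amazonaws.com/darmstadt-graphics-group-gmbh/rapidcompact-renewal:0.0.4",
       "essential": true,
       "logConfiguration": {
         "logDriver": "awslogs",
         "options": {
           "awslogs-group": "/ecs/rapidpipeline",
           "awslogs-region": "us-east-1",
           "awslogs-stream-prefix": "ecs"
         }
       }
     }]
   }
   EOF
   aws ecs register-task-definition --cli-input-json file://taskdef.json
   ```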
7. Choose a public subnet for your task to run in:
   - Navigate to AWS VPC; there should already be a default VPC. If you prefer to use a different VPC, note its ID.
   - In the sidebar, navigate to 'Subnets' and choose a subnet that is in your VPC, noting its ID as well (you can also create a new subnet here if necessary; resource on VPC subnets: https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html ).
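   To find the IDs from the command line instead:
   ```
   # ID of the default VPC
   aws ec2 describe-vpcs \
       --filters Name=isDefault,Values=true \
       --query 'Vpcs[].VpcId'
   # Subnets in that VPC (replace the example VPC ID)
   aws ec2 describe-subnets \
       --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
       --query 'Subnets[].[SubnetId,AvailabilityZone]'
   ```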
8. Choose or create a security group:
   - Navigate to AWS EC2 -> Security Groups
   - Choose a security group or create a new one and note its ID (it can be as restrictive as required; this will not affect functionality).
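   A minimal CLI sketch (the group name and VPC ID are placeholders; no inbound rules are added, which is sufficient since the task only makes outbound calls):
   ```
   aws ec2 create-security-group \
       --group-name rapidpipeline-sg \
       --description "RapidPipeline 3D Processor task" \
       --vpc-id vpc-0123456789abcdef0
   ```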
9. Run Task through the AWS CLI:
   - Install the AWS CLI and authenticate it using your account's credentials: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
   - Make sure your input model is uploaded to your input bucket.
   - Then adjust the following command based on your CLUSTER_NAME (name of the cluster created in step 3), TASK_DEFINITION_NAME (name of the task definition created in step 6), CONTAINER_NAME (the name of the container to run the task on, see step 6), SUBNET (the ID of the subnet to run the task in, see step 7), SECURITYGROUP (the security group to run the task with, see step 8), INPUT_BUCKET, OUTPUT_BUCKET, INPUT_FILENAME (make sure your input bucket contains your input file with this name), and OUTPUT_FILENAME (you can choose this freely, as it will be created in the output bucket):
   ```
   aws ecs run-task \
       --cluster CLUSTER_NAME \
       --task-definition TASK_DEFINITION_NAME \
       --launch-type FARGATE \
       --network-configuration "awsvpcConfiguration={subnets=[SUBNET],securityGroups=[SECURITYGROUP],assignPublicIp=ENABLED}" \
       --overrides '{"containerOverrides":[{"name":"CONTAINER_NAME", "command":["/bin/sh", "-c", "aws s3 cp s3://INPUT_BUCKET/INPUT_FILENAME . && /rpdx/rpdx -i INPUT_FILENAME -e OUTPUT_FILENAME && aws s3 cp OUTPUT_FILENAME s3://OUTPUT_BUCKET/OUTPUT_FILENAME"]}]}'
   ```
10. Monitor and Support: Monitor task execution with CloudWatch logs and reach out for support via AWS Marketplace.
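   For example, assuming the log group name configured in your task definition and AWS CLI v2 (which provides aws logs tail):
   ```
   # Follow the task's CloudWatch logs (the log group name is an assumption)
   aws logs tail /ecs/rapidpipeline --follow
   # Check the task's lifecycle status (use the task ARN from the run-task output)
   aws ecs describe-tasks \
       --cluster CLUSTER_NAME \
       --tasks TASK_ARN \
       --query 'tasks[].lastStatus'
   ```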
11. More Examples: For an example CDK stack with the proposed setup configured see here: https://github.com/DGG3D/marketplace-deployment-example
Support
Vendor support
For support requests, contact us at support@dgg3d.com or info@dgg3d.com.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.