Overview
The MOSTLY AI SDK is an open-source library that generates synthetic data which is highly representative, highly realistic, and considered 'as good as real'. Because the generated data maintains high statistical accuracy while protecting the privacy of your data subjects, you can openly process and share it with others.
Highlights
- Privacy-safe synthetic data: Generate high-fidelity synthetic datasets that preserve statistical accuracy and relational integrity while protecting against re-identification risks.
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.c5.4xlarge Inference (Batch), Recommended | Model inference on the ml.c5.4xlarge instance type, batch mode | $0.00 |
| ml.c5.4xlarge Inference (Real-Time), Recommended | Model inference on the ml.c5.4xlarge instance type, real-time mode | $0.00 |
| ml.c5.4xlarge Training, Recommended | Algorithm training on the ml.c5.4xlarge instance type | $0.00 |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $0.00 |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $0.00 |
| ml.m4.10xlarge Inference (Batch) | Model inference on the ml.m4.10xlarge instance type, batch mode | $0.00 |
| ml.m4.16xlarge Inference (Batch) | Model inference on the ml.m4.16xlarge instance type, batch mode | $0.00 |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $0.00 |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $0.00 |
| ml.m5.12xlarge Inference (Batch) | Model inference on the ml.m5.12xlarge instance type, batch mode | $0.00 |
Vendor refund policy
This is a free product; refunds are not applicable.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker algorithm
An Amazon SageMaker algorithm is a machine learning model that requires your training data to make predictions. Use the included training algorithm to generate your unique model artifact. Then deploy the model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
Version release notes
Updated the SDK to version 4.7.8.
Additional details
Inputs
- Summary
Training Inputs
The training process accepts two parameters:
- configFile: The relative path to the configuration JSON file. It should be in the format <channel>/<filename>, e.g. train/mostly_config.json.
- configJSON: A serialized JSON string containing the configuration. This can only be used if the configuration is small enough to fit into a hyperparameter value, and is an easy way to pass configurations to AWS Clean Rooms. Since the configuration can include both train and generate definitions, a single training job can perform both steps, similar to a processing job.
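As a minimal sketch of the two options above: the key names inside the configuration (`train`, `generate`, and their fields) are illustrative assumptions here, not the SDK's exact schema; consult the MOSTLY AI SDK documentation for the real configuration format.

```python
import json

# Illustrative configuration containing both a train and a generate
# definition, so a single training job can perform both steps.
# Field contents are placeholders, not the SDK's verified schema.
config = {
    "train": {"tables": [{"name": "customers", "data": "train/customers.csv"}]},
    "generate": {"size": 1000},
}

# Option 1: reference a file uploaded to the training channel, using the
# <channel>/<filename> convention described above.
config_file_param = "train/mostly_config.json"

# Option 2: pass the configuration inline. SageMaker hyperparameter values
# are strings, so the JSON must be serialized before it is passed.
hyperparameters = {"configJSON": json.dumps(config)}

print(hyperparameters["configJSON"])
```

Option 2 avoids staging a separate config file, but only works while the serialized JSON stays within hyperparameter size limits.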
Inference/Transform Inputs
Inference input is a JSON structure with a generate section. For details, see the documentation at https://mostly-ai.github.io/mostlyai/api_domain/#mostlyai.sdk.domain.SyntheticDatasetConfig
- Limitations for input type
- The input is a configuration JSON for the generation. It contains information about the generator and the MOSTLY AI SDK probing configuration. Both real-time inference and batch transform read this input configuration and use the MOSTLY AI SDK to perform real-time probing. Since this is a real-time operation, the generation size should be small enough for the request to complete within a single API call.
- Input MIME type
- application/json
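A minimal sketch of an inference request body, assuming a payload shaped as described above (a JSON object with a `generate` section); the `size` field is an illustrative assumption, and the exact accepted fields are defined by the SyntheticDatasetConfig schema linked above.

```python
import json

# Hypothetical inference payload: a JSON body with a "generate" section.
# Keep the generation size small, since probing must finish within one
# API call to the real-time endpoint.
payload = {
    "generate": {
        "size": 100,
    }
}

# The request body sent to the endpoint, with MIME type application/json.
body = json.dumps(payload)
print(body)
```

The same JSON structure works for batch transform input files, since both modes read the same configuration format.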
Resources
Vendor resources
Support
Vendor support
MOSTLY AI provides full lifecycle support for enterprise customers. Our support includes onboarding assistance, technical troubleshooting, and ongoing best-practice guidance. Buyers can expect:
- Email-based support via support@mostly.ai
- Access to product documentation and tutorials at https://mostly.ai/docs
- Regular updates and enterprise-grade SLAs upon request
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.