Overview
Our LLM-Based Classification Solution is an intelligent automation tool that uses advanced language models and expertly crafted prompts to categorize text content with high accuracy. Powered by DSPy, it automatically optimizes prompts against your specific data to deliver superior classification accuracy. Unlike generic classification tools, the system learns from your unique dataset to create custom-tuned prompts that understand your business context and terminology. These prompts evolve as you provide more data samples; Claude is used both to optimize the prompts and to perform the final classification.
Highlights
- DSPy automatically generates, tests, and refines prompts through systematic optimization; no AI expertise or prompt-engineering skills are required on your end.
- By generating precisely-tuned prompts, the system reduces unnecessary LLM token consumption, directly lowering your operational costs while maintaining superior performance.
- Mphasis DeepInsights is a cloud-based cognitive computing platform that offers data extraction and predictive analytics capabilities. Need customized deep learning and machine learning solutions? Get in touch!
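To make the prompt-optimization idea concrete, here is a toy, self-contained sketch of the loop that DSPy automates for you: score candidate prompts against labeled training data and keep the best one. The `classify` function is a hypothetical stand-in for the Claude call, and the prompts and samples are invented for illustration; this is not the product's actual code.

```python
# Illustrative sketch of prompt optimization: evaluate candidate prompts
# on labeled data and keep the highest-scoring one. DSPy automates this
# generate/test/refine cycle; here the "LLM" is a toy rule so it runs offline.

def classify(prompt: str, text: str) -> str:
    """Stand-in for an LLM classification call (e.g. Claude)."""
    keywords = {"refund": "billing", "login": "account"}
    for word, label in keywords.items():
        # A domain-tuned prompt "knows" the keyword; a generic one does not.
        if word in text.lower() and word in prompt.lower():
            return label
    return "other"

def accuracy(prompt: str, samples: list[tuple[str, str]]) -> float:
    hits = sum(classify(prompt, text) == label for text, label in samples)
    return hits / len(samples)

# Toy train.csv-style samples: (text, label)
train = [("I want a refund", "billing"), ("Cannot login", "account")]

candidates = [
    "Classify the ticket.",                     # generic prompt
    "Classify tickets about refund or login.",  # domain-tuned prompt
]
best = max(candidates, key=lambda p: accuracy(p, train))
print(best)  # the domain-tuned prompt wins
```

In the real solution, candidate generation and scoring are driven by DSPy's optimizers against your train.csv, so the tuned prompt reflects your own terminology rather than hand-picked keywords.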
Pricing
| Dimension | Description | Cost |
|---|---|---|
| ml.m5.xlarge Inference (Batch), Recommended | Model inference on the ml.m5.xlarge instance type, batch mode | $5.00/host/hour |
| ml.m5.large Inference (Batch) | Model inference on the ml.m5.large instance type, batch mode | $5.00/host/hour |
| ml.m5.2xlarge Inference (Batch) | Model inference on the ml.m5.2xlarge instance type, batch mode | $5.00/host/hour |
| ml.m5.4xlarge Inference (Batch) | Model inference on the ml.m5.4xlarge instance type, batch mode | $5.00/host/hour |
| ml.m5.12xlarge Inference (Batch) | Model inference on the ml.m5.12xlarge instance type, batch mode | $5.00/host/hour |
| ml.m5.24xlarge Inference (Batch) | Model inference on the ml.m5.24xlarge instance type, batch mode | $5.00/host/hour |
| ml.m4.xlarge Inference (Batch) | Model inference on the ml.m4.xlarge instance type, batch mode | $5.00/host/hour |
| ml.m4.2xlarge Inference (Batch) | Model inference on the ml.m4.2xlarge instance type, batch mode | $5.00/host/hour |
| ml.m4.4xlarge Inference (Batch) | Model inference on the ml.m4.4xlarge instance type, batch mode | $5.00/host/hour |
| ml.m4.10xlarge Inference (Batch) | Model inference on the ml.m4.10xlarge instance type, batch mode | $5.00/host/hour |
Vendor refund policy
Currently we don't offer any refunds, but you can cancel your subscription to the service at any time.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
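Since this listing is priced for batch-mode inference, the typical path is to create a model from the package and run a SageMaker Batch Transform job. The sketch below shows a plausible request shape for that job under stated assumptions: the job name, model name, S3 URIs, and bucket are hypothetical placeholders, and the actual submission call is left commented out because it requires AWS credentials and a subscribed model package.

```python
# Sketch (assumptions, not vendor code): a Batch Transform request for this
# model package. All resource names and S3 paths are placeholders.
request = {
    "TransformJobName": "llm-classification-batch",       # hypothetical name
    "ModelName": "llm-classification-model",              # model created from the package
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://your-bucket/input/input.zip",  # placeholder bucket/key
            }
        },
        "ContentType": "application/zip",                 # input MIME type from this listing
    },
    "TransformOutput": {"S3OutputPath": "s3://your-bucket/output/"},
    "TransformResources": {
        "InstanceType": "ml.m5.xlarge",                   # recommended instance type
        "InstanceCount": 1,
    },
}

# With credentials configured and the model created, the job would be submitted as:
# import boto3
# boto3.client("sagemaker").create_transform_job(**request)
print(request["TransformResources"]["InstanceType"])
```

Batch mode suits this product because classification jobs run over a whole file of records at once rather than one request at a time.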
Version release notes
v1
Additional details
Inputs
- Summary
This solution requires 3 input files:
- train.csv: the data used to generate the optimized prompt.
- test.csv: the data used to test the final optimized prompt.
- credentials.json: contains the following keys:
- { "aws_access_key_id": "", "aws_secret_access_key": "", "region_name": "", "model_name": "", "target": ["List of your target values"], "base_prompt": "Your initial prompt defined based on your use case." }
- Input MIME type
- application/zip
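The three input files above are delivered as a single application/zip payload. Here is a minimal, stdlib-only sketch of packaging them; the CSV column names and rows are illustrative assumptions (the listing does not specify a schema), and the credentials values are placeholders you must fill in.

```python
# Package train.csv, test.csv, and credentials.json into the zip payload.
# File names follow the listing; CSV contents are toy placeholders.
import io
import json
import zipfile

credentials = {
    "aws_access_key_id": "",       # fill in your values
    "aws_secret_access_key": "",
    "region_name": "",
    "model_name": "",
    "target": ["List of your target values"],
    "base_prompt": "Your initial prompt defined based on your use case.",
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("train.csv", "text,label\nI want a refund,billing\n")  # toy schema
    zf.writestr("test.csv", "text,label\nCannot login,account\n")
    zf.writestr("credentials.json", json.dumps(credentials))

with zipfile.ZipFile(buf) as zf:
    print(sorted(zf.namelist()))
```

In practice you would write the archive to disk (or directly to S3) instead of an in-memory buffer, but the required file names and the credentials keys are the parts that matter.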
Support
Vendor support
For any assistance, reach out to us at:
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.