Overview
Reduce inference costs and increase inference speed with Multiverse Computing's CompactifAI Llama 3.3 70B Slim, a 50% compression of Meta's widely known Llama 3.3 70B model.
Highlights
- 50% less memory usage
- 50% faster inference
- Maintains 97% of the original model's intelligence
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g5.12xlarge Inference (Batch) (recommended) | Model inference on the ml.g5.12xlarge instance type, batch mode | $0.98 |
| ml.g5.12xlarge Inference (Real-Time) (recommended) | Model inference on the ml.g5.12xlarge instance type, real-time mode | $0.98 |
| ml.g5.16xlarge Inference (Batch) | Model inference on the ml.g5.16xlarge instance type, batch mode | $0.74 |
| ml.g5.24xlarge Inference (Batch) | Model inference on the ml.g5.24xlarge instance type, batch mode | $1.35 |
| ml.g5.48xlarge Inference (Batch) | Model inference on the ml.g5.48xlarge instance type, batch mode | $2.57 |
| ml.g5.16xlarge Inference (Real-Time) | Model inference on the ml.g5.16xlarge instance type, real-time mode | $0.74 |
| ml.g5.24xlarge Inference (Real-Time) | Model inference on the ml.g5.24xlarge instance type, real-time mode | $1.35 |
| ml.g5.48xlarge Inference (Real-Time) | Model inference on the ml.g5.48xlarge instance type, real-time mode | $2.57 |
| ml.p4d.24xlarge Inference (Real-Time) | Model inference on the ml.p4d.24xlarge instance type, real-time mode | $3.15 |
| ml.p5.48xlarge Inference (Real-Time) | Model inference on the ml.p5.48xlarge instance type, real-time mode | $7.72 |
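As a rough sketch only: software cost for an always-on endpoint can be estimated from the per-host-hour rates above (assuming roughly 730 hours per month; AWS infrastructure charges for the underlying instances are billed separately and are not included here).

```python
# Rough monthly software-cost estimate for an always-on endpoint,
# using the per-host-hour rates from the pricing table above.
# Infrastructure (EC2) charges are billed separately by AWS.

HOURS_PER_MONTH = 730  # approximate average hours in a month

rates = {
    "ml.g5.12xlarge": 0.98,
    "ml.g5.24xlarge": 1.35,
    "ml.p5.48xlarge": 7.72,
}

def monthly_cost(instance_type: str, hosts: int = 1) -> float:
    """Software cost per month (USD) for `hosts` always-on hosts."""
    return round(rates[instance_type] * HOURS_PER_MONTH * hosts, 2)

print(monthly_cost("ml.g5.12xlarge"))  # recommended instance type
```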
Vendor refund policy
Contact our support service at support@multiversecomputing.com.
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
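A minimal deployment sketch using boto3: after subscribing on AWS Marketplace, a model package can be turned into a real-time endpoint roughly as below. The ARN, role, endpoint name, and instance type are placeholders, not values from this listing; `boto3` is imported lazily so the sketch stays importable without AWS dependencies.

```python
# Sketch: deploying a SageMaker model package to a real-time endpoint.
# All ARNs and names below are placeholders for illustration only.

MODEL_PACKAGE_ARN = "arn:aws:sagemaker:<region>:<account>:model-package/<listing-id>"
ROLE_ARN = "arn:aws:iam::<account>:role/<sagemaker-execution-role>"

def endpoint_config(model_name: str, instance_type: str = "ml.g5.12xlarge") -> dict:
    """Production-variant config for a single-host endpoint."""
    return {
        "VariantName": "AllTraffic",
        "ModelName": model_name,
        "InstanceType": instance_type,   # recommended type per the pricing table
        "InitialInstanceCount": 1,
    }

def deploy(name: str = "compactifai-llama-3-3-70b-slim") -> None:
    import boto3  # imported here so the module loads without AWS deps installed
    sm = boto3.client("sagemaker")
    sm.create_model(
        ModelName=name,
        PrimaryContainer={"ModelPackageName": MODEL_PACKAGE_ARN},
        ExecutionRoleArn=ROLE_ARN,
        EnableNetworkIsolation=True,  # typically required for Marketplace packages
    )
    sm.create_endpoint_config(
        EndpointConfigName=f"{name}-config",
        ProductionVariants=[endpoint_config(name)],
    )
    sm.create_endpoint(EndpointName=name, EndpointConfigName=f"{name}-config")
```

Calling `deploy()` requires valid AWS credentials and a completed Marketplace subscription; batch processing would instead use a transform job against the same model.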
Version release notes
CompactifAI Llama 3.3 70B inference on a vLLM engine
Additional details
Inputs
- Summary
  - The model accepts input compatible with OpenAI's chat completions endpoint.
- Limitations for input type
  - Context length is limited by available GPU resources; selecting a smaller instance caps the model's maximum context.
- Input MIME type
  - application/json
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| messages | An array of message objects, each of the form `{"role": "system" \| "user" \| "assistant", "content": "The message content"}` | Context length scales with available GPU resources; smaller instances impose a lower cap. Account for this when sizing the messages array. To use the model's full context capacity, choose a larger instance with more VRAM. | Yes |
| model | A valid model name: `cai-llama-3-3-70b-slim` | - | Yes |
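The fields above can be assembled into a request body like the sketch below. The endpoint name and message text are illustrative placeholders; the invocation helper assumes AWS credentials and a deployed endpoint, so `boto3` is imported lazily.

```python
import json

# Sketch: a chat-completions-style request body matching the fields table above.
payload = {
    "model": "cai-llama-3-3-70b-slim",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize model compression in one sentence."},
    ],
}
body = json.dumps(payload)  # sent with ContentType application/json

def invoke(endpoint_name: str) -> dict:
    """Real-time invocation via the SageMaker runtime (needs AWS credentials)."""
    import boto3
    rt = boto3.client("sagemaker-runtime")
    resp = rt.invoke_endpoint(
        EndpointName=endpoint_name,        # placeholder: your deployed endpoint
        ContentType="application/json",
        Body=body,
    )
    return json.loads(resp["Body"].read())
```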
Support
Vendor support
Send an email to support@compactif.ai with a description of your issue.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities successfully use the products and features provided by Amazon Web Services.