
Overview
SuperNova-Medius is the result of a cross-architecture distillation pipeline that combines knowledge from both the Qwen2.5-72B-Instruct and Llama-3.1-405B-Instruct models. By leveraging the strengths of these two distinct architectures, SuperNova-Medius achieves high-quality instruction following and complex reasoning capabilities in a mid-sized, resource-efficient form.
SuperNova-Medius performs exceptionally well in instruction-following (IFEval) and complex reasoning tasks (BBH), demonstrating its capability to handle a variety of real-world scenarios. It outperforms Qwen2.5-14B and SuperNova-Lite in multiple benchmarks, making it a powerful yet efficient choice for high-quality generative AI applications.
Highlights
- Customer Support: With its robust instruction-following and dialogue management capabilities, SuperNova-Medius can handle complex customer interactions, reducing the need for human intervention.
- Content Creation: The model’s advanced language understanding and generation abilities make it ideal for creating high-quality, coherent content across diverse domains.
- Technical Assistance: SuperNova-Medius has a deep reservoir of technical knowledge, making it an excellent assistant for programming, technical documentation, and other expert-level content creation.
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.g5.12xlarge Inference (Real-Time), Recommended | Model inference on the ml.g5.12xlarge instance type, real-time mode | $0.00 |
| ml.p3.8xlarge Inference (Batch), Recommended | Model inference on the ml.p3.8xlarge instance type, batch mode | $0.00 |
| ml.g6.12xlarge Inference (Real-Time) | Model inference on the ml.g6.12xlarge instance type, real-time mode | $0.00 |
| ml.p4de.24xlarge Inference (Real-Time) | Model inference on the ml.p4de.24xlarge instance type, real-time mode | $0.00 |
| ml.g6.24xlarge Inference (Real-Time) | Model inference on the ml.g6.24xlarge instance type, real-time mode | $0.00 |
| ml.p4d.24xlarge Inference (Real-Time) | Model inference on the ml.p4d.24xlarge instance type, real-time mode | $0.00 |
| ml.p5.48xlarge Inference (Real-Time) | Model inference on the ml.p5.48xlarge instance type, real-time mode | $0.00 |
| ml.g5.24xlarge Inference (Real-Time) | Model inference on the ml.g5.24xlarge instance type, real-time mode | $0.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $0.00 |
Vendor refund policy
This product is offered free of charge. If you have any questions, please contact us for clarification.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
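As a minimal sketch (not the vendor's official instructions; the model package ARN, endpoint name, and instance type below are placeholders you would replace with the values from your own subscription), one way to create and deploy a model from the subscribed model package with the SageMaker Python SDK looks like this:

```python
# Sketch only: deploy the subscribed model package as a real-time endpoint.
# The ARN and endpoint name are placeholders, not values from this listing.
import sagemaker
from sagemaker import ModelPackage, get_execution_role

session = sagemaker.Session()
role = get_execution_role()  # IAM role with SageMaker permissions

# Placeholder ARN -- copy the real one from your AWS Marketplace subscription.
model_package_arn = (
    "arn:aws:sagemaker:us-east-1:123456789012:model-package/supernova-medius-example"
)

model = ModelPackage(
    role=role,
    model_package_arn=model_package_arn,
    sagemaker_session=session,
)

# Deploy on one of the supported instance types listed in the pricing table.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",
    endpoint_name="supernova-medius",  # placeholder endpoint name
)
```

For production use, the vendor's sample notebook linked under Support remains the recommended path, since it encodes the intended configuration.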
Version release notes
This version is configured for 4-GPU and 8-GPU instances in the g5, g6, p4, and p5 families. The context size is 32K tokens, and the OpenAI Messages API is enabled.
Additional details
Inputs
- Summary
You can invoke the model using the OpenAI Messages API. Please see the sample notebook for details; a request sketch follows the input table below.
- Input MIME type
- application/json, application/jsonlines
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| OpenAI Messages API | Please see sample notebook. | Type: FreeText | Yes |
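As an illustrative sketch only (the authoritative request schema is in the vendor's sample notebook; the endpoint name and payload fields below are assumptions based on the generic OpenAI Messages format), a real-time invocation with boto3 might look like this:

```python
# Sketch: send an OpenAI Messages-style JSON payload to the deployed endpoint.
# The endpoint name and payload fields are assumptions, not taken from this listing.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of model distillation."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = runtime.invoke_endpoint(
    EndpointName="supernova-medius",   # placeholder endpoint name
    ContentType="application/json",    # matches the supported input MIME type
    Body=json.dumps(payload),
)

print(json.loads(response["Body"].read()))
```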
Resources
Vendor resources
Support
Vendor support
IMPORTANT INFORMATION: Once you have subscribed to the model, we strongly recommend that you deploy it with our sample notebook at https://github.com/arcee-ai/aws-samples/blob/main/model_package_notebooks/sample-notebook-supernova-medius-on-sagemaker.ipynb. This is the best way to guarantee proper configuration.
Contact: julien@arcee.ai
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.