
Overview
Arcee-Lite was developed by Arcee.ai as part of the DistillKit open-source project. It is a distillation of the Phi-3-medium 14B model into a Qwen2-1.5B model. Despite its small size, Arcee-Lite demonstrates impressive performance, particularly on the MMLU (Massive Multitask Language Understanding) benchmark. It has a 32K-token context size.
IMPORTANT INFORMATION: Once you have subscribed to the model, we strongly recommend that you deploy it with our sample notebook at https://github.com/arcee-ai/aws-samples/blob/main/model_package_notebooks/sample-notebook-arcee-lite-on-sagemaker.ipynb .
Highlights
- Arcee-Lite is suitable for a wide range of applications where a balance between model size and performance is crucial:
  * Embedded systems
  * Mobile applications
  * Edge computing
  * Resource-constrained environments
- Arcee-Lite showcases remarkable capabilities for its size:
  * Achieves a 55.93 score on the MMLU benchmark
  * Demonstrates exceptional performance across various tasks
- The model generates over 100 tokens per second on an ml.g5.2xlarge instance.
Details
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.c6i.4xlarge Inference (Real-Time) Recommended | Model inference on the ml.c6i.4xlarge instance type, real-time mode | $0.00 |
| ml.p3.8xlarge Inference (Batch) Recommended | Model inference on the ml.p3.8xlarge instance type, batch mode | $0.00 |
| ml.c7i.4xlarge Inference (Real-Time) | Model inference on the ml.c7i.4xlarge instance type, real-time mode | $0.00 |
| ml.g6.16xlarge Inference (Real-Time) | Model inference on the ml.g6.16xlarge instance type, real-time mode | $0.00 |
| ml.g6.2xlarge Inference (Real-Time) | Model inference on the ml.g6.2xlarge instance type, real-time mode | $0.00 |
| ml.g5.xlarge Inference (Real-Time) | Model inference on the ml.g5.xlarge instance type, real-time mode | $0.00 |
| ml.g5.8xlarge Inference (Real-Time) | Model inference on the ml.g5.8xlarge instance type, real-time mode | $0.00 |
| ml.g6.4xlarge Inference (Real-Time) | Model inference on the ml.g6.4xlarge instance type, real-time mode | $0.00 |
| ml.g5.4xlarge Inference (Real-Time) | Model inference on the ml.g5.4xlarge instance type, real-time mode | $0.00 |
| ml.g6.8xlarge Inference (Real-Time) | Model inference on the ml.g6.8xlarge instance type, real-time mode | $0.00 |
Vendor refund policy
This product is offered free of charge. If you have any questions, please contact us for clarification.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
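Beyond the sample notebook, the model package can also be deployed programmatically. The sketch below uses the low-level boto3 SageMaker API, under stated assumptions: the model package ARN and IAM role ARN are placeholders you obtain after subscribing, and the endpoint, config, and instance names are hypothetical.

```python
# Sketch of deploying a subscribed Marketplace model package with boto3.
# The ARNs and resource names below are placeholders, not real values.

def build_model_spec(model_name, model_package_arn, role_arn):
    """Build the create_model request body for a Marketplace model package."""
    return {
        "ModelName": model_name,
        "PrimaryContainer": {"ModelPackageName": model_package_arn},
        "ExecutionRoleArn": role_arn,
        # Marketplace model packages run with network isolation enabled.
        "EnableNetworkIsolation": True,
    }

def deploy(model_package_arn, role_arn, instance_type="ml.g5.2xlarge"):
    """Create the model, an endpoint config, and a real-time endpoint."""
    import boto3  # imported here: requires AWS credentials to actually run
    sm = boto3.client("sagemaker")
    sm.create_model(**build_model_spec("arcee-lite", model_package_arn, role_arn))
    sm.create_endpoint_config(
        EndpointConfigName="arcee-lite-config",
        ProductionVariants=[{
            "VariantName": "AllTraffic",
            "ModelName": "arcee-lite",
            "InitialInstanceCount": 1,
            "InstanceType": instance_type,
        }],
    )
    sm.create_endpoint(
        EndpointName="arcee-lite",
        EndpointConfigName="arcee-lite-config",
    )
```

The sample notebook linked above remains the recommended path; this sketch is useful when deployment must be scripted into existing infrastructure automation.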
Version release notes
This version is configured for single-GPU instances of the g5 and g6 families, as well as CPU instances of the c6i and c7i families. Context size is 4K tokens and the OpenAI Messages API is enabled.
Additional details
Inputs
- Summary
You can invoke the model using the OpenAI Messages API. Please see the sample notebook for details.
- Input MIME type
- application/json, application/jsonlines
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| OpenAI Messages API | Please see the sample notebook. | Type: FreeText | Yes |
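To illustrate the input format, here is a minimal sketch of invoking a deployed real-time endpoint with an OpenAI Messages-style JSON body. The endpoint name and the generation parameters (`max_tokens`, `temperature`) are assumptions; the sample notebook is the authoritative reference for the exact request schema.

```python
# Sketch of invoking an Arcee-Lite real-time endpoint with an OpenAI
# Messages-style payload. Endpoint name and parameters are placeholders.
import json

def build_messages_payload(user_prompt, max_tokens=256, temperature=0.7):
    """Build an OpenAI Messages API request body (assumed parameter names)."""
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def invoke(endpoint_name, prompt):
    """Send the payload to the endpoint and parse the JSON response."""
    import boto3  # imported here: requires AWS credentials and a live endpoint
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",  # matches the supported input MIME type
        Body=json.dumps(build_messages_payload(prompt)),
    )
    return json.loads(response["Body"].read())
```

For batch transform, the `application/jsonlines` MIME type implies one such JSON body per line of the input file.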
Resources
Vendor resources
Support
Vendor support
IMPORTANT INFORMATION: Once you have subscribed to the model, we strongly recommend deploying it with the sample notebook linked above. This is the best way to guarantee proper configuration.
Bugs, questions, and feature requests: please open an issue in the aws-samples repository on GitHub.
Contact: julien@arcee.ai
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.