
Overview
Arcee Spark offers a 32K-token context window. Initialized from Qwen2, it underwent a sophisticated training process:
- Fine-tuned on 1.8 million samples
- Merged with Qwen2-7B-Instruct using Arcee's mergekit
- Further refined using Direct Preference Optimization (DPO)
This meticulous process results in exceptional performance, with Arcee Spark achieving the highest score on MT-Bench for models of its size, outperforming even GPT-3.5 on many tasks.
IMPORTANT INFORMATION: Once you have subscribed to the model, we strongly recommend that you deploy it with our sample notebook at https://github.com/arcee-ai/aws-samples/blob/main/model_package_notebooks/sample-notebook-arcee-spark-on-sagemaker.ipynb
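If you prefer to script the deployment rather than run the notebook, a minimal sketch with the SageMaker Python SDK might look like the following. The model package ARN and endpoint name are placeholders (assumptions); the sample notebook above remains the authoritative reference.

```python
import sagemaker
from sagemaker import ModelPackage

# Assumptions: you have subscribed to Arcee Spark and copied the model package
# ARN for your region; get_execution_role() is used from a SageMaker notebook.
session = sagemaker.Session()
role = sagemaker.get_execution_role()

model = ModelPackage(
    role=role,
    model_package_arn="arn:aws:sagemaker:<region>:<account>:model-package/<arcee-spark-package>",
    sagemaker_session=session,
)

# Deploy on one of the recommended single-GPU instance types (see Pricing).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    endpoint_name="arcee-spark",
)
```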
Highlights
- Arcee-Spark excels across a wide range of language tasks, demonstrating particular strength in:
  * Reasoning: Solving complex problems and drawing logical conclusions.
  * Creative Writing: Generating engaging and original content across various genres.
  * Coding: Assisting with programming tasks, from code generation to debugging.
  * General Language Understanding: Comprehending and generating human-like text in diverse contexts.
- Arcee-Spark can be applied to various business tasks:
  * Customer Service: Implement sophisticated chatbots and virtual assistants.
  * Content Creation: Generate high-quality written content for marketing and documentation.
  * Software Development: Accelerate coding processes and improve code quality.
  * Data Analysis: Enhance data interpretation and generate insightful reports.
  * Research and Development: Assist in literature reviews and hypothesis generation.
  * Legal and Compliance: Automate contract analysis and regulatory compliance checks.
Details

Features and programs
Financing for AWS Marketplace purchases
Pricing
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.p3.8xlarge Inference (Batch), Recommended | Model inference on the ml.p3.8xlarge instance type, batch mode | $0.00 |
| ml.g5.2xlarge Inference (Real-Time), Recommended | Model inference on the ml.g5.2xlarge instance type, real-time mode | $0.00 |
| ml.p3.16xlarge Inference (Batch) | Model inference on the ml.p3.16xlarge instance type, batch mode | $0.00 |
| ml.g6.16xlarge Inference (Real-Time) | Model inference on the ml.g6.16xlarge instance type, real-time mode | $0.00 |
| ml.g6.2xlarge Inference (Real-Time) | Model inference on the ml.g6.2xlarge instance type, real-time mode | $0.00 |
| ml.g5.xlarge Inference (Real-Time) | Model inference on the ml.g5.xlarge instance type, real-time mode | $0.00 |
| ml.g5.8xlarge Inference (Real-Time) | Model inference on the ml.g5.8xlarge instance type, real-time mode | $0.00 |
| ml.g6.4xlarge Inference (Real-Time) | Model inference on the ml.g6.4xlarge instance type, real-time mode | $0.00 |
| ml.g5.4xlarge Inference (Real-Time) | Model inference on the ml.g5.4xlarge instance type, real-time mode | $0.00 |
| ml.g6.8xlarge Inference (Real-Time) | Model inference on the ml.g6.8xlarge instance type, real-time mode | $0.00 |
Vendor refund policy
This product is offered for free. If you have any questions, please contact us for clarification.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
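For batch processing, the same model package can drive a SageMaker batch transform job. The sketch below is an illustration only: the model package ARN, S3 bucket, and prefixes are placeholders, the input is assumed to be one JSON request per line (application/jsonlines), and the instance type is taken from the batch dimension in the Pricing table.

```python
import sagemaker
from sagemaker import ModelPackage

# Assumptions: placeholder ARN, bucket, and prefixes.
session = sagemaker.Session()
role = sagemaker.get_execution_role()

model = ModelPackage(
    role=role,
    model_package_arn="arn:aws:sagemaker:<region>:<account>:model-package/<arcee-spark-package>",
    sagemaker_session=session,
)

# Batch transform: one request per line in the JSON Lines input file.
transformer = model.transformer(
    instance_count=1,
    instance_type="ml.p3.8xlarge",  # batch dimension listed in Pricing
    output_path="s3://<your-bucket>/arcee-spark/output/",
    strategy="SingleRecord",
)

transformer.transform(
    data="s3://<your-bucket>/arcee-spark/input/requests.jsonl",
    content_type="application/jsonlines",
    split_type="Line",
)
transformer.wait()
```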
Version release notes
This version is configured for single-GPU instances of the g5 and g6 families. The context size is 4K tokens and the OpenAI Messages API is enabled.
Additional details
Inputs
- Summary
You can invoke the model using the OpenAI Messages API. Please see the sample notebook for details; a minimal invocation sketch also follows the input table below.
- Input MIME type
- application/json, application/jsonlines
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| OpenAI Messages API | Please see the sample notebook. | Type: FreeText | Yes |
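As a rough sketch of what a request might look like, the following calls a deployed endpoint with the SageMaker runtime client. It assumes the container accepts and returns OpenAI chat-completions style JSON (field names such as `max_tokens` and the `choices` response structure are assumptions), and that "arcee-spark" is the endpoint name chosen at deployment; the sample notebook is the reference for the exact payload.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Assumptions: placeholder endpoint name; OpenAI Messages-style request body.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me three taglines for an open-source 7B language model."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = runtime.invoke_endpoint(
    EndpointName="arcee-spark",
    ContentType="application/json",
    Body=json.dumps(payload),
)

result = json.loads(response["Body"].read())
print(result["choices"][0]["message"]["content"])
```

Keep prompts and generated output within the 4K-token context configured for this version.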
Resources
Vendor resources
Support
Vendor support
IMPORTANT INFORMATION: Once you have subscribed to the model, we strongly recommend that you deploy it with our sample notebook at https://github.com/arcee-ai/aws-samples/blob/main/model_package_notebooks/sample-notebook-arcee-spark-on-sagemaker.ipynb. This is the best way to guarantee proper configuration.
Bugs, questions, feature requests: please create an issue in the aws-samples repository on GitHub.
Contact: julien@arcee.ai
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.