
Overview
This is a quantized version of Solar Pro, a cutting-edge LLM engineered for enterprise needs that delivers exceptional performance on a single GPU. Its superior instruction-following capability yields outstanding accuracy in understanding and executing complex instructions.
Highlights
- **Advanced Structured Text Understanding**: Excels in processing structured formats such as HTML, Markdown, and tables.
- **Leading Multilingual Performance**: Achieves top-tier results in Korean, English, and Japanese General Intelligence among single-GPU models.
- **Domain-Specific Expertise**: Demonstrates unparalleled knowledge in critical enterprise domains, including Finance, Healthcare, and Law, among models that fit on a single GPU.
Details

Pricing
Free trial
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.12xlarge Inference (Batch), Recommended | Model inference on the ml.m5.12xlarge instance type, batch mode | $0.00 |
| ml.g5.12xlarge Inference (Real-Time), Recommended | Model inference on the ml.g5.12xlarge instance type, real-time mode | $1.60 |
| ml.g4dn.12xlarge Inference (Real-Time) | Model inference on the ml.g4dn.12xlarge instance type, real-time mode | $1.60 |
| ml.p4d.24xlarge Inference (Real-Time) | Model inference on the ml.p4d.24xlarge instance type, real-time mode | $6.40 |
Vendor refund policy
Refunds are not currently supported.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
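As a rough sketch of how a Marketplace model package like this one can be turned into a real-time endpoint, the snippet below uses the boto3 SageMaker API. The model package ARN, endpoint name, and IAM role ARN are placeholders you must replace with values from your own account and subscription:

```python
# Hypothetical identifiers -- substitute values from your AWS account
# and the Marketplace subscription page.
MODEL_PACKAGE_ARN = "arn:aws:sagemaker:us-east-1:123456789012:model-package/solar-pro-example"
ENDPOINT_NAME = "solar-pro-realtime"
ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"


def endpoint_config(instance_type: str = "ml.g5.12xlarge") -> dict:
    """Build the production-variant config for a real-time endpoint.

    ml.g5.12xlarge is the recommended real-time instance type from the
    pricing table above.
    """
    return {
        "VariantName": "AllTraffic",
        "ModelName": ENDPOINT_NAME,
        "InitialInstanceCount": 1,
        "InstanceType": instance_type,
    }


def deploy() -> None:
    """Create the model, endpoint config, and endpoint.

    Requires AWS credentials and an active Marketplace subscription;
    not executed as part of this example.
    """
    import boto3  # deferred so the module imports without boto3 installed

    sm = boto3.client("sagemaker")
    sm.create_model(
        ModelName=ENDPOINT_NAME,
        PrimaryContainer={"ModelPackageName": MODEL_PACKAGE_ARN},
        ExecutionRoleArn=ROLE_ARN,
        # Marketplace model packages typically require network isolation.
        EnableNetworkIsolation=True,
    )
    sm.create_endpoint_config(
        EndpointConfigName=ENDPOINT_NAME,
        ProductionVariants=[endpoint_config()],
    )
    sm.create_endpoint(EndpointName=ENDPOINT_NAME, EndpointConfigName=ENDPOINT_NAME)
```

Once `create_endpoint` reports the endpoint as `InService`, you can send it requests with the payload format described under Inputs below.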
Version release notes
New version
Additional details
Inputs
- Summary
We support a request payload that is compatible with OpenAI's chat completion endpoint.
- Input MIME type
- application/json
Input data descriptions
The following table describes supported input data fields for real-time inference and batch transform.
| Field name | Description | Constraints | Required |
|---|---|---|---|
| model | Name of the model. Always 'solar-pro'. | Type: FreeText | No |
| messages | List of messages, each containing a role and content. Role must be one of [system, user, assistant]. | Type: FreeText | Yes |
| frequency_penalty | A value between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text, reducing the model's likelihood of repeating the same content verbatim. | Default: 0.0; Type: Continuous; Minimum: -2.0; Maximum: 2.0 | No |
| presence_penalty | A value between -2.0 and 2.0. Positive values penalize new tokens based on their presence in the existing text, increasing the model's likelihood of introducing new topics. | Default: 0.0; Type: Continuous; Minimum: -2.0; Maximum: 2.0 | No |
| max_tokens | The maximum number of tokens that can be generated in the chat completion. Solar Pro supports a maximum context of 4,096 tokens, shared between input and generated tokens. | Default: 16; Type: Integer; Minimum: 0; Maximum: 4096 | No |
| temperature | The sampling temperature, ranging from 0 to 2. Higher values (e.g., 0.8) increase randomness in the output, while lower values (e.g., 0.2) produce more focused and deterministic results. | Default: 1.0; Type: Continuous; Minimum: 0.0; Maximum: 1.0 | No |
| top_p | Nucleus sampling, an alternative to temperature sampling: only the tokens comprising the top p probability mass are considered. For example, a top_p of 0.1 means only the tokens making up the top 10% probability mass are considered. | Default: 1.0; Type: Continuous; Minimum: 0.0; Maximum: 1.0 | No |
| stream | Whether to stream the response. | Default: false; Type: Categorical; Allowed values: true, false | No |
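Putting the fields above together, here is a minimal sketch of building an OpenAI-compatible request body and sending it to a deployed real-time endpoint. The endpoint name, system prompt, and the `max_tokens` default of 512 are illustrative choices, not part of the listing (the service default of 16 is usually too small for chat):

```python
import json


def chat_request(user_message: str, *, stream: bool = False,
                 max_tokens: int = 512, temperature: float = 1.0) -> bytes:
    """Serialize an OpenAI-compatible chat completion request.

    Input plus generated tokens must stay within the 4,096-token context.
    """
    body = {
        "model": "solar-pro",
        "messages": [
            # Role must be one of system, user, assistant.
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
        "temperature": temperature,
        "stream": stream,
    }
    return json.dumps(body).encode("utf-8")


def invoke(endpoint_name: str, payload: bytes) -> dict:
    """Invoke a deployed real-time endpoint (requires AWS credentials)."""
    import boto3  # deferred so the module imports without boto3 installed

    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=payload,
    )
    return json.loads(resp["Body"].read())
```

For example, `invoke("solar-pro-realtime", chat_request("Summarize this table."))` would return a chat-completion-style response dictionary.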
Resources
Vendor resources
Support
Vendor support
Contact us for model fine-tuning requests.
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.