
Overview
Solar Pro 2 is Upstage's latest frontier-scale LLM. With just 31B parameters, it delivers top-tier performance through world-class multilingual support, advanced reasoning, and real-world tool use. Especially in Korean, it outperforms much larger models across critical benchmarks. Built for the next generation of practical LLMs, Solar Pro 2 proves that smaller models can still lead.
Solar Pro 2 supports users and businesses worldwide with its exceptional multilingual processing capabilities. It not only excels in Korean but also demonstrates impressive performance in major language benchmarks for English and Japanese, breaking down global communication barriers. See the table below for Solar Pro 2's outstanding achievements across various key language benchmarks.
Solar Pro 2 is optimized for intelligent interaction with external tools. Beyond merely understanding language, it acts much like a human, communicating with its environment, thinking autonomously, and executing necessary functions. The example below illustrates how Solar Pro 2 leverages tools in a business environment to transform complex requests into actionable outcomes.
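As a minimal, illustrative sketch of that workflow (assuming the OpenAI-compatible request format described under Inputs below; the base URL, model identifier, and the get_exchange_rate tool are placeholders, not part of this listing), a tool-calling request might look like this:

```python
# Sketch of tool calling against an OpenAI-compatible chat completions endpoint.
# The base URL, API key, model name, and get_exchange_rate tool are illustrative
# assumptions, not vendor-provided values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.upstage.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_exchange_rate",  # hypothetical business tool
            "description": "Look up the current exchange rate between two currencies.",
            "parameters": {
                "type": "object",
                "properties": {
                    "base": {"type": "string", "description": "Base currency code"},
                    "quote": {"type": "string", "description": "Quote currency code"},
                },
                "required": ["base", "quote"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="solar-pro2",  # illustrative model identifier
    messages=[{"role": "user", "content": "What is 1,000 USD in KRW today?"}],
    tools=tools,
)

# When the model decides a tool is needed, it returns a structured tool call
# (name plus JSON arguments) instead of plain text.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```

Your application executes the returned tool call, appends the result as a tool message, and asks the model to produce the final answer.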
Highlights
- **Frontier performance in a compact 31B model**: Delivers top-tier reasoning and problem-solving in a small footprint, competing with much larger models despite having only 31 billion parameters.
- **Best-in-class Korean & multilingual strength**: Shows exceptional Korean fluency and strong benchmark performance across Korean, English, and Japanese, matching or surpassing much larger top-tier models on key tasks.
- **Advanced reasoning and tool use for agent-like workflows**: Optimized for multi-step reasoning, complex math/code tasks, and seamless interaction with external tools to support real-world, agent-style enterprise workflows.
Details
Pricing
Free trial
| Dimension | Description | Cost/host/hour |
|---|---|---|
| ml.m5.12xlarge Inference (Batch), Recommended | Model inference on the ml.m5.12xlarge instance type, batch mode | $0.00 |
| ml.p5.48xlarge Inference (Real-Time), Recommended | Model inference on the ml.p5.48xlarge instance type, real-time mode | $12.00 |
| ml.p4d.24xlarge Inference (Real-Time) | Model inference on the ml.p4d.24xlarge instance type, real-time mode | $6.40 |
Vendor refund policy
We do not currently offer refunds.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
Amazon SageMaker model
An Amazon SageMaker model package is a pre-trained machine learning model ready to use without additional training. Use the model package to create a model on Amazon SageMaker for real-time inference or batch processing. Amazon SageMaker is a fully managed platform for building, training, and deploying machine learning models at scale.
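As a rough sketch (not vendor-provided), deploying this package for real-time inference with the SageMaker Python SDK could look like the following; the model package ARN, IAM role, and endpoint name are placeholders, and the instance type is taken from the pricing table above:

```python
# Sketch of deploying the model package for real-time inference with the
# SageMaker Python SDK. The ARN, IAM role, and endpoint name are placeholders.
import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = "arn:aws:iam::<account-id>:role/<sagemaker-execution-role>"  # placeholder

model = ModelPackage(
    role=role,
    model_package_arn="arn:aws:sagemaker:<region>:<account-id>:model-package/<solar-pro-2-package>",  # placeholder
    sagemaker_session=session,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.p5.48xlarge",  # recommended real-time instance from the pricing table
    endpoint_name="solar-pro-2-endpoint",  # placeholder endpoint name
)
```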
Version release notes
We have enhanced the stability of containers during the initial bootstrapping process.
Additional details
Inputs
- Summary
We support request payloads that are compatible with OpenAI's chat completions endpoint. Detailed input parameter descriptions are available at https://console.upstage.ai/api/chat . A sample invocation is sketched after this list.
- Input MIME type
- application/json
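For illustration only, invoking a deployed endpoint with an OpenAI-style chat completion payload might look like this; the endpoint name and parameter values are placeholders, and the linked documentation above is the reference for the full parameter list:

```python
# Sketch of invoking a deployed SageMaker endpoint with an OpenAI-style
# chat completion payload. The endpoint name and parameter values are placeholders.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the key benefits of Solar Pro 2 in Korean."},
    ],
    "max_tokens": 512,
    "temperature": 0.7,
}

response = runtime.invoke_endpoint(
    EndpointName="solar-pro-2-endpoint",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)

print(json.loads(response["Body"].read()))
```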
Resources
Vendor resources
Support
Vendor support
Contact us for any support or requests: https://www.upstage.ai/contact-us?utm_source=marketplace
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.