Overview
This is a self-hosted deployment of the Qwen 3.6 35B-A3B large language model. It runs on a single GPU-powered EC2 instance, allowing you to keep your data private and generate unlimited tokens. Access to the model is over HTTPS, ensuring data is encrypted in transit at all times. Highlights of the Qwen 3.6 35B model include:
- Utilizes a sparse Mixture-of-Experts (MoE) architecture with 35 billion total parameters and only 3 billion active parameters to deliver flagship-level performance with massive efficiency.
- Features exceptional agentic coding capabilities that surpass much larger dense models and excel at repository-level reasoning and terminal-based workflows.
- Introduces the "preserve thinking" feature, which retains reasoning traces from all preceding conversation turns to improve consistency in complex, multi-step tasks.
- Native multimodal support enables high-performance perception and reasoning across text, images, and video, particularly in tasks requiring spatial intelligence.
- Supports a standard context window of 262K tokens, which can be extended up to 1 million tokens using specialized techniques like YaRN.
- Incorporates Multi-Token Prediction (MTP) to enable speculative decoding, significantly increasing inference speed for structured data and code generation.
- Provides flexible interaction through a native "Thinking Mode" for deep reasoning and a "Non-Thinking Mode" for instant, direct responses.
- Fully open-source under the Apache 2.0 license, allowing for unrestricted commercial use and local deployment on consumer-grade hardware.
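To illustrate how the large context window and the Thinking/Non-Thinking modes described above might be exercised through Ollama's HTTP API, here is a minimal sketch. The `num_ctx` option, `think` flag, and `stream` field are standard Ollama request fields, but the model tag `qwen3.6:35b-a3b` and whether this particular model honors every option are assumptions based on this listing.

```python
import json

# Hypothetical model tag taken from this listing; the actual tag may differ.
MODEL = "qwen3.6:35b-a3b"

def generate_payload(prompt: str, thinking: bool, num_ctx: int = 262144) -> str:
    """Build a JSON body for Ollama's /api/generate endpoint.

    `think` toggles the model's reasoning mode, and `options.num_ctx`
    requests a larger context window (the full 262K tokens here).
    """
    return json.dumps({
        "model": MODEL,
        "prompt": prompt,
        "think": thinking,              # Thinking Mode vs. instant responses
        "options": {"num_ctx": num_ctx},
        "stream": False,                # return one complete JSON response
    })

# Deep-reasoning request:
body = generate_payload("Summarize this repository.", thinking=True)
print(body)
```

POSTing this body to `https://<elastic-ip>/api/generate` (as shown in the usage instructions below) returns a JSON object whose `response` field contains the generated text.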
Highlights
- Data security, privacy, and confidentiality
- Predictable cost
- Unlimited usage of a dedicated model
Details
Pricing
Free trial
| Dimension | Cost/hour |
|---|---|
| g5.2xlarge (Recommended) | $0.09 |
Vendor refund policy
Refunds may be considered on a case-by-case basis. For inquiries, please contact us at support@salientengineering.com.
Legal
Vendor terms and conditions
Content disclaimer
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
Configured for production environments. Ollama is exposed on port 443.
Additional details
Usage instructions
- Deploy the EC2 instance and configure its Security Group to allow inbound traffic on ports 22 and 443 only from your trusted IP address(es).
- Access the Qwen 3.6 35B-A3B model via the Ollama service exposed on port 443. Example curl command: curl -X POST https://<elastic-ip>/api/generate -d '{"model":"qwen3.6:35b-a3b","prompt":"In one sentence, explain what a large language model is capable of."}'
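The same request can be issued from Python. The sketch below builds the request with the standard library; `<elastic-ip>` is a placeholder for your instance's Elastic IP, and depending on how the TLS certificate on the instance is provisioned (the listing does not say), you may need to supply a custom `ssl` context when sending it.

```python
import json
from urllib import request

# Replace <elastic-ip> with your instance's Elastic IP address.
ENDPOINT = "https://<elastic-ip>/api/generate"

def build_request(prompt: str, model: str = "qwen3.6:35b-a3b") -> request.Request:
    """Build a non-streaming generate request for the Ollama service."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return request.Request(
        ENDPOINT,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("In one sentence, explain what a large language model is capable of.")
print(req.full_url, req.get_method())
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON body whose `response` field contains the generated text.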
Resources
Vendor resources
Support
Vendor support
The Salient Engineering support team can be reached at: support@salientengineering.com
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel staffed 24x7x365 with experienced technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.