Five new Qwen models for coding agents and efficient reasoning are now available in Amazon SageMaker JumpStart

Posted on: Apr 21, 2026

Today, AWS announced the availability of Qwen3-Coder-Next, Qwen3-30B-A3B, Qwen3-30B-A3B-Thinking-2507, Qwen3-Coder-30B-A3B-Instruct, and Qwen3.5-4B in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These five models from Qwen bring specialized capabilities spanning agentic coding, efficient reasoning, extended thinking, and multimodal understanding, enabling customers to build sophisticated AI applications across diverse use cases on AWS infrastructure.

These models address different enterprise AI challenges with specialized capabilities:

Qwen3-Coder-Next excels at long-horizon reasoning, complex tool use, and recovery from execution failures, making it ideal for powering coding agents in CLI/IDE platforms.

Qwen3-30B-A3B uniquely supports seamless switching between thinking and non-thinking modes, making it well suited for general-purpose assistant tasks like multilingual dialogue, math reasoning, and tool calling.

Qwen3-30B-A3B-Thinking-2507 delivers significantly improved performance on complex reasoning tasks in math, science, and coding, with enhanced long-context understanding.

Qwen3-Coder-30B-A3B-Instruct is designed for agentic coding workflows with a custom function call format and repo-scale context understanding.

Qwen3.5-4B combines unified vision-language training with support for 201 languages, making it ideal for lightweight multimodal deployments.

With SageMaker JumpStart, customers can deploy any of these models with just a few clicks and begin running inference against a managed endpoint in their own AWS account.

To get started with these models, navigate to the Models section of SageMaker Studio or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
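As a rough sketch of the SDK path mentioned above, the snippet below deploys one of the new models with the SageMaker Python SDK's `JumpStartModel` class and sends a text-generation request. The model ID shown is hypothetical (look up the exact identifier for your region in the JumpStart model catalog), and the request payload follows the common JumpStart text-generation schema, which may differ for a given model version.

```python
import json

# Hypothetical JumpStart model ID for Qwen3-Coder-30B-A3B-Instruct --
# check the SageMaker JumpStart model catalog for the exact identifier.
MODEL_ID = "huggingface-llm-qwen3-coder-30b-a3b-instruct"


def build_payload(prompt: str, max_new_tokens: int = 512) -> dict:
    """Build a text-generation request in the common JumpStart JSON format."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": 0.2,
        },
    }


def deploy_and_invoke(prompt: str) -> str:
    """Deploy the model to a real-time endpoint and send one request.

    Requires AWS credentials and SageMaker permissions; deploying
    provisions billable infrastructure.
    """
    # Imported here so build_payload stays usable without the SDK installed.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id=MODEL_ID)
    predictor = model.deploy()  # creates a SageMaker real-time endpoint
    try:
        response = predictor.predict(build_payload(prompt))
    finally:
        predictor.delete_endpoint()  # avoid ongoing charges when done
    return json.dumps(response)
```

For example, `deploy_and_invoke("Refactor this function to remove duplication: ...")` would return the model's generated completion as a JSON string; remember that the endpoint incurs charges until it is deleted.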