Complex Reasoning
Multi-step logic, mathematical proofs, and intricate problem-solving.
Qwen3.5-397B-A17B is the flagship Qwen 3.5 model and the default model for this page: maximum reasoning power for the most demanding reasoning and long-form tasks. Try it free.
Starter prompts
Free to use on this site via OpenRouter.
Qwen3.5-397B-A17B is the largest public open-weight release in the Qwen 3.5 line. Its published scores are strong on reasoning, coding, search-agent, and multilingual tasks, while its hosted sibling Qwen3.5-Plus adds a 1M default context window and extra production features.
This model posts 87.8 on MMLU-Pro, 88.4 on GPQA, and 83.6 on LiveCodeBench v6.
397B total parameters, with roughly 17B activated per token through mixture-of-experts routing, provide an enormous knowledge base.
Full Apache 2.0 license — download, fine-tune, and deploy without restrictions.
How Qwen3.5-397B-A17B compares to nearby models in the Qwen family.
Mid-tier MoE model for deeper reasoning and agent tasks.
Flagship open-weight Qwen3.5 model, also the base model behind Qwen3.5-Plus.
Hosted version built on Qwen3.5-397B-A17B with additional tooling and a 1M context window.
Scores reference the Qwen3.5-397B-A17B base model.
Scores are from public model cards and the qwen.ai release page. Hosted models are labeled with their open-weight base.
Updated 2026-04-02
Use the flagship when you need the highest quality output and reasoning depth.
Multi-step logic, mathematical proofs, and intricate problem-solving.
Architecture design, complex refactoring, and multi-repo reasoning.
Process and reason about research papers, datasets, and technical content.
Business analysis, decision frameworks, and comprehensive project planning.
High-quality long-form creative content with nuance and depth.
When you need the absolute best score on evaluation benchmarks.
Common questions about the flagship model.
It is the largest public open-weight Qwen3.5 release and posts the strongest published benchmark scores among the open model cards on this site. Smaller models can still be faster, cheaper, or easier to operate.
It requires significant infrastructure — multiple high-end GPUs with large VRAM. Most users access it through cloud APIs or services like OpenRouter.
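A minimal sketch of calling the model through OpenRouter's OpenAI-compatible chat endpoint, using only the Python standard library. The model slug `qwen/qwen3.5-397b-a17b` is an assumption here; check the provider's model list for the exact identifier.

```python
import json
import os
import urllib.request

# Assumed model slug -- verify against OpenRouter's model catalog.
MODEL = "qwen/qwen3.5-397b-a17b"

def build_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Prove that the sum of two even integers is even.")

# Only send the request when an API key is actually configured.
api_key = os.environ.get("OPENROUTER_API_KEY")
if api_key:
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
```

The same payload shape works with any OpenAI-compatible SDK; only the base URL and key change.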
Qwen's published benchmark table places it alongside frontier closed-source models across reasoning, agent, and multilingual evaluations. The exact gap still depends on which benchmark and workflow you care about.
Qwen3.5-Plus offers strong all-around performance with simpler deployment. Use 397B when you specifically need the highest reasoning ceiling.
The full model's weights come to roughly 800 GB in BF16 (397B parameters at 2 bytes each); 4-bit quantized versions drop to a little over 200 GB and can run on 4-8 GPU setups. Cloud deployment via vLLM is the most common approach.
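The quantized sizing above can be sanity-checked with simple arithmetic. This sketch counts weights only and deliberately ignores KV cache, activations, and framework overhead, which add more on top:

```python
def weight_memory_gb(total_params: float, bits_per_param: int) -> float:
    """Rough weight-only memory estimate: parameters x bytes per parameter.

    Excludes KV cache, activations, and runtime overhead.
    """
    return total_params * (bits_per_param / 8) / 1e9

PARAMS = 397e9  # total parameters, including all experts

print(f"INT8: {weight_memory_gb(PARAMS, 8):.0f} GB")   # ~397 GB
print(f"INT4: {weight_memory_gb(PARAMS, 4):.1f} GB")   # ~198.5 GB
```

At 4-bit, the weights fit comfortably across 4-8 data-center GPUs with 80 GB each, which matches the multi-GPU setups described above.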
Qwen3.5-Plus is the hosted version built on 397B-A17B. Plus adds production tooling and a 1M context window. The 397B-A17B weights are the open-weight version you can download.
For simple Q&A, drafting, or template generation, smaller models like 9B or 27B are faster and cheaper. Use 397B-A17B when you need the deepest reasoning or best code quality.
Yes. All Qwen 3.5 models support function calling. The 397B-A17B checkpoint is especially strong when tool use depends on long multi-step reasoning.
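A hedged sketch of what a function-calling request could look like in the OpenAI-compatible tools format that most hosted Qwen endpoints accept. The `get_weather` tool and the model slug are hypothetical placeholders, not part of any published API:

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # placeholder tool for illustration
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# The model slug is an assumption -- substitute your provider's exact ID.
payload = {
    "model": "qwen/qwen3.5-397b-a17b",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

print(json.dumps(payload, indent=2))
```

When the model decides to call the tool, the response contains a `tool_calls` entry with JSON arguments; your code runs the function and sends the result back as a `tool` role message.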