Qwen3.5-397B-A17B — Flagship Model for Maximum Reasoning

Qwen3.5-397B-A17B is the flagship Qwen 3.5 model — maximum reasoning power for the most demanding tasks. Try it free.


Free to use on this site via OpenRouter.

Total Params: 397B
Active Params: 17B
Context: 262K native
License: Apache 2.0
Overview

The Top of the Qwen 3.5 Lineup

Qwen3.5-397B-A17B is the largest public open-weight release in the Qwen 3.5 line. Its published scores are strong on reasoning, coding, search-agent, and multilingual tasks, while its hosted sibling Qwen3.5-Plus adds a 1M default context window and extra production features.

Current Public Scores

This model posts 87.8 on MMLU-Pro, 88.4 on GPQA, and 83.6 on LiveCodeBench v6.

Massive Expert Pool

With 397B total parameters but only 17B active per token, the MoE router draws on an enormous pool of expert knowledge at a fraction of the dense compute cost.
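
As a toy illustration of the sparse expert routing an MoE model performs, here is a minimal top-k gating sketch (a simplified stand-in, not Qwen's actual router; the logits and expert count are made up):

```python
import math

def top_k_route(logits, k=2):
    """Toy MoE gate: softmax over expert logits, keep the top-k experts,
    and renormalize their weights so they sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

# One token's gate logits over 8 hypothetical experts:
# only the top-2 experts run, the rest stay idle.
routes = top_k_route([0.1, 2.3, -1.0, 0.4, 1.7, 0.0, -0.5, 0.2], k=2)
```

Only the selected experts execute for that token, which is why a 397B-parameter model can have the per-token cost of a ~17B dense model.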

Open Weight

Full Apache 2.0 license — download, fine-tune, and deploy without restrictions.

Qwen3.5-397B-A17B Benchmarks

How Qwen3.5-397B-A17B compares to nearby models in the Qwen family.

Qwen3.5-122B-A10B

Mid-tier MoE model for deeper reasoning and agent tasks.

Updated 2026-04-02
MMLU-Pro: 86.7
GPQA / GPQA-family: 86.6
LiveCodeBench v6: 78.9

Qwen3.5-397B-A17B

Flagship open-weight Qwen3.5 model, also the base model behind Qwen3.5-Plus.

Updated 2026-04-02
MMLU-Pro: 87.8
GPQA / GPQA-family: 88.4
LiveCodeBench v6: 83.6

Qwen3.5-Plus

Hosted

Hosted version built on Qwen3.5-397B-A17B with additional tooling and a 1M context window.

Scores reference the Qwen3.5-397B-A17B base model.

Updated 2026-04-02
MMLU-Pro: 87.8
GPQA / GPQA-family: 88.4
LiveCodeBench v6: 83.6

Scores are from public model cards and the qwen.ai release page. Hosted models are labeled with their open-weight base.

Use Cases

What Qwen3.5-397B-A17B Is Best For

Use the flagship when you need the highest quality output and reasoning depth.

Complex Reasoning

Multi-step logic, mathematical proofs, and intricate problem-solving.

Expert Coding

Architecture design, complex refactoring, and multi-repo reasoning.

Scientific Analysis

Process and reason about research papers, datasets, and technical content.

Strategic Planning

Business analysis, decision frameworks, and comprehensive project planning.

Creative Writing

High-quality long-form creative content with nuance and depth.

Benchmark Tasks

When you need the absolute best score on evaluation benchmarks.

FAQ

Qwen3.5-397B-A17B FAQ

Common questions about the flagship model.

1. Is 397B the best Qwen 3.5 model?

It is the largest public open-weight Qwen3.5 release and posts the strongest published benchmark scores among the open model cards on this site. Smaller models can still be faster, cheaper, or easier to operate.

2. Can I run it locally?

It requires significant infrastructure — multiple high-end GPUs with large VRAM. Most users access it through cloud APIs or services like OpenRouter.
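
For API access, a minimal sketch of an OpenAI-compatible chat request to OpenRouter's `https://openrouter.ai/api/v1/chat/completions` endpoint follows; the model slug below is an assumption, so check OpenRouter's model list for the real one:

```python
import json

# Hypothetical slug -- verify against OpenRouter's model catalog.
MODEL = "qwen/qwen3.5-397b-a17b"

def build_chat_request(prompt, max_tokens=1024):
    """Assemble an OpenAI-compatible chat payload. Sending it requires an
    'Authorization: Bearer <api key>' header against
    https://openrouter.ai/api/v1/chat/completions."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Prove that the square root of 2 is irrational.")
print(json.dumps(payload, indent=2))
```

Because the payload format is OpenAI-compatible, any OpenAI-style client library should work by pointing its base URL at OpenRouter.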

3. How does it compare to closed-source models?

Qwen's published benchmark table places it alongside frontier closed-source models across reasoning, agent, and multilingual evaluations. The exact gap still depends on which benchmark and workflow you care about.

4. When should I use Plus instead?

Qwen3.5-Plus is the hosted build of this same model, with managed deployment and a 1M context window, so it is usually the simpler choice. Use the open 397B-A17B weights when you need to self-host, fine-tune, or control the serving stack yourself.

5. How much VRAM does 397B-A17B need?

At roughly 2 bytes per parameter, the full 397B model weighs about 800 GB in BF16. Quantized versions can run on 4-8 GPU setups. Cloud deployment via vLLM is the most common approach.
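
The back-of-envelope weight footprint is just parameter count times bytes per parameter; a rough sketch (the dtype table covers common formats, and real deployments also need KV cache and activation memory on top):

```python
# Approximate storage per parameter for common weight formats.
BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "int4": 0.5}

def weight_gb(total_params, dtype):
    """Rough weight footprint in GB: params x bytes per param.
    Ignores KV cache and activations, which add more at long contexts."""
    return total_params * BYTES_PER_PARAM[dtype] / 1e9

for dtype in BYTES_PER_PARAM:
    print(f"{dtype}: {weight_gb(397e9, dtype):.0f} GB")
```

This is why BF16 serving needs a multi-node setup while 4-bit quantization brings the weights down to a range a single 4-8 GPU node can hold.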

6. Is 397B-A17B the same as Qwen3.5-Plus?

Qwen3.5-Plus is the hosted version built on 397B-A17B. Plus adds production tooling and a 1M context window. The 397B-A17B weights are the open-weight version you can download.

7. When is 397B-A17B overkill?

For simple Q&A, drafting, or template generation, smaller models like 9B or 27B are faster and cheaper. Use 397B-A17B when you need the deepest reasoning or best code quality.

8. Does 397B-A17B support tool calling?

Yes. All Qwen 3.5 models support function calling. The 397B-A17B checkpoint is especially strong when tool use depends on long multi-step reasoning.
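
A function-calling request follows the OpenAI-style tools schema that OpenAI-compatible Qwen endpoints generally accept; the weather tool and model slug below are hypothetical examples, not part of any real catalog:

```python
# Hypothetical tool definition in the OpenAI-style function-calling format.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def chat_with_tools(prompt, tools):
    """Request body letting the model decide when to call a tool."""
    return {
        "model": "qwen/qwen3.5-397b-a17b",  # hypothetical slug
        "messages": [{"role": "user", "content": prompt}],
        "tools": tools,
        "tool_choice": "auto",
    }

body = chat_with_tools("Do I need an umbrella in Lisbon today?", [get_weather_tool])
```

With `tool_choice` set to `auto`, the model returns either a normal reply or a tool call whose arguments your code executes before sending the result back in a follow-up message.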