Code review and refactor planning
Use it to inspect diffs, propose test coverage, or turn a rough patch idea into a concrete implementation plan.
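As an illustration of the diff-review use case, the sketch below only assembles a review prompt from a unified diff string; the diff content and prompt wording are invented for illustration, and no particular API or serving stack is assumed:

```python
# Illustrative only: build a code-review prompt from a unified diff.
# The diff below is a made-up example, not real project code.
diff = """\
--- a/app/auth.py
+++ b/app/auth.py
@@ -10,7 +10,7 @@
-    return user.password == supplied
+    return compare_digest(user.password_hash, hash_password(supplied))
"""

prompt = (
    "Review this diff. List: (1) correctness risks, "
    "(2) missing test coverage, (3) a step-by-step implementation plan.\n\n"
    f"```diff\n{diff}```"
)
print(prompt)
```

The same scaffold works for the other starter tasks: swap the numbered asks for "propose test cases" or "expand this into a migration plan" and keep the diff or spec as the attached context.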
Qwen3.6-27B is the open-weight dense model in the Qwen 3.6 lineup. Reach for it when you want flagship-level coding quality without jumping straight to the biggest hosted preview tier.
Qwen3.6-27B is the default model for this page: an open-weight dense Qwen 3.6 model that delivers strong coding quality without the heaviest hosted tier.
Starter prompts
This is the Qwen 3.6 page to open when you care about coding quality, open weights, and a denser model profile than the MoE path.
Qwen3.6-27B is not just a smaller sibling. The official release positions it as a strong open-weight dense model with unusually high coding quality for its size, which makes it interesting for both API usage and practical evaluation.
All parameters stay active per token, which makes the model easier to reason about than the MoE route.
Official messaging emphasizes flagship-level coding quality and benchmark strength, not just general chat improvements.
Useful when you want a serious Qwen 3.6 model you can evaluate, compare, and potentially run outside the hosted stack.
Best for teams that want strong coding and reasoning quality from an open-weight dense model, not just a generic preview demo.
Generate JSON checklists, acceptance criteria, migration steps, or release notes without losing format discipline.
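One common way to keep that format discipline is to pin the schema in the system message. This sketch assumes an OpenAI-compatible chat payload (the model name and schema here are illustrative placeholders, not official values) and only builds the request body, so it runs without a server:

```python
import json

# Hypothetical request body for an OpenAI-compatible chat endpoint.
# Model name and the JSON schema in the system message are placeholders.
payload = {
    "model": "qwen3.6-27b",
    "messages": [
        {"role": "system",
         "content": 'Reply with JSON only: {"steps": [string], "risks": [string]}'},
        {"role": "user",
         "content": "Draft migration steps for renaming the users.email column."},
    ],
    "temperature": 0.2,  # low temperature helps keep structured output stable
}

body = json.dumps(payload)
print(body)
```

Parsing the model's reply with `json.loads` and rejecting anything that fails gives you a cheap format check on every call.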
A better pick than a hosted-only model when you want a benchmark-worthy dense release to compare locally or in the lab.
The 262K native context is plenty for realistic product specs, API docs, and multi-part technical prompts.
Use it when you specifically want to compare the dense 27B route against the 35B-A3B MoE path.
A strong entry point when you want Qwen 3.6 quality without defaulting to the highest-cost hosted tier.
Quick answers about the new open-weight dense Qwen 3.6 model.
27B is a dense model. 35B-A3B is MoE. Dense models are simpler to reason about per token, while MoE models trade that for more total capacity at lower active compute.
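The trade-off is easy to put in numbers. Assuming the names mean what Qwen's naming convention usually implies (27B dense means roughly 27B parameters, all active per token; 35B-A3B means roughly 35B total with about 3B active per token, which is an inference from the "A3B" suffix, not an official spec):

```python
# Rough parameter arithmetic; figures are approximate and the MoE split
# (35B total / ~3B active) is inferred from the "A3B" naming convention.
dense_total = dense_active = 27e9   # dense: every parameter runs on every token
moe_total, moe_active = 35e9, 3e9   # MoE: only the routed experts run

print(f"dense active/total: {dense_active / dense_total:.0%}")  # → 100%
print(f"MoE active/total:   {moe_active / moe_total:.1%}")      # → 8.6%
print(f"MoE active vs dense active: {moe_active / dense_active:.1%}")  # → 11.1%
```

In other words, the MoE path holds more total capacity but activates only a small slice of it per token, while the dense 27B spends all of its parameters on every token.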
Yes. The official release is positioned as an open-weight Apache 2.0 model.
Because 27B gives you a strong open-weight dense path. Max-Preview is for the highest hosted ceiling; 27B is for practical evaluation, open deployment, and strong coding quality.
Coding is the headline, but the real point is strong dense-model reasoning and structured output at a more practical size.
Choose 27B when open weights, dense-model behavior, or cost-aware evaluation matter. Choose Plus when you want the steadier hosted all-rounder.
Start with Flash. 27B is not the speed-first path.