Alibaba launches Qwen3.6-27B, an open-weight dense model with 27B parameters, saying it surpasses Qwen3.5-397B-A17B on major coding benchmarks
· 4226 words · Qwen Team · Translations: 简体中文 — HUGGING FACE — MODELSCOPE — DISCORD
Qwen
Related Coverage
- Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model (via) Big claims from Qwen about their latest open weight model: … Simon Willison's Weblog · Simon Willison
- [AINews] Tasteful Tokenmaxxing Latent.Space
- Alibaba Ships Qwen3.6-27B, an Open-Weight Coding Model That Beats Its 397B MoE Implicator.ai · Harkaram Grewal
- Alibaba Qwen Team Releases Qwen3.6-27B: A Dense Open-Weight Model Outperforming 397B MoE on Agentic Coding Benchmarks MarkTechPost · Asif Razzaq
- Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model Hacker News
- The new Qwen3.6-27B just gave me definitely the best pelican riding a bicycle I've had from a 16.8GB model file! https://simonwillison.net/... @simon@fedi.simonwillison.net · Simon Willison
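The "16.8GB model file" in Simon Willison's post is consistent with back-of-envelope GGUF quantization math for a 27B-parameter model. A minimal sketch, assuming typical bits-per-weight figures for common GGUF quant types (block scales included; real files differ slightly because embedding and output layers are often kept at higher precision):

```python
# Back-of-envelope GGUF file sizes for a 27B-parameter model at common
# quantization levels. Bits-per-weight values are approximations.
PARAMS = 27e9

BITS_PER_WEIGHT = {
    "F16": 16.0,     # unquantized half precision
    "Q8_0": 8.5,     # 8-bit blocks plus a per-block scale
    "Q4_K_M": 4.85,  # mixed 4/6-bit "K-quant", ~4.85 bpw overall
    "Q4_0": 4.5,     # 4-bit blocks plus a per-block scale
}

def file_size_gb(params: float, bpw: float) -> float:
    """Approximate GGUF file size in decimal gigabytes."""
    return params * bpw / 8 / 1e9

for name, bpw in BITS_PER_WEIGHT.items():
    print(f"{name:7s} ~{file_size_gb(PARAMS, bpw):5.1f} GB")
```

A ~4.85 bpw quant of 27B weights works out to roughly 16.4 GB, which lands in the same ballpark as the 16.8 GB file mentioned above.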
Discussion
- @alibaba_qwen on X: 🚀 Meet Qwen3.6-27B, our latest dense, open-source model, packing flagship-level coding power! Yes, 27B, and Qwen3.6-27B punches way above its weight. 👇 What's new: 🧠 Outstanding agentic coding — surpasses Qwen3.5-397B-A17B across all major coding benchmarks 💡 Strong [image]
- @ollama on X: Qwen 3.6 27B model is available on Ollama! Use it with all the integrations in Ollama or chat with the model. Chat with the model: ollama run qwen3.6:27b OpenClaw: ollama launch openclaw --model qwen3.6:27b Claude Code: ollama launch claude --model qwen3.6:27b More
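Beyond the CLI commands above, Ollama also serves a local REST API (by default at `http://localhost:11434`), so the model can be scripted. A minimal sketch using `POST /api/chat`; the model tag `qwen3.6:27b` is taken from the announcement above, and the request is only built here, not sent, since it requires a running Ollama instance:

```python
import json
from urllib.request import Request, urlopen

def chat_request(prompt: str, model: str = "qwen3.6:27b") -> Request:
    """Build a non-streaming chat request for Ollama's local API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one JSON object back instead of a token stream
    }
    return Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request("Write a Python function that reverses a string.")
print(req.full_url)            # endpoint the call would hit
# with urlopen(req) as resp:   # uncomment with Ollama running locally
#     print(json.loads(resp.read())["message"]["content"])
```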
- Kyle Hessling (@kylehessling1) on X: Guys, I am absolutely astounded. Qwen 3.6 27B is like a jump to Qwen 4 from Qwen 3.5 27B. I just did a full suite of front-end design tests and agentic benchmarks, made entirely by it. VERDICT: They're so much better than I thought they'd be, like I'm completely astounded. I
- @alibaba_qwen on X: LM Performance: With only 27B parameters, Qwen3.6-27B outperforms Qwen3.5-397B-A17B (397B total / 17B active, ~15x larger!) on every major coding benchmark — including SWE-bench Verified (77.2 vs. 76.2), SWE-bench Pro (53.5 vs. 50.9), Terminal-Bench 2.0 (59.3 vs. 52.5), and [i…
- @alibaba_qwen on X: VLM Performance: Qwen3.6-27B is natively multimodal, supporting both vision-language thinking and non-thinking modes in a single unified checkpoint — the same as Qwen3.6-35B-A3B. It handles images and video alongside text, enabling multimodal reasoning, document understanding, [im…
- Terp (@onlyterp) on X: ok.... So this just happened: Qwen 3.6 27b running locally on my 5090 straight up beating mimo v2.5 pro 😭 [image]
- Han Xiao (@hxiao) on X: With the 3.6-27b release, the dense-over-MoE gap is shrinking, which is good for local AI, as MoE like 35b-a3b are more friendly on low-budget GPUs and support much longer context (256k full easily on 24gb vram). Same-scale comparison (27B dense vs 35B-A3B MoE): dense still wins most […
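The long-context-on-limited-VRAM point above comes down to KV-cache size, which grows linearly with context length and with the number of KV heads. A minimal sketch of the standard estimate; all architecture numbers below are hypothetical placeholders (neither model's real layer/head counts appear in this post), chosen only to show why grouped-query attention with few KV heads makes 256k context feasible on consumer GPUs:

```python
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                ctx_len: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size in GB: keys + values, every layer.

    2 (K and V) * layers * kv_heads * head_dim * ctx_len * bytes_per_elem
    """
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

# Hypothetical configs (NOT the real Qwen architectures): shrinking the
# KV-head count via grouped-query attention cuts the cache proportionally.
for kv_heads in (32, 8, 4):
    gb = kv_cache_gb(layers=48, kv_heads=kv_heads, head_dim=128,
                     ctx_len=262_144)  # 256k tokens, fp16 cache
    print(f"{kv_heads:2d} KV heads -> ~{gb:6.1f} GB at 256k context")
```

The design point: a full-attention cache at 256k context can dwarf the quantized weights themselves, so low KV-head counts (or cache quantization) are what make the quoted "256k on 24gb vram" class of setups plausible.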
- Susan Zhang (@suchenzang) on X: "china is becoming more closed-source" - cope
- @sudoingx on X: this was supposed to be a normal evening, then i saw on the timeline that qwen 3.6 27b dense q4 weights from unsloth are live and i could not sit still. compiled llama.cpp with cuda on the single rtx 3090 at 2am from bangkok, launched with the exact same flags that crowned [image…
- @cgtwts on X: Qwen just dropped Qwen3.6-27B >open source >a dense 27b model >beats their own 397B flagship on coding >14x smaller and easier to run >strong at agentic coding >handles both text and images >has fast mode and deep thinking mode >much cheaper to run locally [video]
- @sudoingx on X: okay this is absolutely insane. my undisputed king qwen 3.5-27b dense on single RTX 3090 just got replaced by the same team today. qwen drops 3.6-27b dense just now and the chart says it beats its predecessor on every single benchmark, beats qwen 3.5-397b-a17b moe which is 15x [i…
- @alibabagroup on X: 🚀 Qwen3.6-27B is now open source! Start building with this dense 27B multimodal model delivering flagship-level agentic coding performance. #AlibabaAI #Qwen
- Michel Nivard (@michelnivard) on Bluesky: If you (like me!) feel uneasy about the societal consequences of big AI firms, their lock on your data, etc., you can now run an LLM locally on one beefy GPU or MacBook Pro that's just insanely capable (will feel close to Claude of 6-12 months ago for many tasks). qwen.ai/blog…
- r/LocalLLaMA on Reddit: Qwen3.6-27B released!