Alibaba debuts Qwen3.5, a 397B-parameter open-weight multimodal AI model that it says is 60% cheaper to use and 8x better at large workloads than Qwen3
Reuters · Eduardo Baptista
Related Coverage
- Qwen3.5-397B-A17B — Over recent months, we have intensified our focus on developing foundation models … Qwen on Hugging Face
- Qwen3.5: Towards Native Multimodal Agents Qwen
- Qwen3.5: Towards Native Multimodal Agents. Alibaba's Qwen just released the first two models … Simon Willison's Weblog · Simon Willison
- Alibaba Launches New LLM as China's AI Battle Heats Up The Information · Juro Osawa
- Alibaba unveils Qwen3.5 as China's chatbot race shifts to AI agents CNBC · Dylan Butts
- Alibaba unveils Qwen 3.5: a new frontier in multimodal AI agents DigiTimes
- Alibaba releases multimodal Qwen3.5 mixture of experts model SiliconANGLE · Maria Deutscher
- 🇨🇳 Alibaba unveils new Qwen3.5 model for ‘agentic AI era’, Qwen3.5-397B-A17B. Apache 2.0 license Rohan's Bytes · Rohan Paul
- Alibaba Unveils Qwen 3.5 AI Model with Agentic Capabilities WinBuzzer · Markus Kasanmascheff
- Alibaba's free Qwen3.5 signals that China's open-weight model race is far from slowing down The Decoder · Jonathan Kemper
- Alibaba Qwen Team Releases Qwen3.5-397B MoE Model with 17B Active Parameters and 1M Token Context for AI agents MarkTechPost · Asif Razzaq
- [AINews] Qwen3.5-397B-A17B: the smallest Open-Opus class, very efficient model Latent.Space
- Alibaba unveils Qwen-3.5, sharpening global race to spread AI models South China Morning Post · Vincent Chow
- Qwen3.5: Towards Native Multimodal Agents Hacker News
Discussion
- Junyang Lin (@justinlin610) on X: A clarification of Qwen3.5 Plus and 397B: 1. For open source, we follow the tradition of making parameters apparent, so the name carries the total and active parameter counts. 2. Qwen3.5 Plus is a hosted API version of 397B. As the model natively supports 256K tokens, …
- @alibaba_qwen on X: 🚀 Qwen3.5-397B-A17B is here: The first open-weight model in the Qwen3.5 series. 🖼️ Native multimodal. Trained for real-world agents. ✨ Powered by hybrid linear attention + sparse MoE and large-scale RL environment scaling. ⚡ 8.6x-19.0x decoding throughput vs Qwen3-Max 🌍 201 langu…
- Benjamin Marie (@bnjmn_marie) on X: Let's do the KV cache math for Qwen3.5: KV heads: 2; head dimension: 256; gated attention layers: 15; bytes per element (BF16): 2. 2 × 256 × 15 × 2 = 15,360 bytes. This is the same for K and V, so we multiply by 2: 30,720 bytes, roughly 31 KB per token of context. Meaning at max …
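The per-token KV cache arithmetic in that post can be reproduced in a few lines. The head, dimension, and layer counts below are taken from the tweet itself; the 256K-context total at the end is our own extension of the same arithmetic (a rough sketch, not a figure from the thread):

```python
# Per-token KV cache size for Qwen3.5, using the figures quoted in the tweet above.
kv_heads = 2           # grouped-query KV heads
head_dim = 256         # dimension per head
full_attn_layers = 15  # per the tweet, only the 15 gated attention layers keep a growing KV cache
bytes_per_elem = 2     # BF16

# Keys for one token across all cached layers:
k_bytes = kv_heads * head_dim * full_attn_layers * bytes_per_elem
print(k_bytes)  # 15360

# Values are the same size, so double it for the full per-token cache:
kv_bytes_per_token = 2 * k_bytes
print(kv_bytes_per_token)  # 30720 bytes, ~31 KB (decimal) per token

# Our extension, not from the tweet: at the advertised 256K-token context,
# the whole cache stays small because linear-attention layers add no per-token state.
context_tokens = 256 * 1024
total_gb = kv_bytes_per_token * context_tokens / 1e9
print(f"{total_gb:.2f} GB")  # ~8.05 GB
```

This is the practical upshot of the hybrid design: only a minority of layers pay the usual per-token KV cost, so long contexts fit in modest memory.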
- @andonlabs on X: Qwen 3.5 goes bankrupt on Vending-Bench 2 [image]
- Sunny Madra (@sundeep) on X: Chinese New Year gift: [image]
- Awni Hannun (@awnihannun) on X: Qwen3.5 runs quite well in mlx-lm. Awesome that we have a frontier-level hybrid model. The context gets longer but the inference speed and memory use barely change. Here's the Q4 generating a Space Invaders game on an M3 Ultra. Generated 4,120 tokens at 37.6 tok/s. [video]
- @nvidiaaidev on X: 🎊 Kudos to the teams at @Alibaba_Qwen on the launch of Qwen 3.5 with Qwen3.5-397B-A17B. 🙌 Developers can start building today for free: https://build.nvidia.com/... Or download and customize it with NVIDIA NeMo: https://github.com/... [video]
- @openrouterai on X: The new @Alibaba_Qwen Qwen3.5-397B-A17B is live on OpenRouter now! This multimodal model uses a hybrid architecture combining linear attention with sparse MoE for higher inference efficiency. Available as both the open-weights version and Qwen3.5 Plus with extended 1M context.
- @tphuang on X: Qwen 3.5 has been released: 397B parameters, 17B of them active. This is the flagship open-source version, and it looks reasonably better than Qwen3 235B. I've found larger Qwen models to not be all that great. Given the recent MiniMax and GLM releases, Qwen has to also have …
- @lmsysorg on X: 🎉 Meet Qwen3.5-397B-A17B from @Alibaba_Qwen, 397B total params (17B active), built for real-world multimodal intelligence — day-0 support is now live in SGLang! 👁️ Unified vision-language foundation (early fusion): stronger reasoning, coding & agents ⚡ Gated DeltaNet + sparse [im…
- @scaling01 on X: Ouch, the pricing on Alibaba just hurts. You can get the larger Kimi-K2.5 and GLM-5 for less. [image]
- @scaling01 on X: The new chonky Qwen 3.5 looks pretty solid, beating their own Qwen3-Max model everywhere, and it is much better at vision benchmarks than Qwen3-235B-A22B-VL. What I sadly haven't seen is anything on reasoning efficiency. [image]
- @unslothai on X: You can now run Qwen3.5 locally! 💜 Qwen3.5-397B-A17B is an open MoE vision reasoning LLM for agentic coding & chat. It performs on par with Gemini 3 Pro, Claude Opus 4.5 & GPT-5.2. Run 4-bit on a 256GB Mac / RAM. Guide: https://unsloth.ai/... GGUF: https://huggingface.co/... [image…
- Ahmad (@theahmadosman) on X: Qwen3.5-397B-A17B: half the size of Kimi K2.5, yet it goes head-to-head against it. The Qwen bros cooked.
- Shariq Riaz (@shariqriazz) on X: Qwen 3.5 Plus is live on the website, claiming 256K context and multimodality; Qwen Coder is up on the website as well 👀 [image]
- Junyang Lin (@justinlin610) on X: Qwen3.5 is live! Today we open-weight the first model, Qwen3.5-397B-A17B, which is a native multimodal model supporting both thinking and non-thinking modes. We have strengthened its coding and agentic capabilities to foster productivity for developers and enterprises. Hope you …
- @teortaxestex on X: So, speaking of benchmarks, what can be said of the new open Qwen? First, it completely destroys Qwen3-VL-235B, of course, but more surprisingly it outscores Qwen3-Max-Thinking. All the while it's the same model as “Plus”; Plus just has 1M context and some more bells and whistles. [image…
- Ahmad (@theahmadosman) on X: BREAKING: Qwen3.5-397B-A17B weights have been uploaded to Hugging Face. 397B parameters in total, 17B active parameters per token, 256K context window, expandable to 1M tokens. More soon once I am done running my evals. [image]
- @suvsh on X: https://huggingface.co/... it's out.
- @kimmonismus on X: Here we go! First release of the day: Qwen3.5 Plus and Qwen3.5-397B-A17B are now live on their site! Really excited for their performance! [image]
-
r/LocalLLaMA
r
on reddit
Qwen3.5-397B-A17B is out!!