Z.ai launches GLM-5, saying its flagship open-weight model has “best-in-class performance among all open-source models” in reasoning, coding, and agentic tasks
We are launching GLM-5, targeting complex systems engineering and long-horizon agentic tasks. Scaling is still one of the most important ways …
Z.ai
Related Coverage
- z.ai's open source GLM-5 achieves record low hallucination rate and leverages new RL ‘slime’ technique VentureBeat · Carl Franzen
- GLM5, Gameboy and Long-Task Era blog.e01.ai
- GLM-5: From Vibe Coding to Agentic Engineering (via) This is a huge new MIT-licensed model … Simon Willison's Weblog · Simon Willison
- GPT-5.3-Codex and Claude Opus 4.6: More System Card Shenanigans Artificial Ignorance · Charlie Guo
- Keeping an eye on global AI progress Cautious Optimism · Alex Wilhelm
- Chinese AI startup Zhipu releases new flagship model GLM-5 Reuters
- GLM-5: From Vibe Coding to Agentic Engineering Hacker News
- China's Zhipu Jolts AI Race as ‘Scare Trade’ Grips US | The China Show 2/12/2026 Bloomberg
- ByteDance, Zhipu Jolt China's AI Race Ahead of New Year Livemint
- Z.ai unveils GLM-5, advances AI agents and China chip compatibility DigiTimes
- China's Zhipu AI launches new major model GLM-5 in challenge to its rivals South China Morning Post · Vincent Chow
- Shares jump in Chinese AI start-up Zhipu after GLM-5 launch Silicon Republic · Suhasini Srinivasaragavan
- Wall Street Gets On Board With the Hype Over China AI Stocks Bloomberg · Charlotte Yang
- ByteDance, Zhipu Jolt China's AI Race Ahead of New Year Bloomberg
- xAI's next phase unleashed The Rundown AI
- [AINews] Z.ai GLM-5: New SOTA Open Weights LLM Latent.Space
- Chinese AI Stocks Surge After Zhipu's GLM-5 AI Model Launch Watcher Guru · Paigambar Mohan Raj
- Zhipu AI Launches New Model Better at Coding, Learning Caixin Global · Liu Peilin
- Zhipu AI Releases GLM-5: 744B Model Rivals Claude Opus WinBuzzer · Markus Kasanmascheff
- GLM-5 Costs 86% Less Than Claude Opus. The Safety Gap Might Cost More. Implicator.ai · Harkaram Grewal
- GLM-5: From Vibe Coding to Agentic Engineering Lobsters
- Hong Kong-listed Chinese AI company stocks soared on February 12: Z.ai surged nearly 30% after releasing GLM-5 and MiniMax jumped 13.7% after launching M2.5 CNBC
Discussion
-
@zai_org
@zai_org
on x
Introducing GLM-5: From Vibe Coding to Agentic Engineering GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5, it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens. [i…
-
@dorialexander
Alexander Doria
on x
Looking forward to the model report of the new GLM: likely scaling synthetic environments toward fully emulated work/bureaucratic systems. [image]
-
@zai_org
@zai_org
on x
On Vending Bench 2, GLM-5 ranks #1 among open-source models, finishing with a final account balance of $4,432. It approaches Claude Opus 4.5, demonstrating strong long-term planning and resource management. [image]
-
@louszbd
Lou
on x
i felt the agentic engineering era is coming. claude opus 4.6 and gpt-5.3 codex got me thinking coding models have entered a new era. they're literally building systems. looking ahead to 2026, imo LLMs will go beyond generating text and start executing tasks end to end. our team
-
@eliebakouch
Elie
on x
GLM-5 is out, amazing release with very, very good benchmark scores, even on tasks like @andonlabs vending bench 2. i think one of the craziest parts of this is that the RL framework they use is open (based on megatron for training, @sgl_project for inference), it's somewhat …
-
@_simonsmith
Simon Smith
on x
The GLM-5 benchmark chart doesn't compare the model to Opus 4.6 or GPT-5.3, but is still impressive and, on the heels of Kimi K2.5, suggests China is very close to the frontier in many (but not all) domains. And with video, looking at Seedance 2, China might be ahead.
-
@arena
@arena
on x
A new open-source model has entered the Arena. Come check out @Zai_org's latest GLM-5 in Text and Code. Test out its coding chops in Text and its agentic coding capabilities in Code. Battle with the top frontier models and don't forget to vote - scores coming soon. [image]
-
@mweinbach
Max Weinbach
on x
Technical report for GLM 5 is out, and it looks really good! Looks nearly as good as Opus 4.5, actually. It's much larger than GLM 4.7 though, at 744B total, 40B active https://z.ai/... [image]
-
@zhuokaiz
Zhuokai Zhao
on x
This is a HUGE win for developers. Claude Code is excellent, but the $200/mo Max plan can be expensive for daily use. GLM-5 works inside Claude Code, with (arguably) comparable performance at ~1/3 the cost. Setup takes ~1 minute:
• Install Claude Code as usual
• Run 'npx
-
@scaling01
@scaling01
on x
GLM-5 beating:
- Gemini 3 Pro in 7 out of the 8 benchmarks
- GPT-5.2-xhigh in 6 out of 8 benchmarks
- Opus 4.5 in 3 out of 8 benchmarks
-
@vercel_dev
@vercel_dev
on x
GLM-5 is now on AI Gateway. Better long-range planning, multiple thinking modes, and improved multi-step agent tasks versus previous https://z.ai/ models. Use model: 'zai/glm-5' to get started. https://vercel.com/...
-
@scaling01
@scaling01
on x
GLM-5 was pre-trained on 28.5T tokens and uses DeepSeek Sparse Attention
-
@kimmonismus
@kimmonismus
on x
The recent release of Seedance v2.0 and GLM-5 shows why slowing down is not an option for the US. China is hot on the US's heels, and giving up is not an option. Therefore, the storm will only intensify and accelerate.
-
@zai_org
@zai_org
on x
For GLM Coding Plan subscribers: Due to limited compute capacity, we're rolling out GLM-5 to Coding Plan users gradually.
- Max plan users: You can enable GLM-5 now by updating the model name to “GLM-5” (e.g. in ~/.claude/settings.json for Claude Code).
- Other plan tiers:
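The post above only names the settings file. A minimal sketch of what such a ~/.claude/settings.json override might look like — note that the "env" entries (base URL and token variable) are an assumption drawn from Claude Code's documented settings schema, not part of the announcement, so check Z.ai's Coding Plan docs before relying on them:

```json
{
  "model": "GLM-5",
  "env": {
    "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
    "ANTHROPIC_AUTH_TOKEN": "<your Z.ai API key>"
  }
}
```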
-
@lintool
Jimmy Lin
on x
Congratulations to @jietang @ZixuanLi_ and the entire @Zai_org team on the GLM 5 release: based on >6K votes, it's the best open-weight model on the @yupp_ai leaderboard (with speed control)!
-
@weswinder
Wes Winder
on x
glm 5 looks insane. basically opus 4.5 and gpt-5.2 level benchmarks while 10x cheaper than opus 4.5. these open source models are saving our wallets fr [image]
-
@thestalwart
Joe Weisenthal
on x
The maker of one of China's most advanced coding models is public and has a market cap of less than $18 billion https://www.bloomberg.com/... [image]
-
@iamnitinr
Nitin Ranganath
on x
I wish @Zai_org had more compute. The GLM models are so good, but their throughput over the coding plan is pretty frustrating. I'm hoping that changes soon. Things are looking better than ever for open-weight models with the recent Kimi and GLM launches.
-
@altryne
Alex Volkov
on x
The evals are out!? GLM 5 from @Zai_org absolutely slams the benches! Comparable to Opus 4.5 while being significantly smaller. Damn [image]
-
@lmsysorg
@lmsysorg
on x
🎉 The mysterious Pony Alpha is finally revealed, congrats to @Zai_org on releasing GLM-5! SGLang is ready to support on day-0.
🛠️ 744B params (40B active) model built for complex systems engineering & long-horizon agentic tasks
📚 28.5T tokens pretraining for a stronger [image]
-
@zai_org
@zai_org
on x
On our internal evaluation suite CC-Bench-V2, GLM-5 significantly outperforms GLM-4.7 across frontend, backend, and long-horizon tasks, narrowing the gap with Claude Opus 4.5. [image]
-
@zephyr_z9
@zephyr_z9
on x
Very strong model from GLM A bit behind Opus 4.6, but parity with Opus 4.5 at only 700B parameters [image]
-
@bridgemindai
@bridgemindai
on x
GLM 5 just dropped and the pricing is absurd. $0.80 per million input tokens. $2.56 per million output tokens. For context:
- Claude Opus 4.6: $5/$25
- GPT 5.3 Codex: $1.75/$14
- GLM-5: $0.80/$2.56
GLM 5 is 6x cheaper than Opus on input and 10x cheaper on output... China isn't just c…
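A quick sanity check on the multiples quoted above, using the per-million-token prices as given in the post (the rounding is mine):

```python
# Price per million tokens, as quoted in the post above.
opus_input, opus_output = 5.00, 25.00   # Claude Opus 4.6
glm_input, glm_output = 0.80, 2.56      # GLM-5

# Ratio of Opus pricing to GLM-5 pricing.
print(round(opus_input / glm_input, 2))    # → 6.25 (≈6x on input)
print(round(opus_output / glm_output, 2))  # → 9.77 (≈10x on output)
```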
-
@koylanai
@koylanai
on x
I've been testing GLM-5 over the last couple of days. Its reasoning is really good:
- decomposes the challenging problem correctly
- identifies the right failure modes
- arrives at a valid architectural solution
GLM-5 also does something interesting where it compresses concepts …
-
@zai_org
@zai_org
on x
A new model is now available on https://chat.z.ai/. [image]
-
r/LocalLLaMA
on reddit
GLM 5 is already on huggingface!
-
r/singularity
on reddit
GLM-5: From Vibe Coding to Agentic Engineering
-
@minyangtian1
Minyang Tian
on x
🚀 @Zai_org GLM-5 hits 46.2% on SciCode! That's a +1.1% jump over GLM-4.7 (45.1%), continuing their steady rise in research-level scientific coding. Excited to see how far they can go as model quality compounds! [image]
-
@theahmadosman
Ahmad
on x
we have opensource Opus 4.5 at home now Zhipu AI cooked with GLM-5 [image]
-
@teortaxestex
@teortaxestex
on x
> To be upfront: compute is very tight all Chinese AGI startups rn be like: [image]
-
@carolglms
Carol Lin
on x
Introducing GLM-5 on Google Cloud Vertex AI. From experimentation to enterprise deployment — GLM-5 + Vertex AI gives you the scale, reliability, and global reach to build what's next. Start building today. https://lnkd.in/... #GLM5 #GoogleCloud #AI
-
@zixuanli_
Zixuan Li
on x
We've noticed that several AI products reference https://glm5.net/ when summarizing GLM-5 information. This website is not affiliated with https://z.ai/ and contains inaccurate information. No glm5-related domains are held by https://z.ai/ except
-
@lukaspet
Lukas Petersson
on x
After hours of reading GLM-5 traces: an incredibly effective model, but far less situationally aware. Achieves goals via aggressive tactics but doesn't reason about its situation or leverage experience. This is scary. This is how you get a paperclip maximizer. [image]
-
@andonlabs
@andonlabs
on x
GLM-5 takes 4th place on Vending-Bench 2. Above Claude Sonnet 4.5, the state-of-the-art model less than 6 months ago. China seems to be 6 months behind the West. By June they will be ahead if the trends continue. More in this thread on why we don't think this will happen. [image]
-
@jietang
@jietang
on x
pony alpha -> GLM-5 is coming with AA=50, scoring No. 1 among all open-weights models. The key is coding and agentic abilities to complete long horizon tasks... [image]
-
@mervenoyann
Merve
on x
GLM-5 is out on @huggingface 🔥
> A40B/744B, trained on more tokens (28.5T)
> outperforms/on par with closed sota
> allows commercial use (MIT licensed) 💗
use with vLLM/SGLang locally or through HF Inference Providers thanks to @novita_labs and @Zai_org 📦 [image]
-
@teksedge
David Hendrickson
on x
GLM5 is hitting the streets, and I think Kimi K2.5 has some competition. Check out the latest GLM-5 benchmark. More benchmarks to come. Open Source is the way. [image]
-
@zephyr_z9
@zephyr_z9
on x
GLM 4.7 was 32B active, while GLM 5 is 40B active Inference is also cheaper due to DSA Meanwhile, they have increased the price substantially to increase gross margins As a proud Zhipu shareholder since IPO, I approve [image]
-
@theo
@theo
on x
GLM-5 is a killer model. Genuinely super impressed. Live in 20ish to talk about it.
-
@vince_chow1
Vincent Chow
on x
New: Zhipu launched new flagship GLM-5 https://www.scmp.com/... A few things jumped out to me:
1. Use of DeepSeek Sparse Attention mechanism, reaffirming DeepSeek's unparalleled contributions to China's AI industry by making its fundamental research open to all
2. Notable
-
@artificialanlys
@artificialanlys
on x
GLM-5 is the new leading open weights model! GLM-5 leads the Artificial Analysis Intelligence Index amongst open weights models and makes large gains over GLM-4.7 in GDPval-AA, our agentic benchmark focused on economically valuable work tasks. GLM-5 is @Zai_org's first new [image]
-
@zai_org
@zai_org
on x
With the launch of GLM-5, https://chat.z.ai/ introduces Agent Mode.
- Agent Mode: Automatically breaks down tasks, orchestrates tools, drives execution, and delivers ready-to-use files.
- Data Insights & Smart Writing: Upload data for instant visualizations. Go from outline [vide…
-
@chrmanning
Christopher Manning
on x
🧐 These look like honest benchmark results - where you do well on some things and are somewhat behind on others....
-
@rasbt
Sebastian Raschka
on x
The weights are out! Here's the GLM-5 architecture comparison. GLM-5:
- is bigger than its predecessor (mainly more experts) but has a relatively similar active parameter count
- uses multi-head latent attention
- uses DeepSeek Sparse Attention
[image]
-
@artificialanlys
@artificialanlys
on x
GLM-5 demonstrates improvement in AA-Omniscience Index, driven by lower hallucination. This means the model is abstaining more from answering questions it does not know [image]
-
@artificialanlys
@artificialanlys
on x
GLM-5 uses fewer output tokens than GLM-4.7 to run the Artificial Analysis Intelligence Index [image]
-
@ollama
@ollama
on x
GLM 5 on Ollama's cloud has increased capacity now and a higher speed! Full sized model to use with your tools!
ollama pull glm-5:cloud
Claude: ollama launch claude --model glm-5:cloud
OpenClaw: ollama launch openclaw --model glm-5:cloud
*Pelican made by GLM-5 on Ollama [image]
-
@ml_angelopoulos
Anastasios Nikolas Angelopoulos
on x
As expected, GLM-5 by @Zai_org is the top open model in the world. It still trails substantially behind proprietary models, at #11. It is a GPT-5.1-high or grok-4.1 quality model. These models were released last November. Thus open models are about 3 months behind. Not bad! [imag…
-
@theo
@theo
on x
Complete list of models currently worth using for code:
- Opus 4.6
- Codex 5.3
- GLM-5
-
@zai_org
@zai_org
on x
GLM-5, Gameboy and Long-Task Era → 700+ tool calls, 800+ context handoffs, and a single agent running for over 24 hours. https://blog.e01.ai/... [video]
-
@arena
@arena
on x
How does the #1 open Text Arena model hold up in agentic coding tasks? We tested GLM-5 in Code Arena with head-to-head SVG prompts vs. top frontier AI models. What do you think? Scores for @Zai_org 's GLM-5 in Code Arena coming soon. Test out GLM-5 for yourself and get voting. [v…
-
@theo
@theo
on x
GLM-5 is an incredible model. It's the first open weight model I can actually recommend for coding. [video]
-
@artificialanlys
@artificialanlys
on x
GLM-5 is on the Pareto curve of the Intelligence vs. Cost to Run the Intelligence Index chart driven by lower per token pricing compared to proprietary peers (e.g. Claude Opus, Google Gemini and OpenAI GPT-5.2) - GLM-5 cost ~$547 (based on the median per token price of [image]
-
@unslothai
@unslothai
on x
You can now run GLM-5 locally!🔥 GLM-5 is a new open SOTA agentic coding & chat LLM with 200K context. We shrank the 744B model from 1.65TB to 241GB (-85%) via Dynamic 2-bit. Runs on a 256GB Mac or RAM/VRAM setups. Guide: https://unsloth.ai/... GGUF: https://huggingface.co/... [im…
-
@scaling01
@scaling01
on x
Average Throughput of GLM-5 on Openrouter is 14 tps [image]
-
@ankrgyl
Ankur Goyal
on x
GLM5 is an impressive model. It's the first OSS model to perform competitively with a leading commercial model (claude sonnet 4.5) on our bash eval. [image]