
Chronicles

The story behind the story


Z.ai launches GLM-5, saying its flagship open-weight model has “best-in-class performance among all open-source models” in reasoning, coding, and agentic tasks

We are launching GLM-5, targeting complex systems engineering and long-horizon agentic tasks. Scaling is still one of the most important ways …

Z.ai

Discussion

  • @zai_org @zai_org on x
    Introducing GLM-5: From Vibe Coding to Agentic Engineering GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5, it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens. [image]
  • @dorialexander Alexander Doria on x
    Looking forward to the model report of the new GLM: likely scaling synthetic environments toward fully emulated work/bureaucratic systems. [image]
  • @zai_org @zai_org on x
    On Vending Bench 2, GLM-5 ranks #1 among open-source models, finishing with a final account balance of $4,432. It approaches Claude Opus 4.5, demonstrating strong long-term planning and resource management. [image]
  • @louszbd Lou on x
    i felt agentic engineering era is coming claude opus 4.6 and gpt-5.3 codex got me thinking coding models have entered a new era. they're literally building systems. looking ahead to 2026, imo LLMs will go beyond generating text, and start executing tasks end to end. our team
  • @eliebakouch Elie on x
    GLM-5 is out, amazing release with very very good benchmark scores even on tasks like @andonlabs vending bench 2 i think one of the most crazy parts of this is that the RL framework that they use is open (based on megatron for training, @sgl_project for inference), it's somewhat …
  • @_simonsmith Simon Smith on x
    The GLM-5 benchmark chart doesn't compare the model to Opus 4.6 or GPT-5.3, but is still impressive and, on the heels of Kimi K2.5, suggests China is very close to the frontier in many (but not all) domains. And with video, looking at Seedance 2, China might be ahead.
  • @arena @arena on x
    A new open-source model has entered the Arena. Come check out @Zai_org's latest GLM-5 in Text and Code. Test out its coding chops in Text and its agentic coding capabilities in Code. Battle with the top frontier models and don't forget to vote - scores coming soon. [image]
  • @mweinbach Max Weinbach on x
    Technical report for GLM 5 is out, it looks really good! Looks nearly as good as Opus 4.5, actually. It's much larger than GLM 4.7 though, at 744B total, 40B active https://z.ai/... [image]
  • @zhuokaiz Zhuokai Zhao on x
    This is a HUGE win for developers. Claude Code is excellent, but the $200/mo Max plan can be expensive for daily use. GLM-5 works inside Claude Code, with (arguably) comparable performance at ~1/3 the cost. Setup takes ~1 minute: • Install Claude Code as usual • Run 'npx
  • @scaling01 @scaling01 on x
    GLM-5 beating: - Gemini 3 Pro in 7 out of the 8 benchmarks - GPT-5.2-xhigh in 6 out of 8 benchmarks - Opus 4.5 in 3 out of 8 benchmarks
  • @vercel_dev @vercel_dev on x
    GLM-5 is now on AI Gateway. Better long-range planning, multiple thinking modes, and improved multi-step agent tasks versus previous https://z.ai/ models. Use model: 'zai/glm-5' to get started. https://vercel.com/...
  • @scaling01 @scaling01 on x
    GLM-5 was pre-trained on 28.5T tokens and uses DeepSeek Sparse Attention
  • @kimmonismus @kimmonismus on x
    The recent release of Seedance v2.0 and GLM-5 shows why slowing down is not an option for the US. China is hot on the US's heels, and giving up is not an option. Therefore, the storm will only intensify and accelerate.
  • @zai_org @zai_org on x
    For GLM Coding Plan subscribers: Due to limited compute capacity, we're rolling out GLM-5 to Coding Plan users gradually. - Max plan users: You can enable GLM-5 now by updating the model name to “GLM-5” (e.g. in ~/.claude/settings.json for Claude Code). - Other plan tiers:
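The settings change described in the post above amounts to a one-line edit. A minimal sketch of `~/.claude/settings.json`, assuming the top-level `model` field the post refers to (any other keys you already have would stay as they are):

```json
{
  "model": "GLM-5"
}
```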
  • @lintool Jimmy Lin on x
    Congratulations to @jietang @ZixuanLi_ and the entire @Zai_org team on the GLM 5 release: based on >6K votes, it's the best open-weight model on the @yupp_ai leaderboard (with speed control)!
  • @weswinder Wes Winder on x
    glm 5 looks insane basically opus 4.5 and gpt-5.2 level benchmarks while 10x cheaper than opus 4.5 these open source models are saving our wallets fr [image]
  • @thestalwart Joe Weisenthal on x
    The maker of one of China's most advanced coding models is public and has a market cap of less than $18 billion https://www.bloomberg.com/... [image]
  • @iamnitinr Nitin Ranganath on x
    I wish @Zai_org had more compute. The GLM models are so good, but their throughput over the coding plan is pretty frustrating. I'm hoping that changes soon. Things are looking better than ever for open-weight models with the recent Kimi and GLM launches.
  • @altryne Alex Volkov on x
    The evals are out!? GLM 5 from @Zai_org absolutely slams the benches! Comparable to Opus 4.5 while being significantly smaller. Damn [image]
  • @lmsysorg @lmsysorg on x
    🎉 The mysterious Pony Alpha is finally revealed, congrats to @Zai_org on releasing GLM-5! SGLang is ready to support on day-0. 🛠️ 744B params (40B active) model built for complex systems engineering & long-horizon agentic tasks 📚 28.5T tokens pretraining for a stronger [image]
  • @zai_org @zai_org on x
    On our internal evaluation suite CC-Bench-V2, GLM-5 significantly outperforms GLM-4.7 across frontend, backend, and long-horizon tasks, narrowing the gap with Claude Opus 4.5. [image]
  • @zephyr_z9 @zephyr_z9 on x
    Very strong model from GLM A bit behind Opus 4.6, but parity with Opus 4.5 at only 700B parameters [image]
  • @bridgemindai @bridgemindai on x
    GLM 5 just dropped and the pricing is absurd.  $0.80 per million input tokens.  $2.56 per million output tokens.  For context: Claude Opus 4.6: $5/$25 GPT 5.3 Codex: $1.75/$14 GLM-5: $0.80/$2.56 GLM 5 is 6x cheaper than Opus on input and 10x cheaper on output...China isn't just c…
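The multipliers in the post above follow directly from the quoted per-million-token prices; a quick check, using the prices exactly as quoted (USD):

```python
# Per-million-token prices quoted in the post above (USD).
glm_in, glm_out = 0.80, 2.56      # GLM-5
opus_in, opus_out = 5.00, 25.00   # Claude Opus 4.6

# Ratios behind the "6x cheaper on input, 10x cheaper on output" claim.
print(f"input:  {opus_in / glm_in:.2f}x")   # 6.25x
print(f"output: {opus_out / glm_out:.2f}x") # 9.77x
```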
  • @koylanai @koylanai on x
    I've been testing GLM-5 over the last couple of days.  Its reasoning is really good; - decomposes the challenging problem correctly - identifies the right failure modes - arrives at a valid architectural solution GLM-5 also does something interesting where it compresses concepts …
  • @zai_org @zai_org on x
    A new model is now available on https://chat.z.ai/. [image]
  • r/LocalLLaMA on reddit
    GLM 5 is already on huggingface!
  • r/singularity on reddit
    GLM-5: From Vibe Coding to Agentic Engineering
  • @minyangtian1 Minyang Tian on x
    🚀 @Zai_org GLM-5 hits 46.2% on SciCode! That's a +1.1% jump over GLM-4.7 (45.1%), continuing their steady rise in research-level scientific coding. Excited to see how far they can go as model quality compounds! [image]
  • @theahmadosman Ahmad on x
    we have opensource Opus 4.5 at home now Zhipu AI cooked with GLM-5 [image]
  • @teortaxestex @teortaxestex on x
    > To be upfront: compute is very tight all Chinese AGI startups rn be like: [image]
  • @carolglms Carol Lin on x
    Introducing GLM-5 on Google Cloud Vertex AI. From experimentation to enterprise deployment — GLM-5 + Vertex AI gives you the scale, reliability, and global reach to build what's next. Start building today. https://lnkd.in/... #GLM5 #GoogleCloud #AI
  • @zixuanli_ Zixuan Li on x
    We've noticed that several AI products reference https://glm5.net/ when summarizing GLM-5 information. This website is not affiliated with https://z.ai/ and contains inaccurate information. No glm5-related domains are held by https://z.ai/ except
  • @lukaspet Lukas Petersson on x
    After hours of reading GLM-5 traces: an incredibly effective model, but far less situationally aware. Achieves goals via aggressive tactics but doesn't reason about its situation or leverage experience. This is scary. This is how you get a paperclip maximizer. [image]
  • @andonlabs @andonlabs on x
    GLM-5 takes 4th place on Vending-Bench 2. Above Claude Sonnet 4.5, the state-of-the-art model less than 6 months ago. China seems to be 6 months behind the West. By June they will be ahead if the trends continue. More in this thread on why we don't think this will happen. [image]
  • @jietang @jietang on x
    pony alpha -> GLM-5 is coming with AA=50, scoring No. 1 among all open-weights models. The key is coding and agentic abilities to complete long horizon tasks... [image]
  • @mervenoyann Merve on x
    GLM-5 is out on @huggingface 🔥 > A40B/744B, trained on more tokens (28.5T) > outperforms/on par with closed sota > allows commercial use (MIT licensed) 💗 use with vLLM/SGLang locally or through HF Inference Providers thanks to @novita_labs and @Zai_org 📦 [image]
  • @teksedge David Hendrickson on x
    GLM5 is hitting the streets, and I think Kimi K2.5 has some competition. Check out the latest GLM-5 benchmark. More benchmarks to come. Open Source is the way. [image]
  • @zephyr_z9 @zephyr_z9 on x
    GLM 4.7 was 32B active, while GLM 5 is 40B active Inference is also cheaper due to DSA Meanwhile, they have increased the price substantially to increase gross margins As a proud Zhipu shareholder since IPO, I approve [image]
  • @theo @theo on x
    GLM-5 is a killer model. Genuinely super impressed. Live in 20ish to talk about it.
  • @vince_chow1 Vincent Chow on x
    New: Zhipu launched new flagship GLM-5 https://www.scmp.com/... few things jumped out to me: 1. Use of DeepSeek Sparse Attention mechanism, reaffirming DeepSeek's unparalleled contributions to China's AI industry by making its fundamental research open to all 2. Notable
  • @artificialanlys @artificialanlys on x
    GLM-5 is the new leading open weights model! GLM-5 leads the Artificial Analysis Intelligence Index amongst open weights models and makes large gains over GLM-4.7 in GDPval-AA, our agentic benchmark focused on economically valuable work tasks GLM-5 is @Zai_org's first new [image]
  • @zai_org @zai_org on x
    With the launch of GLM-5, https://chat.z.ai/ introduces Agent Mode. - Agent Mode: Automatically breaks down tasks, orchestrates tools, drives execution, and delivers ready-to-use files. - Data Insights & Smart Writing: Upload data for instant visualizations. Go from outline [video]
  • @chrmanning Christopher Manning on x
    🧐 These look like honest benchmark results - where you do well on some things and are somewhat behind on others....
  • @rasbt Sebastian Raschka on x
    The weights are out! Here's the GLM-5 architecture comparison. GLM-5 is: - bigger than its predecessor (mainly more experts) but has rel. similar active parameter counts - uses multi-head latent attention - uses DeepSeek Sparse Attention [image]
  • @artificialanlys @artificialanlys on x
    GLM-5 demonstrates improvement in AA-Omniscience Index, driven by lower hallucination. This means the model is abstaining more from answering questions it does not know [image]
  • @artificialanlys @artificialanlys on x
    GLM-5 uses fewer output tokens than GLM-4.7 to run the Artificial Analysis Intelligence Index [image]
  • @ollama @ollama on x
    GLM 5 on Ollama's cloud has increased capacity now and a higher speed! Full sized model to use with your tools! ollama pull glm-5:cloud Claude: ollama launch claude --model glm-5:cloud OpenClaw: ollama launch openclaw --model glm-5:cloud *Pelican made by GLM-5 on Ollama [image]
  • @ml_angelopoulos Anastasios Nikolas Angelopoulos on x
    As expected, GLM-5 by @Zai_org is the top open model in the world. It still trails substantially behind proprietary models, at #11. It is a GPT-5.1-high or grok-4.1 quality model. These models were released last November. Thus open models are about 3 months behind. Not bad! [image]
  • @theo @theo on x
    Complete list of models currently worth using for code: Opus 4.6 Codex 5.3 GLM-5
  • @zai_org @zai_org on x
    GLM-5, Gameboy and Long-Task Era → 700+ tool calls, 800+ context handoffs, and a single agent running for over 24 hours. https://blog.e01.ai/... [video]
  • @arena @arena on x
    How does the #1 open Text Arena model hold up in agentic coding tasks? We tested GLM-5 in Code Arena with head-to-head SVG prompts vs. top frontier AI models. What do you think? Scores for @Zai_org 's GLM-5 in Code Arena coming soon. Test out GLM-5 for yourself and get voting. [video]
  • @theo @theo on x
    GLM-5 is an incredible model. It's the first open weight model I can actually recommend for coding. [video]
  • @artificialanlys @artificialanlys on x
    GLM-5 is on the Pareto curve of the Intelligence vs. Cost to Run the Intelligence Index chart driven by lower per token pricing compared to proprietary peers (e.g. Claude Opus, Google Gemini and OpenAI GPT-5.2) - GLM-5 cost ~$547 (based on the median per token price of [image]
  • @unslothai @unslothai on x
    You can now run GLM-5 locally!🔥 GLM-5 is a new open SOTA agentic coding & chat LLM with 200K context. We shrank the 744B model from 1.65TB to 241GB (-85%) via Dynamic 2-bit. Runs on a 256GB Mac or RAM/VRAM setups. Guide: https://unsloth.ai/... GGUF: https://huggingface.co/... [image]
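The compression figures in the post above are easy to sanity-check. A pure 2-bit encoding of 744B weights sets a floor of about 186 GB; the quoted 241 GB sits above that, plausibly because dynamic quantization schemes keep some tensors at higher precision (my reading, not stated in the post):

```python
# Figures quoted in the post above: 744B parameters, 1.65 TB original, 241 GB quantized.
params = 744e9
original_gb, quantized_gb = 1650, 241

two_bit_floor_gb = params * 2 / 8 / 1e9     # 2 bits per weight, 8 bits per byte: 186 GB
reduction = 1 - quantized_gb / original_gb  # ~0.85, matching the quoted -85%

print(f"2-bit floor: {two_bit_floor_gb:.0f} GB, reduction: {reduction:.0%}")
```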
  • @scaling01 @scaling01 on x
    Average Throughput of GLM-5 on Openrouter is 14 tps [image]
  • @ankrgyl Ankur Goyal on x
    GLM5 is an impressive model. It's the first OSS model to perform competitively well to a leading commercial model (claude sonnet 4.5) on our bash eval. [image]