Chronicles

The story behind the story


Z.ai launches GLM-5, its flagship open-weight model, saying it has best-in-class performance among open-source models in reasoning, coding, and agentic tasks

We are launching GLM-5, targeting complex systems engineering and long-horizon agentic tasks. Scaling is still one of the most important ways …

Z.ai

Discussion

  • @zai_org on x
    Introducing GLM-5: From Vibe Coding to Agentic Engineering GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5, it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens. …
  • @zephyr_z9 on x
    Very strong model from GLM A bit behind Opus 4.6, but parity with Opus 4.5 at only 700B parameters
  • r/LocalLLaMA on reddit
    GLM-5 Officially Released
  • r/singularity on reddit
    GLM-5: From Vibe Coding to Agentic Engineering
  • @mweinbach Max Weinbach on x
    Technical report for GLM 5 is out, it looks really good! Looks nearly as good as Opus 4.5, actually. It's much larger than GLM 4.7 though, at 774B total 40B active https://z.ai/...
  • @altryne Alex Volkov on x
    The evals are out!? GLM 5 from @Zai_org absolutely slams the benches! Comparable to Opus 4.5 while being significantly smaller. Damn
  • r/LocalLLaMA on reddit
    GLM 5 is already on huggingface!
  • @kimmonismus on x
    The recent release of Seedance v2.0 and GLM-5 shows why slowing down is not an option for the US. China is hot on the US's heels, and giving up is not an option. Therefore, the storm will only intensify and accelerate.
  • @vercel_dev on x
    GLM-5 is now on AI Gateway. Better long-range planning, multiple thinking modes, and improved multi-step agent tasks versus previous https://z.ai/ models. Use model: 'zai/glm-5' to get started. https://vercel.com/...
  • @weswinder Wes Winder on x
    glm 5 looks insane basically opus 4.5 and gpt-5.2 level benchmarks while 10x cheaper than opus 4.5 these open source models are saving our wallets fr
  • @bridgemindai on x
    GLM 5 just dropped and the pricing is absurd. $0.80 per million input tokens. $2.56 per million output tokens. For context: Claude Opus 4.6: $5/$25 GPT 5.3 Codex: $1.75/$14 GLM-5: $0.80/$2.56 GLM 5 is 6x cheaper than Opus on input and 10x cheaper on output. 200K context
  • @koylanai on x
    I've been testing GLM-5 over the last couple of days. Its reasoning is really good; - decomposes the challenging problem correctly - identifies the right failure modes - arrives at a valid architectural solution GLM-5 also does something interesting where it compresses concepts …
  • @scaling01 on x
    GLM-5 beating: - Gemini 3 Pro in 7 out of the 8 benchmarks - GPT-5.2-xhigh in 6 out of 8 benchmarks - Opus 4.5 in 3 out of 8 benchmarks
  • @scaling01 on x
    GLM-5 was pre-trained on 28.5T tokens and uses DeepSeek Sparse Attention
  • @zai_org on x
    A new model is now available on https://chat.z.ai/.
  • @zhuokaiz Zhuokai Zhao on x
    This is a HUGE win for developers. Claude Code is excellent, but the $200/mo Max plan can be expensive for daily use. GLM-5 works inside Claude Code, with (arguably) comparable performance at ~1/3 the cost. Setup takes ~1 minute: • Install Claude Code as usual • Run 'npx
  • @iamnitinr Nitin Ranganath on x
    I wish @Zai_org had more compute. The GLM models are so good, but their throughput over the coding plan is pretty frustrating. I'm hoping that changes soon. Things are looking better than ever for open-weight models with the recent Kimi and GLM launches.
  • @dorialexander Alexander Doria on x
    Looking forward to the model report of the new GLM: likely scaling synthetic environments toward fully emulated work/bureaucratic systems.
  • @zai_org on x
    On Vending Bench 2, GLM-5 ranks #1 among open-source models, finishing with a final account balance of $4,432. It approaches Claude Opus 4.5, demonstrating strong long-term planning and resource management.
  • @louszbd Lou on x
    i felt agentic engineering era is coming claude opus 4.6 and gpt-5.3 codex got me thinking coding models have entered a new era. they're literally building systems. looking ahead to 2026, imo LLMs will go beyond generating text, and start executing tasks end to end. our team
  • @eliebakouch Elie on x
    GLM-5 is out, amazing release with very very good benchmark scores even on tasks like @andonlabs vending bench 2 i think one of the most crazy parts of this is that the RL framework that they use is open (based on megatron for training, @sgl_project for inference), it's somewhat …
  • @arena on x
    A new open-source model has entered the Arena. Come check out @Zai_org's latest GLM-5 in Text and Code. Test out its coding chops in Text and its agentic coding capabilities in Code. Battle with the top frontier models and don't forget to vote - scores coming soon.
  • @zai_org on x
    For GLM Coding Plan subscribers: Due to limited compute capacity, we're rolling out GLM-5 to Coding Plan users gradually. - Max plan users: You can enable GLM-5 now by updating the model name to “GLM-5” (e.g. in ~/.claude/settings.json for Claude Code). - Other plan tiers:
  • @lintool Jimmy Lin on x
    Congratulations to @jietang @ZixuanLi_ and the entire @Zai_org team on the GLM 5 release: based on >6K votes, it's the best open-weight model on the @yupp_ai leaderboard (with speed control)!
  • @lmsysorg on x
    🎉 The mysterious Pony Alpha is finally revealed, congrats to @Zai_org on releasing GLM-5! SGLang is ready to support on day-0. 🛠️ 744B params (40B active) model built for complex systems engineering & long-horizon agentic tasks 📚 28.5T tokens pretraining for a stronger
  • @zai_org on x
    On our internal evaluation suite CC-Bench-V2, GLM-5 significantly outperforms GLM-4.7 across frontend, backend, and long-horizon tasks, narrowing the gap with Claude Opus 4.5.
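The rollout note from @zai_org above says Max plan users can enable GLM-5 by updating the model name in ~/.claude/settings.json. A minimal sketch of what that file might look like, assuming Z.ai's Anthropic-compatible endpoint (the base URL, environment-variable names, and token placeholder here are assumptions for illustration; consult Z.ai's own setup documentation for the actual values):

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
    "ANTHROPIC_AUTH_TOKEN": "your-zai-api-key"
  },
  "model": "GLM-5"
}
```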
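The cost multipliers quoted by @bridgemindai above ("6x cheaper than Opus on input and 10x cheaper on output") can be checked with a quick calculation, using only the per-million-token prices listed in that post (the price figures are taken from the quote, not independently verified):

```python
# Per-million-token (input, output) prices as quoted in the discussion above.
prices = {
    "GLM-5": (0.80, 2.56),
    "Claude Opus 4.6": (5.00, 25.00),
    "GPT 5.3 Codex": (1.75, 14.00),
}

glm_in, glm_out = prices["GLM-5"]
for name, (p_in, p_out) in prices.items():
    # Ratio of each model's price to GLM-5's price.
    print(f"{name}: {p_in / glm_in:.2f}x input, {p_out / glm_out:.2f}x output vs GLM-5")
```

Opus 4.6 comes out to 6.25x on input and about 9.77x on output, matching the rounded "6x / 10x" claim in the post.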