TEXXR

Chronicles

The story behind the story

Z.ai releases GLM-5.1, a 754B-parameter Mixture-of-Experts model that it says outperforms GPT-5.4 and Opus 4.6 on SWE-Bench Pro, available under an MIT license

Is China picking back up the open source AI baton?  —  Z.ai, also known as Zhipu AI, a Chinese AI startup best known for its powerful …

VentureBeat Carl Franzen

Discussion

  • @louszbd Lou on x
    we open-sourced glm-5.1. agents could do about 20 steps by the end of last year. glm-5.1 can do 1,700 rn. autonomous work time may be the most important curve after scaling laws. glm-5.1 will be the first point on that curve that the open-source community can verify with their own
  • @eliebakouch Elie on x
    GLM-5.1 sota on SWE Bench Pro 😮 [image]
  • @clementdelangue Clem on x
    The best performing model on SWE-Bench Pro is open-source on @huggingface! Welcome GLM 5.1! https://huggingface.co/... [image]
  • @ollama @ollama on x
    GLM-5.1 is here! Try it on OpenClaw🦞🦞🦞 ollama launch openclaw --model glm-5.1:cloud Claude Code ollama launch claude --model glm-5.1:cloud Chat with the model ollama run glm-5.1:cloud
  • @zai_org @zai_org on x
    SOTA on SWE-Bench Pro (58.4): GLM-5.1 delivers significant leaps in coding and agentic performance. [image]
  • @zai_org @zai_org on x
    Building a Linux Desktop from Scratch Using a self-review loop, GLM-5.1 spent 8 hours autonomously refining features, styling, and interactions to build a functional desktop environment. [video]
  • @yuchenj_uw Yuchen Jin on x
    Wow, GLM-5.1 beat Opus 4.6, GPT-5.4, and Gemini 3.1 Pro on SWE-Bench Pro (58.4 vs 57.3 / 57.7 / 54.2) as an open-weight MIT-licensed model! The “open-source AI vs closed-source AI” gap is still ~6 months. [image]
  • @zai_org @zai_org on x
    Introducing GLM-5.1: The Next Level of Open Source - Top-Tier Performance: #1 in open source and #3 globally across SWE-Bench Pro, Terminal-Bench, and NL2Repo. - Built for Long-Horizon Tasks: Runs autonomously for 8 hours, refining strategies through thousands of iterations. [image]
  • @zai_org @zai_org on x
    Vector-DB-Bench: 6x Performance Boost In high-performance database optimization, GLM-5.1 reached 21.5k QPS over 600+ iterations and 6,000+ tool calls. This is 6x the performance of a standard 50-turn session. [image]
  • @arena @arena on x
    A new open model has entered the Arena! GLM-5.1 by @Zai_org is now ready for your prompts in the Text and Code Arena. Come vote and let's see how it stacks up! [image]
  • @kimmonismus @kimmonismus on x
    Another big release: GLM-5.1! China is on fire! significant increase in evals compared to GLM-5.0 tl;dr GLM-5.1 is the new open-source agentic coding model that significantly outperforms its predecessor by sustaining long-horizon problem-solving over hundreds of iterations, [image]
  • @deryatr_ Derya Unutmaz on x
    This is crazy! https://z.ai/ has caught up with the SOTA models in coding with its GLM-5.1 open-weight model. This is a very big deal given that billions of $ are now spent on coding tokens! Congratulations to @Zai_org team, major achievement! [image]
  • @business @business on x
    Zhipu raised the cost of access to its most advanced AI model by at least 8%, joining other leading Chinese AI players in trying to profit off years of research and computing investments https://www.bloomberg.com/...
  • r/LocalLLaMA r on reddit
    GLM-5.1
  • @simonw Simon Willison on x
    754B parameters, 1.51TB on Hugging Face
  • @iterintellectus Vittorio on x
    1) what [image]
  • @zaiforstartups @zaiforstartups on x
    Most models still break mid-task not because they're not smart enough but because they can't stay in the loop 8-hour runs start to change that this is how agents stop breaking.
  • @zixuanli_ Zixuan Li on x
    Going quiet on X for a few days usually means something big is coming
  • @arena @arena on x
    GLM-5.1 by @Zai_org just launched in the Text Arena, and is now the #1 open model. It outperforms the next best open model, its predecessor, GLM-5, by +11 points and +15 over Kimi K2.5 Thinking. It shows strength in: - #1 open model in Longer Query (#4 overall) - #1 open model [image]
  • @theahmadosman Ahmad on x
    INCREDIBLE GLM-5.1 weights are now opensource > i've had early access to the weights for the past few days > and yeah... this one matters a lot benchmarks? > SWE-Bench Pro: 58.4 > beats Opus 4.6 (57.3) > beats GPT-5.4 (57.7) > beats Gemini 3.1 Pro (54.2) let that sink in [image]
  • @zephyr_z9 @zephyr_z9 on x
    BIG [image]
  • @ai_for_success AshutoshShrivastava on x
    Holy moly, we thought Tuesday would be dull, but GLM-5.1 is out and it's freaking open source too 😲 What will it take to run something like this locally? A fortune? [image]
  • @vllm_project @vllm_project on x
    🎉 Day-0 support for GLM-5.1 in vLLM! Congrats to @Zai_org on this next-gen flagship model built for agentic engineering, with stronger coding and sustained long-horizon task performance. Get started 👇 📖 Recipe: https://docs.vllm.ai/... [image]
  • @erikvoorhees Erik Voorhees on x
    Venice delivers in <5m Available for Free to Pro users and DIEM holders in API (this model is killer for agents... first one that feels anecdotally comparable to opus imho)
  • @cryptopunk7213 @cryptopunk7213 on x
    damn new GLM-5.1 model crushes anthropic and openai at agentic coding and 100% open source, how does china keep getting away with this shit? - beat opus 4.6 by 6X on an open-ended coding problem - long-task horizon: 600 turns at a time + 1000s of tool calls. usually ai's just [image]