TEXXR

Chronicles

The story behind the story


Z.ai, formerly known as Zhipu, which has raised $1.5B from Tencent and others, releases GLM-4.5, an open-source AI model that it says is cheaper to use than DeepSeek

Forums: Hacker News: GLM-4.5: Reasoning, Coding, and Agentic Abilities

CNBC / Evelyn Cheng

Discussion

  • @mary.my.id Mary on bluesky
    z.ai/blog/glm-4.5  —  chinese models really are taking over huh
  • @simonwillison.net Simon Willison on bluesky
    Pretty decent pelicans from the new GLM-4.5 and GLM-4.5 Air models.  Both models are MIT licensed, released by Chinese AI lab Z.ai this morning  —  simonwillison.net/2025/Jul/28/ ...  [images]
  • @timkellogg.me Tim Kellogg on bluesky
    fwiw this is a new model from z.ai, an open weights pair of models  —  i'm not holding my breath.  maybe it's good, but this isn't a good way to enter the scene  —  z.ai/blog/glm-4.5
  • @kalomaze @kalomaze on x
    GLM coming from Tsinghua University is fascinating to me. it signals how much more seriously China is taking the advancement of the tech no reason why MIT can't go, “yeah let's build AI at the frontier, we should be a frontier lab” ...yet they don't even try. where's the courage?
  • @theo @theo on x
    glm-4.5 is really really good. It is now available on T3 Chat! [image]
  • @casper_hansen_ Casper Hansen on x
    GLM 4.5 is 50% cheaper on their Mainland China AI platform until September 1st (called bigmodel). Their GLM 4.5 Air model in FP8 is also entirely free! [image]
  • @willccbb Will Brown on x
    you can host your own private GLM-4.5-Air endpoint for $1/hr [image]
  • @reach_vb @reach_vb on x
    NEW: GLM-4.5 & GLM-4.5-Air from @Zai_org - competitive w/ claude 4 opus and beats Gemini 2.5 Pro, MIT license🔥 > GLM-4.5: 355B total params, 32B active (MoE) > GLM-4.5-Air: 106B total params, 12B active (MoE) > “Thinking mode” (complex tasks) + “Non-thinking mode” (instant [image…
  • @openrouterai @openrouterai on x
    Possibly the fastest new model to launch on OpenRouter - introducing GLM-4.5 from a new model lab, @Zai_org ! Family of powerful, balanced models punching very high for their weight. Reasoning can be toggled on and off via API. See 👇 for more
  • @ivanfioravanti Ivan Fioravanti on x
    Here GLM-4.5 (the big one) running on Anycoder on Hugging Face. Great job @_akhaliq 🔥 [video]
  • @casper_hansen_ Casper Hansen on x
    o3 competitor: GLM 4.5 by Zhipu AI - hybrid reasoning model (on by default) - trained on 15T tokens - 128k context, 96k output tokens - $0.11 / 1M tokens - MoE: 355B A32B and 106B A12B Benchmark details: - tool calling: 90.6% success rate vs Sonnet's 89.5% vs Kimi K2 86.2% - [ima…
  • @ivanfioravanti Ivan Fioravanti on x
    And here it is! The new GLM-4.5! 355 billion total parameters with 32 billion active parameters! MoE... MLX will shine here! 🔥 [image]
  • @kimmonismus @kimmonismus on x
    GLM-4.5 sota open source reasoning model! Give me a break, I can hardly keep up with the models! - Total number of parameters: 355 billion - Active parameters per inference: 32 billion - Goal: High performance model for complex agent applications - Functions: Combines [image]
  • @kimmonismus @kimmonismus on x
    So, the week started with a new open source MIT-licensed SOTA model (GLM-4.5), an open source video model that is on par with Kling 2.0 (Wan-2.2) — and we know that at least OpenAI's open source model is coming this week. This is going to be a great week!
  • @yupp_ai Yupp on x
    📢 New Model Drop: GLM-4.5 & GLM-4.5 Air are now live on Yupp! These new flagship models from @Zai_org are designed to unify frontier reasoning, coding, and agentic capabilities. Here are some results from our testing! [image]
  • @itspaulai Paul Couvert on x
    Another Chinese lab (Zai) has released a powerful new model GLM-4.5 is on par with Opus 4 and VERY strong in coding and agentic tool use 🔥 → Open source → 32B active parameters → Lighter version GLM-4.5 Air → Hybrid reasoning models You can already use them for free ↓ [image]
  • @zai_org @zai_org on x
    Thanks to your overwhelming enthusiasm, https://z.ai/ Chat has hit its service capacity. We are working on adding more resources now. While you wait, feel free to dive into our Tech Blog—we've put a lot of sincere effort and detail into it. https://z.ai/...
  • @emollick Ethan Mollick on x
    These new open source models (GLM, Kimi) continue to be odd. Great stats, some solid performances, but also fail tests that DeepSeek & smaller closed models have beaten for months. [image]
  • @simonw Simon Willison on x
    Pretty decent pelicans from the new GLM-4.5 and GLM-4.5 Air models. Both models are MIT licensed, released by @Zai_org this morning https://simonwillison.net/... [image]
  • @bgurley Bill Gurley on x
    ...And just like that there is another one... Zhipu crushing benchmarks. $1.4B raised. Also open. @ZhipuAI @deepseek_ai , then @Kimi_Moonshot , now Zhipu. All co-evolving.
  • @zai_org @zai_org on x
    To better demonstrate the coding capabilities of GLM-4.5, we have developed a coding agent inspired by Claude Code. By providing a basic full-stack website boilerplate, the agent enables users to create an entire website with just a few words. /1 Build an interactive Pokémon [vid…
  • @zai_org @zai_org on x
    Leveraging GLM-4.5's powerful agentic tool usage and HTML coding capabilities, we developed a model-native Slides/Poster agent. /2 Create a PowerPoint presentation introducing Elon Musk's achievements and his performance in yesterday's Tour de France cycling race. [video]
  • @zai_org @zai_org on x
    Our hybrid post-training pipeline trains expert models for each domain (reasoning, agentic, general) using cold-start supervised fine-tuning (SFT) and specialized reinforcement learning (RL). The knowledge is then unified via large-scale SFT self-distillation, followed by
  • @zai_org @zai_org on x
    Introducing GLM-4.5 and GLM-4.5 Air: new flagship models designed to unify frontier reasoning, coding, and agentic capabilities.  GLM-4.5: 355B total / 32B active parameters GLM-4.5-Air: 106B total / 12B active parameters API Pricing (per 1M tokens): GLM-4.5: $0.6 Input / $2.2 Ou…
  • @zai_org @zai_org on x
    The model now creates sophisticated standalone artifacts—from interactive mini-games to physics simulations—across HTML, SVG, Python and other formats. /3 Create a 3D particle galaxy with swirling nebulas, dynamic lighting. [video]
  • @zai_org @zai_org on x
    On the SWE-bench Verified benchmark, our Pareto frontier analysis shows that GLM-4.5 and GLM-4.5-Air deliver the best performance at their respective scales. [image]
  • r/LLMDevs on reddit
    China's latest AI model claims to be even cheaper to use than DeepSeek
  • r/singularity on reddit
    GLM-4.5: Reasoning, Coding, and Agentic Abilities
  • r/technology on reddit
    China's latest AI model claims to be even cheaper to use than DeepSeek
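
The posts above quote concrete numbers: API pricing of $0.6 input / $2.2 output per 1M tokens for GLM-4.5 ($0.2 / $1.1 for Air), and MoE sizing of 355B total / 32B active parameters. A back-of-envelope sketch puts those figures in context; the per-request token counts in the example workload are hypothetical, and the FP8 footprint estimate assumes roughly one byte per weight.

```python
# Pricing and parameter counts as quoted in the launch posts;
# the example workload (20k in / 2k out per request) is hypothetical.

GLM_45 = {"input_per_m": 0.60, "output_per_m": 2.20}      # USD per 1M tokens
GLM_45_AIR = {"input_per_m": 0.20, "output_per_m": 1.10}

def request_cost(pricing, input_tokens, output_tokens):
    """Cost in USD of one request at per-million-token pricing."""
    return (input_tokens * pricing["input_per_m"]
            + output_tokens * pricing["output_per_m"]) / 1_000_000

cost_big = request_cost(GLM_45, 20_000, 2_000)
cost_air = request_cost(GLM_45_AIR, 20_000, 2_000)
print(f"GLM-4.5:     ${cost_big:.4f} per request")
print(f"GLM-4.5-Air: ${cost_air:.4f} per request")

# MoE sizing: 355B total / 32B active. Rough FP8 weight footprint
# is ~1 byte per parameter, so all weights must be resident even
# though only ~9% of them fire per token.
total_params, active_params = 355e9, 32e9
fp8_weights_gb = total_params / 1e9
active_fraction = active_params / total_params
print(f"FP8 weights: ~{fp8_weights_gb:.0f} GB; "
      f"{active_fraction:.1%} of params active per token")
```

At these rates the hypothetical agentic request costs under two cents on the flagship model, which is the kind of arithmetic behind the "cheaper than DeepSeek" framing in the headline.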