Chronicles

The story behind the story


Qwen releases Qwen3-Max-Thinking, its flagship reasoning model that it says demonstrates performance comparable to models such as GPT-5.2 Thinking and Opus 4.5

· QwenTeam · Introduction: We present Qwen3-Max-Thinking, our latest flagship reasoning model.

Qwen

Discussion

  • @alibaba_qwen on X
    🚀 Introducing Qwen3-Max-Thinking, our most capable reasoning model yet. Trained with massive scale and advanced RL, it delivers strong performance across reasoning, knowledge, tool use, and agent capabilities. ✨ Key innovations: ✅ Adaptive tool-use: intelligently leverages [image…]
  • @casper_hansen_ Casper Hansen on X
    1T-parameter model released by Qwen!! They finally compare to the best models available out there, unlike the rest of the open-weight providers. Although it's released, unfortunately it's not open weights on Hugging Face.
  • @_simonsmith Simon Smith on X
    New Qwen model significantly improves at Humanity's Last Exam with tool use and now appears to be SOTA here by a large margin? The main innovation driving this seems to be a non-parallel approach to test-time scaling where the model avoids redundant thinking. On the surface the [image…]
  • @kimmonismus on X
    The gap between US and Chinese models is closing even faster. [image]
  • @_thomasip Thomas Ip on X
    First major model of 2026 and it's from Qwen! It's a Chinese trillion+-parameter closed-source model that is SOTA on a few benchmarks. In particular it beats everyone by a wide margin on HLE (with search). All the US labs are dropping new models soon, so the lead from Qwen will
  • @justinlin610 Junyang Lin on X
    Besides the capabilities mentioned in the tweet, I would like to invite you to play with the new thinking model on https://chat.qwen.ai/. This time we level up the user experience by integrating search, code interpreter, and memory into thinking. Try to see if it is a better
  • r/singularity on Reddit
    Qwen3-Max-Thinking
  • r/LocalLLaMA on Reddit
    Pushing Qwen3-Max-Thinking Beyond its Limits
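Several of the posts above point readers at the chat UI. For programmatic access, here is a minimal sketch of an OpenAI-style chat-completions payload, the request shape most hosted Qwen models accept through OpenAI-compatible endpoints. The model id `qwen3-max-thinking` and the endpoint are assumptions for illustration; check the provider's documentation for the actual identifiers.

```python
import json

# NOTE: "qwen3-max-thinking" is a hypothetical model id used for
# illustration; the real id may differ on the hosting provider.
def build_chat_request(prompt: str, model: str = "qwen3-max-thinking") -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Reasoning models tend to need a generous output-token budget,
        # since the chain of thought counts against it.
        "max_tokens": 4096,
    }

payload = build_chat_request("Prove that the sum of two even integers is even.")
print(json.dumps(payload, indent=2))
# POST this JSON to the provider's /chat/completions endpoint with an
# Authorization: Bearer <API key> header.
```

The payload-building step is separated from the HTTP call so it can be inspected and tested offline; only the endpoint URL and credentials change between providers.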