Chronicles

The story behind the story


Alibaba's Hong Kong-listed shares hit a nearly four-year high after CEO Eddie Wu announced plans to increase AI spending beyond the company's three-year, $53B target

Alibaba Group Holding Ltd.'s shares surged to their highest in nearly four years after revealing plans to ramp up AI spending past …

Bloomberg (Luz Ding)

Discussion

  • @scaling01 on x
    Alibaba CEO is ASI pilled
  • @poezhao0605 Poe Zhao on x
    Alibaba Cloud AI metrics: 5x computing power growth, 4x storage growth year-over-year. Half of China's major foundation model companies and Fortune 500 firms using GenAI are on their platform. Scale matters in the AI race. [image]
  • @scaling01 on x
    apparently Alibaba Cloud 5x their available compute
  • r/wallstreetbets on reddit
    Alibaba Shares Soar After Hiking AI Budget Past $50 Billion
  • r/baba on reddit
    Alibaba Shares Jump After CEO Reveals Plans to Raise AI Spending
  • @alibaba_qwen on x
    We're excited to announce the upgrade of Qwen3-Coder, and the upgraded API ‘qwen3-coder-plus’ is now available on Alibaba Cloud Model Studio with major improvements: 💻 Enhanced terminal task capabilities and better performance on Terminal Bench (w/ Qwen Code / Claude Code) 🏆 [ima…
  • @emollick Ethan Mollick on x
    So far, Qwen3-Max seems impressive for a non-reasoning model, doing a good job at a lot of my weird tests that even some reasoners struggle with. [image]
  • @alibaba_qwen on x
    🚀 Introducing Qwen3-LiveTranslate-Flash — Real‑Time Multimodal Interpretation — See It, Hear It, Speak It! 🌐 Wide language coverage — Understands 18 languages & 6 dialects, speaks 10 languages. 👁️ Vision‑Enhanced Comprehension — Reads lips, gestures, on‑screen text and [image]
  • @alibaba_qwen on x
    🛡️ Meet Qwen3Guard — the Qwen3-based safety moderation model series built for global, real-time AI safety! 🌍 Supports 119 languages and dialects ✅ 3 sizes available: 0.6B, 4B, 8B ⚡ Low-latency, Real-time streaming detection with Qwen3Guard-Stream 📝 Robust Full-context safety [ima…
  • @bindureddy Bindu Reddy on x
    The leaders in open source are far and away Qwen and DeepSeek. The US still lags way behind in this category. For example, we still have to go all the way back to Llama if we need a fine-tune based on a US base model
  • @awnihannun Awni Hannun on x
    Just for fun, here's what 32 simultaneous long-context generations with Qwen3 Next 80B looks like on an M3 Ultra. Using the new batch generation in mlx-lm. Context size for each is about 5k tokens: [video]
  • @teortaxestex on x
    This is what innovation looks like in ML. Lots of small things combined. Qwen3 VL is SOTA. And yet... always something subtle missing with these guys. It's the first model that has seen *a* connection and dismissed it in favor of the cat's psyop. But its vision is clear at least.…
  • @lateinteraction Omar Khattab on x
    Sort of like calling a Qwen3-235B-A22B “qwen 50B” for short.
  • @theahmadosman Ahmad on x
    qwen3 omni technical paper summary > qwen3-omni is one model for everything > text, vision, audio, speech, and video > beats chatgpt 4o and gemini 2.5 in reasoning and recognition > 30B model, only 3B active parameters per token > runs on consumer hardware with ease, usable [imag…
  • @andimarafioti Andi Marafioti on x
    Qwen3-Omni is here, and it's a huge step for omni-modal AI. 🔥 Give it anything and get back text or surprisingly natural speech. Just watch the demo: it reads data from a table and then analyzes a snowboarder's balance in a video. The future is now. 🤯 [video]
  • @justinlin610 Junyang Lin on x
    This is the 1st shot! For a long time people just don't have any idea about the safety work that we have invested efforts in. This time, we show you our safety guard model, Qwen3Guard, specifically including generative guard Qwen3Guard-Gen and streaming guard model with
  • @_akhaliq on x
    qwen3-coder-plus is now available on Anycoder Enhanced terminal task capabilities and better performance on Terminal Bench (w/ Qwen Code / Claude Code) SWE-Bench performance up to 69.6 Safer code generation available as Qwen3-Coder-Plus-2025-09-23 [video]
  • @omarsar0 Elvis on x
    Qwen3-Omni Technical Report A unified multimodal model that matches same-size Qwen text-only and vision-only baselines while pushing audio and audio-visual SOTA. Key technical details below: [image]
  • @tianbaox Tianbao Xie on x
    After another half year, we are glad to bring Qwen3-VL! It's definitely the best open model you can access to start your digital agent and physical agent journey. Thanks to the whole team! @shuai_bai_ @huybery @DunjieLu1219 @xuhaiya2483846 @JustinLin610
  • @justinlin610 Junyang Lin on x
    This is the 5th shot! Super crazy! We opensourced a 235B-A22B Instruct and Thinking Qwen3-VL models under Apache 2.0! Qwen3-VL, the new generation of our vision-language model, whose previous version was released a long time ago. During these days, we have conducted a lot of
  • @sixsigmacapital on x
    $BABA This could be Alibaba's mini chat-GPT moment.
  • @arankomatsuzaki Aran Komatsuzaki on x
    RLPT: Reinforcement Learning on Pre-Training Data • RL directly on pre-train data (no human labels) • Next-segment reasoning objective (ASR + MSR tasks) → self-supervised rewards • Gains on Qwen3-4B: +3.0 MMLU, +8.1 GPQA-Diamond, +6.6 AIME24, +5.3 AIME25 [image]
  • @ai_for_success AshutoshShrivastava on x
    In the last 12 hours, Qwen has released: > Qwen3Guard > Personal AI Travel Designer > Qwen3-LiveTranslate-Flash > Upgrade Qwen3-Coder > Qwen3-VL-235B-A22B > Qwen3-Max The Qwen team is crazy 🔥 [image]
  • @tryagentsea on x
    Alibaba has: - best open weights image model (qwen image) - best open weights image editing model (qwen image edit 2509) - best sota open weights vision model (qwen3 vl) - best open weights video inpainting model (wan 2.2 animate) - one of the best foundation models (qwen3 max)
  • @justine_chang39 Justine Chang on x
    Ok just did my image cropping test on Qwen3 VL It is, ON PAR, if not BETTER than Gemini 2.5 Pro for my use case 🤯 This is the FIRST non-Gemini model to be able to do this. This is really really good!! @Alibaba_Qwen @huybery @JustinLin610 [image]
  • @reach_vb on x
    NEW: Qwen 235B A22B Vision Language Model is OUT! Apache 2.0 licensed and up to 1 million context length 🤯 https://huggingface.co/...
  • @mervenoyann Merve on x
    my vibe tests with Qwen3-Omni family of models > document performance with Instruct is very good 🎯 > video understanding is nice ⏯️ > Thinking performs better in English > I suggest using Captioner if you really want audio output; the other two hallucinate a bit [video]
  • @alibaba_qwen on x
    🚀 Qwen3-Max is here—no preview, just power! Qwen Chat: https://chat.qwen.ai/ Blog: https://qwen.ai/... API: https://www.alibabacloud.com/ ... We've supercharged coding & agentic skills—now Qwen3-Max-Instruct without thinking rivaling top models on SWE-Bench, Tau2-Bench, [image]
  • @chujiezheng Chujie Zheng on x
    Qwen3-VL: this is what many of you have always wanted. Enjoy 🍻
  • @alibaba_qwen on x
    🚀 We're thrilled to unveil Qwen3-VL — the most powerful vision-language model in the Qwen series yet! 🔥 The flagship model Qwen3-VL-235B-A22B is now open-sourced and available in both Instruct and Thinking versions: ✅ Instruct outperforms Gemini 2.5 Pro on key vision [image]
  • @huybery Binyuan Hui on x
    We have released Qwen3-Max, the most powerful Qwen model to date! By continuously scaling up model size, data, and RL tasks, great things have happened. This time, coding and agent capabilities have also been significantly enhanced—enjoy!
  • @jw2yang4ai Jianwei Yang on x
    🚀Excited to see Qwen3-VL released as the new SOTA open-source vision-language model! What makes it extra special is that it's powered by DeepStack, a technique I co-developed with Lingchen, who is now a core contributor of Qwen3-VL. When Lingchen and I developed this technique
  • @sungkim Sung Kim on bluesky
    Chinese AI has caught up with leading U.S.-based labs. Alibaba's release of Qwen3-Max places it alongside frontier AI players like Anthropic, Google, OpenAI, and xAI. qwen.ai/blog?id=2413... [images]
  • @timkellogg.me Tim Kellogg on bluesky
    Qwen3-Max: Just Scale It. It's now safe to say Qwen is a frontier lab. qwen.ai/blog?id=2413... [image]