
Chronicles

The story behind the story


Google launches Gemma 4, its “most intelligent” open model family, purpose-built for advanced reasoning and agentic workflows, under an Apache 2.0 license

C  —  O  —  Group Product Manager, Google DeepMind  —  Today, we are introducing Gemma 4 — our most intelligent open models to date.

The Keyword

Discussion

  • @kimmonismus @kimmonismus on x
    Here we go: Gemma 4 released: "Outperforms models 20x its size." Google dropped Gemma 4 under Apache 2.0, full open-source, big licensing shift. Built on Gemini 3 tech, four sizes: E2B, E4B, 26B MoE, 31B Dense. Price-performance: 31B is #3 open model on Arena AI, 26B MoE is #6 [i…
  • @xenovacom @xenovacom on x
    NEW: Google releases Gemma 4, their most capable open models yet! 🤯 Apache-2.0, multimodal (text, image, and audio input), and multilingual (140 languages)! They can even run 100% locally in your browser on WebGPU. Watch it describe the Artemis II launch! 🚀 Try the demo! 👇 [video…
  • @lmsysorg @lmsysorg on x
    🎉 Congrats on the Gemma 4 launch from @googlegemma, day-0 support is now live in SGLang! Gemma 4 is a multimodal family (4 sizes: E2B, E4B, 26B A4B, and 31B) with both Dense and MoE architectures, built for everything from mobile to server-scale: 👁️ Rich multimodal [image]
  • @mayhem4markets @mayhem4markets on x
    New Google Gemma 4 AI models just dropped 🔥 > 31B Dense + 26B MoE — competitive with GPT-4 class > Mobile versions with real-time vision/audio > 256K context > Autonomous agents with native tool use > Apache 2.0 license Build your own coding assistant. No API required. [image]
  • @sundarpichai Sundar Pichai on x
    Gemma 4 is here, and it's packing an incredible amount of intelligence per parameter 👇
  • @triswarkentin Tris Warkentin on x
    Gemma 4 is here! Performance that beats top open models at 10-20x smaller size. One truly amazing achievement: these are the first Gemma models to achieve state-of-the-art coding and agentic capabilities as well. We are excited to see what you build with them!
  • @jeffdean Jeff Dean on x
    Today we're releasing Gemma 4, our new family of open foundation models, built on the same research and technology as our Gemini 3 series. These models set a new standard for open intelligence, offering SOTA reasoning capabilities from edge-scale (2B and 4B w/ vision/audio) up
  • @dynamicwebpaige @dynamicwebpaige on x
    🙌 The future is open-source models!!
  • @osanseviero Omar Sanseviero on x
    Gemma 4 is here! 🧠 31B and 26B A4B for models with impressive intelligence per parameter 🤏E2B and E4B for mobile and IoT 🤗Apache 2.0 🤖Base and IT checkpoints available Available in AI Studio, Hugging Face, Ollama, Android, and your favorite OS tools 🚀Download it today! [image]
  • @teksedge David Hendrickson on x
    🚨 Gemma 4 is released. Open Source and ready to run on your RTX card, Mac Studio or Strix Halo PC. [image]
  • @clmt Clément Farabet on x
    💎💎💎💎 Huge news today: we're launching #Gemma4! Our most capable open models yet. 🔓 Apache 2.0: Complete flexibility and digital sovereignty 🧠 Advanced Reasoning: Multi-step planning and deep logic 🛠️ Agentic Workflows: Native support for function-calling and structured [image]
  • @matvelloso Mat Velloso on x
    Apache 2.0!! 👀
  • @mweinbach Max Weinbach on x
    New Gemma 4 models! 4 of them, Gemma 4 E2B & E4B for mobile Gemma 4 26B (MoE model!) & 31B for laptop/GPUs I'm going to try these out quite a bit more this afternoon
  • @_philschmid Philipp Schmid on x
    Gemma 4 is here! 4⃣Our most capable, agentic open model, built on the same research as Gemini 3. ✨ Reasoning. Multimodal. Four sizes (2B to 31B). Base + Instruct. Released under Apache 2.0. Runs on your phone, laptop, or servers. 🧵↓ [image]
  • @scaling01 @scaling01 on x
    Gemma-4 31B is insane
  • @thorwebdev @thorwebdev on x
    Meet Gemma 4: our most intelligent family of open models yet. 🚀 Built from Gemini 3 research, it delivers massive reasoning and agentic power in a footprint small enough to run locally! We're releasing it under Apache 2.0 so you can deploy state-of-the-art AI anywhere! 🥳 [video]
  • @natolambert Nathan Lambert on x
    Google dropped 4 different Gemma open-weight models! I'm most excited that they're finally adopting a standard Apache 2.0 open source license. This'll massively boost adoption. The standard of better licenses was set by mostly Chinese open model labs, and now labs in the U.S. [im…
  • @officiallogank Logan Kilpatrick on x
    Introducing Gemma 4, our series of open weight (Apache 2.0 licensed) models, which are byte for byte the most capable open models in the world! Gemma 4 is built to run on your hardware: phones, laptops, and desktops. Frontier intelligence with a 26B MOE and a 31B Dense model! [im…
  • @googledeepmind @googledeepmind on x
    Meet Gemma 4: our new family of open models you can run on your own hardware. Built for advanced reasoning and agentic workflows, we're releasing them under an Apache 2.0 license. Here's what's new 🧵 [image]
  • @googlegemma @googlegemma on x
    git commit -m “bump”
  • @minchoi Min Choi on x
    This is wild. Google just dropped Gemma 4. Apache 2.0, open weights, frontier models that run on phones, laptops, and desktops👇 [video]
  • @conorbronsdon Conor Bronsdon on x
    Gemma 4 is launched & live on @Modular Cloud with the fastest inference performance in the industry on both NVIDIA B200 and AMD MI355X 🥳 Day zero - and we're 15% faster than vLLM while offering the only platform that covers both architectures. Two models, two GPU platforms, [imag…
  • @kakatohesss Mathieu Leclercq on x
    🚀 Gemma 4 is switching to Apache 2.0, and it's a total game-changer for indie - 26B/31B locally on a laptop -> agent workflows, code, multimodal (text/audio/vision) - 2B/4B ultra-lightweight-> runs directly on a smartphone - Local-first -> Zero cloud bills, 100% data
  • @timkellogg.me Tim Kellogg on bluesky
    Gemma 4 Day  —  near-Kimi 2.5 on your laptop  — 32B & 26B-A4B  — effective 4B & 2B for mobile  — Apache 2  —  blog.google/innovation-a...  [embedded post]
  • r/Bard on reddit
    Gemma 4: Byte for byte, the most capable open models
  • @sriramk Sriram Krishnan on x
    Really excited for this launch of Gemma 4 from @demishassabis and the DeepMind team. Open source models are a key front for the west to have a lead on and this is a very key addition to the effort. Excited to see what developers in SV and around the world can build using this.
  • @itspaulai Paul Couvert on x
    Gemma 4 is even more impressive than it seems This new E4B is MUCH better than the previous 27B version... While being 6x smaller 🤯 So you've a model running on your phone that is superior to what you could run on a high-end computer 1 year ago. Even the E2B is insane. [image]
  • @demishassabis Demis Hassabis on x
    Excited to launch Gemma 4: the best open models in the world for their respective sizes. Available in 4 sizes that can be fine-tuned for your specific task: 31B dense for great raw performance, 26B MoE for low latency, and effective 2B & 4B for edge device use - happy building! […
  • r/Android on reddit
    Gemma 4: Byte for byte, the most capable open models
  • r/ollama on reddit
    Google's Gemma 4 has been published and is available under Apache 2.0 license
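Several posts above describe the 26B variant as an MoE with roughly 4B active parameters ("26B A4B"): a router selects a small subset of experts per token, so the compute and memory bandwidth per token track the active count rather than the total. A minimal toy sketch of top-k expert routing, assuming nothing about Gemma's actual architecture (dimensions, expert count, and router design here are purely illustrative):

```python
import numpy as np

def topk_moe(x, router_w, experts, k=2):
    """Toy top-k mixture-of-experts routing for a single token vector x.

    Only k of the experts run per token, which is why an MoE model's
    "active" parameter count is far below its total parameter count.
    """
    logits = x @ router_w                     # one router score per expert
    chosen = np.argsort(logits)[-k:]          # indices of the k best-scoring experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                  # softmax over the selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, chosen))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.standard_normal(d)
router_w = rng.standard_normal((d, n_experts))
# each "expert" here is a tiny linear map; real experts are full FFN blocks
expert_ws = [rng.standard_normal((d, d)) for _ in range(n_experts)]
experts = [lambda v, w=w: v @ w for w in expert_ws]

y = topk_moe(x, router_w, experts, k=2)
print(y.shape)  # (8,)
```

With k=2 of 4 experts, only half the expert weights touch this token; scale the same idea up and a 26B-total model can behave, per token, like a much smaller dense one.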