Google launches Gemma 4, its “most intelligent” open model family, purpose-built for advanced reasoning and agentic workflows, under an Apache 2.0 license
C — O — Group Product Manager, Google DeepMind — Today, we are introducing Gemma 4 — our most intelligent open models to date.
The Keyword
Related Coverage
- Google announces Gemma 4 open AI models, switches to Apache 2.0 license Ars Technica · Ryan Whitwam
- Google's new Gemma 4 models bring complex reasoning skills to low-power devices SiliconANGLE · Mike Wheatley
- Google's Gemma 4 model goes fully open-source and unlocks powerful local AI - even on phones ZDNET · David Gewirtz
- Google announces open Gemma 4 model with Apache 2.0 license 9to5Google · Abner Li
- Google releases Gemma 4, a family of open models built off of Gemini 3 Engadget · Igor Bonifacic
- Google launches Gemma 4 open-source LLM family Constellation Research · Larry Dignan
- From RTX to Spark: NVIDIA Accelerates Gemma 4 for Local Agentic AI NVIDIA · Michael Fukuyama
- Android Studio supports Gemma 4: our most capable local model for agentic coding Android Developers Blog
- Google rethinks the AI model race with Gemma 4 The Deep View · Sabrina Ortiz
- Introducing Gemma 4 on Google Cloud: Our most capable open models yet Google Cloud Blog · Richard Seroter
- Google previews Gemini Nano 4 for Android AICore, coming this year 9to5Google · Abner Li
- Announcing Gemma 4 in the AICore Developer Preview Android Developers Blog
- We've just released Gemma 4!! — Very strong models, small for you to run on your own hardware!! — AMA — (this is why I've been silent for so long!!!) … Gus Martins
- Gemma 4: Byte for byte, the most capable open models. Four new vision-capable Apache 2 … Simon Willison's Weblog · Simon Willison
- Google unveils Gemma 4, expands lightweight open model lineup for developers The Economic Times
- Google Jumps Back Into the Open Source AI Race With Gemma 4 Decrypt · Jose Antonio Lanz
- Microsoft and Google Launch New AI Models Thurrott · Laurent Giret
- Google releases Gemma 4 under Apache 2.0 — and that license change may matter more than benchmarks VentureBeat · Sam Witteveen
- Want to make the most of the new Gemma 4 AI models? RTX GPUs and PCs accelerate local AI like never before PCWorld
- Google launches Gemma 4: four open-weight models from smartphones to workstations The Next Web · Ana-Maria Stanciuc
- Google's Gemma 4 is now available with Apache 2.0 licensing for the first time The Decoder · Matthias Bastian
- Gemma 4 — Gemma 4 is Google's most capable family of open models, built from Gemini 3 research. Run Gemma 4 locally with LM Studio
- AI is moving from cloud dependency to local sovereignty (edge as they say in the tech world). Google launched Gemma4 today. It is an Open model. … M Mohan
- Google Announces Gemma 4 Open AI Models, Switches To Apache 2.0 License Slashdot · BeauHD
- Google's new Gemma 4 ‘open’ AI model sets developers free. The Verge · Terrence O'Brien
- Google Releases Gemma 4 Under Apache 2.0, Dropping Its Custom AI License Implicator.ai · Harkaram Grewal
- Google Launches ‘Truly Open Source’ Gemma 4 Techstrong.ai · Jon Swartz
- Gemma4 is out. Maybe I'm a bit biased but I do think these models are fantastic. Now download them and have fun this weekend! — https://lnkd.in/... Ravin Kumar
- Google releases its most powerful open-source AI models yet, that's free to use commercially Neowin · Karthik Mudaliar
- Why would Google release such a capable model as Gemma 4, as open source? (meaning, you can download it for free and use it for free if you have the computing power to do so). … Eric Fraser
- Google battles Chinese open-weights models with Gemma 4 The Register · Tobias Mann
- Gemma Terms of Use Google AI for Developers
- Defeating the ‘Token Tax’: How Google Gemma 4, NVIDIA, and OpenClaw are Revolutionizing Local Agentic AI: From RTX Desktops to DGX Spark MarkTechPost · Jean-Marc Mommessin
- What I'm seeing on the front lines of private markets — 1. OpenAI Codex and Claude Code are in a dead heat to capture the engineering market … Ed Brandman
- Google debuts Gemma 4 open AI models for local use TestingCatalog · Erin
Discussion
-
@kimmonismus
@kimmonismus
on x
Here we go: Gemma 4 released: "Outperforms models 20x its size." Google dropped Gemma 4 under Apache 2.0, full open-source, big licensing shift. Built on Gemini 3 tech, four sizes: E2B, E4B, 26B MoE, 31B Dense. Price-performance: 31B is #3 open model on Arena AI, 26B MoE is #6 [i…
-
@xenovacom
@xenovacom
on x
NEW: Google releases Gemma 4, their most capable open models yet! 🤯 Apache-2.0, multimodal (text, image, and audio input), and multilingual (140 languages)! They can even run 100% locally in your browser on WebGPU. Watch it describe the Artemis II launch! 🚀 Try the demo! 👇 [video…
-
@lmsysorg
@lmsysorg
on x
🎉 Congrats on the Gemma 4 launch from @googlegemma, day-0 support is now live in SGLang! Gemma 4 is a multimodal family (4 sizes: E2B, E4B, 26B A4B, and 31B) with both Dense and MoE architectures, built for everything from mobile to server-scale: 👁️ Rich multimodal [image]
-
@mayhem4markets
@mayhem4markets
on x
New Google Gemma 4 AI models just dropped 🔥 > 31B Dense + 26B MoE — competitive with GPT-4 class > Mobile versions with real-time vision/audio > 256K context > Autonomous agents with native tool use > Apache 2.0 license Build your own coding assistant. No API required. [image]
-
@sundarpichai
Sundar Pichai
on x
Gemma 4 is here, and it's packing an incredible amount of intelligence per parameter 👇
-
@triswarkentin
Tris Warkentin
on x
Gemma 4 is here! Performance that beats top open models at 10-20x smaller size. One truly amazing achievement: these are the first Gemma models to achieve state-of-the-art coding and agentic capabilities as well. We are excited to see what you build with them!
-
@jeffdean
Jeff Dean
on x
Today we're releasing Gemma 4, our new family of open foundation models, built on the same research and technology as our Gemini 3 series. These models set a new standard for open intelligence, offering SOTA reasoning capabilities from edge-scale (2B and 4B w/ vision/audio) up
-
@dynamicwebpaige
@dynamicwebpaige
on x
🙌 The future is open-source models!!
-
@osanseviero
Omar Sanseviero
on x
Gemma 4 is here! 🧠 31B and 26B A4B for models with impressive intelligence per parameter 🤏E2B and E4B for mobile and IoT 🤗Apache 2.0 🤖Base and IT checkpoints available Available in AI Studio, Hugging Face, Ollama, Android, and your favorite OS tools 🚀Download it today! [image]
-
@teksedge
David Hendrickson
on x
🚨 Gemma 4 is released. Open Source and ready to run on your RTX card, Mac Studio or Strix Halo PC. [image]
-
@clmt
Clément Farabet
on x
💎💎💎💎 Huge news today: we're launching #Gemma4! Our most capable open models yet. 🔓 Apache 2.0: Complete flexibility and digital sovereignty 🧠 Advanced Reasoning: Multi-step planning and deep logic 🛠️ Agentic Workflows: Native support for function-calling and structured [image]
-
@matvelloso
Mat Velloso
on x
Apache 2.0!! 👀
-
@mweinbach
Max Weinbach
on x
New Gemma 4 models! 4 of them, Gemma 4 E2B & E4B for mobile Gemma 4 26B (MoE model!) & 31B for laptop/GPUs I'm going to try these out quite a bit more this afternoon
-
@_philschmid
Philipp Schmid
on x
Gemma 4 is here! 4⃣Our most capable, agentic open model, built on the same research as Gemini 3. ✨ Reasoning. Multimodal. Four sizes (2B to 31B). Base + Instruct. Released under Apache 2.0. Runs on your phone, laptop, or servers. 🧵↓ [image]
-
@scaling01
@scaling01
on x
Gemma-4 31B is insane
-
@thorwebdev
@thorwebdev
on x
Meet Gemma 4: our most intelligent family of open models yet. 🚀 Built from Gemini 3 research, it delivers massive reasoning and agentic power in a footprint small enough to run locally! We're releasing it under Apache 2.0 so you can deploy state-of-the-art AI anywhere! 🥳 [video]
-
@natolambert
Nathan Lambert
on x
Google dropped 4 different Gemma open-weight models! I'm most excited that they're finally adopting a standard Apache 2.0 open source license. This'll massively boost adoption. The standard of better licenses was set by mostly Chinese open model labs, and now labs in the U.S. [im…
-
@officiallogank
Logan Kilpatrick
on x
Introducing Gemma 4, our series of open weight (Apache 2.0 licensed) models, which are byte for byte the most capable open models in the world! Gemma 4 is built to run on your hardware: phones, laptops, and desktops. Frontier intelligence with a 26B MoE and a 31B Dense model! [im…
-
@googledeepmind
@googledeepmind
on x
Meet Gemma 4: our new family of open models you can run on your own hardware. Built for advanced reasoning and agentic workflows, we're releasing them under an Apache 2.0 license. Here's what's new 🧵 [image]
-
@googlegemma
@googlegemma
on x
git commit -m “bump”
-
@minchoi
Min Choi
on x
This is wild. Google just dropped Gemma 4. Apache 2.0, open weights, frontier models that run on phones, laptops, and desktops👇 [video]
-
@conorbronsdon
Conor Bronsdon
on x
Gemma 4 is launched & live on @Modular Cloud with the fastest inference performance in the industry on both NVIDIA B200 and AMD MI355X 🥳 Day zero - and we're 15% faster than vLLM while offering the only platform that covers both architectures. Two models, two GPU platforms, [imag…
-
@kakatohesss
Mathieu Leclercq
on x
🚀 Gemma 4 is switching to Apache 2.0, and it's a total game-changer for indie - 26B/31B locally on a laptop -> agent workflows, code, multimodal (text/audio/vision) - 2B/4B ultra-lightweight-> runs directly on a smartphone - Local-first -> Zero cloud bills, 100% data
-
@timkellogg.me
Tim Kellogg
on bluesky
Gemma 4 Day — near-Kimi 2.5 on your laptop — 32B & 26B-A4B — effective 4B & 2B for mobile — Apache 2 — blog.google/innovation-a... [embedded post]
-
r/Bard
r/Bard
on reddit
Gemma 4: Byte for byte, the most capable open models
-
@sriramk
Sriram Krishnan
on x
Really excited for this launch of Gemma 4 from @demishassabis and the DeepMind team. Open source models are a key front for the west to have a lead on and this is a very key addition to the effort. Excited to see what developers in SV and around the world can build using this.
-
@itspaulai
Paul Couvert
on x
Gemma 4 is even more impressive than it seems This new E4B is MUCH better than the previous 27B version... While being 6x smaller 🤯 So you've a model running on your phone that is superior to what you could run on a high-end computer 1 year ago. Even the E2B is insane. [image]
-
@demishassabis
Demis Hassabis
on x
Excited to launch Gemma 4: the best open models in the world for their respective sizes. Available in 4 sizes that can be fine-tuned for your specific task: 31B dense for great raw performance, 26B MoE for low latency, and effective 2B & 4B for edge device use - happy building! […
-
r/Android
r/Android
on reddit
Gemma 4: Byte for byte, the most capable open models
-
r/ollama
r/ollama
on reddit
Google's Gemma 4 has been published and is available under Apache 2.0 license