Google launches Gemma 4, its “most intelligent” open model family, built for advanced reasoning and agentic workflows and available under an Apache 2.0 license
Today, we are introducing Gemma 4 — our most intelligent open models to date. Purpose-built for advanced reasoning …
The Keyword
Related Coverage
- Google Unveils Gemma 4: Open AI Model That Runs Directly on Smartphones The Hans India · Kahekashan
- Google unveils Gemma 4 open models that can run on smartphones, PC: Details Business Standard · Aashish Kumar Shrivastava
- From RTX to Spark: NVIDIA Accelerates Gemma 4 for Local Agentic AI NVIDIA · Michael Fukuyama
- Google announces Gemma 4 open AI models, switches to Apache 2.0 license Ars Technica · Ryan Whitwam
- [AINews] Gemma 4: The best small Multimodal Open Models, dramatically better than Gemma 3 in every way Latent.Space · Engagement
- Announcing Gemma 4 in the AICore Developer Preview Android Developers Blog
- Gemma 4 — Gemma 4 is Google's most capable family of open models, built from Gemini 3 research. Run Gemma 4 locally with LM Studio
- ✨Gemma 4 - How to Run Locally — Run Google's new Gemma 4 models locally, including E2B, E4B, 26B A4B, and 31B. Unsloth
- Gemma Terms of Use Google AI for Developers
- Google launches Gemma 4 AI models: Features, capabilities and how to use Digit · Ayushi Jain
- Gemma 4 imbibes Google's sharpest AI instincts, and is more welcoming Hindustan Times · Vishal Mathur
- Google releases Gemma 4 under Apache 2.0 — and that license change may matter more than benchmarks VentureBeat · Sam Witteveen
- Google's Gemma 4 is now available with Apache 2.0 licensing for the first time The Decoder · Matthias Bastian
- Google launches Gemma 4, a new open-source model: How to try it Mashable · Matt Binder
- Four Open Models Just Proved You Can Own Frontier AI at Every Scale TheNeuron · Grant Harvey
- Gemma 4: Expanding the Gemmaverse with Apache 2.0 Google Open Source Blog
- Google battles Chinese open-weights models with Gemma 4 The Register · Tobias Mann
- Google debuts Gemma 4 open AI models for local use TestingCatalog · Erin
- Defeating the ‘Token Tax’: How Google Gemma 4, NVIDIA, and OpenClaw are Revolutionizing Local Agentic AI: From RTX Desktops to DGX Spark MarkTechPost · Jean-Marc Mommessin
- Bringing AI Closer to the Edge and On-Device with Gemma 4 NVIDIA Technical Blog · Anu Srivastava
- Google's new Gemma 4 ‘open’ AI model sets developers free. The Verge · Terrence O'Brien
- Google releases its most powerful open-source AI models yet, free to use commercially Neowin · Karthik Mudaliar
- Google Launches ‘Truly Open Source’ Gemma 4 Techstrong.ai · Jon Swartz
- Google Releases Gemma 4 Under Apache 2.0, Dropping Its Custom AI License Implicator.ai · Harkaram Grewal
- Google launches Gemma 4: four open-weight models from smartphones to workstations The Next Web · Ana-Maria Stanciuc
- Gemma 4: Byte for byte, the most capable open models. Four new vision-capable Apache 2 … Simon Willison's Weblog · Simon Willison
- Google unveils Gemma 4, expands lightweight open model lineup for developers The Economic Times
- Google Jumps Back Into the Open Source AI Race With Gemma 4 Decrypt · Jose Antonio Lanz
- Microsoft and Google Launch New AI Models Thurrott · Laurent Giret
- Google's new Gemma 4 models bring complex reasoning skills to low-power devices SiliconANGLE · Mike Wheatley
- Google's Gemma 4 model goes fully open-source and unlocks powerful local AI - even on phones ZDNET · David Gewirtz
- Google announces open Gemma 4 model with Apache 2.0 license 9to5Google · Abner Li
- It's great to see continued momentum behind open models! — Gemma 4, released today, can be experienced as NVIDIA Build APIs or downloaded as a NIM … Kari Ann Briski
- What I'm seeing on the front lines of private markets — 1. OpenAI Codex and Claude Code are in a dead heat to capture the engineering market … Ed Brandman
- Why would Google release such a capable model as Gemma 4, as open source? (meaning, you can download it for free and use it for free if you have the computing power to do so). … Eric Fraser
- Gemma4 is out. Maybe I'm a bit biased but I do think these models are fantastic. Now download them and have fun this weekend! — https://lnkd.in/... Ravin Kumar
- AI is moving from cloud dependency to local sovereignty (edge as they say in the tech world). Google launched Gemma4 today. It is an Open model. … M Mohan
- We've just released Gemma 4!! — Very strong models, small for you to run on your own hardware!! — AMA — (this is why I've been silent for so long!!!) … Gus Martins
- Google Announces Gemma 4 Open AI Models, Switches To Apache 2.0 License Slashdot · BeauHD
- Google unveils Gemma 4 as its most advanced open AI model for reasoning and agentic tasks crypto.news · Rony Roy
- Gemma 4 — Our most intelligent open models, built from Gemini 3 research and technology to maximize intelligence-per-parameter Google DeepMind
- Your next Android flagship may get a big Gemini Nano 4 boost Digital Trends · Paulo Vargas
- Google Unveils Gemma 4: Next-Gen Open AI Model with Autonomous Agent Capabilities Blockonomi · Oliver Dale
- Google Releases Gemma 4 Open Models Under Apache 2.0 License WinBuzzer · Markus Kasanmascheff
- I'm excited to share what my team has been working on: the Gemma 4 Developer Preview for Android is live! — Grab the preview, test the ML Kit Prompt API, and let me know what you built. … Ján Švehla
- Google releases Gemma 4 open models Hacker News
- Gemma 4 explained: How Google is bringing AI to more developers The Indian Express
- Google announces Gemma 4, its most powerful open source model Moneycontrol · Sarthak Singh
- Google gives enterprises new controls to manage AI inference costs and reliability InfoWorld
- Google launches open model Gemma 4 CGTN
Discussion
-
@demishassabis
Demis Hassabis
on x
Excited to launch Gemma 4: the best open models in the world for their respective sizes. Available in 4 sizes that can be fine-tuned for your specific task: 31B dense for great raw performance, 26B MoE for low latency, and effective 2B & 4B for edge device use - happy building! […
-
@clementdelangue
Clem
on x
You can run Gemma 4 100% locally in your browser thanks to HF transformers.js. That means 100% private and 100% free! @xenovacom created a demo for it here: https://huggingface.co/... [image]
-
@rasbt
Sebastian Raschka
on x
Flagship open-weight release days are always exciting. Was just reading through the Gemma 4 reports, configs, and code, and here are my takeaways: Architecture-wise, besides multimodal support, Gemma 4 (31B) looks pretty much unchanged compared to Gemma 3 (27B). Gemma 4 [image]
-
@kimmonismus
@kimmonismus
on x
A 12-month time difference between Gemma 3 27b and Gemma 4 31b. The jump is absolutely enormous. Just look at the evaluations between the two models. GPQA doubled, AIME 2026 went from ~20% to ~90%, and so on. Crazy. [image]
-
@matvelloso
Mat Velloso
on x
Apache 2.0!! 👀
-
@artificialanlys
@artificialanlys
on x
Google has released Gemma 4, a new family of multimodal open-weight models including Gemma 4 E2B, Gemma 4 E4B, Gemma 4 31B and Gemma 4 26B A4B @GoogleDeepMind's new Gemma 4 family introduces four multimodal models supporting text, image, and video inputs. We evaluated Gemma 4 [im…
-
@clattner_llvm
Chris Lattner
on x
Google DeepMind's impressive fully-open Gemma 4 is live day-zero on Modular Cloud. Modular provides the fastest performance on NVIDIA Blackwell and AMD MI355X, thanks to MAX and Mojo🔥. The team took this impressive new model to production inference in days.🚀
-
@googledeepmind
@googledeepmind
on x
Meet Gemma 4: our new family of open models you can run on your own hardware. Built for advanced reasoning and agentic workflows, we're releasing them under an Apache 2.0 license. Here's what's new 🧵 [image]
-
@xenovacom
@xenovacom
on x
NEW: Google releases Gemma 4, their most capable open models yet! 🤯 Apache-2.0, multimodal (text, image, and audio input), and multilingual (140 languages)! They can even run 100% locally in your browser on WebGPU. Watch it describe the Artemis II launch! 🚀 Try the demo! 👇 [video…
-
@googleai
@googleai
on x
Today, we're launching Gemma 4, our most intelligent open models to date. Built with the same breakthrough technology as Gemini 3, Gemma 4 brings advanced reasoning to your personal hardware and devices. Here's what Gemma 4 unlocks for developers: — Intelligence-per-parameter: [v…
-
@google
@google
on x
We just released Gemma 4 — our most intelligent open models to date. Built from the same world-class research as Gemini 3, Gemma 4 brings breakthrough intelligence directly to your own hardware for advanced reasoning and agentic workflows. Released under a commercially [video]
-
@jeffdean
Jeff Dean
on x
Today we're releasing Gemma 4, our new family of open foundation models, built on the same research and technology as our Gemini 3 series. These models set a new standard for open intelligence, offering SOTA reasoning capabilities from edge-scale (2B and 4B w/ vision/audio) up
-
@arena
@arena
on x
Gemma-4-31B is now live in Text Arena - ranking #3 among open models (#27 overall), matching much larger models at 10× smaller scale! A significant jump from Gemma-3-27B (+87 pts).
-
@artificialanlys
@artificialanlys
on x
Gemma 4 31B (Reasoning) is very token efficient, using ~1.2M tokens on the GPQA Diamond evaluation, fewer than peer models such as Qwen3.5 27B (~1.5M) and Qwen3.5 35B A3B (~1.6M) [image]
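The relative savings quoted above work out as follows; the absolute token counts come from the post, and the percentages are simple derived arithmetic:

```python
# Approximate GPQA Diamond token usage as quoted in the post above.
usage = {
    "Gemma 4 31B (Reasoning)": 1.2e6,
    "Qwen3.5 27B": 1.5e6,
    "Qwen3.5 35B A3B": 1.6e6,
}

gemma = usage["Gemma 4 31B (Reasoning)"]
for model, tokens in usage.items():
    # Percentage fewer tokens Gemma uses relative to each model.
    saving = (tokens - gemma) / tokens * 100
    print(f"{model}: Gemma uses {saving:.0f}% fewer tokens")
# → roughly 20% fewer than Qwen3.5 27B and 25% fewer than Qwen3.5 35B A3B
```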
-
@ggerganov
Georgi Gerganov
on x
Let me demonstrate the true power of llama.cpp: - Running on Mac Studio M2 Ultra (3 years old) - Gemma 4 26B A4B Q8_0 (full quality) - Built-in WebUI (ships with llama.cpp) - MCP support out of the box (web-search, HF, github, etc.) - Prompt speculative decoding The result: [vide…
-
@arena
@arena
on x
Gemma 4 by @GoogleDeepMind debuts at 3rd and 6th on the open source leaderboard, making it the #1 ranked US open source model. By total parameter count, Gemma 4 31B is 24× smaller than GLM-5 and 34× smaller than Kimi-K2.5-Thinking, delivering comparable performance at a [image]
-
@clementdelangue
Clem
on x
So happy to see Google release Gemma 4 today in apache 2.0 that gives you frontier capabilities locally. You can use it right away in all your favorite open agent platforms like openclaw, opencode, pi, Hermes by asking it to change your model to local gemma 4 with [image]
-
@mweinbach
Max Weinbach
on x
Gemma 4 E2B on iPhone 17 Pro Max in AI Edge Gallery! [video]
-
@nvidiaaidev
@nvidiaaidev
on x
🙌 Congrats @GoogleDeepMind and teams on the release of your @googlegemma 4 models!🎉 The new multimodal and multilingual models are built for fast, efficient, and secure AI across devices - and optimized to run locally on NVIDIA RTX, RTX PRO, DGX Spark, and Jetson. 👉 [image]
-
@qualcomm
@qualcomm
on x
With day-zero enablement on @Snapdragon, we're delivering Gemma 4 to developers early, unlocking fast access to next-gen on-device AI. Congrats Qualcomm and @googlegemma teams on moving open-source AI forward.
-
@erikvoorhees
Erik Voorhees
on x
Upgrade your @openclaw to Gemma 4, now available for all agents, with total privacy, at https://venice.ai/api Pairs nicely with x402 support, also launched in Venice today
-
@googlegemma
@googlegemma
on x
💻Code Generation: Gemma 4 supports high-quality offline code, turning your workstation into a powerful, local-first AI code assistant.
-
@demishassabis
Demis Hassabis
on x
Available now under Apache 2.0 license in @GoogleAIStudio or download the model weights from @HuggingFace, @Kaggle and @Ollama. 400M downloads and 100K variants to date, Gemma goes from strength to strength. More info: https://blog.google/...
-
@docker
@docker
on x
Gemma 4 is now on Docker Hub! Gemma 4 supports a wide range of applications, and Docker Hub - which hosts a curated set of AI models, packaged as OCI artifacts - makes it simple to pull and run. Learn more: https://www.docker.com/...
-
@mweinbach
Max Weinbach
on x
The Gemma4 models are by far the best smaller sized open models It's not even close in terms of model behavior
-
@itspaulai
Paul Couvert
on x
Gemma 4 is even more impressive than it seems This new E4B is MUCH better than the previous 27B version... While being 6x smaller 🤯 So you've a model running on your phone that is superior to what you could run on a high-end computer 1 year ago. Even the E2B is insane. [image]
-
@hamandcheese
Samuel Hammond
on x
China has incentives to create powerful, small open models because they are cut-off from American AI chips. Google has incentives to create powerful, small open models so they can serve AI overviews on billions of search queries without tortious hallucinations. USA! 🇺🇲
-
@androiddev
@androiddev
on x
Gemma 4 brings the next gen of on-device AI to Android. Get code-assistance in @AndroidStudio and build intelligent experiences that run locally using the ML Kit GenAI Prompt API. Start building agentic experiences on-device → http://android-developers.googleblog.com/... [image…
-
@kaggle
@kaggle
on x
Now available on Kaggle: Gemma 4 🤖 In partnership with @GoogleDeepMind, we're launching the Gemma 4 Good Hackathon. Use multimodal power and native function calling to solve real-world challenges in health, education and climate. $200K Prize Pool Final Submission: May 18, 2026
-
@simonw
Simon Willison
on x
Pelicans for Gemma 4 E2B, E4B, 26B-A4B and 31B - the first three generated on my laptop via LM Studio, the 31B was broken on my laptop so I ran it via the Gemini API instead https://simonwillison.net/... [image]
-
@lmsysorg
@lmsysorg
on x
🎉 Congrats on the Gemma 4 launch from @googlegemma, day-0 support is now live in SGLang! Gemma 4 is a multimodal family (4 sizes: E2B, E4B, 26B A4B, and 31B) with both Dense and MoE architectures, built for everything from mobile to server-scale: 👁️ Rich multimodal [image]
-
@mayhem4markets
@mayhem4markets
on x
New Google Gemma 4 AI models just dropped 🔥 > 31B Dense + 26B MoE — competitive with GPT-4 class > Mobile versions with real-time vision/audio > 256K context > Autonomous agents with native tool use > Apache 2.0 license Build your own coding assistant. No API required. [image]
-
@dorialexander
Alexander Doria
on x
Ok that's the best news of Gemma release so far and I guess this comes with much needed clarification for synthetic data reuse. Synth pipelines need fully open generators.
-
@sriramk
Sriram Krishnan
on x
Really excited for this launch of Gemma 4 from @demishassabis and the DeepMind team. Open source models are a key front for the west to have a lead on and this is a very key addition to the effort. Excited to see what developers in SV and around the world can build using this.
-
@mweinbach
Max Weinbach
on x
Google's Gemma E2B/E4B are still the most interesting model architecture to me Especially with multimodal audio input as well as photo [image]
-
@googlegemma
@googlegemma
on x
Meet Gemma 4! Purpose-built for advanced reasoning and agentic workflows on the hardware you own, and released under an Apache 2.0 license. We listened to invaluable community feedback in developing these models. Here is what makes Gemma 4 our most capable open models yet: 👇 [ima…
-
@triswarkentin
Tris Warkentin
on x
Gemma 4 is here! Performance that beats top open models at 10-20x smaller size. One truly amazing achievement: these are the first Gemma models to achieve state-of-the-art coding and agentic capabilities as well. We are excited to see what you build with them!
-
@nvidiarobotics
@nvidiarobotics
on x
Congrats to the @GoogleGemma team on your launch of Gemma 4. Jetson developers can now run these new multimodal, multilingual models at the edge—from Jetson Orin Nano all the way up to Jetson Thor—to cut latency, manage costs, and keep sensitive data secure on device. Whether
-
@dynamicwebpaige
@dynamicwebpaige
on x
🙌 The future is open-source models!!
-
@osanseviero
Omar Sanseviero
on x
Gemma 4 is here! 🧠 31B and 26B A4B for models with impressive intelligence per parameter 🤏E2B and E4B for mobile and IoT 🤗Apache 2.0 🤖Base and IT checkpoints available Available in AI Studio, Hugging Face, Ollama, Android, and your favorite OS tools 🚀Download it today! [image]
-
@natolambert
Nathan Lambert
on x
Google dropped 4 different Gemma open-weight models! I'm most excited that they're finally adopting a standard Apache 2.0 open source license. This'll massively boost adoption. The standard of better licenses was set by mostly Chinese open model labs, and now labs in the U.S. [im…
-
@googleaidevs
@googleaidevs
on x
Welcome to the family, @GoogleGemma 4️⃣ Gemma 4 are our most intelligent open models designed to run efficiently on every device, engineered to give developers total control over their deployments. [video]
-
@minchoi
Min Choi
on x
This is wild. Google just dropped Gemma 4. Apache 2.0, open weights, frontier models that run on phones, laptops, and desktops👇 [video]
-
@kaggle
@kaggle
on x
Ready to build with Gemma 4? Following the Gemma 4 Good hackathon kickoff, we've added the Gemma 4 26B and 31B models to Kaggle Benchmarks! Experiment with their multimodal capabilities by handling text and images. See how they perform on your custom evaluation sets.
-
@clementdelangue
Clem
on x
this is Gemma 4 running locally on a 3 year old mac meaning: - free (=$0 no matter how much you use) - safe (you're not leaking all your data via unsafe APIs) - fast (as you can see)
-
@gregisenberg
Greg Isenberg
on x
thinking about google's gemma 4 and what it means a few months ago running something this capable locally meant serious hardware and serious tradeoffs on quality now it runs on your laptop, works offline on your phone (!!!), speaks 140 languages natively, 256k context window,
-
@arm
@arm
on x
Real-time assistance. Seamless communication. Greater personalization. On-device AI with Gemma 4, built on Arm. 💪
-
@atomic_chat_hq
@atomic_chat_hq
on x
Running Hermes agent Locally with Gemma4 Device: Macbook Air CPU: M4 RAM: 16GB Open Source. Free. Private. With TurboQuant cache in @Atomic_Chat_HQ app [video]
-
@rseroter
Richard Seroter
on x
Gemma 4! Our most intelligent family of open models, with a commercially-permissive Apache 2 license, that you can use on the server, edge, or desktop. It's a reasoning model that's multimodal (including audio!) and supports tool use. Blog: https://blog.google/... Available on [i…
-
@jacalulu
Jaclyn Konzelmann
on x
Gemma 4 has arrived. 💎 State-of-the-art open models in 4 sizes, featuring native function-calling, multimodal edge AI (vision/audio), and up to 256K context. Open, flexible, and ready to run anywhere. 💻✨
-
@osanseviero
Omar Sanseviero
on x
As part of the Gemma 4 release, we're launching Agent Skills: an Android app experience where you can import different skills and have Gemma 4 E2B reason and use the skills! Running entirely on the phone, available in the Google PlayStore. Try it now! [video]
-
@kimmonismus
@kimmonismus
on x
Here we go: Gemma 4 released: "Outperforms models 20x its size" Google dropped Gemma 4 under Apache 2.0, full open-source, big licensing shift. Built on Gemini 3 tech, four sizes: E2B, E4B, 26B MoE, 31B Dense. Price-performance: 31B is #3 open model on Arena AI, 26B MoE is #6 [i…
-
@mweinbach
Max Weinbach
on x
New Gemma 4 models! 4 of them, Gemma 4 E2B & E4B for mobile Gemma 4 26B (MoE model!) & 31B for laptop/GPUs I'm going to try these out quite a bit more this afternoon
-
@officiallogank
Logan Kilpatrick
on x
Introducing Gemma 4, our series of open weight (Apache 2.0 licensed) models, which are byte for byte the most capable open models in the world! Gemma 4 is built to run on your hardware: phones, laptops, and desktops. Frontier intelligence with a 26B MOE and a 31B Dense model! [im…
-
@cyb3rops
Florian Roth
on x
Gemma 4 outperforms all other open source models in my cyber security related benchmark set
-
@googleoss
@googleoss
on x
Autonomy, Control, Clarity: Gemma 4 models are now under the industry-standard Apache 2.0 license. We're committed to empowering developers and researchers in the #OpenSource #AI space. Learn more about the next chapter of the #Gemmaverse! https://goo.gle/...
-
@measure_plan
@measure_plan
on x
i spent the afternoon experimenting with Gemma 4's vision capabilities made an app that uses roboflow RF-DETR for a first pass of object detections and Gemma to summarize the scene in one sentence for fun i asked Gemma to “describe what you see as if you were a medieval bard” [vi…
-
@sundarpichai
Sundar Pichai
on x
Gemma 4 is here, and it's packing an incredible amount of intelligence per parameter 👇
-
@mweinbach
Max Weinbach
on x
Oh Gemma 4 This is actually super cool. Early Jinja errors whatever, but this is the first time I've EVER seen a google model notice it was in a loop and end itself. That's actually insanely cool. [image]
-
@chanduthota
Chandu Thota
on x
Super excited to introduce Gemma 4 - our most intelligent open models to date. Purpose-built for advanced reasoning and agentic workflows, Gemma 4 delivers an amazing level of intelligence performance per parameter. What makes Gemma 4 special? Many reasons, but here is a short
-
@clmt
Clément Farabet
on x
💎💎💎💎 Huge news today: we're launching #Gemma4! Our most capable open models yet. 🔓 Apache 2.0: Complete flexibility and digital sovereignty 🧠 Advanced Reasoning: Multi-step planning and deep logic 🛠️ Agentic Workflows: Native support for function-calling and structured [image]
-
@googledevs
@googledevs
on x
Gemma 4 is here! Our most intelligent open models to date are built on the same world-class research and tech as Gemini 3, and are sized to run and fine-tune efficiently on local hardware. Check out what @GoogleGemma 4 brings to devs: 💎 Advanced Reasoning: Deep logic tasks, [vid…
-
@ai
Anand Iyer
on x
Google appears to be running the Android playbook on inference. Gemma 4 was shipped under Apache 2.0, which is its most permissive open-model license yet. And its smallest model runs frontier-class math on a Raspberry Pi. Meanwhile, Llama hasn't shipped a competitive open mode…
-
@thorwebdev
@thorwebdev
on x
Meet Gemma 4: our most intelligent family of open models yet. 🚀 Built from Gemini 3 research, it delivers massive reasoning and agentic power in a footprint small enough to run locally! We're releasing it under Apache 2.0 so you can deploy state-of-the-art AI anywhere! 🥳 [video]
-
@kakatohesss
Mathieu Leclercq
on x
🚀 Gemma 4 is switching to Apache 2.0, and it's a total game-changer for indie - 26B/31B locally on a laptop -> agent workflows, code, multimodal (text/audio/vision) - 2B/4B ultra-lightweight-> runs directly on a smartphone - Local-first -> Zero cloud bills, 100% data
-
@_philschmid
Philipp Schmid
on x
Gemma 4 is here! 4⃣Our most capable, agentic open model, built on the same research as Gemini 3. ✨ Reasoning. Multimodal. Four sizes (2B to 31B). Base + Instruct. Released under Apache 2.0. Runs on your phone, laptop, or servers. 🧵↓ [image]
-
@googledevs
@googledevs
on x
Go beyond chatbots. Build your next AI agent on your own device. 🤖 Use #GoogleAIEdge to bring Gemma 4's power to the edge. Experiment in the Gallery app or deploy to any device - laptop, mobile, IoT - via LiteRT-LM. What will you build? Learn how: https://developers.googleblog.co…
-
@googlecloudtech
@googlecloudtech
on x
Introducing Gemma 4 on Google Cloud: Our most capable open models yet. With massive context windows up to 256K, native vision + audio processing, and fluency in 140+ languages, they excel at complex logic, offline code generation, and agentic workflows → https://cloud.google.com/…
-
@teksedge
David Hendrickson
on x
🚨 Gemma 4 is released. Open Source and ready to run on your RTX card, Mac Studio or Strix Halo PC. [image]
-
@nvidiarobotics
@nvidiarobotics
on x
Exciting news for Jetson developers 🎉 Gemma 4 is now on Jetson. @GoogleGemma's latest multimodal, multilingual models run across the full Jetson platform—from Orin Nano to Thor—bringing on-device AI to robotics, edge, and embedded systems. Cut latency, manage costs, and keep
-
@conorbronsdon
Conor Bronsdon
on x
Gemma 4 is launched & live on @Modular Cloud with the fastest inference performance in the industry on both NVIDIA B200 and AMD MI355X 🥳 Day zero - and we're 15% faster than vLLM while offering the only platform that covers both architectures. Two models, two GPU platforms, [imag…
-
@jocarrasqueira
Joana Carrasqueira
on x
💎 is here! This is an incredible work from the Gemma team! @o_lacombe @gusthema @lucianommartins @GlennCameronjr and others 👏🏼
-
@preethilahoti
Preethi Lahoti
on x
So proud to have led the safety efforts for the Gemma 4 family. 💎 It's incredible to see Gemma 4 31B rivaling giants like GLM-5 and Kimi-K2.5-Thinking at a fraction of the footprint. Huge win for the ecosystem. 🚀 Safety + Efficiency + Open Weights = The future.
-
@scaling01
@scaling01
on x
Gemma-4 31B is insane
-
@o_lacombe
Olivier Lacombe
on x
Say hi to Gemma 4 💎💎💎💎 Built for advanced reasoning and agentic workflows, Gemma 4 delivers amazing performance in a highly efficient package. ✨ 4 sizes: E2B, E4B, 26B (MoE), & 31B 🧠 Complex reasoning & logic 🛠️ Native function-calling 🖼️ Multimodal (Vision/Audio) 🔓 [image]
-
@dynamicwebpaige
@dynamicwebpaige
on x
💎And you can experiment with the Gemma 4 models today, for free, sans local downloads! Just head over to @GoogleAIStudio, test it out for your use cases (including tool calls and multimodal inputs!), then click “Get Code” for inference via the Gemini APIs in Python, TS, more: [vi…
-
@androidstudio
@androidstudio
on x
Gemma 4 is now available in Android Studio! By running Gemma 4 locally on your machine you get access to AI code-assistance that doesn't require an internet connection or an API key for its core operations — so you'll never run out of token quota → http://android-developers.googl…
-
@googlegemma
@googlegemma
on x
git commit -m “bump”
-
@nic_carter
Nic Carter
on x
first vibecoded billion-dollar company? [image]
-
@timkellogg.me
Tim Kellogg
on bluesky
Gemma 4 Day — near-Kimi 2.5 on your laptop — 31B & 26B-A4B — effective 4B & 2B for mobile — Apache 2 — blog.google/innovation-a... [embedded post]
-
r/ollama
r
on reddit
Google's Gemma 4 has been published and is available under Apache 2.0 license
-
r/artificial
r
on reddit
Google has published its new open-weight model Gemma 4. And made it commercially available under Apache 2.0 License
-
r/Bard
r
on reddit
Gemma 4: Byte for byte, the most capable open models
-
r/Android
r
on reddit
Gemma 4: Byte for byte, the most capable open models
-
r/StableDiffusion
r
on reddit
Gemma 4 released!