Chronicles

The story behind the story


Google unveils Gemma 3, the “world's best single-accelerator model”, running on a single GPU, in 1B, 4B, 12B, and 27B sizes, and says it outperforms Llama-405B

Following version 1 in February 2024 and version 2 in May, Google today announced Gemma 3 as its latest open model for developers.

Abner Li / 9to5Google

Discussion

  • @levie Aaron Levie on x
    Wow Google keeps pushing on open weights AI in a big way. Pretty insane that new models today can perform better than leading models 15X the size just a year ago. [image]
  • @reach_vb on x
    Google is BACK!! Welcome Gemma3 - 27B, 12B, 4B & 1B - 128K context, multimodal AND multilingual! 🔥 Evals: > On MMLU-Pro, Gemma 3-27B-IT scores 67.5, close to Gemini 1.5 Pro (75.8) > Gemma 3-27B-IT achieves an Elo score of 1338 in the Chatbot Arena, outperforming larger LLaMA 3 [im…
  • @deliprao on x
    Highest intelligence compression we have seen in any open model. (Also beats o3-mini). Multimodal. Multilingual. Tool calls. Weights on huggingface. So many reasons to be excited about this! [image]
  • @legit_api on x
    poor Gemma 3 is frozen in time 🥺 [image]
  • @sundarpichai Sundar Pichai on x
    Gemma 3 is here! Our new open models are incredibly efficient - the largest 27B model runs on just one H100 GPU. You'd need at least 10x the compute to get similar performance from other models ⬇️ [image]
  • @googledevs on x
    Gemma 3 is here! The collection of lightweight, state-of-the-art open models are built from the same research and technology that powers our Gemini 2.0 models 💫 → http://blog.google/... [video]
  • @sam_paech Sam Paech on x
    Gemma-3-27b takes second place in creative writing. Expecting this to be another favourite with creative writing & RP fine tuners. [image]
  • @willhawkins3 Will Hawkins on x
    🚀 ShieldGemma 2 is now out! We're launching a 4B parameter image safety classifier built on the wonderful new Gemma 3 to help open model developers & users flexibly build their safety requirements into systems. Read more, and access links below! https://developers.googleblog.com/…
  • @btibor91 Tibor Blaho on x
    Happy Gemma 3 Day to those who celebrate [image]
  • @flavioad Flavio Adamo on x
    just tried the new Gemma 3 (27B) on the Hexagon Bouncing Ball test; it didn't go great, but that's understandable for its size. Soon enough, small models will be just as good as today's best [video]
  • @thexeophon on x
    Gemma rundown: - 1B, 4B, 12B, 27B models - multimodal (all but 1B, LLaVA-style, SigLIP) - 128K ctx, 5:1 local:global attn - Gemma license (less restrictive than Llama) - 14T tokens for 27B, 12T for 12B, 4T for 4B, 2T for 1B - (logit-based) distillation, teacher unknown - RL with …
  • @steren on x
    Introducing Gemma 3. The most capable model you can run on a single GPU. Cloud Run offers 1 GPU per instance, so it is a perfect fit. Deploy it in one simple command: [image]
  • @_philschmid Philipp Schmid on x
    Gemma 3 27B IT is multilingual at its best. 🌎 [image]
  • @googledevs on x
    @NVIDIAAIDev 🛡 We're also launching ShieldGemma 2: a powerful 4B image safety checker built on Gemma 3. Developers can customize ShieldGemma 2 to suit their safety needs.
  • @film_girl Christina Warren on x
    Gemma 3 is out now!! It's so freaking cool — check out this thread or this blog post https://developers.googleblog.com/ ...
  • @lmarena_ai on x
    🎉 Congrats to @GoogleDeepMind on Gemma-3-27B, the newest and one of the strongest open models in Arena! 💠 Top 10 overall - beating out many proprietary models with only 27B parameters 💠 2nd best open model, only below DeepSeek-R1 💠 128K context window Check out their blog to [image…
  • @googledevs on x
    Our high-performing open models leverage the power of @NVIDIAAIDev GPUs, are available in a range of sizes (1B, 4B, 12B, 27B), and offer the following capabilities: 🔹Faster on-device inference 🔹Support for 140+ languages 🔹Multimodal understanding 🔹128K-token context window
  • @osanseviero Omar Sanseviero on x
    I'm so happy to announce Gemma 3 is out! 🚀 🌏Understands over 140 languages 👀Multimodal with image and video input 🤯LMArena score of 1338! 📏Context window of 128k Available in AI Studio, Hugging Face, Ollama, Vertex, and your favorite OS tools 🚀Download it today! [image]
  • @shiels_ai @shiels_ai on x
    Gemma 3 is yet more proof that 1. On device models are going to be the norm 2. Your toaster will soon be smarter than you Get ready
  • @clmt Clément on x
    Gemma 3 is out! We are focused on bringing you open models with best capabilities while being fast and easy to deploy: - 27B lands an ELO of 1338, all the while still fitting on 1 single H100! - vision support to process mixed image/video/text content - extended context window [i…
  • @gm8xx8 on x
    Gemma 3 is a family of open models (1B, 4B, 12B, 27B) designed for efficient, on-device use. They support 140 languages, text and visual reasoning, 128k-token context, function calling, and structured outputs. Quantized versions reduce compute and memory with minimal performance …
  • r/LocalLLM on reddit
    Google announce Gemma 3 (1B, 4B, 12B and 27B)
  • r/Bard on reddit
    Google announces Gemma 3 as 'world's best single-accelerator model'
  • @tdavchev Todor Davchev on x
    Super excited to finally be able to share some of the really exciting work we have been cooking up @GoogleDeepMind! Interactivity, Dexterity, Generalization and Multi-Embodiment seem far less far-fetched than before! Reach out if this excites you too! https://deepmind.google/...
  • @googledeepmind on x
    Meet Gemini Robotics: our latest AI models designed for a new generation of helpful robots. 🤖 Based on Gemini 2.0, they bring capabilities such as better reasoning, interactivity, dexterity and generalization into the physical world. 🧵 https://goo.gle/... [video]
  • @googledeepmind on x
    Gemini Robotics can solve multi-step tasks that require significant dexterity, such as folding origami 📄 packing a lunch box 🥗 and more. See it in action ↓ [video]
  • @sundarpichai Sundar Pichai on x
    We've always thought of robotics as a helpful testing ground for translating AI advances into the physical world. Today we're taking our next step in this journey with our newest Gemini 2.0 robotics models. They show state of the art performance on two important benchmarks -