Company: Groq
Filtered to: launch pattern
43 articles (stable)

Groq has appeared in 43 articles since 2017-04. Coverage peaked in 2025Q4 with 13 articles. Frequently mentioned alongside Nvidia, Jonathan Ross, Meta, and OpenAI.
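The "frequently mentioned alongside" list is presumably derived from co-occurrence counts over the article corpus. The page doesn't show the method, but a minimal sketch of the idea (the article data below is made up for illustration, not the real Groq corpus) might look like:

```python
from collections import Counter

# Hypothetical articles, each represented as the set of entities it mentions.
articles = [
    {"Groq", "Nvidia", "Jonathan Ross"},
    {"Groq", "Meta", "Nvidia"},
    {"Groq", "OpenAI"},
    {"OpenAI", "Meta"},
]

def co_mentions(target, articles):
    """Count entities that appear in the same articles as `target`."""
    counts = Counter()
    for entities in articles:
        if target in entities:
            counts.update(entities - {target})
    return counts.most_common()

print(co_mentions("Groq", articles))
```

With this toy data, Nvidia co-occurs with Groq twice and the others once, mirroring how a ranked "mentioned alongside" list could be produced.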

Articles: 43 mentions
Velocity: -7.7% growth rate
Acceleration: -1.244 velocity change
Sources: 15 publications
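
The page doesn't define how Velocity and Acceleration are computed. One plausible reading is that Velocity is the quarter-over-quarter growth rate of article counts and Acceleration is the change in that growth rate. A sketch under that assumption (the quarterly counts are illustrative, though note that a drop from the 13-article peak to 12 articles would yield the -7.7% shown above; the Acceleration formula here is a guess and does not reproduce -1.244):

```python
def velocity(counts):
    """Growth rate between the last two quarterly counts, as a fraction."""
    prev, curr = counts[-2], counts[-1]
    return (curr - prev) / prev if prev else 0.0

def acceleration(counts):
    """Change in growth rate across the last three quarterly counts."""
    return velocity(counts) - velocity(counts[:-1])

quarterly = [4, 9, 13, 12]  # hypothetical articles per quarter
print(f"velocity:     {velocity(quarterly):+.1%}")
print(f"acceleration: {acceleration(quarterly):+.3f}")
```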
The Second Five Hundred Billion
Jensen Huang doubled Nvidia's AI chip forecast: $500 billion through 2026 became $1 trillion through 2027. One year matching all prior years combined. At the sa...

Coverage Timeline

2026-03-17 · CRN (87 related)

Nvidia announces the Nvidia Groq 3 LPX, an inference server rack featuring 256 Groq 3 LPUs, 128GB of SRAM, and 40 PBps SRAM bandwidth, available in H2 2026

Nvidia announced Monday at GTC 2026 that its new Groq-based inference server rack will be available alongside the Vera Rubin NVL72 rack …

2026-03-16 · CRN (91 related)

Nvidia announces the Nvidia Groq 3 LPX, an inference server rack featuring 256 Groq 3 LPUs and 128GB of on-chip SRAM, available in H2 2026

Nvidia announced Monday at GTC 2026 that its new Groq-based inference server rack will be available alongside the Vera Rubin NVL72 rack …

2025-08-06 · Wired (51 related)

OpenAI releases gpt-oss-120b and gpt-oss-20b, its first open-weight models since GPT-2; the smaller gpt-oss-20b can run locally on a device with 16GB+ of RAM

gpt-oss-120b and gpt-oss-20b push the frontier of open-weight reasoning models
Simon Willison / Simon Willison's Weblog : OpenAI's new open weight (Apache 2) models are really good
OpenAI on GitHub : ...

2025-07-27 · Wall Street Journal

A look at efforts by startups, such as Positron and Groq, to develop inference-focused chips that aim to be more energy efficient and performant than Nvidia's

Christopher Mims / Wall Street Journal :

2024-08-30 · Reuters (15 related)

Meta says its Llama models were downloaded almost 350M times, are used by AT&T and others, and usage via cloud providers more than doubled from May to July 2024

we just published a bunch of updates on the adoption we're seeing.  And yes, we have a lot more work to do on dev tools and resources which we're bringing online as quickly as we can. https://ai.meta....

2024-03-03 · Wired

An interview with Groq's CEO about its AI chips that let chatbots answer queries almost instantly, its cease and desist to X.ai over Groq's trademark, and more

Steven Levy / Wired : X: @groqinc : With accelerated growth at @GroqInc we're excited to announce the acquisition of @DefinitiveI...

2024-03-02 · Wired (1 related)

An interview with Groq's CEO about its AI chips that let chatbots answer queries almost instantly, its cease and desist to X.ai over Groq's trademark, and more

AI chips from startup Groq allow chatbots to answer queries almost instantly.  That could open up whole new use cases for generative AI helpers.

2024-02-21 · Gizmodo (10 related)

Demos from AI chipmaker Groq go viral after the startup's inference engine shows lightning-fast speeds when running LLMs, including for real-time conversations

Two AI companies are claiming the science fiction term, “Grok,” as their own, but only one is turbocharging the AI industry.


Quarterly Coverage (chart)

Top Sources (chart)

Narrative

TEXXR tracks 46 Techmeme articles mentioning Groq, dating back to April 2017. The biggest story is Nvidia's announcement of the Nvidia Groq 3 LPX, an inference server rack featuring 256 Groq 3 LPUs, covered on both 2026-03-16 and 2026-03-17. Groq is frequently covered alongside Nvidia, Jonathan Ross, SRAM, Chamath Palihapitiya, and BlackRock. Coverage has shifted toward enterprise and regulation themes and away from consumer and developer themes.

Key Moments

2024Q3: developer -25pts; consumer -50pts; funding +50pts
2025Q1: research +50pts; funding -50pts
2025Q2: consumer +33pts; research -50pts; funding +33pts
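
The point shifts in the key moments above read like percentage-point changes in each theme's share of quarterly coverage. A sketch under that assumption, with hypothetical theme-tagged articles (the tags and counts are illustrative, not the real quarterly data):

```python
from collections import Counter

def theme_shares(tags):
    """Theme distribution of one quarter's articles, in whole percentage points."""
    total = len(tags)
    return {t: round(100 * n / total) for t, n in Counter(tags).items()}

def share_deltas(prev_tags, curr_tags):
    """Point change per theme between two consecutive quarters."""
    prev, curr = theme_shares(prev_tags), theme_shares(curr_tags)
    return {t: curr.get(t, 0) - prev.get(t, 0) for t in set(prev) | set(curr)}

q2 = ["developer", "developer", "consumer", "consumer"]  # hypothetical quarter
q3 = ["developer", "consumer", "funding", "funding"]
print(share_deltas(q2, q3))
```

With this toy data the output shows developer -25, consumer -25, and funding +50 points, the same shape of shift the key moments report.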

Relationships (graph)