VOICE ARCHIVE

@nvidiaaidev
18 posts
2026-02-17
🎊 Kudos to the teams at @Alibaba_Qwen on the launch of Qwen 3.5 with Qwen3.5-397B-A17B. 🙌 Developers can start building today for free: https://build.nvidia.com/... Or download and customize it with NVIDIA NeMo: https://github.com/... [video]
2026-02-17 View on X
Reuters

Alibaba debuts Qwen3.5, a 397B-parameter open-weight multimodal AI model that it says is 60% cheaper to use and 8x better at large workloads than Qwen3

2026-01-28
🎉 Congrats to @allen_ai on your just released Open Coding Agents. 🏎️ Turbocharged on NVIDIA GPUs, their fully open models are achieving state‑of‑the‑art SWE‑Bench Verified performance for open systems. What sets this model apart is that it is fully open source and easily …
2026-01-28 View on X
SiliconANGLE

Ai2 launches Open Coding Agents, starting with SERA, an open-source family that includes 32B and 8B parameter models designed to adapt to private codebases

Artificial intelligence is moving swiftly, changing how developers write software, as code flows ever faster into repositories such as GitHub …

2025-12-15
✨ Meet our new open family of models: @NVIDIA Nemotron 3 Open in weights, data, tools, and training, Nemotron 3 is built for multi-agent apps and features: • An efficient hybrid Mamba‑Transformer MoE architecture • 1M token context for long-term memory and improved reasoning [video]
2025-12-15 View on X
VentureBeat

Nvidia launches Nemotron 3, a family of AI models using a hybrid mixture-of-experts architecture and the Mamba-Transformer design, in 30B, 100B, and ~500B sizes

Nvidia launched the new version of its frontier models, Nemotron 3, by leaning in on a model architecture that the world's …

2025-10-11
Our NVIDIA Blackwell set a high bar in the latest results of @SemiAnalysis_ InferenceMAX benchmarks. This new open source initiative provides a comprehensive methodology to evaluate inference hardware and software performance. Here are 5 key benchmark takeaways that … [image]
2025-10-11 View on X
SemiAnalysis

SemiAnalysis launches InferenceMAX, an open-source benchmark that automatically tracks LLM inference performance across AI models and frameworks every night

A vendor-neutral suite that runs nightly and tracks performance changes over time.

Big shoutout to the @vllm_project team for an exceptional showing in the SemiAnalysis InferenceMAX benchmark on NVIDIA Blackwell GPUs 👏 Built through close collaboration with our engineers, vLLM delivered consistently strong Blackwell performance gains across the Pareto …
2025-10-11 View on X
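The "Pareto" the vLLM post refers to is the throughput-versus-latency Pareto frontier that inference benchmarks like InferenceMAX report: the set of configurations where no other configuration is at least as fast on both axes. A minimal sketch of extracting such a frontier from benchmark points (illustrative only, not InferenceMAX's actual code; the sample numbers are made up):

```python
def pareto_frontier(points):
    """Return the non-dominated points among (throughput, latency) pairs.

    A point is dominated if some other point has throughput >= its
    throughput (higher is better) and latency <= its latency (lower is
    better). Assumes distinct points.
    """
    frontier = []
    for p in points:
        dominated = any(
            q != p and q[0] >= p[0] and q[1] <= p[1]
            for q in points
        )
        if not dominated:
            frontier.append(p)
    return frontier

# Hypothetical (tokens/s, ms latency) measurements:
runs = [(100, 50), (200, 80), (150, 40), (120, 90)]
print(pareto_frontier(runs))  # [(200, 80), (150, 40)]
```

Here (100, 50) is dominated by (150, 40), which is both faster and lower-latency, so only the two frontier points survive.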

2025-05-06
🏆 With our new Parakeet model (parakeet-tdt-0.6b-v2), we have achieved a new standard for automatic speech recognition (ASR) with an 👀 industry-best 6.05% Word Error Rate on the @HuggingFace Open-ASR-Leaderboard. 🦜 Parakeet V2 takes performance to the next level with … [image]
2025-05-06 View on X
VentureBeat

Nvidia launches open-source transcription model Parakeet-TDT-0.6B-V2, topping the Hugging Face Open ASR Leaderboard with a word error rate of 6.05%

High accuracy and optimized performance for transcription in 25 languages.
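The 6.05% figure cited above is Word Error Rate, the standard ASR metric: word-level edit distance between reference and hypothesis transcripts, divided by the number of reference words. A minimal sketch of the computation (not the Open-ASR-Leaderboard's actual scoring code, which also applies text normalization):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference words,
    via Levenshtein distance over word tokens. Assumes a non-empty reference."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the") over 6 words:
print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # 0.333...
```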

2025-03-19
🎉 Announcing NVIDIA DGX Spark (f.k.a. Project DIGITS). Are you ready to #SparkSomethingBig? 💫 ➡️ https://nvidianews.nvidia.com/ ... #GTC25 [video]
2025-03-19 View on X
The Register

Nvidia updates the DGX Station, begins taking reservations for the DGX Spark box, formerly Project Digits, and unveils the RTX PRO workstation and server GPUs

After a Hopper hiatus, Nvidia's DGX Station returns, now armed with an all-new desktop-tuned Grace-Blackwell Ultra Superchip capable …

SemiAnalysis

A look at Nvidia's GTC 2025 announcements, including a focus on addressing pre-training and post-training scaling and inference time scaling working in tandem

The Reasoning Token Explosion  —  AI model progress has accelerated tremendously, and in the last six months, models have improved more than in the previous six months.

2025-02-25
Introducing DeepSeek-R1 optimizations for Blackwell, delivering 25x more revenue at 20x lower cost per token, compared with NVIDIA H100 just four weeks ago. Fueled by TensorRT DeepSeek optimizations for our Blackwell architecture, including FP4 performance with state-of-the-art … [image]
2025-02-25 View on X
Reuters

Sources: Tencent, Alibaba, ByteDance, and other Chinese companies are ramping up orders for Nvidia's H20 AI chip due to booming demand for DeepSeek's models

Chinese companies are ramping up orders for Nvidia's (NVDA.O) H20 artificial intelligence chip due to booming demand …
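The "25x more revenue at 20x lower cost per token" claim in the post above compounds: taken at face value, revenue per dollar of serving cost improves by 25 × 20 = 500x. A quick sanity check of that arithmetic (the dollar baselines are hypothetical; only the 25x and 20x ratios come from the post):

```python
# Ratios quoted in the post; dollar figures below are made-up baselines.
revenue_multiplier = 25
cost_divisor = 20

h100_cost_per_m_tokens = 2.00   # hypothetical H100 baseline, USD per 1M tokens
h100_revenue_per_hour = 10.00   # hypothetical H100 baseline, USD per GPU-hour

blackwell_cost_per_m_tokens = h100_cost_per_m_tokens / cost_divisor
blackwell_revenue_per_hour = h100_revenue_per_hour * revenue_multiplier

# Combined improvement in revenue per unit of serving cost:
revenue_per_cost_gain = revenue_multiplier * cost_divisor

print(blackwell_cost_per_m_tokens)  # 0.1
print(blackwell_revenue_per_hour)   # 250.0
print(revenue_per_cost_gain)        # 500
```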

2024-11-26
🎵 ✨The world's most flexible sound machine? With text and audio inputs, this new #generativeAI model, named Fugatto, can create any combination of music, voices, and sounds.🎹 Read more in our blog by @RichardKerris ➡️ https://blogs.nvidia.com/... #NVIDIAResearch Note: Some … [video]
2024-11-26 View on X
Reuters

Nvidia unveils Fugatto, an AI model for generating music and audio that can also modify voices, trained on open-source data, and weighs whether to release it

An impressive new AI sound model from Nvidia.

2024-08-30
🎉We offer our congrats to @AIatMeta on reaching nearly 350M downloads of Llama. 🦙 From our CEO Jensen Huang: “Llama has profoundly impacted the advancement of state-of-the-art AI. The floodgates are now open for every enterprise and industry to build and deploy custom Llama …
2024-08-30 View on X
Reuters

Meta says its Llama models were downloaded almost 350M times, are used by AT&T and others, and usage via cloud providers more than doubled from May to July 2024

we just published a bunch of updates on the adoption we're seeing.  And yes, we have a lot more work to do on dev tools and resources which we're bringing online as quickly as we c...

2024-05-22
✨ Announced at #MSBuild, the latest @microsoft Phi-3 family of SLMs are GPU-optimized with NVIDIA TensorRT-LLM and available as NVIDIA NIM inference microservices that can be deployed anywhere. ➡️ https://blogs.nvidia.com/... [image]
2024-05-22 View on X
VentureBeat

Microsoft announces the general availability of its Phi-3 models, including Phi-3-Silica, a 3.3B parameter model that will be embedded on all Copilot+ PCs

2024-04-10
Announced today, CodeGemma is accelerated by NVIDIA TensorRT-LLM on RTX GPUs. #AIonRTX ➡️ https://catalog.ngc.nvidia.com/ ... Download the optimized int4 checkpoints that bring powerful yet lightweight coding capabilities in 7B and 2B pretrained variants that specialize in code completion...
2024-04-10 View on X
Google for Developers Blog

Google adds Gemma variants CodeGemma, for code completion and generation tasks, and RecurrentGemma, to offer researchers faster inference at higher batch sizes

In February we announced Gemma, our family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models.

2023-10-21
🎉Just released: Eureka!, a new AI agent that uses LLMs to automatically generate algorithms to train robots to accomplish complex tasks. 👀 The #NVIDIAResearch paper includes the AI algorithms and how to experiment with Eureka using NVIDIA Isaac Gym. 👇 https://blogs.nvidia.com/... [video]
2023-10-21 View on X
VentureBeat

Nvidia Research announces Eureka, an AI agent powered by GPT-4 that autonomously writes reward algorithms to teach robots to perform complex skills like a human

2023-09-12
Just announced - NVIDIA TensorRT-LLM supercharges large language model #inference on NVIDIA H100 Tensor Core GPUs. #LLM https://developer.nvidia.com/ ...
2023-09-12 View on X
CRN

Nvidia claims TensorRT-LLM will double the H100's performance for running inference on leading LLMs when the open-source library arrives in NeMo in October

2018-01-04
Thanks @nytimes for writing about one of our recent @NVIDIA Research #AI projects! Using a single Tesla P100 GPU and #GANs, the team generated photorealistic pictures of fake celebrities. http://nvda.ws/2CztMYD pic.twitter.com/BojNrtHQc6
2018-01-04 View on X
New York Times

How Nvidia researchers generated pictures of faces that appear to be real but aren't by analyzing photos of celebrities and detecting patterns

To create the final image in this set, the system generated 10 million revisions over 18 days. The woman in the photo seems familiar.