SK hynix says it has begun mass production of the 192GB SOCAMM2, a next-gen LPDDR5X low-power DRAM module designed specifically for Nvidia's Vera Rubin platform
SK hynix Inc. said Monday it has begun mass production of a next-generation memory module designed for artificial intelligence servers …
Nvidia announces the Nvidia Groq 3 LPX, an inference server rack featuring 256 Groq 3 LPUs, 128GB of on-chip SRAM, and 40 PBps SRAM bandwidth, available in H2 2026
Nvidia announced Monday at GTC 2026 that its new Groq-based inference server rack will be available alongside the Vera Rubin NVL72 rack …
Nvidia launches the Vera Rubin platform, saying it will offer dramatic reductions in inference and training costs compared to Blackwell, across six new chips
The Rubin GPU boasts five times more AI training compute power than Blackwell. … Nvidia is kicking off 2026 …