
Chronicles

The story behind the story


Nvidia unveils the GH200 Grace Hopper Superchip, a combination GPU and CPU relying on high-bandwidth memory 3, or HBM3e, expected to enter production in Q2 2024

Bloomberg · Ian King

Discussion

  • @timsweeneyepic Tim Sweeney on x
    NVIDIA's architecture code names are catching up to our times, from Kepler (1571-1630) now to Grace Hopper (1906-1992) - one of the first programmers, working on the Harvard Mark I during World War II.
  • @beth_kindig Beth Kindig on x
    Nvidia $NVDA just announced a new AI chip configuration, the Grace Hopper Superchip (GH200), which tied together Nvidia's H100 chip with an Nvidia central processor. The GH200, expected to speed up generative AI applications like ChatGPT $MSFT, is expected to be available in Q2..…
  • @tomstokes Tom Stokes on x
    .@nvidia 's new module pairs a 72-core ARM Neoverse V2 CPU with their H100 GPU on a single module. CPU and GPU are connected with a 900GB/s NVLink interconnect, 7X faster than the normal PCIe Gen5 x16 link. This close integration has more benefits beyond raw bandwidth [image]
  • @ripster47 on x
    🚦 $NVDA Unveils Next-Generation GH200 Grace Hopper Superchip Platform for Era of Accelerated Computing and Generative AI. World's First HBM3e Processor Offers Groundbreaking Memory, Bandwidth; Ability to Connect Multiple GPUs for Exceptional Performance; Easily Scalable Server Design
  • @anshelsag Anshel Sag on x
    Jensen shows off a full GH200 supercomputer at size (ish) and says that yes, it probably will run @Crysis #SIGGRAPH2023 [image]
  • @tomstokes Tom Stokes on x
    An additional NVLink connection allows pairing up with another GH200 CPU+GPU combo, giving even further easy access to memory on peer modules in the same way [image]
  • @tomstokes Tom Stokes on x
    With a normal x86 CPU attached to a GPU via PCIe, the GPU and CPU have separate memory page tables (1st picture), requiring extra steps to share memory. In the GH200, processes share a combined page table spanning CPU and GPU memory (2nd picture) [image]
  • @iancutress @iancutress on x
    Why are people saying Jensen is announcing the GH200? I thought that's what Computex was. I've had a tab open with the whitepaper with the tech details ever since.
  • @anshelsag Anshel Sag on x
    .@nvidia CEO Jensen Huang announces the GH200, a 72-Core CPU combined with a 4 PFlop Hopper GPU with a whopping 141 GB of HBM3e, 5TB/s of memory bandwidth #SIGGRAPH2023 [image]
  • r/LocalLLaMA on reddit
    NVIDIA Unveils Next-Generation GH200 Grace Hopper Superchip
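The "7X faster than PCIe Gen5 x16" claim in the discussion can be sanity-checked with back-of-envelope arithmetic. This is a minimal sketch, assuming PCIe Gen5 signals at 32 GT/s per lane with 128b/130b encoding and that the quoted 900 GB/s NVLink figure is total (bidirectional) bandwidth; none of these constants come from the article itself.

```python
# Rough check of the NVLink-vs-PCIe bandwidth claim quoted above.
# Assumed values (not from the article): PCIe Gen5 = 32 GT/s per lane,
# 128b/130b line coding, x16 link; GH200 NVLink-C2C = 900 GB/s total.

PCIE_GEN5_GTS = 32          # giga-transfers per second per lane
LANES = 16
ENCODING = 128 / 130        # 128b/130b coding overhead

# One transfer carries one bit per lane, so divide by 8 for bytes.
pcie_per_direction = PCIE_GEN5_GTS * LANES * ENCODING / 8   # ~63 GB/s
pcie_bidirectional = 2 * pcie_per_direction                  # ~126 GB/s

nvlink_total = 900          # GB/s, as quoted for GH200

print(f"PCIe Gen5 x16, bidirectional: {pcie_bidirectional:.0f} GB/s")
print(f"NVLink advantage: {nvlink_total / pcie_bidirectional:.1f}x")
```

Under these assumptions the ratio works out to roughly 7x, consistent with the figure in the tweet.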