Chronicles

The story behind the story


Amazon VP of EC2 Dave Brown says AWS is considering using AMD's new MI300 chips but declined to commit to Nvidia's DGX Cloud service; Oracle is Nvidia's first DGX Cloud partner

Amazon Web Services (AMZN.O), the world's largest cloud computing provider, is considering using new artificial intelligence chips …

Reuters · Stephen Nellis

Discussion

  • r/AMD_Stock on reddit
    Exclusive: Amazon's cloud unit is considering AMD's new AI chips
  • @amd on x
    Do you use Instagram, Facebook or WhatsApp? @Meta VP @AlexisBjorlin shared how all your favorite apps are powered by hundreds of thousands of AMD servers, with the next generation of #EPYC beginning deployment this year. So go on and share that meme. #EPYC has you covered. [image]
  • @dylanonchips on x
    Here it is: fourth-gen EPYC Bergamo for cloud-native applications. Up to 128 Zen 4c cores, each of which is 35% smaller than a Zen 4 core. Aimed at leadership performance-per-watt versus the performance-per-core focus of general-purpose EPYC Genoa. [image]
  • @iancutress on x
    I'm the only one that has brought their Genoa delidded CPU with them. Oh well :) Here's @AMD #Bergamo. Left: 96 core (12*8) Genoa Right: 128-core (16*8) Bergamo. Same IO die. Same uArch, resized L3. Same power. Same socket. Same DRAM Support. Higher Efficiency. [image]
  • Bob O'Donnell (@bobodtech) on x
    Interesting discussion with @AMD's @LisaSu and @Meta regarding their planned use of Bergamo for #GenerativeAI apps and early tests showing a 2.5x improvement over Genoa. [image]
  • @dylanonchips on x
    AMD revealed the GPU-only Instinct MI300X, which has 192GB of HBM3 memory, beating the 80GB capacity of Nvidia's H100. This will allow users to run large language models on fewer GPUs, giving the MI300X leadership total cost of ownership, Su said. [image]
  • @dylanonchips on x
    Brad McCredie, AMD's vice president of data center and accelerated processing, explains the building blocks of the Instinct MI300X GPU in response to a question from @CDemerjian. [video]
  • Sasa Marinkovic (@sasamarinkovic) on x
    MI300X - a powerhouse for processing the largest and most complex of LLMs * 192GB * https://www.anandtech.com/...
  • Ryan Smith (@ryansmithat) on x
    While the chiplet-based design of the MI300 always left the door open to further CPU/GPU mix & matching, I wasn't sure if AMD was going to do it. But sure enough, they are, with a GPU-only MI300X. The 8 HBM stacks is going to be a big boon for the LLM/GenAI market they're after h…
  • @phatal187 on x
    What are the odds Nvidia will be pushing out a 144GB H100 in Q4? 🤔 https://twitter.com/...
  • @anandtech on x
    And finally, AMD will be producing a super-sized, GPU-only version of the MI300 accelerator, the MI300X. Using 24GB HBM3 stacks and 8 CDNA 3 GPU chiplets, the flagship chip will offer 192GB of local memory, ideal for AI/LLM training and other AI workloads https://www.anandtech.co…
  • r/LocalLLaMA on reddit
    AMD Expands AI/HPC Product Lineup With Flagship GPU-only Instinct MI300X with 192GB Memory