
AMD launches Instinct MI300X and MI300A AI accelerators and claims the MI300X delivers up to 1.6x more performance than Nvidia's H100 HGX in inference workloads

Paul Alcorn / Tom's Hardware:

Discussion

  • @ryanshrout Ryan Shrout on x
    The @AMD MI300X. 153B transistors. 4 IO die. 8 compute CDNA die. [image]
  • @michaeldell Michael Dell on x
    Together @DellTech and @AMD are revolutionizing AI with our new PowerEdge XE9680 server, featuring AMD Instinct MI300X accelerators, boosting #AI and #GenAI capabilities, offering high-performance, secure, and scalable solutions. 🤝🤖🧠🚀 https://www.dell.com/...
  • @aschilling Andreas Schilling on x
    And here we have the Instinct MI300X with the large carrier silicon on top of the eight XCDs, the visible HBM3 and some additional dummy silicon between two of those. [image]
  • @iancutress @iancutress on x
    MI300X. 2.5D and 3D packaging. 5nm and 6nm. 12-Hi HBM3 stacks. Electrons go brrrrrr [image]
  • @aschilling Andreas Schilling on x
    .@AMDInstinct MI300X: - 4x IOD - 8x XCD - 153 Billion transistors total - 192 GB HBM3 - 5.3 TB/s memory BW #AdvancingAI [image]
  • @benbajarin Ben Bajarin on x
    The @AMD MI300X is positioned as the most advanced AI accelerator in the industry. - 153 billion transistors - most advanced packaging technologies - 2.4x memory capacity - 1.3x more TFLOPS - Competitive with H100 on training - Leading on inference on common LLMs
  • @benbajarin Ben Bajarin on x
    Quite a comparison slide from @AMD of MI300X Instinct platform vs @Nvidia H100 HGX. The ability to compete on training but excel on inference is quite a value proposition. [image]
  • @lisasu Lisa Su on x
    Amazing day advancing #AI with our launch of @AMD Instinct MI300X - the most advanced AI solution in the industry. Huge thanks to our partners who joined us today... together we are building the future of AI. @Microsoft @OracleCloud @Meta @Dell @Supermicro_SMCI @HPE @Lenovo... [i…
  • @patrickmoorhead Patrick Moorhead on x
    The AMD MI300X specs & performance versus @NVIDIA H100 per AMD. Wow @AMD must have done some work to make this a training solution. And inference perf is big, big, big. Partners seeing this? $AMD $NVIDIA $AMDAI [image]
  • @amd @amd on x
    Today, we're excited to launch the AMD Instinct MI300X, the highest-performance accelerator in the world for generative AI. [image]
  • @tomwarren Tom Warren on x
    Microsoft CTO and head of AI Kevin Scott is on stage at AMD's AI event as it unveils a new MI300X AI chip. AMD claims it has equivalent training perf as Nvidia's H100 and better inference for AI workloads. Nvidia's H200 is coming, but all eyes on MI300X pricing and availability […
  • @ctoadvisor Keith Townsend on x
    AMD needs to show movement at the software layer. How do you take a model from @huggingface and run it on a system with MI300X in a seamless manner? Looking forward to understanding the AMD technical story a little better.
  • @ryanshrout Ryan Shrout on x
    Seeing @AMD claim equivalent training performance and 40-60% faster inference for AI workloads versus the @nvidia H100 is impressive. Yes, H200 will fare better, but this is big for AMD to get MI300X in the door. #advancingai [image]
  • @edludlow Ed Ludlow on x
    AMD MI300 key details: 2.4x memory of Nvidia H100 1.6x memory bandwidth of H100 Equal training performance to H100 AMD says MI300X much faster at running the models (inference) $MSFT will use the MI300 lineup https://www.bloomberg.com/... [image]
  • @danielnewmanuv Daniel Newman on x
    Now we move to software. The availability of competitive AI chips like MI300X is important but software has been the sticky point for a long time. Being able to win developers and deliver performant #AI using open-source tools versus CUDA will play a big part in adoption. $AMD [i…
  • r/hardware on reddit
    AMD unveils Instinct MI300X GPU and MI300A APU, claims up to 1.6X lead over Nvidia's competing GPUs
  • @amd @amd on x
    We are honored to work with both Microsoft and Meta and are proud to see them integrate AMD Instinct MI300X into their AI infrastructure.
  • @futurebec Mark Beccue on x
    .@AMD ran out some partners that are running MI300X — Microsoft, Oracle, Meta and Dell Technologies. All have been running the platform and shared positive early results. All are looking for better performance wherever they can find it. [image]
  • @joshuaogundu Josh on x
    AMD has taken market share from Intel when it comes to CPUs, curious to see how they do with Nvidia in GPUs
  • @patrickmoorhead Patrick Moorhead on x
    Now @Meta ... -Epyc since 2019 -Instinct since 2020 -ROCm benchmarking, getting better -big Pytorch work - adding MI300X into data centers 🔥 - fastest design to deployment in its history (wow) - big perf gains on LLAMA models on ROCm $AMD [image]
  • r/technology on reddit
    Meta and Microsoft say they will buy AMD's new AI chip as an alternative to Nvidia's