Chronicles

The story behind the story


Meta announces its next-generation Meta Training and Inference Accelerator chips for AI training, and says MTIA v1 and the new chips are both now in production

Our very own Nicolaas Viljoen is featured in this blog post. … Tanmay Zargar: Unveiling what we've been working on for the past few years - the next generation of MTIA Inference platform. We're just getting started. … See also Mediagazer

The Verge · Emilia David

Discussion

  • @paul_rietschka Paul Rietschka on threads
    I think they have the money to throw at things, and the entirety of their ML/AI efforts are pure Zuckerberg vanity projects.  This isn't a company with a cloud platform like Google or Microsoft, but it **is** a company that wants to pretend it's in the same league and not a diffe…
  • @paul_rietschka Paul Rietschka on threads
    Stockholders should be asking why a company like Meta needs to build its own silicon.  Because there's no business case.  And all this on top of the fact the past tense of “to lead” is “led.”
  • @benbajarin Ben Bajarin on x
    Assuming the same arch YoY this is a RISC-V based accelerator. “designed for Meta's AI workloads”
  • @iancutress @iancutress on x
    @SquashBionic @Meta 1.35 GHz up from 800 MHz. 800 MHz is going to be a lot more efficient, and you're moving out of the efficiency window quite a lot, even with the arch/node change
  • @rao_hacker_one Arun Rao on x
    Meta is not in the chip-selling or chip-renting business, but we are getting more vertically integrated to better serve our app users & businesses, and to make the best open source AI models and tools freely available to the world. Kudos to the MTIA team!
  • @mikeyanderson Mikey Anderson on x
    It's good that more chips are being created. A diverse ai ecosystem is a healthy one.
  • @aiatmeta @aiatmeta on x
    This new MTIA chip can deliver 3.5x the dense compute performance & 7x the sparse compute performance of MTIA v1. Its architecture is fundamentally focused on providing the right balance of compute, memory bandwidth & memory capacity for serving ranking & recommendation models. …
  • @iancutress @iancutress on x
    Looks like @Meta is talking about next-gen MTIA already. ➡️ 90W, TSMC N5 ➡️ 256 MB SRAM, 2.7 TB/sec ➡️ 128GB LPDDR5, 204.8 GB/sec ➡️ 2.35B transistors, 1.35 GHz (up from 800 MHz) ➡️ 354 TF INT8 GEMM ➡️ 2 chips/board, 12 boards/system ➡️ 3x perf vs Gen1 https://ai.meta.com/... [image]
  • @isidentical Batuhan Taskaya on x
    nice. still so much to go but this is pretty good (and day zero full on torch support is so good)
  • @iamadifuchs Adi Fuchs on x
    Interesting. MTIAv2 has a small die and 90W TDP (typical training accelerators are ~350-500W / 700-1000W for MCM) and about 1/3 of H100's TFLOPs, so could be an overall win, maybe something like Google's approach of scaling out many small TPUs. Nonetheless, great times for chips!
  • @squashbionic @squashbionic on x
    Interesting chip, but there's a 3.6x power consumption increase for ~3x performance improvement (25 W -> 90 W)? Am I seeing that correct? [a quick arithmetic check follows the thread below]
  • @thetechbrother @thetechbrother on x
    pictured: @ylecun conducting cutting-edge AI research [image]
  • @__tinygrad__ @__tinygrad__ on x
    If you want to build your own training accelerator, you must have your own NN framework with adoption. Meta happens to have one, so this chip might work.
  • @aiatmeta @aiatmeta on x
    Introducing the next generation of the Meta Training and Inference Accelerator (MTIA), the next in our family of custom-made silicon, designed for Meta's AI workloads. Full details ➡️ https://ai.meta.com/... [image]
  • @soumithchintala Soumith Chintala on x
    Meta announces 2nd-gen inference chip MTIAv2. * 708TF/s Int8 / 353TF/s BF16 * 256MB SRAM, 128GB memory * 90W TDP. 24 chips per node, 3 nodes per rack. * standard PyTorch stack (Dynamo, Inductor, Triton) for flexibility Fabbed on TSMC's 5nm process, it's fully programmable via the… [a sketch of this compile path follows below]
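
Taking the figures quoted in the thread at face value, @squashbionic's perf-per-watt question can be checked with a few lines of arithmetic. The sketch below uses only numbers that appear above: 354 TF INT8 dense at 90 W for the new chip (per @iancutress), 25 W for MTIA v1 (per @squashbionic), and @aiatmeta's 3.5x dense claim. MTIA v1's dense throughput is backed out from that 3.5x ratio rather than taken from a published spec, so treat it as an inferred figure.

```python
# Sanity check of the perf/W question raised by @squashbionic, using only
# numbers quoted in this thread. v1's dense throughput is NOT stated here;
# it is inferred from the "3.5x dense" claim (an assumption, not a spec).
v2_tflops, v2_watts = 354.0, 90.0
v1_tflops = v2_tflops / 3.5        # ~101 TF implied for MTIA v1
v1_watts = 25.0

print(f"power ratio: {v2_watts / v1_watts:.2f}x")       # 3.60x
print(f"perf  ratio: {v2_tflops / v1_tflops:.2f}x")     # 3.50x
print(f"v1 perf/W:   {v1_tflops / v1_watts:.2f} TF/W")  # ~4.05
print(f"v2 perf/W:   {v2_tflops / v2_watts:.2f} TF/W")  # ~3.93
```

On these assumptions, dense perf/W is roughly flat to slightly down, which matches @squashbionic's reading; the 7x sparse figure, if it holds, would still make the new chip a clear efficiency win on sparse workloads.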
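@soumithchintala's point about the "standard PyTorch stack (Dynamo, Inductor, Triton)" is that the chip is programmed through the same torch.compile pipeline PyTorch uses everywhere else. Below is a minimal sketch of that path; it uses the stock "inductor" backend for illustration, since the name and packaging of Meta's MTIA backend are not given in this thread, and the small MLP is just a stand-in for the ranking and recommendation models the announcement targets.

```python
import torch

# A small ranking-style MLP, standing in for the recommendation models
# the thread says MTIA is designed to serve.
model = torch.nn.Sequential(
    torch.nn.Linear(256, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 1),
)

# torch.compile drives the Dynamo -> Inductor -> Triton pipeline that
# @soumithchintala references: Dynamo captures the graph, Inductor lowers
# it, and Triton generates kernels. "inductor" is the stock backend;
# a hardware vendor's backend plugs into this same hook.
compiled = torch.compile(model, backend="inductor")

x = torch.randn(32, 256)
print(compiled(x).shape)  # torch.Size([32, 1])
```

The design point @__tinygrad__ makes above is visible here: because Meta owns this framework layer, the same user code can retarget custom silicon by swapping the compile backend rather than rewriting the model.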