Chronicles

The story behind the story


Yann LeCun admits that Llama 4's “results were fudged a little bit”, and that the team used different models for different benchmarks to give better results

The interview took place in a great restaurant in Paris: Yannick Alléno's Pavyllon. …

Forums: Msmash / Slashdot : ‘Results Were Fudged’: Departing Meta AI Chief Confirms Llama 4 Benchmark Manipulation

Financial Times / Melissa Heikkilä

Discussion

  • @luke_metro @luke_metro on x
    lol [Screenshot from FT interview: “You don't tell a researcher what to do,” LeCun told the publication.  “You certainly don't tell a researcher like me what to do."]
  • @pfau David Pfau on x
    I find Yann an incredibly odd character because I almost always disagree with his specific research instincts but I almost always agree with him on the meta level about the importance of basic research, of going your own way, of being contrarian.
  • @gjmcgowan George McGowan on x
    @StefanFSchubert If they did game the benchmarks then that is quite a big black mark against them and Mark was right to be angry
  • @peterwildeford Peter Wildeford on x
    Sounds like Zuckerberg 100% made the right call here
  • @minhsmind Minh Do on x
    @StefanFSchubert Will he ever own that Llama4 legitimately fell behind the competition? Or is he going to blame that on Mark too? I don't get it. He's been singing this tune since ChatGPT3.5 and it just feels like he's covering his own ass at this point. Just move on and do your …
  • @stefanfschubert Stefan Schubert on x
    [image]
  • @stefanfschubert Stefan Schubert on x
    “Zuckerberg placed more pressure on the GenAI unit to accelerate AI development and deployment, which led to a communication breakdown, LeCun says. ... [More details, see below] A lot of people have left, a lot of people who haven't yet left will leave.” (FT) [image]
  • @modestproposal1 @modestproposal1 on x
    [image: “...I'm sure there's a lot of people at Meta, including perhaps Alex, who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence...but I'm not gonna change my mind because some dude thinks I'm wrong.  I'm not wrong.  My i…
  • @robdelaney Rob Delaney on bluesky
    💩💩💩 [embedded post]
  • @segyges SE Gyges on bluesky
    this puts a bunch of us in the awkward position of agreeing with yann's overall point but disagreeing with his entire argument [embedded post]
  • @mdwallach Tom Wallach on bluesky
    It is comical how apparent this is to anyone who knows anything about the technology, which also makes one wonder why the CEOs of these companies do not seem to know anything about this technology [embedded post]
  • @microberust Ian Robberson on bluesky
    Really can't emphasize enough just how much LeCun is at the heart of modern machine learning.  He pioneered the foundational Convolutional Neural Network research back in the 80s/90s, and has stayed in that space for the past half-century.  —  Which is probably also why he's not too sh…
  • @derekbjohnson Derek B. Johnson on bluesky
    What's really interesting is that I think LLMs will end up being a sort of experimental dry run for how our society will react to technologies that can actually do what LLMs pretend to.  America did not do well this time around, but perhaps we can learn from this period and get a…
  • @1t2ls Elliot on bluesky
    I like this guy's vibe.  The interview doesn't say much, but I appreciate the realism that the next high impact areas for AI aren't chatbot/LLM related but in industrial applications.  This seems right to me.  [embedded post]
  • @garethwatkins Gareth Watkins on bluesky
    There is nobody who can accurately describe how an LLM works who can explain how that model will lead to AGI.  [embedded post]
  • @maxkennerly Max Kennerly on bluesky
    “Superintelligence” 🙄 claims aside, I think he's right.  LLM boosters always start dissembling when you point out that the tech has two critical problems—it scales poorly and routinely generates errors—and nobody has a clue how to fix either.  Both problems appear inherent to the…
  • @jakeweindling Jacob Weindling on bluesky
    All the smartest people in tech say that AGI is a long ways away, while all the marks and scammers in tech think the hallucination bot is God.  That tech is run entirely by the latter and not the former says a lot about that industry.  [embedded post]
  • @seanmcarroll Sean Carroll on bluesky
    Opinions on “superintelligence” can reasonably differ.  (Personally I think it's a terrible framing that obscures more than it clarifies.)  But I still struggle to comprehend why anyone would think LLMs are the route to it.  [embedded post]
  • @abstracttesseract @abstracttesseract on bluesky
    “My integrity as a scientist cannot allow me to do this” == “I can excuse manipulation, misinformation, and contributing to a genocide, but I draw the line at promoting the wrong flavor of magic beans” [embedded post]
  • @zeroisanumber @zeroisanumber on bluesky
    Philosophically speaking, I'm convinced that humans can't program an intelligence smarter than we are, but it's nice to see someone expert in the field agree with me that LLMs are a dead end.  [embedded post]
  • @janinegibson.ft.com Janine Gibson on bluesky
    hearing from my legal team that i don't *know* he didn't sign an NDA, I have *surmised* it based on his frankness.
  • @freelunch23 @freelunch23 on bluesky
    he is frank but what he tells is really no secret
  • @hoon @hoon on bluesky
    I think LLMs are good for  — search  — organization  — synthesis (many inputs into a few outputs)  —  But superintelligence isn't one of the things.
  • @theangelofhistory @theangelofhistory on bluesky
    Yeah I have no idea if super intelligence or even artificial human intelligence is possible or not - but LLM text generators are not either of those.  —  Whatever the negative impacts of these things, it's not going to be “take over the world and destroy humanity”
  • @janinegibson.ft.com Janine Gibson on bluesky
    Ex-Meta chief AI scientist Yann LeCun has Lunch with the FT and in one of those instances so rare that you know he didn't sign an NDA, says exactly why as.ft.com/r/e503690d-8...  [image]
  • @jessefelder Jesse Felder on bluesky
    “I'm sure there's a lot of people... who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence.  But I'm not gonna change my mind because some dude thinks I'm wrong... My integrity as a scientist cannot allow me to do this.” www…
  • @stevekovach Steve Kovach on bluesky
    If LeCun is right about this, 100s of billions have been spent on a fantasy [embedded post]
  • @justinhendrix Justin Hendrix on bluesky
    Interesting “Lunch with the FT” column on Yann LeCun and his AI “superintelligence” ambitions.  Not sure how much to read into this- may have been what you say after a big French meal and glasses of wine- but is this really what “we” suffer from?  —  giftarticle.ft.com/giftarticl…