Chronicles

The story behind the story


Interviews with Sam Altman and more than 100 others on whether he can be trusted, amid allegations of consistent lying and more: some defend him, while others call him a sociopath

Slack messages and H.R. documents, some photographed on a cellphone to avoid detection on company devices.  One memo begins with a list: “Sam exhibits a consistent pattern of...” The first item is “Lying.”  Separately, Dario Amodei—who left to co-found Anthropic—kept years of private notes on Altman and Brockman.  More than 200 pages of related documents, never before publicly disclosed, have circulated in Silicon Valley.  In one document, Amodei writes that Altman's “words were almost certainly…

New Yorker

Discussion

  • @ronanfarrow Ronan Farrow on x
    (🧵1/11) For the past year and a half, I've been investigating OpenAI and Sam Altman for @NewYorker. With my coauthor @andrewmarantz, I reviewed never-before-disclosed internal memos, obtained 200+ pages of documents related to a close colleague, including extensive private [video…
  • @arthurb Arthur B. on x
    If there were solid, credible, counter arguments to existential risk from ASI, Altman would be aware of them and present those. “Vibes” is the best he has to offer. [image]
  • @ronanfarrow Ronan Farrow on x
    (2/11) In the fall of 2023, OpenAI's chief scientist, Ilya Sutskever, acting at the behest of fellow board members and with other concerned colleagues, compiled some 70 pages of memos about Altman and his second-in-command, Greg Brockman—Slack messages and H.R. documents, some ph…
  • @packym Packy McCormick on x
    Also, TBPN should bring on Ronan Farrow and Andrew Marantz today.
  • @packym Packy McCormick on x
    If I'm Anthropic, I'm locking Dario in a room for like a month and not letting him near the internet, a camera, or a microphone. Just let OpenAI look weird by themselves for a while. Probably, though, he'll pen a NYT editorial warning that AI will steal your girl. [image]
  • @buccocapital @buccocapital on x
    Paul Graham, 18 years ago: “You could parachute Sam Altman into an island full of cannibals and come back in 5 years and he'd be the king” At this point I think you should stop being surprised he'll do whatever it takes to try to win.
  • @danprimack Dan Primack on x
    For example, so much of this was in @_KarenHao's book.
  • @krishnanrohit Rohit on x
    Something I find missing from these discussions is, sure yes they make it sound like everyone thought he was untrustworthy. So why did like 99% of the OpenAI team quit after he was fired and agitate for him to come back? Seems like an important piece of evidence.
  • @mattzeitlin Matthew Zeitlin on x
    How are we supposed to “align” superintelligence if the people who are building the thing keep on getting outwitted by the intelligent and ambitious — but human — Sam Altman
  • @bigmeaninternet Malcolm Harris on x
    Appreciate @jackclarkSF pointing to the real driver here, wish the risk-concerned industry seemed more interested in a critical understanding of this, more than all the sci-fi stuff [image]
  • @dylanbyers Dylan Byers on x
    I adore The New Yorker, always will, but reporting out the previously reported for a different audience is certainly one of the genres over there.
  • @katiemiller Katie Miller on x
    After reading this piece on Sam Altman, one can reasonably conclude he's put profit over loyalty, principles, and company governance. There's business savvy and ruthlessness, and there's Sam, who at multiple points in his career has been the subject of investigations and forced
  • @kakashiii111 @kakashiii111 on x
    This is a terrifyingly detailed article on Sam Altman's personality.  If you look at Sam's behavior over the past two years, it's hard to ignore the pattern: consistent lies, misleading disclosures, inflated active user statistics, including but not limited to a spree of hundreds…
  • @pkafka Peter Kafka on x
    On the one hand, the New Yorker profile of Sam Altman does a good job of spelling out that many people who have worked with him do not trust him. On the other hand, there have been some clues. https://www.businessinsider.com/ ... [image]
  • @paulg Paul Graham on x
    Since there's yet another article claiming that we “removed” Sam because partners distrusted him, no, we didn't. It's not because I want to defend Sam that I keep insisting on this. It's because it's so annoying to read false accounts of my own actions.
  • @garymarcus Gary Marcus on x
    Sam Altman in a nutshell, @newyorker: [image]
  • @samfbiddle Sam Biddle on x
    Looking forward to TBPN's robust discussion of this reporting
  • @ronanfarrow Ronan Farrow on x
    (3/11) The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. Mira Murati, who had given Sutskever material for his memos, said: “We need institutions worthy…
  • @buccocapital @buccocapital on x
    I have it on good authority that Anthropic employees have Dario tied up in the basement. He is trying to chew through the rope so he can tell the press that AI will destroy the economy, but they've got him down there until OpenAI finishes destroying itself
  • @danprimack Dan Primack on x
    Read the New Yorker piece about @sama on plane ride back to Boston. Outside of a few specific quotes from Dario notes, not sure there was anything in there that hadn't been previously reported.
  • @highyieldharry @highyieldharry on x
    Bill Gurley hearing investors might want to oust Sam Altman [image]
  • @aisafetymemes @aisafetymemes on x
    It's confirmed. Multiple sources. OpenAI proposed enriching itself by playing China, Russia, and the US against each other, starting a bidding war. “What if we sold it to Putin?” OpenAI is not pro-America, they're pro-OpenAI They're spending unprecedented sums to buy Congress [im…
  • @nkulw Noah Kulwin on x
    What I appreciated most about this piece is the extent to which it shows people in the upper ranks of the AI corps are bag-chasing liars. Almost no one stood by their principles when a billion dollars came knocking, and I think it's bc those principles were weak to begin with
  • @michhuan Michael Huang on x
    Sam Altman (2015): “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.” OpenAI representative (2026): “What do you mean by ‘existential safety’? That's not, like, a thing.” [image]
  • @mikeisaac Rat King on x
    very nice piece by @RonanFarrow and @andrewmarantz on OpenAI drama of the past few years. cannot tell you how many people, three years ago, flat out denied many of the things they're now copping to in this article. what a difference a few years makes https://www.newyorker.com/...
  • @alexandermccoy4 Alexander McCoy on x
    Every time someone says we can't regulate AI because “China,” remember: @sama invented that argument in 2017, without evidence. An intelligence official who investigated it told The New Yorker it was “just being used as a sales pitch.” Read more in this bombshell report: 👇👇
  • @shakeelhashim Shakeel on x
    This is a very good, very long piece. Excerpting some of the new/juicy bits (but you should read the whole thing!) [image]
  • @ronanfarrow Ronan Farrow on x
    The reporting on OpenAI and Sam Altman that I've been working on for the past year and a half, for @NewYorker, with @andrewmarantz: https://www.newyorker.com/...
  • @davelevitan Dave Levitan on bluesky
    Just getting started with the New Yorker's big Sam Altman thing but this is a weird sentence that pretty much every editor I've had (and me, also an editor sometimes) would have probably cut or at least argued about.  —  www.newyorker.com/magazine/202...  [image]
  • @stokel Chris Stokel-Walker on bluesky
    You'd do well to read this story - and particularly the tone and tenor of the right to replies in brackets throughout www.newyorker.com/magazine/202...
  • @karlbode.com Karl Bode on bluesky
    nope  —  I think often about how the past OpenAI board said he was an untrustworthy ass with all sorts of dodgy financial conflicts of interests and the tech press pretty broadly framed them all as hyperbolic cranks
  • @petertl Peter Thal Larsen on bluesky
    Almost 11,000 words on Sam Altman and I'm still none the wiser about how OpenAI plans to make money or whether it ever will.  —  www.newyorker.com/magazine/202...
  • @emilynussbaum Emily Nussbaum on bluesky
    Possible problem that the current default setting for “person in charge of globe-rattling technologies” is “sociopath”: www.newyorker.com/magazine/202...
  • @caseynewton Casey Newton on bluesky
    This is petty but maybe my favorite part of the New Yorker's story about OpenAI www.newyorker.com/magazine/202...  [image]
  • @lopatto Elizabeth Lopatto on bluesky
    (Altman does not recall the exchange.)  (Altman doesn't remember this.)  (Altman does not recall this.  Kushner says that they were not in contact at the time.) www.newyorker.com/magazine/202...
  • @miafarrow Mia Farrow on bluesky
    Ronan has worked intensely on this investigation for the past year and a half, dealing with hostility behind the scenes.  There's shrinking space for this kind of reporting that affects our lives, as individuals can acquire the press they want to control.  —  www.newyorker.com/m…
  • @carnage4life Dare Obasanjo on bluesky
    That the premise of this article is “we interviewed 100+ people to determine if Sam Altman is a liar and a sociopath” is wild.  —  The animation of the image in the article is also quite unsettling.
  • @jacobsilverman.com Jacob Silverman on bluesky
    “They'd met nine years prior, late at night in Peter Thiel's hot tub.”  —  www.newyorker.com/magazine/202...
  • @harmancipants Reyhan Harmanci on bluesky
    “He has two traits that are almost never seen in the same person.  The first is a strong desire to please people, to be liked in any given interaction.  The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” www.newyorker.co…
  • @thefarce.org @thefarce.org on bluesky
    They're torn.  Most think he's a terrible sociopath.  The rest think he's a wonderful sociopath.  [embedded post]
  • @paularmstrongtbd Paul Armstrong on bluesky
    What helpful research.  [embedded post]
  • @nixCraft@mastodon.social @nixCraft@mastodon.social on mastodon
    Sam Altman May Control Our Future: Can He Be Trusted?  —  New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.  —  https://www.newyorker.com/... (archived version https://archive.is/... )  —  Just so you know. …
  • r/neoliberal r on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/indepthstories r on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/stupidpol r on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/OpenAI r on reddit
    OpenAI considered enriching itself by playing China, Russia, and the US against each other, starting a bidding war.  “What if we sold it to Putin?”
  • r/behindthebastards r on reddit
    Sam Altman episode when?
  • r/ChatGPT r on reddit
    New Yorker investigation reveals OpenAI execs discussed selling AI to Russia/China in a bidding war, post-firing probe produced no written report …
  • r/ChatGPTcomplaints r on reddit
    Ronan Farrow published an investigation into Sam Altman and OpenAI today in the New Yorker, focused on Sam's lies and a deep dive into his firing from OpenAI in 2023
  • r/JoeRogan r on reddit
    A long article on Sam Altman, with spicy mentions of other guests, like Musk who is apparently spying on Altman.
  • r/UnderReportedNews r on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/slatestarcodex r on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/TrueReddit r on reddit
    Unmasking Sam Altman - by Ronan Farrow
  • r/BetterOffline r on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?  (Ronan Farrow)
  • r/Longreads r on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/fusion r on reddit
    In a meeting with the Biden administration, Sam Altman claimed that by 2026 an extensive network of nuclear-fusion reactors across the United States would power the A.I. boom.
  • r/OpenAI r on reddit
    New Yorker published a major investigation into Sam Altman and OpenAI today — based on never-before-disclosed internal memos and 100+ interviews
  • r/technology r on reddit
    18-month New Yorker investigation finds OpenAI's Sam Altman lobbied against the same AI regulations he publicly advocated for …