Chronicles

The story behind the story


OpenAI announces a Safety Fellowship program for external researchers, engineers, and practitioners to study the safety and alignment of advanced AI systems

A pilot program to support independent safety and alignment research and develop the next generation of talent

OpenAI

Discussion

  • @ronanfarrow Ronan Farrow on x
    This announcement arrives hours after our investigation described how OpenAI dissolved its superalignment and AGI-readiness teams and dropped safety from the list of its most significant activities on its IRS filings—and how, when we asked to speak with researchers, working on ex…
  • @openai @openai on x
    Introducing the OpenAI Safety Fellowship, a new program supporting independent research on AI safety and alignment—and the next generation of talent. https://openai.com/...
  • @markchen90 Mark Chen on x
    We're excited to launch the OpenAI Safety Fellowship - supporting rigorous, independent research on AI safety and alignment, including areas like evaluation, robustness, and scalable mitigations. Applications are open through May 4, 2026!
  • @wesroth Wes Roth on x
    OpenAI launched the OpenAI Safety Fellowship, a new five-month pilot program aimed at bringing in external researchers, engineers, and practitioners to focus on AI safety and alignment. Running from September 2026 to February 2027, the fellowship prioritizes critical research [im…
  • @tenobrus @tenobrus on x
    really seems like OpenAI PR reps are deeply uneducated on the research activities of their own company. [image]
  • @ilex_ulmus @ilex_ulmus on x
    They know this is how to capture the field— by making everyone in it financially dependent on them or hoping to be. This is why I say technical AI safety is a no-go until we have real governance. #PauseAI
  • @deredleritt3r Prinz on x
    @RonanFarrow For those who are not interested in falling for this obvious bait, here is some actual information about OpenAI's safety practices: 1. OpenAI has a comprehensive Preparedness Framework in place, which is used to track and respond to critical AI safety risks. It's ava…
  • @ronanfarrow Ronan Farrow on x
    (🧵1/11) For the past year and a half, I've been investigating OpenAI and Sam Altman for @NewYorker. With my coauthor @andrewmarantz, I reviewed never-before-disclosed internal memos, obtained 200+ pages of documents related to a close colleague, including extensive private [video…
  • @arthurb Arthur B. on x
    If there were solid, credible, counter arguments to existential risk from ASI, Altman would be aware of them and present those. “Vibes” is the best he has to offer. [image]
  • @ronanfarrow Ronan Farrow on x
    (2/11) In the fall of 2023, OpenAI's chief scientist, Ilya Sutskever, acting at the behest of fellow board members and with other concerned colleagues, compiled some 70 pages of memos about Altman and his second-in-command, Greg Brockman—Slack messages and H.R. documents, some ph…
  • @packym Packy McCormick on x
    Also, TBPN should bring on Ronan Farrow and Andrew Marantz today.
  • @packym Packy McCormick on x
    If I'm Anthropic, I'm locking Dario in a room for like a month and not letting him near the internet, a camera, or a microphone. Just let OpenAI look weird by themselves for a while. Probably, though, he'll pen a NYT editorial warning that AI will steal your girl. [image]
  • @buccocapital @buccocapital on x
    Paul Graham, 18 years ago: “You could parachute Sam Altman into an island full of cannibals and come back in 5 years and he'd be the king” At this point I think you should stop being surprised he'll do whatever it takes to try to win.
  • @danprimack Dan Primack on x
    For example, so much of this was in @_KarenHao book.
  • @krishnanrohit Rohit on x
    Something I find missing from these discussions is, sure yes they make it sound like everyone thought he was untrustworthy. So why did like 99% of the OpenAI team quit after he was fired and agitate for him to come back? Seems like an important piece of evidence.
  • @mattzeitlin Matthew Zeitlin on x
    How are we supposed to “align” superintelligence if the people who are building the thing keep on getting outwitted by the intelligent and ambitious — but human — Sam Altman
  • @bigmeaninternet Malcolm Harris on x
    Appreciate @jackclarkSF pointing to the real driver here, wish the risk-concerned industry seemed more interested in a critical understanding of this, more than all the sci-fi stuff [image]
  • @dylanbyers Dylan Byers on x
    I adore The New Yorker, always will, but reporting out the previously reported for a different audience is certainly one of the genres over there.
  • @katiemiller Katie Miller on x
    After reading this piece on Sam Altman, one can reasonably conclude he's put profit over loyalty, principles, and company governance. There's business savvy and ruthlessness, and there's Sam, who at multiple points in his career has been the subject of investigations and forced
  • @kakashiii111 @kakashiii111 on x
    This is a terrifyingly detailed article on Sam Altman's personality.  If you look at Sam's behavior over the past two years, it's hard to ignore the pattern: consistent lies, misleading disclosures, inflated active user statistics, including but not limited to a spree of hundreds…
  • @pkafka Peter Kafka on x
    On the one hand, the New Yorker profile of Sam Altman does a good job of spelling out that many people who have worked with him do not trust him. On the other hand, there have been some clues. https://www.businessinsider.com/ ... [image]
  • @paulg Paul Graham on x
    Since there's yet another article claiming that we “removed” Sam because partners distrusted him, no, we didn't. It's not because I want to defend Sam that I keep insisting on this. It's because it's so annoying to read false accounts of my own actions.
  • @garymarcus Gary Marcus on x
    Sam Altman in a nutshell, @newyorker: [image]
  • @samfbiddle Sam Biddle on x
    Looking forward to TBPN's robust discussion of this reporting
  • @ronanfarrow Ronan Farrow on x
    (3/11) The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. Mira Murati, who had given Sutskever material for his memos, said: “We need institutions worthy…
  • @buccocapital @buccocapital on x
    I have it on good authority that Anthropic employees have Dario tied up in the basement. He is trying to chew through the rope so he can tell the press that AI will destroy the economy, but they've got him down there until OpenAI finishes destroying itself
  • @danprimack Dan Primack on x
    Read the New Yorker piece about @sama on plane ride back to Boston. Outside of a few specific quotes from Dario notes, not sure there was anything in there that hadn't been previously reported.
  • @highyieldharry @highyieldharry on x
    Bill Gurley hearing investors might want to oust Sam Altman [image]
  • @aisafetymemes @aisafetymemes on x
    It's confirmed. Multiple sources. OpenAI proposed enriching itself by playing China, Russia, and the US against each other, starting a bidding war. “What if we sold it to Putin?” OpenAI is not pro-America, they're pro-OpenAI They're spending unprecedented sums to buy Congress [im…
  • @nkulw Noah Kulwin on x
    What I appreciated most about this piece is the extent to which it shows people in the upper ranks of the AI corps are bag-chasing liars. Almost no one stood by their principles when a billion dollars came knocking, and I think it's bc those principles were weak to begin with
  • @michhuan Michael Huang on x
    Sam Altman (2015): “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.” OpenAI representative (2026): “What do you mean by ‘existential safety’? That's not, like, a thing.” [image]
  • @mikeisaac Rat King on x
    very nice piece by @RonanFarrow and @andrewmarantz on OpenAI drama of the past few years cannot tell you how many people, three years ago, flat out denied many of the things they're now copping to in this article what a difference a few years makes https://www.newyorker.com/...
  • @alexandermccoy4 Alexander McCoy on x
    Every time someone says we can't regulate AI because “China,” remember: @sama invented that argument in 2017, without evidence. An intelligence official who investigated it told The New Yorker it was “just being used as a sales pitch.” Read more in this bombshell report: 👇👇
  • @shakeelhashim Shakeel on x
    This is a very good, very long piece. Excerpting some of the new/juicy bits (but you should read the whole thing!) [image]
  • @ronanfarrow Ronan Farrow on x
    The reporting on OpenAI and Sam Altman that I've been working on for the past year and a half, for @NewYorker, with @andrewmarantz: https://www.newyorker.com/...
  • @davelevitan Dave Levitan on bluesky
    Just getting started with the New Yorker's big Sam Altman thing but this is a weird sentence that pretty much every editor I've had (and me, also an editor sometimes) would have probably cut or at least argued about.  —  www.newyorker.com/magazine/202...  [image]
  • @stokel Chris Stokel-Walker on bluesky
    You'd do well to read this story - and particularly the tone and tenor of the right to replies in brackets throughout www.newyorker.com/magazine/202...
  • @karlbode.com Karl Bode on bluesky
    nope  —  I think often about how the past OpenAI board said he was an untrustworthy ass with all sorts of dodgy financial conflicts of interests and the tech press pretty broadly framed them all as hyperbolic cranks
  • @petertl Peter Thal Larsen on bluesky
    Almost 11,000 words on Sam Altman and I'm still none the wiser about how OpenAI plans to make money or whether it ever will.  —  www.newyorker.com/magazine/202...
  • @emilynussbaum Emily Nussbaum on bluesky
    Possible problem that the current default setting for “person in charge of globe-rattling technologies” is “sociopath”: www.newyorker.com/magazine/202...
  • @caseynewton Casey Newton on bluesky
    This is petty but maybe my favorite part of the New Yorker's story about OpenAI www.newyorker.com/magazine/202...  [image]
  • @lopatto Elizabeth Lopatto on bluesky
    (Altman does not recall the exchange.)  (Altman doesn't remember this.)  (Altman does not recall this.  Kushner says that they were not in contact at the time.) www.newyorker.com/magazine/202...
  • @miafarrow Mia Farrow on bluesky
    Ronan has worked intensely on this investigation for the past year and a half, dealing with hostility behind the scenes.  There's shrinking space for this kind of reporting that affects our lives-individuals that can acquire the press they want to control.  —  www.newyorker.com/m…
  • @carnage4life Dare Obasanjo on bluesky
    That the premise of this article is “we interviewed 100+ people to determine if Sam Altman is a liar and a sociopath” is wild.  —  The animation of the image in the article is also quite unsettling.
  • @jacobsilverman.com Jacob Silverman on bluesky
    “They'd met nine years prior, late at night in Peter Thiel's hot tub.”  —  www.newyorker.com/magazine/202...
  • @harmancipants Reyhan Harmanci on bluesky
    “He has two traits that are almost never seen in the same person.  The first is a strong desire to please people, to be liked in any given interaction.  The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” www.newyorker.co…
  • @thefarce.org @thefarce.org on bluesky
    They're torn.  Most think he's a terrible sociopath.  The rest think he's a wonderful sociopath.  [embedded post]
  • @paularmstrongtbd Paul Armstrong on bluesky
    What helpful research.  [embedded post]
  • @nixCraft@mastodon.social @nixCraft@mastodon.social on mastodon
    Sam Altman May Control Our Future: Can He Be Trusted?  —  New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.  —  https://www.newyorker.com/... (archived version https://archive.is/... )  —  Just so you know. …
  • r/neoliberal on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/indepthstories on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/stupidpol on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/OpenAI on reddit
    OpenAI considered enriching itself by playing China, Russia, and the US against each other, starting a bidding war.  “What if we sold it to Putin?”
  • r/behindthebastards on reddit
    Sam Altman episode when?
  • r/ChatGPT on reddit
    New Yorker investigation reveals OpenAI execs discussed selling AI to Russia/China in a bidding war, post-firing probe produced no written report …
  • r/ChatGPTcomplaints on reddit
    Ronan Farrow published an investigation into Sam Altman and OpenAI today in the New Yorker, focused on Sam's lies and a deep dive into his firing from OpenAI in 2023
  • r/JoeRogan on reddit
    A long article on Sam Altman, with spicy mentions of other guests, like Musk, who is apparently spying on Altman.
  • r/UnderReportedNews on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/slatestarcodex on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/TrueReddit on reddit
    Unmasking Sam Altman - by Ronan Farrow
  • r/BetterOffline on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?  (Ronan Farrow)
  • r/Longreads on reddit
    Sam Altman May Control Our Future—Can He Be Trusted?
  • r/fusion on reddit
    In a meeting with the Biden administration, Sam Altman claimed that by 2026 an extensive network of nuclear-fusion reactors across the United States would power the A.I. boom.
  • r/OpenAI on reddit
    New Yorker published a major investigation into Sam Altman and OpenAI today — based on never-before-disclosed internal memos and 100+ interviews
  • r/technology on reddit
    18-month New Yorker investigation finds OpenAI's Sam Altman lobbied against the same AI regulations he publicly advocated for …
  • @anton_d_leicht Anton Leicht on x
    Taken seriously, something like this is the best direction for accelerationist policy. OpenAI is asking policymakers to build a world that can handle the speed they're planning to move at; deployment absorption instead of development friction. But there's a good and bad version
  • @adrienle Adrien Ecoffet on x
    Proud to have been part of this. We outline policy ideas for the transition to superintelligence, to build an open economy where everyone benefits and a society that is resilient to the risks. Progress is fast, and we must navigate these issues urgently. https://openai.com/...
  • @_nathancalvin Nathan Calvin on x
    Currently the correct lens of viewing this document is as a cynical comms document that doesn't represent OpenAI's actual influence on policy/politics. I agree with Anton that if it wasn't a cynical comms doc then that would be good. OAI - take costly actions to prove me wrong!
  • @jeremyslevin Jeremy Slevin on x
    OpenAI just put out a policy paper announcing their support for a 32-hour work week with no loss in pay and expanded Social Security, Medicare and Medicaid. Now they just need to stop spending hundreds of millions of dollars to defeat candidates who run on these policies! [image]
  • @tszzl Roon on x
    the alignment team continues to exist and is one of the largest and most compute rich research programs at OpenAI (i am on it, I should know). specific teams dissolving usually has more to do with people than functions relatively new blog: https://alignment.openai.com/
  • @kimmonismus @kimmonismus on x
    Looks like OpenAI reached Superintelligence. OpenAI: “Now, we're beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI.” OpenAI just published a 13-page policy blueprint for the “Intelligence [im…
  • @lang__leon Leon Lang on x
    Interesting that fellows are hosted at Constellation in Berkeley.
  • @adrienle Adrien Ecoffet on x
    @_NathanCalvin Totally reasonable to be skeptical. For what it's worth this was my first involvement in a policy project and my role was to lead a group of researchers who suggested many of these proposals and gave extensive feedback on all of them. I realize that at this stage t…
  • @martyswant Marty Swant on x
    This news comes hours after @NewYorker published its investigation detailing the various ways AI experts warn OpenAI hasn't been taking AI safety seriously enough. [image]
  • @almostmedia Julie Fredrickson on x
    Since everyone else is too afraid I'll spell it out The @OpenAI industrial policy document put out? They butchered Montana's pioneering rights based framework in our “right to compute” law & inserted mealy mouthed blather about access. Generational whiff. Do better Lehane
  • @thezvi Zvi Mowshowitz on x
    Do you remember when he previously got asked this same question of why people should trust him, and instead of a PR speech he straight up said 'you shouldn't'?
  • @tomekkorbak Tomek Korbak on x
    OpenAI is spinning up an AI safety research fellowship program similar to MATS or Anthropic Fellows. People should apply!
  • @garymarcus Gary Marcus on x
    1. The more Sam's finances don't add up, the hypier he gets. 2. But he's right that a massive cyberattack is likely imminent. (See my January 2025 @politico essay for why.)
  • @dylanmatt Dylan Matthews on x
    Interesting, as a matter of corporate strategy, that the first AI econ policy paper OpenAI is putting out as a “starting point for discussion” is just the Bernie platform: higher capital taxes, 32 hour workweek, worker vetoes of automation [image]
  • @clairekart Claire Kart on x
    this is the most vibecoded response to long form investigative journalism ever it's a type form application for a program starting in 6 months gg @RonanFarrow
  • @noahpinion Noah Smith on x
    The heads of the big AI labs continue to insist that their products are going to take all your jobs, and also pose various catastrophic risks
  • @_nathancalvin Nathan Calvin on x
    As anyone who follows me knows, I have many criticisms of OpenAI (especially on the policy/lobbying side), but their technical AI safety work remains similarly impressive and deep compared to their peers at GDM and Anthropic (though all of them need to do much better!), and a lot
  • @_nathancalvin Nathan Calvin on x
    Appreciate that this recent “Industrial Policy for the Intelligence Age” doc is more frank about safety risks than many other OpenAI global affairs docs I've previously seen. As always though, I'll believe it when the attacks on Alex Bores from their Superpac stop [image]
  • @mikeallen Mike Allen on x
    👀 I asked @sama why people should trust HIM to be at the forefront of AI's powers “I think almost everybody involved in our industry feels the gravity of what we're doing ... We also think it's very important that no one person is making the decisions by themselves” [video]
  • @mjnblack Julia Black on x
    It begins! OpenAI just released the document I was tipped off about a couple of weeks ago, (very softly) proposing higher taxes on capital gains, a new Public Wealth Fund, “efficiency dividends,” and a four-day workweek. https://cdn.openai.com/...
  • @gavinpurcell Gavin Purcell on x
    looks like someone finally got the memo for much, much better public narratives hopefully not too little too late [image]
  • @andrewcurran_ Andrew Curran on x
    OpenAI has written a new policy proposal ‘Industrial Policy for the Intelligence Age: Ideas to Keep People First.’ They propose the creation of a Public Wealth Fund that will provide American citizens with an automatic public stake in AI companies and AI infrastructure even if [i…
  • @kimmonismus @kimmonismus on x
    Holy moly: Sam Altman told Axios in a half-hour interview that AI superintelligence is so close, so mind-bending, so disruptive that America needs a new social contract. - It's on the scale of the Progressive Era in the early 1900s, and the New Deal during the Great Depression. […
  • @mikeallen Mike Allen on x
    🚨🚨@sama tells me he feels such URGENCY about the power of coming AI models that @OpenAI is unveiling a New Deal for superintelligence - ideas to wake up DC He says AI will soon be so mindbending that we need a new social contract 👇Altman's top 6 ideas https://www.axios.com/...
  • @chup.blakereid.org Blake E. Reid on bluesky
    OpenAI's “industrial policy” doc is a helpful roadmap for the tropes they are about to flood the zone with (including via “research” grants) to influence law and policy.  Look out for stuff like the “Right to AI,” “democratization,” “public-private collaboration,” “open economy,”…
  • @davidcrespo @davidcrespo on bluesky
    not exactly surprising if you keep an eye on these things, but amusing to read OpenAI formally advocating for a sovereign wealth fund funded by higher capital gains and corporate taxes openai.com/index/indust...  [image]
  • @marypcbuk Mary Branscombe on bluesky
    Bit late for April Fools [embedded post]
  • @carnage4life Dare Obasanjo on bluesky
    OpenAI's shared proposals for how governments should handle AI disruption:  —  1. Shift taxes from wages to corporate and capital gains.  —  2. Explore four-day workweeks with full pay.  —  3. Create a public AI investment fund for citizens to get upside from the AI boom.  —  4. …
  • r/accelerate on reddit
    Sam Altman Told Axios That Superintelligence Is So Close & So Disruptive That America Needs A New Social Contract.
  • @garethwatkins Gareth Watkins on bluesky
    Here's a different awful thing for you to think about: OpenAI just issued a whitepaper about the future of work.  —  cdn.openai.com/pdf/561e7512...