Chronicles

The story behind the story


OpenAI unveils policy proposals for a world with superintelligence: higher taxes on capital gains, a public AI investment fund, bolstered safety nets, and more

The ChatGPT maker put out policy proposals so consumers benefit from rapid advancements in artificial intelligence

Wall Street Journal · Amrith Ramkumar

Discussion

  • @mikeallen Mike Allen on x
    🚨🚨@sama tells me he feels such URGENCY about the power of coming AI models that @OpenAI is unveiling a New Deal for superintelligence - ideas to wake up DC He says AI will soon be so mindbending that we need a new social contract 👇Altman's top 6 ideas https://www.axios.com/...
  • @marypcbuk Mary Branscombe on bluesky
    Bit late for April Fools [embedded post]
  • @andrewcurran_ Andrew Curran on x
    OpenAI has written a new policy proposal ‘Industrial Policy for the Intelligence Age: Ideas to Keep People First.’ They propose the creation of a Public Wealth Fund that will provide American citizens with an automatic public stake in AI companies and AI infrastructure even if [i…
  • @anton_d_leicht Anton Leicht on x
    Taken seriously, something like this is the best direction for accelerationist policy. OpenAI is asking policymakers to build a world that can handle the speed they're planning to move at; deployment absorption instead of development friction. But there's a good and bad version
  • @_nathancalvin Nathan Calvin on x
    Appreciate that this recent “Industrial Policy for the Intelligence Age” doc is more frank about safety risks than many other OpenAI global affairs docs I've previously seen. As always though, I'll believe it when the attacks on Alex Bores from their Superpac stop [image]
  • @garymarcus Gary Marcus on x
    1. The more Sam's finances don't add up, the hypier he gets. 2. But he's right that a massive cyberattack is likely imminent. (See my January 2025 @politico essay for why.)
  • @mjnblack Julia Black on x
    It begins! OpenAI just released the document I was tipped off about a couple of weeks ago, (very softly) proposing higher taxes on capital gains, a new Public Wealth Fund, “efficiency dividends,” and a four-day workweek. https://cdn.openai.com/...
  • @gavinpurcell Gavin Purcell on x
    looks like someone finally got the memo for much, much better public narratives hopefully not too little too late [image]
  • @kimmonismus @kimmonismus on x
    Holy moly: Sam Altman told Axios in a half-hour interview that AI superintelligence is so close, so mind-bending, so disruptive that America needs a new social contract. - It's on the scale of the Progressive Era in the early 1900s, and the New Deal during the Great Depression. […
  • @kimmonismus @kimmonismus on x
    Looks like OpenAI reached Superintelligence. OpenAI: “Now, we're beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI.” OpenAI just published a 13-page policy blueprint for the “Intelligence [im…
  • @carnage4life Dare Obasanjo on bluesky
    OpenAI's shared proposals for how governments should handle AI disruption:  —  1. Shift taxes from wages to corporate and capital gains.  —  2. Explore four-day workweeks with full pay.  —  3. Create a public AI investment fund for citizens to get upside from the AI boom.  —  4. …
  • @adrienle Adrien Ecoffet on x
    Proud to have been part of this. We outline policy ideas for the transition to superintelligence, to build an open economy where everyone benefits and a society that is resilient to the risks. Progress is fast, and we must navigate these issues urgently. https://openai.com/...
  • @noahpinion Noah Smith on x
    The heads of the big AI labs continue to insist that their products are going to take all your jobs, and also pose various catastrophic risks
  • @jeremyslevin Jeremy Slevin on x
    OpenAI just put out a policy paper announcing their support for a 32-hour work week with no loss in pay and expanded Social Security, Medicare and Medicaid. Now they just need to stop spending hundreds of millions of dollars to defeat candidates who run on these policies! [image]
  • r/accelerate r on reddit
    Sam Altman Told Axios That Superintelligence Is So Close & So Disruptive That America Needs A New Social Contract.
  • @adrienle Adrien Ecoffet on x
    @_NathanCalvin Totally reasonable to be skeptical. For what it's worth this was my first involvement in a policy project and my role was to lead a group of researchers who suggested many of these proposals and gave extensive feedback on all of them. I realize that at this stage t…
  • @_nathancalvin Nathan Calvin on x
    Currently the correct lens of viewing this document is as a cynical comms document that doesn't represent OpenAI's actual influence on policy/politics. I agree with Anton that if it wasn't a cynical comms doc then that would be good. OAI - take costly actions to prove me wrong!
  • @chup.blakereid.org Blake E. Reid on bluesky
    OpenAI's “industrial policy” doc is a helpful roadmap for the tropes they are about to flood the zone with (including via “research” grants) to influence law and policy.  Look out for stuff like the “Right to AI,” “democratization,” “public-private collaboration,” “open economy,”…
  • @davidcrespo @davidcrespo on bluesky
    not exactly surprising if you keep an eye on these things, but amusing to read OpenAI formally advocating for a sovereign wealth fund funded by higher capital gains and corporate taxes openai.com/index/indust... [image]
  • @martyswant Marty Swant on x
    This news comes hours after @NewYorker published its investigation detailing the various ways AI experts warn OpenAI hasn't been taking AI safety seriously enough. [image]
  • @markchen90 Mark Chen on x
    We're excited to launch the OpenAI Safety Fellowship - supporting rigorous, independent research on AI safety and alignment, including areas like evaluation, robustness, and scalable mitigations. Applications are open through May 4, 2026!
  • @clairekart Claire Kart on x
    this is the most vibecoded response to long form investigative journalism ever it's a type form application for a program starting in 6 months gg @RonanFarrow
  • @lang__leon Leon Lang on x
    Interesting that fellows are hosted at Constellation in Berkeley.
  • @deredleritt3r Prinz on x
    @RonanFarrow For those who are not interested in falling for this obvious bait, here is some actual information about OpenAI's safety practices: 1. OpenAI has a comprehensive Preparedness Framework in place, which is used to track and respond to critical AI safety risks. It's ava…
  • @mikeallen Mike Allen on x
    👀 I asked @sama why people should trust HIM to be at the forefront of AI's powers “I think almost everybody involved in our industry feels the gravity of what we're doing ... We also think it's very important that no one person is making the decisions by themselves” [video]
  • @ronanfarrow Ronan Farrow on x
    This announcement arrives hours after our investigation ( https://www.newyorker.com/...) described how OpenAI dissolved its superalignment and AGI-readiness teams and dropped safety from the list of its most significant activities on its IRS filings—and how, when we asked to spea…
  • @thezvi Zvi Mowshowitz on x
    Do you remember when he previously got asked this same question of why people should trust him, and instead of a PR speech he straight up said 'you shouldn't'?
  • @tomekkorbak Tomek Korbak on x
    OpenAI is spinning up an AI safety research fellowship program similar to MATS or Anthropic Fellows. People should apply!
  • @openai @openai on x
    Introducing the OpenAI Safety Fellowship, a new program supporting independent research on AI safety and alignment—and the next generation of talent. https://openai.com/...
  • @tenobrus @tenobrus on x
    really seems like OpenAI PR reps are deeply uneducated on the research activities of their own company. [image]
  • @_nathancalvin Nathan Calvin on x
    As anyone who follows me knows, I have many criticisms of OpenAI (especially on the policy/lobbying side), but their technical AI safety work remains similarly impressive and deep compared to their peers at GDM and Anthropic (though all of them need to do much better!), and a lot
  • @tszzl Roon on x
    the alignment team continues to exist and is one of the largest and most compute rich research programs at OpenAI (i am on it, I should know). specific teams dissolving usually has more to do with people than functions relatively new blog: https://alignment.openai.com/