OpenAI unveils policy proposals for a world with superintelligence: higher taxes on capital gains, a public AI investment fund, bolstered safety nets, and more
The ChatGPT maker put out policy proposals so consumers benefit from rapid advancements in artificial intelligence
Wall Street Journal Amrith Ramkumar
Related Coverage
- OpenAI Advocates Electric Grid, Safety Net Spending for New AI Era Bloomberg
- Behind the Curtain: Sam's superintelligence New Deal Axios
- OpenAI calls for robot taxes, a public wealth fund, and a 4-day workweek to tackle AI disruption Business Insider · Tom Carter
- OpenAI published a 13-page document called “Industrial Policy for the Intelligence Age” — a blueprint for how governments should reorganize the economy around AI. … Michael Kokin
- Sam Altman published OpenAI's industrial policy blueprint this week. (See link in comments). Thirteen pages on how to keep people first during the transition to superintelligence. … Vinod K.
- The company that could make your job obsolete just published a report saying they're worried about making your job obsolete. How reassuring. … Jabran Chaudhry
- OpenAI's Altman releases blueprint for taxing, regulating artificial intelligence The Hill · Miranda Nazzaro
- Industrial Policy for the Intelligence Age: Ideas to Keep People First OpenAI
- OpenAI Recommends AI Policy Social Safety Nets MediaPost · Laurie Sullivan
- OpenAI calls for robot taxes, a public wealth fund, and a four-day week The Next Web · Ana Maria Constantin
- OpenAI's vision for the AI economy: public wealth funds, robot taxes, and a four-day work week TechCrunch · Rebecca Bellan
- OpenAI CEO urges U.S. to prepare for AI ‘superintelligence’ risks and gains CoinDesk · Francisco Rodrigues
- OpenAI calls for robot taxes, public wealth fund to cushion AI job losses Quartz · Colleen Cabili
- Less work, equal pay: OpenAI lays out its vision for a world reshaped by superintelligence The Decoder · Matthias Bastian
- OpenAI releases policy proposals aimed at addressing fallout from AI-driven job losses Yahoo Finance · Daniel Howley
- Proud of the policy roadmap OpenAI published today. If advanced AI creates enormous value, the goal can't just be more growth in the abstract. … David Robinson
- OpenAI Releases Its Vague Vision for Reorganizing Society Around Superintelligence Gizmodo · Bruce Gil
- OpenAI Touts 4-Day Work Week, Wealth Fund to Sell Public on Next-Gen AI PCMag · Michael Kan
- Surviving SuperIntelligence: 6 Things OpenAI Says We Need To Do Now Forbes · John Koetsier
- OpenAI Urges New Economic Rules for the AI Era PYMNTS.com
- If you're thinking about AI only as a tech issue, you're missing the bigger shift. — OpenAI just published a new piece on industrial policy in the intelligence age. … Viviana Jordan
- OpenAI Calls for Global Shift in Taxation, Labor Policy as AI Takes Over Decrypt · Jason Nelson
- OpenAI moves to shape AI policy debate eMarketer · Grace Harmon
- OpenAI just published 13 pages on industrial policy for the AI age. — The proposals are more serious than you might expect. … Adrian Brown
- Sam Altman says AI superintelligence is so big that we need a ‘New Deal’—critics say OpenAI's policy ideas are a cover for ‘regulatory nihilism’ Fortune · Sharon Goldman
- Introducing the OpenAI Safety Fellowship OpenAI
- “The problem is Sam Altman”: OpenAI Insiders don't trust CEO Ars Technica · Ashley Belanger
Discussion
-
@mikeallen
Mike Allen
on x
🚨🚨@sama tells me he feels such URGENCY about the power of coming AI models that @OpenAI is unveiling a New Deal for superintelligence - ideas to wake up DC He says AI will soon be so mindbending that we need a new social contract 👇Altman's top 6 ideas https://www.axios.com/...
-
@marypcbuk
Mary Branscombe
on bluesky
Bit late for April Fools [embedded post]
-
@andrewcurran_
Andrew Curran
on x
OpenAI has written a new policy proposal ‘Industrial Policy for the Intelligence Age: Ideas to Keep People First.’ They propose the creation of a Public Wealth Fund that will provide American citizens with an automatic public stake in AI companies and AI infrastructure even if [i…
-
@anton_d_leicht
Anton Leicht
on x
Taken seriously, something like this is the best direction for accelerationist policy. OpenAI is asking policymakers to build a world that can handle the speed they're planning to move at; deployment absorption instead of development friction. But there's a good and bad version
-
@_nathancalvin
Nathan Calvin
on x
Appreciate that this recent “Industrial Policy for the Intelligence Age” doc is more frank about safety risks than many other OpenAI global affairs docs I've previously seen. As always though, I'll believe it when the attacks on Alex Bores from their Superpac stop [image]
-
@garymarcus
Gary Marcus
on x
1. The more Sam's finances don't add up, the hypier he gets. 2. But he's right that a massive cyberattack is likely imminent. (See my January 2025 @politico essay for why.)
-
@mjnblack
Julia Black
on x
It begins! OpenAI just released the document I was tipped off about a couple of weeks ago, (very softly) proposing higher taxes on capital gains, a new Public Wealth Fund, “efficiency dividends,” and a four-day workweek. https://cdn.openai.com/...
-
@gavinpurcell
Gavin Purcell
on x
looks like someone finally got the memo for much, much better public narratives hopefully not too little too late [image]
-
@kimmonismus
@kimmonismus
on x
Holy moly: Sam Altman told Axios in a half-hour interview that AI superintelligence is so close, so mind-bending, so disruptive that America needs a new social contract. - It's on the scale of the Progressive Era in the early 1900s, and the New Deal during the Great Depression. […
-
@kimmonismus
@kimmonismus
on x
Looks like OpenAI reached Superintelligence. OpenAI: “Now, we're beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI.” OpenAI just published a 13-page policy blueprint for the “Intelligence [im…
-
@carnage4life
Dare Obasanjo
on bluesky
OpenAI's shared proposals for how governments should handle AI disruption: — 1. Shift taxes from wages to corporate and capital gains. — 2. Explore four-day workweeks with full pay. — 3. Create a public AI investment fund for citizens to get upside from the AI boom. — 4. …
-
@adrienle
Adrien Ecoffet
on x
Proud to have been part of this. We outline policy ideas for the transition to superintelligence, to build an open economy where everyone benefits and a society that is resilient to the risks. Progress is fast, and we must navigate these issues urgently. https://openai.com/...
-
@noahpinion
Noah Smith
on x
The heads of the big AI labs continue to insist that their products are going to take all your jobs, and also pose various catastrophic risks
-
@jeremyslevin
Jeremy Slevin
on x
OpenAI just put out a policy paper announcing their support for a 32-hour work week with no loss in pay and expanded Social Security, Medicare and Medicaid. Now they just need to stop spending hundreds of millions of dollars to defeat candidates who run on these policies! [image]
-
r/accelerate
on reddit
Sam Altman Told Axios That Superintelligence Is So Close & So Disruptive That America Needs A New Social Contract.
-
@adrienle
Adrien Ecoffet
on x
@_NathanCalvin Totally reasonable to be skeptical. For what it's worth this was my first involvement in a policy project and my role was to lead a group of researchers who suggested many of these proposals and gave extensive feedback on all of them. I realize that at this stage t…
-
@_nathancalvin
Nathan Calvin
on x
Currently the correct lens of viewing this document is as a cynical comms document that doesn't represent OpenAI's actual influence on policy/politics. I agree with Anton that if it wasn't a cynical comms doc then that would be good. OAI - take costly actions to prove me wrong!
-
@chup.blakereid.org
Blake E. Reid
on bluesky
OpenAI's “industrial policy” doc is a helpful roadmap for the tropes they are about to flood the zone with (including via “research” grants) to influence law and policy. Look out for stuff like the “Right to AI,” “democratization,” “public-private collaboration,” “open economy,”…
-
@davidcrespo
@davidcrespo
on bluesky
not exactly surprising if you keep an eye on these things, but amusing to read OpenAI formally advocating for a sovereign wealth fund funded by higher capital gains and corporate taxes openai.com/index/ indust... [image]
-
@martyswant
Marty Swant
on x
This news comes hours after @NewYorker published its investigation detailing the various ways AI experts warn OpenAI hasn't been taking AI safety seriously enough. [image]
-
@markchen90
Mark Chen
on x
We're excited to launch the OpenAI Safety Fellowship - supporting rigorous, independent research on AI safety and alignment, including areas like evaluation, robustness, and scalable mitigations. Applications are open through May 4, 2026!
-
@clairekart
Claire Kart
on x
this is the most vibecoded response to long form investigative journalism ever it's a type form application for a program starting in 6 months gg @RonanFarrow
-
@lang__leon
Leon Lang
on x
Interesting that fellows are hosted at Constellation in Berkeley.
-
@deredleritt3r
Prinz
on x
@RonanFarrow For those who are not interested in falling for this obvious bait, here is some actual information about OpenAI's safety practices: 1. OpenAI has a comprehensive Preparedness Framework in place, which is used to track and respond to critical AI safety risks. It's ava…
-
@mikeallen
Mike Allen
on x
👀 I asked @sama why people should trust HIM to be at the forefront of AI's powers “I think almost everybody involved in our industry feels the gravity of what we're doing ... We also think it's very important that no one person is making the decisions by themselves” [video]
-
@ronanfarrow
Ronan Farrow
on x
This announcement arrives hours after our investigation ( https://www.newyorker.com/...) described how OpenAI dissolved its superalignment and AGI-readiness teams and dropped safety from the list of its most significant activities on its IRS filings—and how, when we asked to spea…
-
@thezvi
Zvi Mowshowitz
on x
Do you remember when he previously got asked this same question of why people should trust him, and instead of a PR speech he straight up said 'you shouldn't'?
-
@tomekkorbak
Tomek Korbak
on x
OpenAI is spinning up an AI safety research fellowship program similar to MATS or Anthropic Fellows. People should apply!
-
@openai
@openai
on x
Introducing the OpenAI Safety Fellowship, a new program supporting independent research on AI safety and alignment—and the next generation of talent. https://openai.com/...
-
@tenobrus
@tenobrus
on x
really seems like OpenAI PR reps are deeply uneducated on the research activities of their own company. [image]
-
@_nathancalvin
Nathan Calvin
on x
As anyone who follows me knows, I have many criticisms of OpenAI (especially on the policy/lobbying side), but their technical AI safety work remains similarly impressive and deep compared to their peers at GDM and Anthropic (though all of them need to do much better!), and a lot
-
@tszzl
Roon
on x
the alignment team continues to exist and is one of the largest and most compute rich research programs at OpenAI (i am on it, I should know). specific teams dissolving usually has more to do with people than functions relatively new blog: https://alignment.openai.com/