David Sacks says Anthropic is running a “regulatory capture strategy based on fear-mongering”, in response to Anthropic co-founder Jack Clark's AI policy essay
On Tuesday, White House AI “czar” and venture capitalist David Sacks intensified a frustration that has been building for months.
Bloomberg · Dave Lee
Related Coverage
- Trump's AI advisor accuses Anthropic of “regulatory capture” The Decoder · Maximilian Schreiner
- Trump AI Czar Is Trying to Take Down Anthropic AI Gizmodo · AJ Dellinger
- New AI battle: White House vs. Anthropic Axios · Dan Primack
- Anthropic's latest AI model, Claude Haiku 4.5, doubles down on speed and safety Mashable · Chance Townsend
- I spoke to Anthropic co-founder Jack Clark yesterday, moments after he was accused by White House AI czar David Sacks of … Dave Lee
Discussion
- David Sacks (@davidsacks) on X: Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.
- Jack Clark (@jackclarksf) on X: @DavidSacks It's through working with the startup ecosystem that we've updated our views on regulation - and the importance of a federal standard. More details in thread, but we'd love to work with you on this, particularly supporting a new generation of startups leveraging AI.
- Zvi Mowshowitz (@thezvi) on X: Everyone should read the whole essay for themselves. It was even better in person - you can tell when someone cares deeply and is giving it to you straight, as they see it.
- Derya Unutmaz (@deryatr_) on X: This is why I choose not to use Anthropic AI models. Although they are quite good & I am sure there are some amazing AI researchers working there, I believe their stance on regulatory capture & fear-mongering is so damaging to AI progress that I cannot indirectly support it. ☹️
- Sriram Krishnan (@sriramk) on X: On AI safety lobbying: Fascinating to see the reaction on X to @DavidSacks post yesterday especially from the AI safety/EA community. Think a few things are going on (a) the EA/ AI safety / “doomer” lobby was natural allies with the left and now find themselves out of power.
- @plzbepatient on X: Anyone that Scott “AIDS transmission decriminalizer” Wiener supports should instantly be looked at with extreme scrutiny.
- Tommy (@shaughnessy119) on X: I feel like the top is in for closed model AI reg capture - OpenAI lawsuit backfiring - White House / Sacks calling out Anthropic for fear mongering - China topping open source AI leaderboards (US needs to innovate here)
- @dissenter_hi on X: I've known @jackclarkSF for I guess... 15 years now? I would consider him a best friend. Jack is a very sincere and thoughtful man. I've watched him for the past 5 years grapple with the weight of his reality, I've watched him think through how to go about preparing the
- @luke_metro on X: There has always been some daylight between the influencer/VC crowd and the engineer/researchers in tech, but on the subject of AI regulation it is a complete chasm
- Blake Scholl (@bscholl) on X: @DavidSacks Regulatory capture is unethical and we need to make it socially unacceptable
- Shakeel (@shakeelhashim) on X: If we're gonna talk about politicians defending companies...
- Jack Clark (@jackclarksf) on X: Technological Optimism and Appropriate Fear - an essay where I grapple with how I feel about the continued steady march towards powerful AI systems. The world will bend around AI akin to how a black hole pulls and bends everything around itself. [image]
- @s_oheigeartaigh on X: @sriramk @DavidSacks I must acknowledge some good points here: - I think (parts of) AI safety has indeed at points over-anchored on very short timelines and very high p(doom)s - I think it's prob true that forecasting efforts haven't always drawn on a diverse enough set of expert…
- @jason on X: As we discussed on the pod, I'm trying to find the most dangerous things related to AI that we need to potentially regulate, that aren't already covered by existing laws. Only things that comes to mind is children's use of the technology, creating unique bioweapons/etc and of
- Shakeel (@shakeelhashim) on X: “It is hard to trust policy work when it is clear there is an ideology you are being sold behind it.” Yes, but this is equally true of the other side — notable, for instance, that export control policy shifts in response to Trump's meetings with Jensen Huang.
- Jacob Silverman (@silvermanjacob) on X: What is the “race we can't afford to lose” with China? What's the endgame? What makes this colossal resource expenditure and surrender of power to authoritarian big tech worth it?
- David Holt (@idrawcharts) on X: comms fun: “My broad view on a lot of AI safety organizations is they have smart people (including many friends) doing good technical work on AI capabilities but they lack epistemic humility on their biases or a broad range of intellectual diversity in their employee base which
- @s_oheigeartaigh on X: Sacks' post irked me, but I must acknowledge some good points here: - I think (parts of) AI safety has indeed at points over-anchored on very short timelines and very high p(doom)s - I think it's prob true that forecasting efforts haven't always drawn on a diverse enough set of
- Robin Hanson (@robinhanson) on X: “you could easily have someone looking at Pagerank in 1997 and doing a ‘bio risk uplift study’ and deciding Google and search is a threat to mankind. or ‘microprocessor computational safety’ in the 1980s forecasting Moore's law as the chart that leads us to doom.”
- Jon Stokes (@jonst0kes) on X: Great post. Endorsed. Especially this part. Total lack of humility or awareness of their own blind spots. It's truly something. [image]
- David Sacks (@davidsacks) on X: “It is hard to trust policy work when it is clear there is an ideology you are being sold behind it.”
- Dean W. Ball (@deanwball) on X: way to go scott, you really made a difference with your tweet! we are all so proud of you.
- Diego Areas Munhoz (@dareasmunhoz) on X: This is a pretty remarkable rebuke of a major American AI company by the WH AI Czar. But if you read our item from a few weeks ago you'd know Anthropic has chosen a different path from this admin's AI doctrine https://punchbowl.news/...
- Agus (@austinc3301) on X: The irony of Sacks accusing others of regulatory capture when his network of his people have successfully captured the executive government in service of the pro-AI lobby
- Keith Rabois (@rabois) on X: @DavidSacks true.
- Samuel Hammond (@hamandcheese) on X: @DavidSacks Have you considered that Jack is simply being sincere?
- Keith Rabois (@rabois) on X: @DavidSacks just ignore him.
- @s_oheigeartaigh on X: @DavidSacks Nobody would write something that sounds as batshit to normies as this essay does, and release it publicly, unless they actually believed it.
- @basedbeffjezos on X: @DavidSacks Thanks for calling it out like it is, David 🙏
- Jeffrey Emanuel (@doodlestein) on X: @DavidSacks Whether or not they sincerely believe this stuff or it's a cynical regulatory capture strategy long-game, the net results are the same, which is that it slows us down and helps China overtake us in this critical technology.
- Harlan Stewart (@humanharlan) on X: @DavidSacks I think that what Jack is expressing here is pretty similar to what you have said about efforts to build human-level AI systems: “AGI is a potential successor species.”
- Dan Mac (@daniel_mac8) on X: @DavidSacks You may be right about that but it is ignoring half the message of this blog post: [image]
- Jack Clark (@jackclarksf) on X: @chamath We partner with thousands of startups and are excited to help build out a new commercial ecosystem based on our coding models. On regulation, we agree - this is much better left to the federal government, and we said this when SB53 passed.
- Jack Clark (@jackclarksf) on X: @DavidSacks It's actually through working with startups we've learned that simple regulations would benefit the entire ecosystem - especially if you include a threshold to protect startups. We outlined how such a threshold could work in our transparency framework. [image]
- Roon (@tszzl) on X: @DavidSacks it's obvious they are sincere
- @s_oheigeartaigh on X: A small handful of Thiel business associates and a16z/Scale AI executives literally occupy every key AI position in USG, from which lofty position they tell us about regulatory capture. I love 2025, peak comedy.
- David Sacks (@davidsacks) on X: Scott Weiner's rushing to defend Anthropic tells you everything you need to know about how closely they're working together to impose the Left's vision of AI regulation.
- @buccocapital on X: $AMZN shareholders watching David Sacks threaten to destroy Anthropic, the last remaining hope for AWS re-acceleration [image]
- Sunny Madra (@sundeep) on X: One company is causing chaos for the entire industry.
- Marc Andreessen (@pmarca) on X: Truth.
- Eric Newcomer (@ericnewcomer) on X: always projection with this one, impossible to believe someone could have a sincere point of view
- Senator Scott Wiener (@scott_wiener) on X: This is propaganda. @AnthropicAI understands that you can innovate & build powerful models while also looking out for public health & safety. Your boss's effort to ban states from acting on AI w/o advancing federal protections is all we need to know about the regime's motives.
- Chamath Palihapitiya (@chamath) on X: I wonder if those that use Claude Code or the umpteen app-crappers that wrap Anthropic realize they are funding their own demise? Only the biggest and well funded companies (like Anthropic) will be able to afford the operational complexity of meeting 50 different sets of state A…
- Keith Rabois (@rabois) on X: If Anthropic actually believed their rhetoric about safety, they can always shut down the company. And lobby then.
-
r/claudexplorers
r
on reddit
Bloomberg, “Anthropic's AI Principles Make It a White House Target”