VOICE ARCHIVE

Jeffrey Ladish

@jeffladish
8 posts
2026-02-28
This is a huge mistake. The government has every right to choose not to work with any AI company it doesn't want to do business with. And indeed, I think the government should be demanding far more oversight over what AI companies can and cannot do. This... isn't that. The DoW
2026-02-28 View on X
Anthropic

Anthropic says it'll challenge “any supply chain risk designation in court” and that the designation would only affect contractors' use of Claude on DOD work

Earlier today, Secretary of War Pete Hegseth shared on X that he is directing the Department of War to designate Anthropic a supply chain risk.

@secwar

Defense Secretary Pete Hegseth directs the DOD to designate Anthropic as a supply chain risk, barring military contractors from doing business with the company

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our ...

2024-08-01
These all seem like good things. Glad to see Sam reaffirm the 20% commitment, give early model access to the US AISI, and affirm the norm for current and former employees to speak up re issues
2024-08-01 View on X
Bloomberg

OpenAI tells US lawmakers that it is “dedicated” to “rigorous safety protocols at every stage of our process” and is working with the US AI Safety Institute

OpenAI, responding to questions from US lawmakers, said it's dedicated to making sure its powerful AI tools …

2024-03-09
I don't trust Sam Altman to lead an AGI project. I think he's a deeply untrustworthy individual, low in integrity and high in power seeking. It doesn't bring me joy to say this. I rather like Sam Altman. I like his writing, I like the way he communicates clearly, I like how he...
2024-03-09 View on X
OpenAI

OpenAI announces Sam Altman will rejoin its board of directors, alongside three new members: Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo

Dr. Sue Desmond-Hellmann, Nicole Seligman, Fidji Simo join; Sam Altman rejoins board  —  We're announcing three new members to our Board …

2023-12-05
Good essay by Bruce Schneier on trust & AI: https://www.schneier.com/... I think we're going to have agentic AI systems faster than Bruce thinks, so it is pretty important we regulate the creation of AI directly. However, I also think Bruce is right about these immediate risks
2023-12-05 View on X
Schneier on Security

AI chatbots' conversational nature makes them more trustworthy, which could cause problems, like “hidden exploitation”, outright fraud, and mistaken expertise

I trusted a lot today.  I trusted my phone to wake me on time.  I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely.

2023-05-23
OpenAI just wrote up their plans for how they would like to develop superintelligent AI, and why they think we can't stop development right now. I'd summarize their approach as “let's proceed to superintelligence with global oversight” https://openai.com/...
2023-05-23 View on X
TechCrunch

OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever say the world will likely need a regulatory body for superintelligence

Now is a good time to start thinking about the governance …

2023-03-16
Great to see people able to change their mind and own it, especially about really consequential stuff! I hope OpenAI will change their mind about scaling at max speed, especially because the reason it's so dangerous is the same reason that wide open publishing is so dangerous https://twitter.com/...
2023-03-16 View on X
The Verge

Experts criticize OpenAI for not disclosing GPT-4's training data or methods used; OpenAI co-founder says its past approach to openly sharing research was wrong

The system's capabilities are still being assessed, but as researchers and experts pore over its accompanying materials …

2023-03-15
I admit I'm a bit afraid and I don't think that's a bad thing. It's not that GPT-4 is way more powerful than I expected. I loosely expected something similar. But seeing the cognitive jump, I take a step back and look at the trajectory and the compute overhang and I'm scared
2023-03-15 View on X
OpenAI

OpenAI debuts GPT-4, claiming the model “surpasses ChatGPT in its advanced reasoning capabilities”, available in ChatGPT Plus and as an API that has a waitlist

Following the research path from GPT, GPT-2, and GPT-3, our deep learning approach leverages more data and more computation …