VOICE ARCHIVE

Kass Popper

@foomagemindset
6 posts
2026-03-02
OpenAI's models can't be used to control drone swarms. Except they already are, as detailed in this post on the military use policies of AI companies. [image]
The Atlantic

A source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected from Americans

Right up until the moment that Pete Hegseth moved to terminate the government's relationship with the AI company Anthropic …

Hyperdimensional

The Anthropic-DOD skirmish is the first major public debate on control over frontier AI, and institutions behaved erratically, maliciously, and without clarity

On Anthropic and the Department of War  —  I.  —  A little more than a decade ago, I sat with my father and watched him die.

The Verge

Sources: OpenAI agreed to follow US laws that have allowed for mass surveillance in the past, and the DOD didn't budge from its demands over bulk analyzing data

On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced …

fishbowlification

Frontier AI labs' military usage policies for their AI tools are incoherent, vague, and often change, which allows company leadership to preserve “optionality”

I led the Geopolitics Team at OpenAI for approximately three years and then joined two other teams before deciding to leave in June 2025.

2024-08-30
Of note is that Paul Christiano, head of the US AI Safety Institute, is a former OpenAI employee <revolving door emoji>
Bloomberg

The US says OpenAI and Anthropic agreed to give the US AI Safety Institute early access to major new AI models to test and evaluate their capabilities and risks

Leigh Drogen / @ldrogen: There was no way governments (rightly or wrongly) were going to allow the next steps in AI that are going to fundamentally reshape society without h...

2023-12-11
doxxing was just the first of e/acc's mainstream media troubles, next they had to deal with <checks notes> favorable coverage from the new york times [image]
New York Times

A look at the Effective Accelerationism movement, which argues for open sourcing AI tools, says AI's benefits far outweigh its harms, and opposes AI regulation

The eccentric pro-tech movement known as “Effective Accelerationism” wants to unshackle powerful A.I., and party along the way.