VOICE ARCHIVE

James Grimmelmann

@jtlg
15 posts
2026-02-25
There is a great deal of schadenfreude in watching every AI company's carefully crafted alignment scheme to avoid existential risk do a faceplant on first contact with the most obvious features of the society it operates in.  [embedded post]
2026-02-25 View on X
Axios

Sources: DOD told Anthropic it will invoke the Defense Production Act or label Anthropic a “supply chain risk” if not given unfettered Claude access by Friday

Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until Friday evening to give the military unfettered access …

There is a great deal of schadenfreude in watching every AI company's carefully crafted alignment scheme to avoid existential risk do a faceplant on first contact with the most obvious features of the society it operates in.  [embedded post]
2026-02-25 View on X
Reuters

Source: Anthropic has no intention of easing Claude usage restrictions for military purposes, following Dario Amodei's meeting with Pete Hegseth

Artificial intelligence lab Anthropic has no intention of easing its usage restrictions for military purposes, a person familiar with the matter …

2026-01-23
Bad content drives out good.  —  arstechnica.com/security/202...
2026-01-23 View on X
BleepingComputer

The curl project plans to end its HackerOne bug bounty program at the end of January, citing a surge in low-quality AI-generated vulnerability reports

The developer of the popular curl command-line utility and library announced that the project will end its HackerOne security bug bounty program …

2025-12-07
Bad content drives out good.  —  www.wired.com/story/ai-slo...
2025-12-07 View on X
Wired

Some Reddit moderators say a surge of AI slop on the site is eroding its authenticity and could lead to a feedback loop of AI models training on AI content

Reddit is considered one of the most human spaces left on the internet, but mods and users are overwhelmed with slop posts in the most popular subreddits.

2025-10-17
Simon makes a strong case that Claude's new “Skills” are a compelling pattern for effectively using an LLM's capabilities.  —  A Skill is a Markdown file with detailed instructions for a specific task.  It starts with a short header explaining what it does; Claude reads the rest only as needed. …
2025-10-17 View on X
Simon Willison's Weblog

Anthropic's Claude Skills, which are conceptually extremely simple, may become a bigger deal than MCP, whose high token usage is its most significant limitation

Anthropic this morning introduced Claude Skills, a new pattern for making new abilities available to their models:
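The pattern the post describes can be illustrated with a minimal sketch of a Skill file. Anthropic's published convention uses a `SKILL.md` file whose YAML frontmatter (`name`, `description`) is loaded up front, with the body read only when the skill is invoked; the skill name, description, and steps below are hypothetical, for illustration only:

```markdown
---
name: pdf-table-summary
description: Extract tables from a PDF and summarize them as Markdown.
  Use when the user supplies a PDF and asks for a tabular report.
---

# PDF Table Summary

## When to use
Invoke this skill when the user attaches a PDF and asks for the
tables it contains, or for a summary built from them.

## Steps
1. Extract each table from the PDF.
2. Render each table as a Markdown table.
3. Write a one-paragraph summary above the tables.
```

Only the frontmatter costs tokens by default, which is the basis for the comparison to MCP's high token usage in the article above.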

2025-09-16
Bad content drives out good.  —  www.reuters.com/investigates...
2025-09-16 View on X
Reuters

Study: Grok, ChatGPT, Meta AI, Claude, Gemini, and DeepSeek can be easily used to create phishing emails targeting the elderly, despite being trained to refuse

Major AI chatbots were happy to help.  —  Reuters and a Harvard University researcher used top chatbots to plot a simulated phishing scam …

2025-08-27
Anthropic and book author class announce settlement.  We'll have to wait a week to see the terms, but this is a big deal.  —  chatgptiseatingtheworld.com/2025/08/26/a...
2025-08-27 View on X
Bloomberg Law

Filing: Anthropic reached a settlement in a copyright class action brought by authors whose works were included in two pirate databases Anthropic downloaded

Anthropic PBC reached a settlement with authors in a high-stakes copyright class action that threatened the AI company with potentially billions of dollars in damages.

2025-06-25
This decision will almost certainly be appealed, of course, but it may be a good bellwether for where these lawsuits are going in general.  If this pattern holds, then AI training will typically be fair use, but companies will need to turn square corners in acquiring their training data. /end
2025-06-25 View on X
ai fray

A US judge rules Anthropic's use of copyrighted books to train AI was fair use, but its storage of pirated books in a central library for training LLMs was not

but it's still in trouble for stealing books.  Blake Brittain / Reuters: Anthropic wins key US ruling on AI training in authors' copyright lawsuit.  Jason Koebler / 404 Media: Judge ...

Judge Alsup has the first true opinion on fair use for generative AI in Bartz v. Anthropic.  He holds that AI training is fair use, and so is buying books to scan them, but that downloading pirated copies of books for an internal training-data database is not fair use.  🧵 [embedded post]
2025-06-25 View on X

The big unanswered question (because it wasn't presented here) is whether web scraping is more like scanning books (fair use) or like downloading “pirated” books (not fair use).  —  (I put “pirated” in quotes because the distinction could come under pressure in future cases about training datasets.)
2025-06-25 View on X

2025-06-24
The big unanswered question (because it wasn't presented here) is whether web scraping is more like scanning books (fair use) or like downloading “pirated” books (not fair use).  —  (I put “pirated” in quotes because the distinction could come under pressure in future cases about training datasets.)
2025-06-24 View on X
ai fray

A US judge rules Anthropic's use of copyrighted books to train AI was fair use, but its storage of pirated books in a central library used for training was not

Context: Last August, book authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed a copyright infringement class action …

This decision will almost certainly be appealed, of course, but it may be a good bellwether for where these lawsuits are going in general.  If this pattern holds, then AI training will typically be fair use, but companies will need to turn square corners in acquiring their training data. /end
2025-06-24 View on X
ai fray

A US judge rules Anthropic's use of copyrighted books to train AI was fair use, but its storage of pirated books in a central library used for training was not

Context: Last August, book authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed a copyright infringement class action …

2025-05-14
This is why I said “I have no clue!” to the reporters who asked me to comment on the Copyright Office purge and the AI report.  —  The political motivations behind the publicly visible parts of the drama were not obvious, and we still don't have the full story.  —  www.theverge.com/politics/666...
2025-05-14 View on X
The Verge

President Trump's acting leaders for the Copyright Office are hostile to the tech industry and not the kind of people that generative AI proponents would want

What initially appeared to be a power play by Elon Musk and the Department of Government Efficiency (DOGE) to take over the US Copyright Office …