VOICE ARCHIVE

Sayash Kapoor

@sayashk
5 posts
2025-04-21
How will AI impact the economy? Can we defend against misuse? What policies would mitigate the risks of AI? Thrilled to share that @random_walker and I are writing another book to tackle these questions! Today, we release a paper laying out our argument: AI as Normal Technology. [image]
Knight First Amendment Institute

A deep dive into AI as a normal technology versus a humanlike intelligence, and how public policy premised on controlling superintelligence may make things worse

An alternative to the vision of AI as a potential superintelligence  —  We articulate a vision of artificial intelligence (AI) as normal technology. Bluesky: @taumuyi , @knightcolu...

2024-09-25
📣AI SNAKE OIL is out today! Writing this book over the last two years has been a labor of love. @random_walker and I are very excited to hear what you think of it, and we hope you pick up a copy. Some reflections on the process of writing the book 🧵 [image]
Wired

An interview with Arvind Narayanan and Sayash Kapoor on their new book AI Snake Oil, which is based on their popular newsletter about AI's shortcomings

A New Book by 2 Princeton University Computer Scientists X: Eric Topol / @erictopol : Is #AI snake oil? Some of it is, as asserted by @random_walker and @sayashk in a new book publ...

2024-03-06
AI company policies such as account bans and legal threats can chill independent evaluation. Today, we are releasing a paper and an open letter (signed by 100+ researchers) calling for safe harbors for independent evaluation. Letter: https://sites.mit.edu/... Paper: https://sites.mit.edu/...
Washington Post

More than 100 top AI researchers sign an open letter imploring AI companies to provide a legal and technical safe harbor for researchers to study their products

Tech company policies have put a chill on independent AI research, says open letter  —  More than 100 top artificial intelligence …

2023-10-19
Foundation models have profound societal impact, but transparency about these models is waning. Today, we are launching the Foundation Model Transparency Index, which offers a deep dive into the transparency practices and standards of key AI developers. https://crfm.stanford.edu/fmti/ [image]
New York Times

Stanford unveils the Foundation Model Transparency Index, featuring 100 indicators; Llama 2 led at 54%, GPT-4 placed third at 48%, and PaLM 2 took fifth at 40%

https://www.nytimes.com/...  [image] Mark Coggins / @coggins@mastodon.social : This is the kind of needed AI regulation—requiring model makers to reveal how they trained their lang...

2023-03-22
GPT-4 memorizes coding problems in its training set. How do we know? @random_walker and I prompted it with a Codeforces problem title. It output the exact URL for the competition, which strongly suggests memorization. https://twitter.com/...
AI Snake Oil

OpenAI may have tested GPT-4 on its training data, violating the cardinal rule of ML, and GPT-4's exam performance says little about its real-world usefulness

OpenAI may have tested on the training data.  Besides, human benchmarks are meaningless for bots.