VOICE ARCHIVE

@pandaashwinee
5 posts
2025-11-30
it would be interesting to understand whether the AI reviewers prefer AI writing. my understanding so far is that they do, but i wonder if anyone has looked into this quantitatively?
Nature

Pangram Labs: ~21% of the 75,800 peer reviews submitted for ICLR 2026, a major ML conference, were fully AI-generated, and 50%+ contained signs of AI use

By Miryam Naddaf, a science writer based in London.

2025-03-20
i think submitting ai papers to a venue without contacting the PCs is bad. Sakana reached out asking whether we would be willing to participate in their experiment for the workshop i'm organizing at ICLR, and i (we) said no. this shows a lack of respect for human reviewers' time.
TechCrunch

AI startups Intology and Autoscience submitted AI-generated studies at a conference without disclosure and face criticism of co-opting peer review for publicity

Kyle Wiggers / TechCrunch. X: @intologyai, @pandaashwinee, @tuhinchakr, @sakanaailabs, @autoscienceai, and @dorialexander.

2024-10-01
i usually would never retweet these corporate pr releases unless they share some real details, but a long time ago one of their investors pitched their foundation model idea to me and i was privately very skeptical so, publicly, i'll admit that it seems like i was wrong!
VentureBeat

MIT spinoff Liquid AI debuts its non-transformer AI models LFM-1B, LFM-3B, and LFM-40B MoE, claiming they achieve “state-of-the-art performance at every scale”

Liquid AI, a startup co-founded by former researchers from the Massachusetts Institute of Technology (MIT) …

2024-05-18
I'm glad that Jan was able to send out the Superalignment grants before he left. The $10M OpenAI invested into various projects, including ours, will hopefully enable a lot of great research. The tension between product and security in industry is one reason why I enjoy academia
@janleike

[Thread] Superalignment team co-lead explains why he has left, says OpenAI's safety culture and processes took a backseat to shiny products over the past years

Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.

2023-10-16
Major takeaway here: even if your dataset doesn't contain any adversarial data, GPT-3.5 finetuning API can STILL compromise safety! Folks FTing models with the API will have to be careful. This work also quantifies just how fast models get misaligned when bad data is present.
The Register

Researchers find that a modest amount of fine-tuning can bypass safety efforts aiming to prevent LLMs such as OpenAI's GPT-3.5 Turbo from spewing toxic content