2025-11-30
it would be interesting to understand whether the AI reviewers prefer AI writing. my understanding so far is that they do, but i wonder if anyone has looked into this quantitatively?
Nature
Pangram Labs: ~21% of the 75,800 peer reviews submitted for ICLR 2026, a major ML conference, were fully AI-generated, and 50%+ contained signs of AI use
By Miryam Naddaf, a science writer based in London.
2025-03-20
i think submitting ai papers to a venue without contacting the PCs is bad. Sakana reached out asking whether we would be willing to participate in their experiment for the workshop i'm organizing at ICLR, and i (we) said no. this shows a lack of respect for human reviewers' time.
TechCrunch
AI startups Intology and Autoscience submitted AI-generated studies at a conference without disclosure and face criticism of co-opting peer review for publicity
Kyle Wiggers / TechCrunch
2024-10-01
i usually would never retweet these corporate pr releases unless they share some real details, but a long time ago one of their investors pitched their foundation model idea to me and i was privately very skeptical. so, publicly, i'll admit that it seems like i was wrong!
VentureBeat
MIT spinoff Liquid AI debuts its non-transformer AI models LFM-1B, LFM-3B, and LFM-40B MoE, claiming they achieve “state-of-the-art performance at every scale”
Liquid AI, a startup co-founded by former researchers from the Massachusetts Institute of Technology (MIT) …
2024-05-18
I'm glad that Jan was able to send out the Superalignment grants before he left. The $10M OpenAI invested into various projects, including ours, will hopefully enable a lot of great research. The tension between product and security in industry is one reason why I enjoy academia
@janleike
[Thread] Superalignment team co-lead explains why he has left, says OpenAI's safety culture and processes took a backseat to shiny products over the past years
Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.
2023-10-16
Major takeaway here: even if your dataset doesn't contain any adversarial data, the GPT-3.5 fine-tuning API can STILL compromise safety! Folks fine-tuning models with the API will have to be careful. This work also quantifies just how fast models get misaligned when bad data is present.
The Register