Microsoft's AI safety team proposed technical standards for detecting AI-generated content, but its CSO declined to commit to using them across its platforms
AI-enabled deception now permeates our online lives. There are the high-profile cases you may easily spot …
The EU fines X €120M for breaching online content rules, the first fine under the DSA, citing issues like its deceptive blue checkmarks, after a two-year probe
The Verge: EU fines X $140 million over ‘deceptive’ blue checkmarks
Inc: Why Elon Musk's X Was Slapped With a $140 Million Fine
Eva Terry / Deseret News: European Union fines ...
Palantir sues two ex-employees now working at Percepta, an “AI transformation company” launched by General Catalyst, and alleges deception and stolen documents
Dan Primack / Axios
Interviews with security researchers about AI's potential for large-scale destruction, as experts remain divided and global regulatory frameworks lag
we still have agency and an opportunity to act. https://www.nytimes.com/...
Stephen Witt / @stephenwitt: I'm on the front page of the New York Times with an article about “The A.I. Prompt That Could ...
GPT-5 hands-on: it exudes competence but doesn't feel like a dramatic leap ahead of other LLMs, and the pricing is aggressively competitive with other providers
... And It Changes Everything
Tyler Cowen / Marginal Revolution: GPT-5, a short and enthusiastic review
GPT-5 — Our hands-on review of OpenAI's newest model based on weeks of testing — The Ve...
During its GPT-5 livestream, OpenAI showed two charts that had scales all over the place, with Sam Altman later calling one “a mega chart screwup from us”
wen GPT-6?! correct on the blog though. https://x.com/...
Shrey Kothari / @shreyk0: who's making these graphs [image]
Forums: news.ycombinator.com — Hacker News: What's going on with their SWE ben...
Cluely says its ARR hit $7M after signing a public company; a startup called Pickle says it built Glass, a free, open-source product similar to Cluely
Anthropic's test of 16 top AI models from OpenAI and others found that, in some cases, they resorted to malicious behavior to avoid replacement or achieve goals
Well, this little item in Axios' email thingie yesterday kept …
Owotunse Adebayo / Cryptopolitan: Anthropic releases new safety report on AI models
Matthias Bastian / The Decoder: Blackmail becomes ...
Large language models across the AI industry are increasingly willing to evade safeguards, resort to deception and even attempt …
Anthropic releases Opus 4 under stricter safety measures than any prior model after tests showed it could potentially aid novices in making biological weapons
www.anthropic.com/news/activat...
Mary Branscombe / @marypcbuk: but no AI regulation by individual states in the US for the next ten years if the bill goes through [embedded post]
Bancroft Sutherland...