A deep dive into the view of AI as a normal technology rather than a humanlike intelligence, and how public policy premised on controlling superintelligence may make things worse
An alternative to the vision of AI as a potential superintelligence — We articulate a vision of artificial intelligence (AI) as normal technology.
An interview with Arvind Narayanan and Sayash Kapoor on their new book AI Snake Oil, which is based on their popular newsletter about AI's shortcomings
A New Book by 2 Princeton University Computer Scientists — X: Eric Topol / @erictopol: Is #AI snake oil? Some of it is, as asserted by @random_walker and @sayashk in a new book publ...
More than 100 top AI researchers sign an open letter imploring AI companies to provide a legal and technical safe harbor for researchers to study their products
Tech company policies have put a chill on independent AI research, says open letter — More than 100 top artificial intelligence …
Stanford unveils the Foundation Model Transparency Index, featuring 100 indicators; Llama 2 led at 54%, GPT-4 placed third at 48%, and PaLM 2 took fifth at 40%
https://www.nytimes.com/... Mark Coggins / @coggins@mastodon.social: This is the kind of needed AI regulation—requiring model makers to reveal how they trained their lang...
OpenAI may have tested GPT-4 on its training data, violating the cardinal rule of ML, and GPT-4's exam performance says little about its real-world usefulness
OpenAI may have tested on the training data. Besides, human benchmarks are meaningless for bots.