Research: the number of freelance jobs on platforms like Upwork, in areas where generative AI excels, has dropped by as much as 21% since ChatGPT's debut
There's now data to back up what freelancers have been saying for months — Jennifer Kelly, a freelance copywriter in the picturesque …
Adobe trained Firefly using images that users created with tools like Midjourney and uploaded to its stock marketplace; Adobe says ~5% of the images were AI-generated
Researchers say they haven't found “strikingly novel compounds” after analyzing a subset of the 2.2M new crystals DeepMind claimed its AI tool GNoME discovered
In November, Google's AI outfit DeepMind published a press release titled “Millions of new materials discovered with deep learning.”
Stanford researchers: LAION-5B, a dataset of 5B+ images used by Stability AI and others, contains 1,008+ instances of CSAM, possibly helping AI to generate CSAM
most prominently, Stable Diffusion 1.5—to see to what degree CSAM itself might be present in the training data. https://purl.stanford.edu/... Alex Stamos / @alex.stamos : Lots of p...
Source: a breakthrough spearheaded by OpenAI chief scientist Ilya Sutskever enabled a model that could solve basic math problems, stoking excitement and concern
One day before he was fired by OpenAI's board last week, Sam Altman alluded to a recent technical advance the company …
Sam Altman's return to OpenAI, a company partly formed on effective altruism principles, revealed the movement's hard limits and caps a bruising year for the divisive social movement
Sam Altman's firing showed the influence of effective altruism and its view that AI development must slow down; his return marked its limits
A look at the years of warnings about AI from researchers, including several women of color, who say we need to take the problems and risks seriously today
Today the risks of artificial intelligence are clear — but the warning signs have been there all along — Timnit Gebru didn't set out to work in AI.
Framing AI debates as a schism between people worried about AI going rogue and those illuminating actual harms is ahistorical and obscures important research
In two recent conversations with very thoughtful journalists, I was asked about the apparent ‘schism’ between those making a lot … Bluesky: @abeba.bsky.social, @mmitchell.bsky.soc...
OpenAI forms Superalignment, a team for developing ways to steer and control “superintelligent” AI systems, with access to 20% of its compute secured to date
OpenAI is forming a new team led by Ilya Sutskever, its chief scientist and one of the company's co-founders …
Documents show OpenAI lobbied for parts of the EU's AI Act to be watered down, including successfully avoiding having its general-purpose AI systems deemed “high risk”
The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds …
Meta AI and Papers with Code pull Galactica three days after launch, amid criticism that the large language model for generating scientific text asserts falsehoods
and its hubris—show once again that Big Tech has a blind spot about the severe limitations of large language models. https://www.technologyreview.com/ ...
A look at Eric Schmidt's push to profit from an AI cold war between the US and China; CB Insights: Schmidt took part in investing $2B+ in AI-focused companies
both to democracy and to his own interests. https://www.protocol.com/... Kate Kaye / @katekayereports : Schmidt's story is an exploration of how a private sector tech mogul has pla...
A look at advanced large language models, as Google places an engineer on paid leave after he became convinced that its LaMDA chatbot generator was sentient
AI ethicists warned Google not to impersonate humans. Now one of Google's own thinks there's a ghost in the machine.