
Chronicles

The story behind the story


OpenAI and DeepMind executives, Geoffrey Hinton, and 350+ others sign a statement saying “mitigating the risk of extinction from AI should be a global priority”

… and says computer scientists need ethics training

More coverage:
• Brian Fung / CNN: AI industry and researchers sign statement warning of ‘extinction’ risk
• Alka Jain / Livemint: Industry leaders warn ‘AI poses risk of extinction’
• Will Knight / Wired: Runaway AI Is an Extinction Risk, Experts Warn
• Priya Walia / OnMSFT.com: Eminent AI experts and corporate heads caution about the probability of ‘extinction risk’
• Vanessa Romo / NPR: Leading experts warn of a risk of extinction from AI
• Terry Castleman / Los Angeles Times: Prominent AI leaders warn of ‘risk of extinction’ from new technology
• Casey Newton / Platformer: The AI hallucinations intensify
• Ben Lovejoy / 9to5Mac: AI could make humans extinct, say top experts and CEOs in stark warning
• Ryan Morrison / Tech Monitor: AI is an ‘extinction risk’ for humanity, say tech industry leaders
• MacDailyNews: Artificial Intelligence leaders warn of ‘risk of extinction’ from AI
• Thomas Germain / Gizmodo: ‘The Risk of Extinction:’ AI Leaders Agree on One-Sentence Warning About Technology's Future
• Michael Nuñez / VentureBeat: Top AI researchers and CEOs warn against ‘risk of extinction’ in joint statement
• Tristan Greene / Cointelegraph: AI experts sign doc comparing risk of ‘extinction from AI’ to pandemics, nuclear war
• Jake McKee / The i Paper: Artificial intelligence could lead to ‘extinction’ of humanity, warn dozens of AI experts
• Lucas Mearian / Computerworld: ChatGPT creators and others plead to reduce risk of global extinction from their tech
• The Irish Times: Mitigating risk of ‘extinction’ from AI technology should be ‘global priority’, experts say
• Steve Huff / Newser: In 22 Words, Tech Leaders Warn of Colossal AI Risks
• GovTech: Are AI execs worried there's a dark future looming for AI?
• Benj Edwards / Ars Technica: OpenAI execs warn of “risk of extinction” from artificial intelligence in new open letter
• Paul Gillin / SiliconANGLE: Coalition of AI leaders sees ‘societal-scale risks’ from the technology's misuse
• Charlize Alcaraz / BetaKit: Geoffrey Hinton, Yoshua Bengio warn “risk of extinction from AI” in public letter
• Geneva Abdul / The Guardian: Risk of extinction by AI should be global priority, say experts
• Billy Perrigo / TIME: AI Is as Risky as Pandemics and Nuclear War, Top CEOs Say, Urging Global Cooperation
• Shubham Verma / Techlusive: Top AI CEOs, experts warn that humans may face risk of extinction from AI
• Karandeep Oberoi / MobileSyrup: AI experts and industry leaders issue warning on AI risks
• Agence France-Presse: AI poses ‘extinction’ risk comparable to nuclear war, pandemics: experts
• Thomas Barrabi / New York Post: Top AI experts warn of tech's ‘risk of extinction’ — similar to nuclear weapons, pandemics
• Chris Smith / BGR: We can't put the ChatGPT AI genie back in the bottle, even if it means risking extinction
• Paul Lilly / HotHardware: AI Experts From Google, OpenAI And Elsewhere Issue Dire Extinction Warning
• Brian Fung / WRAL TechWire: ‘Extinction event’ from AI? Yes, tech leaders warn in call for controls
• Luke Jones / WinBuzzer: OpenAI and Google DeepMind CEOs Sign Statement Warning That AI Poses “Risk of Extinction”
• Associated Press: Artificial intelligence threatens extinction, experts say in new warning
• Robert Hart / Forbes: AI Could Cause Human ‘Extinction,’ Tech Leaders Warn
• ChinaTechNews.com: Artificial intelligence poses ‘risk of extinction,’ tech execs and experts warn
• The Hill: Scientists, experts saying mitigating ‘extinction’ risk of AI should be global priority
• Cristina Criddle / Financial Times: AI executives warn its threat to humanity rivals ‘pandemics and nuclear war’
• Dan Hendrycks / AI Safety Newsletter: AI Safety Newsletter #8

LinkedIn:
• Brett Roberts: I'm no expert on AI but maybe, just maybe, we should listen to its industry leaders when they're practically begging for government regulation... …
• Ylli Bajraktari: I signed my name to ensure that #AI safety is a priority. Check out the list of colleagues who are taking a stance to mitigate the advancement of AI's most severe risks: https://www.safe.ai/... …
• Sumant Ramachandra: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war …
• Krystal Putman-Garcia: https://lnkd.in/ebZBBNWu — “A group of industry leaders is planning to warn on Tuesday that the artificial intelligence technology they are building …
• Jeff Jarvis: I'm just now writing a chapter about AI and media's reaction to it for a next book on the internet. Every time I see tech chest-thumping like this, I am struck by media's credulity. …

Bluesky:
• @toddindeed.bsky.social: OpenAI must be pretty thirsty for government regulation to tamp down competitors (esp open source ones) for him to talk so frantically about existential risk while continuing to push that alleged risk. This is starting to feel like ripping up books to throw on the hype fire just to stay warm.
• Anil Dash / @anildash.com: To make it explicit for people who don't follow tech in this way: the lesson investors took from Uber being able to break the law & then get the law built *around* their exploration was that this is a great way to monopolize a market just as it's forming, and regulators will help. Thus: AI “panic”.
• Yaël Eisenstat / @yaeleisenstat.bsky.social: This single statement signed by Google, OpenAI & Microsoft execs and “other notable figures” gets us to where the Cassandras were years ago. I'm more interested in how they plan to ensure real people are not harmed, right now & in the future. https://www.safe.ai/...
• Dare Obasanjo / @carnage4life.bsky.social: If the top executives of the top AI companies believe AI creates a risk of human extinction, why don't they stop working on it instead of publishing press releases?

Tweets:
• @ai_risks: We've released a statement on the risk of extinction from AI. Signatories include: - Three Turing Award winners - Authors of the standard textbooks on AI/DL/RL - CEOs and Execs from OpenAI, Microsoft, Google, Google DeepMind, Anthropic - Many more https://safe.ai/...
• Ryan Calo / @rcalo: You may be wondering: why are some of the very people who develop and deploy artificial intelligence sounding the alarm about it's existential threat? Consider two reasons— https://twitter.com/...
• Dan Hendrycks / @danhendrycks: We just put out a statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories include Hinton, Bengio, Altman, Hassabis, Song, etc. https://safe.ai/... 🧵 (1/6)
• Andrew Ng / @andrewyng: When I think of existential risks to large parts of humanity: * The next pandemic * Climate change→massive depopulation * Another asteroid AI will be a key part of our solution. So if you want humanity to survive & thrive the next 1000 years, lets make AI go faster, not slower.
• Justin Hendrix / @justinhendrix: Another sign-on statement about the existential risks of AI- this one signed by Google, Microsoft, OpenAI and other company execs and a slew of academics. The single-sentence statement, coordinated by the Center for AI Safety, is here. https://www.safe.ai/...
• Toby Ord / @tobyordoxford: Today many of the key people in AI came together to make a one-sentence statement on AI risk: 1/n https://www.safe.ai/...
• Dan Hendrycks / @danhendrycks: As stated in the first sentence of the signatory page, there are many “important and urgent risks from AI,” not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization. These are all important risks that need to be addressed.
• William MacAskill / @willmacaskill: When the CEOs of all three of the leading AI labs publicly state that what they are building could spell the end of the human species... that's a big deal. This statement is so important; I'm proud to co-sign. https://www.safe.ai/...
• Yann LeCun / @ylecun: Super-human AI is nowhere near the top of the list of existential risks. In large part because it doesn't exist yet. Until we have a basic design for even dog-level AI (let alone human level), discussing how to make it safe is premature. https://twitter.com/...
• Jonathan Zittrain / @zittrain: Today, a crisp one-sentence open letter warning about existential AI threat: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” I did not sign the letter. https://twitter.com/...
• @tante: If the people signing this document https://www.safe.ai/... about “AI risk” were serious, they wouldn't keep building these systems and selling them (actually renting them out) on the open market. The list is advertising and useful idiots.
• Erik Brynjolfsson / @erikbryn: Humanity is creating a new technology, AI, with unimaginable power. Along with benefits, there are real risks that need to be taken seriously. That's why I'm joining many others in signing this statement. Please read it and let me know if you agree. https://www.safe.ai/...
• Rishi Sunak / @rishisunak: The government is looking very carefully at this. Last week I stressed to AI companies the importance of putting guardrails in place so development is safe and secure. But we need to work together. That's why I raised it at the @G7 and will do so again when I visit the US. https://twitter.com/...
• Sarah Frier / @sarahfrier: Lol at how so many AI execs are now being pitched to journalists as “experts” on the likelihood of an AI-induced extinction event. Very low stakes to talk about, gets the company name in the press.
• David Deutsch / @daviddeutschoxf: Hear, hear. Also fundamental scientific research. And economic growth. All of them, or any one of them, might easily become necessary to save the species. Within living memory, civilisation was saved by a breakthrough in metamathematics. https://twitter.com/...
• Ed Markey / @senmarkey: Time and time again, Big Tech's self-regulation has failed. Now, AI developers are admitting that their own products pose a risk of extinction while rushing to further develop it. This isn't just ridiculous—it's dangerous. https://www.nytimes.com/...
• Paul Feig / @paulfeig: We sci-fi nerds could have told you this decades ago. Stop working on this stuff! Shut it down now!!! https://www.nytimes.com/...
• Robert Wright / @robertwrighter: I don't get this. The impediments to dealing with climate change and pandemic risk are fundamentally political—nations failing to cooperate to solve non-zero-problems. How exactly are advances in AI going to help us with that? https://twitter.com/...
• Sean Spicer / @seanspicer: When all the leading AI scientists express concern about the possibility of human extinction on earth it might be worth taking seriously Statement on AI Risk | CAIS https://www.safe.ai/...
• Roon / @tszzl: meta/yann don't believe in ai risks because they don't believe in ai period. they think it's a gimmick with limited utility
• @brij: Interesting takeaway from this ‘rah rah’ about AI extinction risks is that this level of panic isn't good for Facebook/Meta, and potentially its surrounding ecosystem. Otherwise, what harm is there in participating in this PR posturing by lending your name to it? https://twitter.com/...
• Brian Merchant / @bcmerchant: This is inherently ridiculous, sorry. No one is making Google and OpenAI develop AI that puts humanity at “risk of extinction.” If they honestly thought it was such a dire threat they could stop building it *today*. They do not, so they won't. https://twitter.com/...
• Tyler Glaiel / @tylerglaiel: If they're serious about AI safety they should keep the ceos and execs far away from policy making. the list being full of AI executives does not bode well https://twitter.com/...
• @pinboard: @MikeIsaac The problem is much of Silicon Valley is in the doomsday cult, so arguing other risks with them is like trying to convince Pentecostals to care about long-term climate trends or habitat loss, when the Rapture is imminent
• Nirit Weiss-Blatt, PhD / @drtechlash: The current AI hype/panic cycle is so fucked up ... that we've reached the point where Max Tegmark is celebrating that “Extinction by AI - is going mainstream” (Thanks, mass media 🤦🏻‍♀️) https://twitter.com/...
• @ruchowdh: You know if they were so concerned they could just NOT build the technology. https://twitter.com/...
• Rat King / @mikeisaac: considering the economic, labor, legal and IP implications of AI development are far more compelling to me than worrying about doomsdays scenarios that don't exist yet https://twitter.com/...
• Melanie Mitchell / @melmitchell1: Agreed! The message from Altman at al. seems to be “AI is so dangerous, powerful, and mysterious that only people at the top AI companies know enough to regulate it.” Regulatory capture is the point. https://twitter.com/...
• @0xhexhex: Apropos of this new “existential AI risk” thing... https://www.techmeme.com/... 1️⃣ Writing an open-letter is easier than solving anything 2️⃣ Creating a new bogeyman is a great way to deflect from *current* real risks of awful AI systems: none of which the signatories care for
• Steven Sinofsky / @stevesi: Presumably everyone who signed this and the companies they represent will also sign a pledge committing to return all company revenue and personal salary and profits from AI (direct and indirect) until the potential for extinction is permanently averted. https://twitter.com/...
• Robert Scoble / @scobleizer: I warned you last year. Now the industry signed a statement acknowledging existential risk. But the problem is that AI brings us many more immediate problems. Last week in Seattle I heard how law firms are preparing layoffs because AI is causing a drop in billable hours. I... https://twitter.com/...
• Blake Richards / @tyrell_turing: 1/4) Several people I admire immensely have signed this, but respectfully, I'm afraid I just don't agree with the claim that “mitigating the risk of extinction from AI should be a global priority”. I think this statement is naive and a mistake. https://twitter.com/...
• Nirit Weiss-Blatt, PhD / @drtechlash: An unsurprising collaboration between the creators of “AI Panic Marketing” (Sam Altman, Dario Amodei, Emad Mostaque) & the creators of “Panic-as-a-Business” (Eliezer Yudkowsky, Jaan Tallinn, Max Tegmark, Connor Leahy, Tristan Harris). History books will make fun of this moment. https://twitter.com/...
• Ari Cohn / @aricohn: It's amazing how people go on and on about how tech companies are not to be trusted, and then completely fail to interrogate the ulterior motives of tech CEOs who propose regulation of the thing they do that people are scared of. https://twitter.com/...
• Ian Goodfellow / @goodfellow_ian: I've spent several years studying machine learning security with the goal of making ML reliable before it is used in more and more important contexts. Unfortunately, ML capabilities and adoption are growing much faster than ML robustness. https://www.safe.ai/...
• @datasociety: Indeed, it “speaks volumes” about existing AI power structures that tech execs are so keen to come together to amplify talk of existential AI risk, yet reticent when it comes to publicly discussing the harms their tools are causing right now. @riptari https://techcrunch.com/...
• @gfodor: I accept the risks but reject the analogy to nuclear weapons and virology. That analogy is incredibly dangerous, since it leads people to think controlling specific atoms will stop matrix multiplications, or that stopping matrix multiplications de-risks AI. https://twitter.com/...
• Beatrice Nolan / @beafreyanolan: AI poses a risk of “extinction,” experts warn. CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei have all signed the public statement which compared the risks posed by AI to nuclear war and pandemics. @BusinessInsider https://www.businessinsider.com/...
• Matt Wolfe / @mreflow: One thing that pretty much all notable figures in AI agree on: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” https://twitter.com/...
• Demis Hassabis / @demishassabis: I've worked my whole life on AI because I believe in its incredible potential to advance science & medicine, and improve billions of people's lives. But as with any transformative technology we should apply the precautionary principle, and build & deploy it with exceptional care https://twitter.com/...
• Nate Silver / @natesilver538: Sorta more interested in who *hasn't* signed this (here's looking at you, Facebook/Meta). https://www.safe.ai/...
• Noam Brown / @polynoamial: I signed this because I am concerned about the consequences of an arms race in AI. Preventing that requires global coordination. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” https://twitter.com/...
• Noah Kulwin / @nkulw: The hotdog guy in this instance is the New York Times lol https://twitter.com/...
• Noah Kulwin / @nkulw: Couldn't buy better PR if you tried https://twitter.com/...
• Patrick Daugherty / @rotopat: This has quickly become the self aggrandizement super bowl https://twitter.com/...
• David Krueger / @davidskrueger: I've worried AI could lead to human extinction ever since I heard about Deep Learning from Hinton's Coursera course, >10 years ago. So it's great to see so many AI researchers advocating for AI x-safety as a global priority. Let's stop arguing over it and figure out what to do! https://twitter.com/...
• Yaron Brook / @yaronbrook: What a cop-out. To the extent these risks are real, and many of them are, it's up to them, the developers and companies that own this technology and will use it, to come together and create industry standards. Stop running to government to solve your issues. This will lead to... https://twitter.com/...
• Ian Hogarth / @soundboy: Great see the leaders of Anthropic, DeepMind, OpenAI and others publicly acknowledging that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. Bravo @DanHendrycks and @DavidSKrueger https://twitter.com/...
• Neil Chilson / @neil_chilson: Have so many smart people ever so emphatically said so little? “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” That's it, that's the statement. https://twitter.com/...
• Parmy Olson / @parmy: Big names in AI have backed a single statement about catastrophic risk: https://www.safe.ai/... I've no doubt there's genuine concern among these folks, but this will also draw attention AWAY from existing efforts to regulate AI systems, like making them more transparent.
• Frederike Kaltheuner / @f_kaltheuner: This narrative is partially being pushed by the very same tech CEOs that spent the last few weeks arguing that they should essentially regulate themselves. Meanwhile, AI is already causing real harm to people - lives are at risk right now. This is a coordinated distraction. @hrw https://twitter.com/...
• Robin Hanson / @robinhanson: Yet another MSM article on AI risk that only cites doomers. Do they really think the “science” is that settled here? https://www.nytimes.com/...
• Brianna Wu / @briannawu: I share extreme concerns with AI, but it's hard to take these individuals seriously when they're the libertarian oligarchs using their vast wealth to undermine to very government they are saying should regulate them. https://www.nytimes.com/...
• Angela Kane / @kaneview: via @NYTimes. I may not be one of the “godfathers” of AI but when invited to sign the one-sentence statement, I immediately did. Our voices need to be heard and acted upon. The urgency is clear. https://www.nytimes.com/...
• Gabe Hudson / @gabehudson: They're not warning they're bragging https://www.nytimes.com/...
• Zvi Mowshowitz / @thezvi: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” I signed as well. A truly elegant minimum viable product in action. https://twitter.com/...
• Tom Westgarth / @tom_westgarth15: A few thoughts on things I think can be true at the same time: 1.) Ignoring these concerns and calling them ‘sci-fi’ is like calling the Einstein-Roosevelt letter about fears of the atomic bomb ‘sci-fi’ 2.) Media should also be giving coverage to other AI harms already happening https://twitter.com/...
• Siva Vaidhyanathan / @sivavaid: This is so stupid. They want you to fear the fantasy so you don't look at the real damage being done to people right now. https://twitter.com/...
• Tyler John / @tyler_m_john: Breaking my Twitter fast to share this short statement from @ai_risks that my colleagues and I have signed: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” https://www.safe.ai/...
• @as_a_worker: seems weird AI guys keep arguing the thing they're building is going to kill us all. then you realize musk and the early entrants are using fear to a) hype their job destroying project to other capitalists b) urge the gov to give them a natural monopoly https://www.nytimes.com/...
• Max Tegmark / @tegmark: Extinction by AI is going mainstream: Altman, Hassabis, Amodei, Hinton, Bengio, ... https://www.nytimes.com/...
• Liron Shapira / @liron: Most key players have now formally acknowledged that AI existential risk is real! Next step: Acknowledging that superintelligent AI cannot be controlled in the foreseeable future, and we should therefore immediately stop figuring out how to build it! https://twitter.com/...
• Chris Anderson / @tedchris: I joined @sama @geoffreyhinton @demishassabis @willmacaskill Stuart Russell and many others I respect in signing this statement on AI safety. Arguably this is now the world's most important single issue: https://www.safe.ai/...
• Perry E. Metzger / @perrymetzger: They're not mentioning other Turing award winners, textbook authors, and executives with the opposite opinion. It's past time for those of us who think the panic is ridiculous and harmful to organize. https://twitter.com/...
• @jeffjarvis: Their macho chest-thumping is pure marketing.... A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn https://www.nytimes.com/...
• Jeff Nolan / @jeffnolan: “should be” but won't. The experts that create this technology say we should concerned. If this stuff is so dangerous, why TF did you devote your career to it? This movie plays all the time. “But but but I didn't know it could be used for bad stuff! Someone do something!” https://twitter.com/...
• Dan Hendrycks / @danhendrycks: AI researchers from leading universities worldwide have signed the AI extinction statement, a situation reminiscent of atomic scientists issuing warnings about the very technologies they've created. As Robert Oppenheimer noted, “We knew the world would not be the same.” 🧵(2/6)
• @s_oheigeartaigh: Pleased to have signed this clear, concise, and important statement, alongside Turing Award winners, leaders of commercial AI labs, and academic and civil society experts: mitigating the risk of extinction from AI should be a global priority. https://www.safe.ai/...
• Shane Legg / @shanelegg: I signed this letter as I believe that AI is an exceptionally powerful technology that must be handled with great care https://twitter.com/...
• Emad / @emostaque: We got (almost) the whole AI crew together: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Our focus is on inputs & open-based resilience - healthier free range, organic models https://twitter.com/...
• Michael A Osborne / @maosbot: It would be difficult to have assembled a more comprehensive list of signatories https://twitter.com/...
• Adrian Weckler / @adrianweckler: Panicky or visionary? https://twitter.com/...
• Jacy Reese Anthis / @jacyanthis: In a newfound consensus, AI scientists and executives (Hinton, Bengio, Hassabis, Altman, Amodei, etc.) warn, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” https://www.nytimes.com/...
• Julian Hazell / @mealreplacer: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” There are too many notable people who signed this statement for me to list. This is huge. https://www.safe.ai/...
• Connor Axiotes / @connoraxiotes: Ex-Google Geoffrey Hinton, @OpenAI's top brass (including @sama and @ilyasut), and @AnthropicAI CEO, and @DeepMind all just put their signatures on the following statement: “Mitigating the risk of extinction from AI should be a global priority.” https://www.safe.ai/... https://twitter.com/...
• @katjagrace: This seems promising https://www.safe.ai/... (organized by @ai_risks)
• Robert Wiblin / @robertwiblin: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” — Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Dario Amodei, many more https://www.safe.ai/...
• Shakeel / @shakeelhashim: Turing Award winners Yoshua Bengio and @geoffreyhinton, along with the CEOs of the three major AI labs (@demishassabis, @sama, Dario Amodei) and many, many other AI experts, have signed a statement saying that mitigating the risk of extinction from AI should be a global priority.
• Haydn Belfield / @haydnbelfield: I was proud to sign this statement alongside a remarkable list of experts https://twitter.com/...
• Dan Hendrycks / @danhendrycks: @ai_risks Thanks to @DavidSKrueger, who had the idea to have a single-sentence statement about AI risk and jointly helped with its development. Thanks also to the project managers at @ai_risks and various volunteers. https://safe.ai/... 🧵(6/6)

Source: Kevin Roose / New York Times

  • @tszzl Roon on x
    meta/yann don't believe in ai risks because they don't believe in ai period. they think it's a gimmick with limited utility
  • @pinboard @pinboard on x
    @MikeIsaac The problem is much of Silicon Valley is in the doomsday cult, so arguing other risks with them is like trying to convince Pentecostals to care about long-term climate trends or habitat loss, when the Rapture is imminent
  • @paulfeig Paul Feig on x
    We sci-fi nerds could have told you this decades ago. Stop working on this stuff! Shut it down now!!! https://www.nytimes.com/...
  • @robertwrighter Robert Wright on x
    I don't get this. The impediments to dealing with climate change and pandemic risk are fundamentally political—nations failing to cooperate to solve non-zero-sum problems. How exactly are advances in AI going to help us with that? https://twitter.com/...
  • @mikeisaac Rat King on x
    considering the economic, labor, legal and IP implications of AI development are far more compelling to me than worrying about doomsday scenarios that don't exist yet https://twitter.com/...
  • @seanspicer Sean Spicer on x
    When all the leading AI scientists express concern about the possibility of human extinction on earth it might be worth taking seriously Statement on AI Risk | CAIS https://www.safe.ai/...
  • @melmitchell1 Melanie Mitchell on x
    Agreed! The message from Altman et al. seems to be “AI is so dangerous, powerful, and mysterious that only people at the top AI companies know enough to regulate it.” Regulatory capture is the point. https://twitter.com/...
  • @0xhexhex @0xhexhex on x
    Apropos of this new “existential AI risk” thing... https://www.techmeme.com/... 1️⃣ Writing an open-letter is easier than solving anything 2️⃣ Creating a new bogeyman is a great way to deflect from *current* real risks of awful AI systems: none of which the signatories care for
  • @stevesi Steven Sinofsky on x
    Presumably everyone who signed this and the companies they represent will also sign a pledge committing to return all company revenue and personal salary and profits from AI (direct and indirect) until the potential for extinction is permanently averted. https://twitter.com/...
  • @scobleizer Robert Scoble on x
    I warned you last year. Now the industry signed a statement acknowledging existential risk. But the problem is that AI brings us many more immediate problems. Last week in Seattle I heard how law firms are preparing layoffs because AI is causing a drop in billable hours. I... htt…
  • @tyrell_turing Blake Richards on x
    1/4) Several people I admire immensely have signed this, but respectfully, I'm afraid I just don't agree with the claim that “mitigating the risk of extinction from AI should be a global priority”. I think this statement is naive and a mistake. https://twitter.com/...
  • @drtechlash Nirit Weiss-Blatt, PhD on x
    An unsurprising collaboration between the creators of “AI Panic Marketing” (Sam Altman, Dario Amodei, Emad Mostaque) & the creators of “Panic-as-a-Business” (Eliezer Yudkowsky, Jaan Tallinn, Max Tegmark, Connor Leahy, Tristan Harris). History books will make fun of this moment. h…
  • @aricohn Ari Cohn on x
    It's amazing how people go on and on about how tech companies are not to be trusted, and then completely fail to interrogate the ulterior motives of tech CEOs who propose regulation of the thing they do that people are scared of. https://twitter.com/...
  • @goodfellow_ian Ian Goodfellow on x
    I've spent several years studying machine learning security with the goal of making ML reliable before it is used in more and more important contexts. Unfortunately, ML capabilities and adoption are growing much faster than ML robustness. https://www.safe.ai/...
  • @datasociety @datasociety on x
    Indeed, it “speaks volumes” about existing AI power structures that tech execs are so keen to come together to amplify talk of existential AI risk, yet reticent when it comes to publicly discussing the harms their tools are causing right now. @riptari https://techcrunch.com/...
  • @gfodor @gfodor on x
    I accept the risks but reject the analogy to nuclear weapons and virology. That analogy is incredibly dangerous, since it leads people to think controlling specific atoms will stop matrix multiplications, or that stopping matrix multiplications de-risks AI. https://twitter.com/..…
  • @beafreyanolan Beatrice Nolan on x
    AI poses a risk of “extinction,” experts warn. OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei have all signed the public statement which compared the risks posed by AI to nuclear war and pandemics. @BusinessInsider https://www.businessinsider.com/ ...
  • @mreflow Matt Wolfe on x
    One thing that pretty much all notable figures in AI agree on: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” https://twitter.com/...
  • @demishassabis Demis Hassabis on x
    I've worked my whole life on AI because I believe in its incredible potential to advance science & medicine, and improve billions of people's lives. But as with any transformative technology we should apply the precautionary principle, and build & deploy it with exceptional care …
  • @natesilver538 Nate Silver on x
    Sorta more interested in who *hasn't* signed this (here's looking at you, Facebook/Meta). https://www.safe.ai/...
  • @polynoamial Noam Brown on x
    I signed this because I am concerned about the consequences of an arms race in AI. Preventing that requires global coordination. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” https:/…
  • @nkulw Noah Kulwin on x
    The hotdog guy in this instance is the New York Times lol https://twitter.com/...
  • @nkulw Noah Kulwin on x
    Couldn't buy better PR if you tried https://twitter.com/...
  • @rotopat Patrick Daugherty on x
    This has quickly become the self-aggrandizement Super Bowl https://twitter.com/...
  • @davidskrueger David Krueger on x
    I've worried AI could lead to human extinction ever since I heard about Deep Learning from Hinton's Coursera course, >10 years ago. So it's great to see so many AI researchers advocating for AI x-safety as a global priority. Let's stop arguing over it and figure out what to do! h…
  • @yaronbrook Yaron Brook on x
    What a cop-out. To the extent these risks are real, and many of them are, it's up to them, the developers and companies that own this technology and will use it, to come together and create industry standards. Stop running to government to solve your issues. This will lead to... …
  • @soundboy Ian Hogarth on x
    Great to see the leaders of Anthropic, DeepMind, OpenAI and others publicly acknowledging that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. Bravo @DanHendrycks and @DavidSKrueger https…
  • @neil_chilson Neil Chilson on x
    Have so many smart people ever so emphatically said so little? “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” That's it, that's the statement. https://twitter.com/...
  • @parmy Parmy Olson on x
    Big names in AI have backed a single statement about catastrophic risk: https://www.safe.ai/... I've no doubt there's genuine concern among these folks, but this will also draw attention AWAY from existing efforts to regulate AI systems, like making them more transparent.
  • @f_kaltheuner Frederike Kaltheuner on x
    This narrative is partially being pushed by the very same tech CEOs that spent the last few weeks arguing that they should essentially regulate themselves. Meanwhile, AI is already causing real harm to people - lives are at risk right now. This is a coordinated distraction. @hrw …
  • @robinhanson Robin Hanson on x
    Yet another MSM article on AI risk that only cites doomers. Do they really think the “science” is that settled here? https://www.nytimes.com/...
  • @briannawu Brianna Wu on x
    I share extreme concerns with AI, but it's hard to take these individuals seriously when they're the libertarian oligarchs using their vast wealth to undermine the very government they are saying should regulate them. https://www.nytimes.com/...
  • @kaneview Angela Kane on x
    via @NYTimes. I may not be one of the “godfathers” of AI but when invited to sign the one-sentence statement, I immediately did. Our voices need to be heard and acted upon. The urgency is clear. https://www.nytimes.com/...
  • @gabehudson Gabe Hudson on x
    They're not warning they're bragging https://www.nytimes.com/...
  • @thezvi Zvi Mowshowitz on x
    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” I signed as well. A truly elegant minimum viable product in action. https://twitter.com/...
  • @tom_westgarth15 Tom Westgarth on x
    A few thoughts on things I think can be true at the same time: 1.) Ignoring these concerns and calling them ‘sci-fi’ is like calling the Einstein-Roosevelt letter about fears of the atomic bomb ‘sci-fi’ 2.) Media should also be giving coverage to other AI harms already happening …
  • @sivavaid Siva Vaidhyanathan on x
    This is so stupid. They want you to fear the fantasy so you don't look at the real damage being done to people right now. https://twitter.com/...
  • @tyler_m_john Tyler John on x
    Breaking my Twitter fast to share this short statement from @ai_risks that my colleagues and I have signed: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” https://www.safe.ai/...
  • @as_a_worker @as_a_worker on x
    seems weird AI guys keep arguing the thing they're building is going to kill us all. then you realize musk and the early entrants are using fear to a) hype their job destroying project to other capitalists b) urge the gov to give them a natural monopoly https://www.nytimes.com/..…
  • @tegmark Max Tegmark on x
    Extinction by AI is going mainstream: Altman, Hassabis, Amodei, Hinton, Bengio, ... https://www.nytimes.com/...
  • @liron Liron Shapira on x
    Most key players have now formally acknowledged that AI existential risk is real! Next step: Acknowledging that superintelligent AI cannot be controlled in the foreseeable future, and we should therefore immediately stop figuring out how to build it! https://twitter.com/...
  • @tedchris Chris Anderson on x
    I joined @sama @geoffreyhinton @demishassabis @willmacaskill Stuart Russell and many others I respect in signing this statement on AI safety. Arguably this is now the world's most important single issue: https://www.safe.ai/...
  • @perrymetzger Perry E. Metzger on x
    They're not mentioning other Turing award winners, textbook authors, and executives with the opposite opinion. It's past time for those of us who think the panic is ridiculous and harmful to organize. https://twitter.com/...
  • @jeffjarvis @jeffjarvis on x
    Their macho chest-thumping is pure marketing.... A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn https://www.nytimes.com/...
  • @jeffnolan Jeff Nolan on x
    “should be” but won't. The experts that create this technology say we should be concerned. If this stuff is so dangerous, why TF did you devote your career to it? This movie plays all the time. “But but but I didn't know it could be used for bad stuff! Someone do something!” https:/…
  • @danhendrycks Dan Hendrycks on x
    AI researchers from leading universities worldwide have signed the AI extinction statement, a situation reminiscent of atomic scientists issuing warnings about the very technologies they've created. As Robert Oppenheimer noted, “We knew the world would not be the same.” 🧵(2/6) [i…
  • @s_oheigeartaigh @s_oheigeartaigh on x
    Pleased to have signed this clear, concise, and important statement, alongside Turing Award winners, leaders of commercial AI labs, and academic and civil society experts: mitigating the risk of extinction from AI should be a global priority. https://www.safe.ai/...
  • @shanelegg Shane Legg on x
    I signed this letter as I believe that AI is an exceptionally powerful technology that must be handled with great care https://twitter.com/...
  • @emostaque Emad on x
    We got (almost) the whole AI crew together: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Our focus is on inputs & open-based resilience - healthier free range, organic models https:…
  • @maosbot Michael A Osborne on x
    It would be difficult to have assembled a more comprehensive list of signatories https://twitter.com/...
  • @adrianweckler Adrian Weckler on x
    Panicky or visionary? https://twitter.com/...
  • @jacyanthis Jacy Reese Anthis on x
    In a newfound consensus, AI scientists and executives (Hinton, Bengio, Hassabis, Altman, Amodei, etc.) warn, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” https://www.nytimes.com/...
  • @mealreplacer Julian Hazell on x
    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” There are too many notable people who signed this statement for me to list. This is huge. https://www.safe.ai/...
  • @connoraxiotes Connor Axiotes on x
    Ex-Google Geoffrey Hinton, @OpenAI's top brass (including @sama and @ilyasut), and @AnthropicAI CEO, and @DeepMind all just put their signatures on the following statement: “Mitigating the risk of extinction from AI should be a global priority.” https://www.safe.ai/... https://tw…
  • @katjagrace @katjagrace on x
    This seems promising https://www.safe.ai/... (organized by @ai_risks)
  • @robertwiblin Robert Wiblin on x
    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” — Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Dario Amodei, many more https://www.safe.ai/...
  • @shakeelhashim Shakeel on x
    Turing Award winners Yoshua Bengio and @geoffreyhinton, along with the CEOs of the three major AI labs (@demishassabis, @sama, Dario Amodei) and many, many other AI experts, have signed a statement saying that mitigating the risk of extinction from AI should be a global priority.…
  • @haydnbelfield Haydn Belfield on x
    I was proud to sign this statement alongside a remarkable list of experts https://twitter.com/...
  • @danhendrycks Dan Hendrycks on x
    @ai_risks Thanks to @DavidSKrueger, who had the idea to have a single-sentence statement about AI risk and jointly helped with its development. Thanks also to the project managers at @ai_risks and various volunteers. https://safe.ai/... 🧵(6/6)