Current and former OpenAI and Google DeepMind staff sign an open letter warning of a culture of recklessness and secrecy at “frontier AI companies”
Ronak Kumar / The Crypto Times : OpenAI & Google DeepMind Employees Warn of AI Risks
Webb Wright / The Drum : AI researchers with ties to OpenAI and Google speak up for whistleblower protections
Stefanie Schappert / Cybernews.com : AI workers call on OpenAI, Google DeepMind to pledge accountability
Rowan Cheung / The Rundown AI : AI whistleblowers speak out
Chris Smith / BGR : Whistleblowers might be the best way to keep us safe from AI like ChatGPT
Meera Navlakha / Mashable : OpenAI, Google DeepMind insiders have serious warnings about AI
Mike Dalton / CryptoSlate : OpenAI, DeepMind insiders demand AI whistleblower protections
Kurt Robson / Verdict : OpenAI and Google DeepMind staff warn AI may lead to human extinction
Will Knight / Wired : OpenAI Employees Warn of a Culture of Risk and Retaliation
Ananya Gairola / Benzinga : Former Engineer At Sam Altman-Led OpenAI Says He Resigned After Losing Confidence In The Company: ‘Silencing Researchers And Making Them Afraid Of Retaliation Is Dangerous...’
South China Morning Post : OpenAI, Google DeepMind employees sign open letter calling for whistle-blower protections to speak out on AI risks
Proactiveinvestors UK : OpenAI and Google DeepMind's current and former employees raise the alarm over AI in letter
Daily Sabah : Current, former OpenAI, Google DeepMind employees warn of AI risks
Madeline Berg / Business Insider : It's all unraveling at OpenAI (again)
InformationWeek : Employees of Google DeepMind and Anthropic also signed the letter, which warns of serious risks due to a lack of oversight and transparency in artificial intelligence.
PYMNTS.com : OpenAI, Google DeepMind Employees Look to Voice Concerns About AI
Matt O'Brien / Associated Press : Former OpenAI employees lead push to protect whistleblowers flagging artificial intelligence risks
Troy Wolverton / San Francisco Examiner : OpenAI, Google workers call on companies to let them discuss risks
Rushil Agrawal / Android Authority : AI insiders call for industry safety and whistleblower protection (Updated: OpenAI's response)
Hayden Field / CNBC : Current and former OpenAI employees warn of AI's ‘serious risk’ and lack of oversight
Kelvin Munene Murithi / CoinGape : AI Risks Spark Concern Among OpenAI, Anthropic, Google DeepMind Staff
Casper Smith / CoinXposure : AI Risks Concern Staff at OpenAI, Anthropic, and Google DeepMind
Michael Kan / PCMag : OpenAI Staffers Demand Right to Warn the Public About AI Dangers
Pranav Dixit / Engadget : AI workers demand stronger whistleblower protections in open letter
MacDailyNews : OpenAI employees warn of serious risks posed by advanced AI
Matthias Bastian / The Decoder : Former and current employees of mostly OpenAI warn of risks of advanced AI
Emily Jarvie / Proactive : OpenAI and Google employees call for oversight, whistleblower protection to curb AI risks
Laura Bratton / Quartz : Former OpenAI employees say AI companies pose ‘serious risks.’ Read their open letter
Emilia David / The Verge : Former OpenAI employees say whistleblower protection on AI safety is not enough
John Gruber / Daring Fireball : Open Letter From AI Researchers: ‘A Right to Warn About Advanced Artificial Intelligence’
Thomas Barrabi / New York Post : Employees claim OpenAI, Google ignoring risks of AI — and should give them ‘right to warn’ public
Gary Marcus / Marcus on AI : “OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance”
The Hill : AI whistleblowers warn of dangers, call for transparency
Alyssa Lukpat / Wall Street Journal : AI Employees Fear They Aren't Free to Voice Their Concerns
Maggie Harrison / Futurism : OpenAI Insiders Say They're Being Silenced About Danger

Threads:
Brian Stelter / @brianstelter : “When I signed up for OpenAI, I did not sign up for this attitude of 'Let's put things out into the world and see what happens and fix them afterward.'” https://www.nytimes.com/...

Mastodon:
Jeff Jarvis / @jeffjarvis@mastodon.social : The problem with these AI “safety” stories is that we need to know whether safety is defined as current risk or doomer nuttiness to judge the parties & what they say.... OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance https://www.nytimes.com/...

X:
Dwarkesh Patel / @dwarkesh_sp : .@leopoldasch on: the trillion dollar cluster; unhobblings + scaling = 2027 AGI; CCP espionage at AI labs; leaving OpenAI and starting an AGI investment firm; dangers of outsourcing clusters to the Middle East; The Project. Full episode (including the last 32 minutes cut …
Joshua Achiam / @jachiam0 : 6/ But the disclosure of confidential information from frontier labs, however well-intentioned, can be outright dangerous. This letter asks for a policy that would in effect give safety staff carte blanche to make disclosures at will, based on their own judgement.
Jacob Hilton / @jacobhhilton : In order for @OpenAI and other AI companies to be held accountable to their own commitments on safety, security, governance and ethics, the public must have confidence that employees will not be retaliated against for speaking out. (Thread)
Rob Bensinger / @robbensinger : If this summary by @leopoldasch was accurate, then it seems that “Leopold leaked OpenAI secrets” was a lie the whole time.
Tolga Bilge / @tolgabilge_ : That they have got 4 current OpenAI employees to sign this statement is remarkable and shows the level of dissent and concern still within the company. However, it's worth noting that they signed it anonymously, likely anticipating retaliation if they put their names to it.
Neel Nanda / @neelnanda5 : I second Jacob's reasons for why we signed this statement. Volunteer commitments are great, but robust whistleblower protections are an important part of making them trustworthy and reliable, especially to broader society.
Nathan Labenz / @labenz : I strongly support frontier AI lab employees' “Right to Warn” the rest of us about AI risks. Good initiative here!
Andrew Curran / @andrewcurran_ : A group of current and former OpenAI employees - some of them anonymous - along with Yoshua Bengio, Geoffrey Hinton, and Stuart Russell have released an open letter this morning entitled ‘A Right to Warn about Advanced Artificial Intelligence’. https://righttowarn.ai/
Stefan Schubert / @stefanfschubert : Right to Warn seems like an example of how a warning sign (or shot, but that seems too strong) might work with respect to a specific issue. In my view, many underestimate the potential impact of such perceived warnings on people's actions. https://righttowarn.ai/
Jacob Hilton / @jacobhhilton : Currently, the main way for AI companies to provide assurances to the public is through voluntary public commitments. But there is no good way for the public to tell if the company is actually sticking to these commitments, and no incentive for the company to be transparent.
Daniel Ziegler / @d_m_ziegler : I worked at OpenAI from 2018 to 2021. Almost everyone I met there cared a lot about developing AGI in a way that would benefit society, and I'm sure the same is true at other AI companies.
Daniel Ziegler / @d_m_ziegler : Frontier AI companies are aiming to build one of the most transformative technologies in history. For that to happen safely, it should be done in a context that encourages caution and public oversight.
Neel Nanda / @neelnanda5 : I signed this appeal for frontier AI companies to guarantee employees a right to warn. This was NOT because I currently have anything I want to warn about at my current or former employers, or specific critiques of their attitudes towards whistleblowers. https://righttowarn.ai/
Joshua Achiam / @jachiam0 : So - there is a letter circulating now from former and current AGI frontier lab staff, advocating for a particular policy around whistleblower protections on safety and risk issues (https://righttowarn.ai/). I am not a signatory to this letter. Some thoughts.
Daniel Kokotajlo / @dkokotajlo67142 : 5/15: My wife and I thought hard about it and decided that my freedom to speak up in the future was more important than the equity. I told OpenAI that I could not sign because I did not think the policy was ethical; they accepted my decision, and we parted ways.
Daniel Kokotajlo / @dkokotajlo67142 : 10/15: Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.
Daniel Kokotajlo / @dkokotajlo67142 : 11/15: I applaud OpenAI for promising to change these policies!
Daniel Kokotajlo / @dkokotajlo67142 : 9/15: Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building them to self-govern, even as profit motives and excitement about the technology push them to “move fast and break things.”
Daniel Kokotajlo / @dkokotajlo67142 : 2/15: I joined with the hope that we would invest much more in safety research as our systems became more capable, but OpenAI never made this pivot. People started resigning when they realized this. I was not the first or last to do so.
Daniel Kokotajlo / @dkokotajlo67142 : 14/15: Some of us who recently resigned from OpenAI have come together to ask for a broader commitment to transparency from the labs. You can read about it here: https://righttowarn.ai/
Kevin Roose / @kevinroose : Breaking: a group of current and former OpenAI employees is speaking out about what they say is a culture of recklessness and secrecy at the company. They are asking for a “right to warn” for employees of frontier AI labs.

LinkedIn:
Karl Haller : “When I [joined] OpenAI, I did not sign up for this attitude of 'Let's put things out into the world and see what happens and fix them afterward,'” …
Matthew Small : You may debate their predictions, but they come at great personal cost and shouldn't be taken lightly for that reason. …

Forums:
r/technology : Employees Say OpenAI and Google DeepMind Are Hiding Dangers from the Public
r/technews : OpenAI Employees Want Protections to Speak Out on ‘Serious Risks’ of AI | Current and former staffers of OpenAI and Google DeepMind say …
r/artificial : OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance