Chronicles

The story behind the story

Study: ChatGPT cited nonexistent Washington Post, Miami Herald, and Los Angeles Times articles and fabricated a sexual harassment story about a law professor

The AI chatbot can misrepresent key facts with great flourish, even citing a fake Washington Post article as evidence

Washington Post

Discussion

  • @willoremus@mastodon.social Will Oremus on mastodon
    AI chatbots don't lie on purpose.  They're programmed to respond to any query, drawing on patterns of word association in their data (and search results, for Bing) to generate plausible answers. …
  • @willoremus Will Oremus on x
    Asked for examples of sexual harassment at law schools, ChatGPT named a GW prof accused of touching a student on a class trip to Alaska, citing a WashPost story. The prof is real. The rest was made up. We wrote about what happens when AIs lie about you: https://www.washingtonpost…
  • @pranshuverma_ Pranshu Verma on x
    new: ChatGPT said a law prof. sexually touched a student on a class Alaska trip, citing a 2018 WaPo article as proof. There was no WaPo article. No trip. He said he'd never been accused of sexual assault. What happens when chatbots lie. w/@WillOremus https://www.washingtonpost.co…
  • @portusprince David Rafferty on x
    This particular problem is not just an AI problem. As an idea or interpretation gets reproduced, it tends to get treated as more certain. The only way to combat that is expertise and a willingness to go back to the sources. https://twitter.com/...
  • @casmudde @casmudde on x
    Terrifying & frustrating. ChatGPT is clearly still in the developing stage so why is it already publicly available? It should be tested and developed privately. And why is no one legally responsible for it? With any other product this would be the case. https://www.washingtonpost…
  • @thetattooedprof Kevin Gannon on x
    Thread. Reminder that these programs are not “intelligent,” but rather pattern-recognition tools that can look very quickly through insane amounts of data and replicate the tendencies and combinations they see most frequently. “AI” is a misnomer. https://twitter.com/...
  • @garymarcus Gary Marcus on x
    this whole thread is worth reading. and it's chilling. the complete pollution of the information of the ecosphere that I have been warning about has begun. https://twitter.com/...
  • @albertfong98 Albert Fong on x
    ChatGPT has taken our culture by storm, and for many, the information it spouts out is taken as truth. But what happens when it's wrong? Welcome to the dangers of garbage in, garbage out https://www.washingtonpost.com/ ... @pranshuverma_ @WillOremus #chatgpt #artificialintelligen…
  • @sarahscire Sarah Scire on x
    “[A USC prof] was contacted by a journalist who had used ChatGPT to research sources for a story. The bot offered examples of [the prof's] work, including an article title, publication date, and quotes. All of it sounded plausible, and all of it was fake.” https://www.washingtonp…
  • @chrismvasq Christian Vasquez on x
    Sounds like ChatGPT was trained off reddit comments. “...this creativity can also be an engine for erroneous claims; the models can misrepresent key facts with great flourish, even fabricating primary sources to back up their claims.” https://www.washingtonpost.com/ ...
  • @rebekahdenn Rebekah Denn on x
    So far made-up primary sources have made ChatGPT-4 useless for any research I've tried to do using it. Takes longer to figure out the source is fake than to find a real one myself. (Though I did love the time ChatGPT said a recipe was “one I have used myself in the past.") https:…
  • @jackshafer Jack Shafer on x
    ChatGPT proves it can do what journalists sometimes do. https://www.washingtonpost.com/ ...
  • @rking618 Robert King on x
    In the Good Fight Season 7 in our mind, this is episode 2. ChatGPT not only invented a sexual harassment scandal and named a real law prof, but also created an article to support it. Terrifying. https://www.washingtonpost.com/ ...
  • @scottmccloud @scottmccloud on x
    How many, many, many will there be? How many, many, many lawsuits to pin the blame? https://twitter.com/...
  • @fordm Matt Ford on x
    Can ChatGPT publish something with “actual malice” or a “reckless disregard for the truth”? Maybe we'll find out soon. https://www.washingtonpost.com/ ...
  • @maxkennerly Max Kennerly on x
    For quite some time Silicon Valley has lived by the motto “move fast and break things” and it has generally served them well given the anemic state of our regulatory and liability frameworks. But that can change quickly and these companies are wholly unprepared for it. https://tw…
  • @shannonvallor Shannon Vallor on x
    Absolutely horrifying, entirely predictable. There need to be legal consequences for the real reputational harms people suffer https://twitter.com/...
  • @lizhighleyman Liz Highleyman on x
    Well, this is pretty horrifying, especially with all the politically-motivated false abuse allegations floating around on the internet. #ChatGPT #AI 1/ https://www.washingtonpost.com/ ...
  • @jcpunongbayan JC Punongbayan, PhD on x
    AI-driven misinformation. Yikes. https://twitter.com/...
  • @kevinschawinski Kevin Schawinski on x
    In the near future, most content on social media will be AI generated. And this content will be full of plausible-sounding false information. We are not ready for this. https://www.washingtonpost.com/ ...
  • @oablanchard Olivier Blanchard on x
    A thread. And yes, it gets worse. https://twitter.com/...
  • @tsvenson @tsvenson on x
    Yeah, a pause is needed! #ChatGPT “ChatGPT sometimes makes up facts. For one law prof, it went too far.” Source: https://www.washingtonpost.com/ ... https://twitter.com/...
  • @jgarzik Ser Jeff Garzik on x
    Or: What happens when people mistakenly believe chatbots will only tell the truth, and never generate fiction. https://twitter.com/...
  • @bfriedmandc Brandon Friedman on x
    Citing a Washington Post article that doesn't exist seems to be a minor problem compared to what will happen when it simply creates the Washington Post article it's citing https://twitter.com/...
  • @privacymatters @privacymatters on x
    JHC. ChatGPT has some very serious problems in generating such damaging responses https://twitter.com/...
  • @wwwojtekk Wojtek Kopczuk on x
    AI has first amendment rights in the US, let's wait to see how it deals with defamation lawsuits in the UK https://twitter.com/...
  • @random_walker Arvind Narayanan on x
    Automated defamation by chatbots is, obviously, bad. What's worse is when search engine chatbots do it — when they summarize real articles and still get it wrong, users are much less likely to suspect that the answer is made up. Well, Bing does exactly that 😬. See thread. https:/…
  • @klingebeil @klingebeil on x
    Add defamation laws to that list https://www.washingtonpost.com/ ...
  • @cephira @cephira on x
    Stop using chatbots for RESEARCH! They are not there yet. ChatGPT invented a sexual harassment scandal and named a real law prof as the accused https://www.washingtonpost.com/ ...
  • @willoremus Will Oremus on x
    @JonathanTurley ... Turley is not the only one who's found AI chatbots making things up about him. An Australian mayor is threatening to sue after ChatGPT claimed he'd been imprisoned for bribery. USC prof @katecrawford dubs these falsely sourced stories “hallucitations.” https:/…
  • @jonathanturley Jonathan Turley on x
    USA Today ran my column on how ChatGPT falsely stated that I was accused of assaulting students on a trip I never took while working at a school I never taught at. It is only the latest cautionary tale on how artificial “artificial intelligence” can be. https://www.usatoday.com/.…
  • @cfiesler Dr. Casey Fiesler on x
    I also haven't seen any good argument for why Section 230 would apply to ChatGPT. I just saw a story yesterday that made me think “well the defamation lawsuits are definitely coming” and guess what popped up this morning: https://www.reuters.com/...