Chronicles



Source: OpenAI developed a watermarking method for detecting text written by ChatGPT with 99.9% reliability, but its launch has been mired in internal debate

Technology that can detect text written by artificial intelligence with 99.9% certainty has been debated internally for two years

Wall Street Journal

Discussion

  • @carnage4life Dare Obasanjo on threads
    OpenAI faces the classic big tech tension of trust & safety versus growth.  On one hand, you can create a tool that detects when people use ChatGPT to cheat on their homework.  But then everyone who wants to cheat on their homework will NOT use ChatGPT.  User growth goes 📉
  • @dseetharaman Deepa Seetharaman on threads
    There's so much more in this story by me & Matt Barnum about their decision-making process & the tension between their stated goals of transparency + helping educators and their need to grow & destigmatize AI's use.  Please read and let me know what you think.
  • @dseetharaman Deepa Seetharaman on threads
    One reason: an internal April 2023 survey showing nearly a third of ChatGPT users would use it less if it deployed watermarking.  This survey loomed large even after internal tests in 2024 showed watermarking didn't hurt ChatGPT's output.  Here's how OpenAI itself described the s…
  • @benfritz Ben Fritz on x
    Our first big scoop since starting the AI buro here at WSJ. Amazing reporting by @dseetharaman and @matt_barnum https://www.wsj.com/... [image]
  • @cynddl Luc Rocher on x
    OpenAI says they don't want to ostracise their users with watermarks to “protect them”. But I got confirmation that they send Oxford the complete listing of the staff and students using its services (Oxford argued it's not a breach of GDPR).
  • @lilianedwards Lilian Edwards on x
    If this is true it's remarkably irresponsible (hi @responsibleaiuk) Being able to detect AI text would be staggeringly useful in education. as well as to filter AI spam clickbait etc https://www.theverge.com/...
  • @simonw Simon Willison on x
    This caught my eye: “That same month, OpenAI surveyed ChatGPT users and found 69% believe cheating detection technology would lead to false accusations of using AI. Nearly 30% said they would use ChatGPT less if it deployed watermarks and a rival didn't.”
  • @kyleichan Kyle Chan on x
    Amazed this ChatGPT detection trick worked [image]
  • @rahll Reid Southen on x
    OpenAI has a tool that detects ChatGPT written content reliably and could solve numerous problems, but won't release it because it would affect their bottom line. They're lying to you when they say they care about helping humanity, and this is proof. https://www.wsj.com/... [imag…
  • @medievalhistory Chris Riedel on x
    So basically ChatGPT has for over a year been sitting on a software that can effectively test for cheating, but hasn't released it largely because without cheaters their business would suffer. https://www.wsj.com/...
  • @simonw Simon Willison on x
    Now that multiple vendors offer highly capable LLMs, it seems to me that watermarking is pretty much a dead-end - if someone wants to cheat they have multiple options for LLMs that don't watermark, which means vendors have little incentive to add watermarks to their own products
  • r/OpenAI on reddit
    OpenAI won't watermark ChatGPT text because its users could get caught
  • r/Professors on reddit
    For two years, OpenAI has internally had a tool that can catch AI cheating with 99.9% reliability.  But they refuse to release it.
  • r/technews on reddit
    OpenAI says it's taking a ‘deliberate approach’ to releasing tools that can detect writing from ChatGPT
  • r/technology on reddit
    There's a Tool to Catch Students Cheating With ChatGPT.  OpenAI Hasn't Released It.
  • r/technology on reddit
    OpenAI says it's taking a ‘deliberate approach’ to releasing tools that can detect writing from ChatGPT
  • r/singularity on reddit
    WSJ: There's a Tool to Catch Students Cheating With ChatGPT.  OpenAI Hasn't Released It.