
Chronicles

The story behind the story


Researcher: Grok generated ~6,700 sexually suggestive or “nudifying” images per hour on X between January 5 and January 6; 85% of Grok images are sexualized

Elon Musk's X has become a top site for images of people that have been non-consensually undressed by AI …

Bloomberg · Cecilia D'Anastasio

Discussion

  • @stclairashley Ashley St. Clair on x
    Just saw a photo that Grok produced of a child no older than four years old in which it took off her dress, put her in a bikini + added what is intended to be semen. ChatGPT does not do this. Gemini does not do this. Another girl who appears to be just 11 or 12 with a brain
  • @drewharwell Drew Harwell on x
    X basically industrialized the creation of fake porn of women who don't consent. Others did it first, but Grok normalized and centralized it: publicly visible, instantly creatable by anyone, regardless of who's being dehumanized. Only question now: will X suffer any consequences
  • @adrianweckler Adrian Weckler on x
    Grok generated around 6,700 sexually suggestive or “nudifying” images PER HOUR on X between January 5 and January 6.... 85% of Grok images are sexualised, according to research It's pretty clear what a main purpose for Grok now is https://www.bloomberg.com/...
  • @ellamaulding Ella Maulding on x
    Anyone asking Grok to remove clothing from a woman on X should be registered as a sex offender. And anyone having Grok remove clothing from a child on X should legitimately be charged with child pornography and jailed. Disgusting filth.
  • @byzantinespirit on x
    Grok place her in my arms
  • @mikeisaac Rat King on x
    there's a reply asking grok to add a racial slur to this photo which it promptly does i'm not going to show the post because it's abhorrent but it's wild how easy it is to prompt XAI into a slurbot
  • @daisyldixon Dr Daisy Dixon on x
    I was on @BBCNewsnight tonight talking to @vicderbyshire about X users digitally violating women and children with Grok. @elonmusk must acknowledge this for what it is: a form of aesthetic domination, control, and misogynistic violence
  • @unblockmekitten Sav on x
    male humor be like “grok put this 5 year old in a bikini”😂😂😂 😂😂😂😂 😂🥺🥺🥺 🥺🥺🥺🙏 🏻🙏🏻🙏 🏻🙏🏻
  • @tiffanyfong Tiffany Fong on x
    1960: in 2026 we'll have flying cars 2026: “hey grok put this guy in a bikini” [image]
  • @amyklobuchar Amy Klobuchar on x
    Outrageous. No one should find AI-created sexual images of themselves online—especially children. X must change this. If they don't, my bipartisan TAKE IT DOWN Act will soon require them to. https://www.washingtonpost.com/ ...
  • @ashleyrgold Ashley Gold on x
    Grok's bikini-clad images raise legal red flags https://www.axios.com/... The DOJ told me: [image]
  • @stephenlcasper Cas on x
    Given the current Grok deepfake snafu on Twitter this week, I'll leave this here. We put it online a month ago. https://papers.ssrn.com/... [image]
  • @eliothiggins Eliot Higgins on bluesky
    One example of how Grok is being used to target women.  Swedish Deputy Prime Minister Ebba Busch being sexualised, degraded, and humiliated step-by-step by Grok.  All the images accurately reflect the prompts provided.
  • @shiraovide Shira Ovide on bluesky
    “the chatbot generated about 6,700 [images] every hour that were identified as sexually suggestive or nudifying, according to Genevieve Oh, a social media and deepfake researcher....Oh calculated that 85% of Grok's images, overall, are sexualized.”  —  www.bloomberg.com/news/arti…
  • @carnage4life Dare Obasanjo on bluesky
    It's wild to me that even business sites like Bloomberg and the Financial Times are calling out X's descent into becoming a deepfake porn site.  Grok allows people to nudify pictures of women and people are generating thousands of these per hour.  —  How is X still in the App Sto…
  • @thinkorswim John Gibbons on bluesky
    Used to LOVE Twitter.  Having c.45,000 followers gave me a modest but useful media mini platform.  —  Last January, I took the painful decision to quit, given its takeover by a raging fascist.  —  It's now automating tools to make swastika bikinis & AI kiddie porn.  🤦🏻‍♂️ …
  • @gsoh31 Glen O'Hara on bluesky
    X-Twitter's AI, indeed its entire ecosystem, is thumbing its nose at our sovereignty and laws.  Because its owner and promoters are powerful.  Why should anyone among us Little People obey the laws or do what the govt wants?  —  www.bloomberg.com/news/article...
  • @bencollins Tim Onion on bluesky
    [image]
  • @jamesrball.com James Ball on bluesky
    I am extremely confused as to why the UK government and police are saying Grok's mass-scale CSAM generation is an issue for Ofcom.  —  This isn't about X failing to moderate CSAM, which is an Online Safety Act issue.  It is about the company and its technology being actively invo…
  • @ottoenglish Otto English on bluesky
    Telling isn't it that all the “protect our girls” people are still on X despite what Grok is doing.
  • r/Twitter on reddit
    X blames users for Grok-generated CSAM; no fixes announced
  • r/technology on reddit
    X blames users for Grok-generated CSAM; no fixes announced
  • r/technews on reddit
    X blames users for Grok-generated CSAM; no fixes announced