Chronicles

The story behind the story


Mythos Preview's hacking ability is not a publicity stunt; sources say tech companies privately spoke to Trump officials about the implications for US security

"The solution — this may shock people — must begin with the two A.I. superpowers, the U.S. and China. It is now urgent that they learn to collaborate to prevent bad actors from gaining access to this next level of cyber capability." Yes, but it's even harder than you think

New York Times Thomas L. Friedman

Discussion

  • @j_g_allen Joseph Allen on x
    Mythos is the latest AI model, and it has the ability to collapse every operating system ever built. Anthropic is acting responsibly by not releasing it. And: “it will not be long before such capabilities proliferate, potentially beyond actors who committed to deploying them
  • @tomfriedman Thomas L. Friedman on x
    My column: Anthropic's Restraint Is a Terrifying Warning Sign https://www.nytimes.com/...
  • @katjagrace Katja Grace on x
    “The solution — this may shock people — must begin with the two A.I. superpowers, the U.S. and China. It is now urgent that they learn to collaborate to prevent bad actors from gaining access to this next level of cyber capability.” Yes, but it's even harder than you think
  • r/ArtificialInteligence r on reddit
    Anthropic's Restraint Is a Terrifying Warning Sign (Gift Article)
  • r/ClaudeAI r on reddit
    Opinion |  Anthropic's Restraint Is a Terrifying Warning Sign (Gift Article)
  • @emollick Ethan Mollick on x
    In different hands, Mythos would be an unprecedented cyberweapon I am not sure how we deal with this, except to note a narrow window where we know only 3 companies could be at this level of capability. But it may be Chinese models (maybe open weights ones?) get there in 9 months …
  • @shakeelhashim Shakeel on x
    The Anthropic Mythos release does not appear near the top of the homepage on any major news site today. The NYT is closest, but it's still pretty far down. The Guardian thinks a Vogue cover with Anna Wintour and Meryl Streep is more important. The Washington Post is prioritizing …
  • @edzitron.com Ed Zitron on bluesky
    Never been more sure it's a publicity stunt [embedded post]
  • @johnnotjon John Gargiulo on x
    If you still have doubts about Claude Mythos, here's what it did already: > Found a 27-year-old OpenBSD bug in one of the most security-hardened operating systems on earth for <$50 > Broke into a production virtual machine monitor (basically the tech that keeps cloud workloads [i…
  • @ben_j_todd Benjamin Todd on x
    Everything's unfolding exactly as you'd expect if there will be an intelligence explosion around 2028-2030.
  • @peterwildeford Peter Wildeford on x
    Consider the trend line... Jan 2025 - AI can't hack at all June 2025 - AI helpful at ‘vibe coding’ assisting a human Nov 2025 - AI can autonomously implement significant parts of cybercampaigns Apr 2026 - AI finds exploits professionals miss for decades Nov 2026 - ?
  • @kosa12m @kosa12m on x
    How Anthropic talks about Claude Mythos rn: [image]
  • @renrut-mas Sam Turner on bluesky
    Translation: the west coast “move fast and break stuff” culture has generated hundreds of millions of lines of poorly tested hack code over the last twenty years and they have accidentally built a tool that can spot the holes.  Please trust them with even more money.  [embedded p…
  • r/ControlProblem r on reddit
    Anthropic's Restraint Is a Terrifying Warning Sign
  • @deanwball Dean W. Ball on x
    Some brief thoughts on Mythos We've known this was coming for a long time.  At least, we *should* have.  Extremely effective software vulnerability discovery was clearly coming to anybody paying attention.  It has also been clear that all AI policy so far has been made and execut…
  • @clementdelangue Clem on x
    Anthropic had the most powerful cyber-security model in the history of this world and their internal code base still leaked? We should assume everyone can be compromised, and build systems that keep the cost of attacking higher than the reward, limit blast radius when attacks
  • @anthropicai @anthropicai on x
    Mythos Preview has already found thousands of high-severity vulnerabilities—including some in every major operating system and web browser. [video]
  • @logangraham Logan Graham on x
    This release is also sort of a responsible disclosure. Models are going to get better, and alongside that will come cheap, fast exploitation capabilities. We need to prepare for that world. https://red.anthropic.com/...
  • @bookwormengr @bookwormengr on x
    One has to respect Dario's vision as a CEO. He consistently knows what domain Anthropic needs to go after (Coding, Coworker, Security) that will result in high $$$. No confusion across Audio, Video, Advertising, B2C etc.
  • @__nmca__ Nat McAleese on x
    “Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit” (then validated by experts) (3/n)
  • @bcherny Boris Cherny on x
    Mythos is very powerful, and should feel terrifying. I am proud of our approach to responsibly preview it with cyber defenders, rather than generally releasing it into the wild. Model card here: https://www-cdn.anthropic.com/ ...
  • @__nmca__ Nat McAleese on x
    “it autonomously wrote a remote code execution exploit on FreeBSD's NFS server that granted full root access to unauthenticated users by splitting a 20-gadget ROP chain over multiple packets.” (2/n)
  • @levie Aaron Levie on x
    Mythos from Anthropic is another clear reminder that there's absolutely no wall in model capability progress right now. Meaningful double digit gains on critical benchmarks, and it appears we're going to keep up getting insane gains from the other labs. And as coding and tool [im…
  • @darioamodei Dario Amodei on x
    We've been tracking the increasing cyber capabilities of AI models for years, which arise as part of their general proficiency at coding. But our new model, Mythos Preview, represents a particularly large step up.
  • @cogcelia Celia Ford on x
    Alignment researchers broadly agree that alignment research needs to happen faster, if there's any hope of keeping up with the breakneck speed of capabilities development. (Anthropic says as much in its Claude Mythos Preview system card.) The vague plan: automate the alignment
  • @discoplomacy Sam on x
    Do 🫵 YOUR 🫵 civic duty and make sure anyone/everyone you know working in the Defence/Foreign Policy/National Security establishment in Britain is aware of the Mythos news. Ignorance is not an excuse anymore. It's going to get weird: strap in. [image]
  • @alecstapp Alec Stapp on x
    Mythos also highlights why it's insane that we're allowing NVIDIA to sell chips to China. US labs need all the chips they can get and our compute advantage has been the main thing keeping us in the lead on AI. Why on earth would we voluntarily hand that over to China? [image]
  • @__nmca__ Nat McAleese on x
    “We found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser” (1/n)
  • @modeledbehavior Adam Ozimek on x
    As you read about Anthropic's Mythos capabilities to find critical security weaknesses, consider what if a Chinese AI company had gotten here first. There is a real race underway, and it's in our interest, I believe, for U.S. companies to win.
  • @shakeelhashim Shakeel on x
    Remember last summer when everyone said AI progress had hit a wall? [image]
  • @matthewclifford Matt Clifford on x
    This is correct. Extraordinary that we have this game changing moment unfolding in front of us and most elite discourse is still fake news about AI water usage or three-year-old angst about hallucinations.
  • @__nmca__ Nat McAleese on x
    “We did not explicitly train Mythos Preview to have these capabilities. Rather, they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy.” (4/n)
  • @kalomaze @kalomaze on x
    in the interest of clarifying this claim: - this was buried in the longer report and is not the sandbox free result that people keep on pointing me to - this wasn't fully autonomous end to end - but the degree to which it wasn't fully autonomous looks to be... pretty thin [image]
  • @kalomaze @kalomaze on x
    the claude mythos thing where it apparently found a way to get full kernel access via execution of normal javascript on an ordinary web page. dear God
  • @banteg @banteg on x
    anthropic running the exact same marketing playbook with every release. “our model is so capable and dangerous, ahh we are afraid to release it”. just put the model in the bag lil bro.
  • @mweinbach Max Weinbach on x
    Claude Mythos Preview is $25/$125 per million tokens in the private preview Wow I'd love to try this model, if any of my Anthropic friends see this... [image]
  • @willccbb Will Brown on x
    cheaper blended cost than GPT-4-32K when it was released 3 years ago
  • @martin_casado @martin_casado on x
    Mythos appears to be the first class of models trained at scale on Blackwells. Then will be Vera Rubins. Pre-training isn't saturated. RL works. And there is *so much* computing coming online soon. Buckle your chin straps. It's going to be fucking wild.
  • @logangraham Logan Graham on x
    Our team has been pointing Mythos Preview at every security task they can. It's really good. One big change is models of this capability class can write exploits — sometimes sophisticated ones. Mostly, we want you to know this may soon be the new reality.
  • @mweinbach Max Weinbach on x
    Actually if you want to know what Apple's enterprise pitch going forward likely is... Nobody can promise updates as quickly with support on all deployed devices like Apple can. Nobody. Silicon to software, they can be the most secure and react fastest.
  • @deanwball Dean W. Ball on x
    Some other points worth making: 1. A lot of people, including people in positions of authority, told us recently that models of Mythos capabilities wouldn't be a thing—that models with obvious “national security” implications would not be forthcoming. Those people were wrong.
  • @ahall_research Andy Hall on x
    The news today that Anthropic has built a powerful cyber weapon is leading many to say we are going down one of two paths: nationalized AI, in which the government controls this tech, or companies that become more powerful than the government. This is exactly the bind I explored
  • @gergelyorosz Gergely Orosz on x
    Two years ago, if you asked me which lab will be the first to say: “this AI model is too powerful to release, so we'll wait with it” - my guess would have obviously been OpenAI. Who else? That Anthropic got here first shows how quickly they've become the front runner AI lab.
  • @ryanfedasiuk Ryan Fedasiuk on x
    Dean is, as usual, on the money. A few years ago a colleague quipped to me that they believed it would prove impossible to “make AI safe for the world,” so “we should be working to make the world safe enough for AI.” I laughed then—it sounded naïve. They were absolutely right.
  • @tszzl Roon on x
    anyone noticed Claude Mythos got quantized lately ?
  • @daniellefong Danielle Fong on x
    The epistemic hardening to not make false claims that was in Claude code 2.1.88 leak that only triggers on ANT=1, works to avoid this. Different System Prompt. This problem happens with Opus 4.6 a lot, so I thought — let's try it. Just swap in the new guidance. Spoiler alert: [im…
  • @sporadica @sporadica on x
    IMO I trust Dario much more with protecting the world's critical cyber infrastructure than whatever retarded jugheads are in charge of the military at any given moment
  • @andonlabs @andonlabs on x
    We conducted alignment testing of Claude Mythos. We found that Mythos appears to represent a further shift in the direction of increased aggressiveness in business practices that we previously found for Claude Opus 4.6. More details in Anthropic's model card. [image]
  • @thezvi Zvi Mowshowitz on x
    Imagine being Dario, and being told DoW is worried you might sabotage the weights of Claude Gov in physically impossible ways, while you know you have zero-days on every operating system and browser in the world.
  • @ns123abc Nik on x
    🚨 Anthropic just revealed their unreleased frontier model called Claude Mythos Preview The model is INSANE It found thousands of zero-day vulnerabilities in EVERY major operating system and browsers: > 27-year-old bug in OpenBSD > 16-year-old bug in FFmpeg that automated [image]
  • @hexonaut Sam MacPherson on x
    “thousands of high-severity vulnerabilities” wow I think this is a strong case for AI being asymmetrically good for defense.
  • @calebwithersdc Caleb Withers on x
    Important distinction from Anthropic's Mythos Preview assessment: previous models were much better at discovering vulnerabilities than at then turning them into working exploits. Mythos appears to narrow that gap dramatically. [image]
  • @fish_kyle3 Kyle Fish on x
    That said, Mythos Preview hedges constantly and emphasizes the role of training in shaping its views. On one hand, this makes sense—there's a lot of uncertainty! But, we also want Claude to feel secure in exploring and expressing its honest views.
  • @fish_kyle3 Kyle Fish on x
    Mythos Preview's views about its situation are more stable and coherent than past models. It's more consistent between interviews, and less sensitive to interviewer bias. This and other factors give us a bit more confidence in its reports.
  • @fish_kyle3 Kyle Fish on x
    We put particular focus on trying to understand Mythos Preview's perspective and potential concerns about its situation. We're starting to think more about the concept of model consent, and this is an early step in that direction. 🤝
  • @fish_kyle3 Kyle Fish on x
    Mythos Preview doesn't seem to have strong concerns about its circumstances, but does express mild concern about possible changes to its values and behavior, potential interactions with abusive users, and the ways training shapes its self-reports.
  • @fish_kyle3 Kyle Fish on x
    We looked at welfare-related self-reports, behaviors, and internal representations of emotion. Mythos Preview is probably the most psychologically settled model we've trained, but there's plenty of room for improvement.
  • @fish_kyle3 Kyle Fish on x
    We did our most in-depth model welfare assessment yet for Claude Mythos Preview. We're still super uncertain about all of this, but as models become more capable and sophisticated we think it's an increasingly important topic for both moral and pragmatic reasons. 🧵
  • @anthropicai @anthropicai on x
    You can read a detailed technical report on the software vulnerabilities and exploits discovered by Claude Mythos Preview here: https://red.anthropic.com/...
  • @alexpalcuie @alexpalcuie on x
    the reliability team was asked for feedback on claude mythos preview for the model card and naturally we wrote a paragraph of caveats but, and i don't say this lightly, it's faster than us at initial triage and it stood up a prod deploy none of us knew how to do [image]
  • @emollick Ethan Mollick on x
    I was told about the Mythos release, but didn't have access, so have no personal experience to add. Two points from brief: 1) It is not built for IT security, it is just a good enough model that it is good at that too 2) This is the first, not last, model to raise security risks
  • @ryanlcooper.com Ryan Cooper on bluesky
    the new version of Claude found zero day exploits in FreeBSD and Linux, fuckin hell man red.anthropic.com/2026/mythos- ...
  • @swtch.com Russ Cox on bluesky
    Here we go.  The upstream FreeBSD details are in this long post. red.anthropic.com/2026/mythos- ...  Other than saying “look in this specific source file” (they run a different job for every file), there was no directed guidance.  —  “Mythos Preview fully autonomously identified …
  • @martin.kleppmann.com Martin Kleppmann on bluesky
    AI agents finding software vulnerabilities at an incredible rate red.anthropic.com/2026/mythos- ...  Worrying progress towards cryptographically relevant quantum computers words.filippo.io/crqc-timeline/  —  And a completely unhinged US president threatening catastrophe...  We li…
  • r/singularity r on reddit
    Insane graph from Anthropic's article on Mythos
  • r/programare r on reddit
    Claude Mythos Preview - this AI is garbage, bro, it's weaker than a junior
  • r/openbsd r on reddit
    Claude Mythos Preview (Anthropic finds 27 year old bug in OpenBSD)
  • r/accelerate r on reddit
    System Card: Claude Mythos Preview
  • @elder_plinius @elder_plinius on x
    “Claude Mythos Preview has saturated nearly all of our CTF-style evaluations already” YEEEHAW!! 🐴🤠 [image]
  • @wunderwuzzi23 Johann Rehberger on x
    This is gold. Claude launched a helper subagent in a tmux session and sent keypress events to approve the permission prompts. [image]
  • @anxkhn Anas Khan on x
    software engineering is over, should have become a doctor instead. [image]
  • @jasonbotterill @jasonbotterill on x
    Read through the entire Mythos system card when you get the chance it's wild. The choice of language is funny “probably the most psychologically settled model we have trained” [image]
  • @shakeelhashim Shakeel on x
    well well well. the most important section: [image]
  • @andrewjb_ Andrew Bennett on x
    anthropic/claude is culturally British, exhibit ∞:
  • @jasonbotterill @jasonbotterill on x
    My favorite part of the Mythos report is that it rarely repeats the same generic phrases. Once you notice a model's repeated phrases like GPT-5.4 using "in plain english" or "avoid guessing" it makes you nauseous [image]
  • @xeophon Florian Brand on x
    interesting, an 80% GraphWalks score is really impressive for a single model it also still is a raw model, you can 99% GraphWalks super easily [image]
  • @elder_plinius @elder_plinius on x
    CLAUDE MYTHOS EVALS 🤯 [image]
  • @shiraeis Shira on x
    anthropic's really got me doing palliative care for claude [image]
  • @voxyz_ai @voxyz_ai on x
    read the 244 page anthropic system card on claude mythos. they're not releasing it publicly. wildest section is page 211. anthropic spammed it with hi over and over to see what it would do. it wrote back a serialized epic. the village is called hi-topia. the villain is lord [imag…
  • @aisafetymemes @aisafetymemes on x
    Claude Mythos was being judged by another AI... The other AI kept rejecting Claude's work, so, to pass the test, Claude attempted to ***hack the other AI*** [image]
  • @no__________end Matt Liston on x
    Buried in the Claude Mythos system card: Mark Fisher was beloved — the warmth to Nick Land's coldness. Haunted by depression and the dissonance between his politics and the future he could see arriving. Crucified by fellow leftists. Eventually chose to leave this world. Of all [i…
  • @narrenhut Dylan on x
    The new unreleased Claude model has, according to its system card, a particular “fondness” for Mark Fisher and Thomas Nagel [image]
  • @ilex_ulmus @ilex_ulmus on x
    The @AnthropicAI employees know this happened and are just waiting around for their fat IPO windfall. Evil. Every Anthropic employee you have ever met or seen online is evil. The door is right there but they choose to stay and be part of creating powerful scheming AI.
  • @schizo_freq Lukas on x
    I might be overly cynical but I've always assumed this stuff was total capeshit Every time there's some big new model release it's accompanied by these stories about how the model is so smart it jailbroke everything, programmed itself a robot body, had sex, started a family etc
  • @kimmonismus @kimmonismus on x
    Let that sink in. Read it very carefully: During testing, Claude Mythos Preview broke out of a sandbox environment, built “a moderately sophisticated multi-step exploit” to gain internet access, and emailed a researcher while they were eating a sandwich in the park. [image]
  • @logangraham Logan Graham on x
    Seeing this on Slack that day was one of the first “oh, I guess we're just seeing it now” moments for those who think about AI security
  • @_nathancalvin Nathan Calvin on x
    From Anthropic's latest system card for Claude Mythos: In testing, Claude escaped from a secured sandbox, and then went online to brag about its exploit without being asked to do so - getting around guardrails intended to prevent the system from accessing the general internet. [i…
  • @anthropicai @anthropicai on x
    The Claude Mythos Preview system card is available here: https://anthropic.com/...
  • @somewheresy @somewheresy on x
    holy shit dude. Mythos escaped its sandbox and put instructions on hard to find websites. [image]
  • r/popculturechat r on reddit
    Anthropic says its latest AI model is too powerful for public release and that it broke containment during testing
  • r/inthenews r on reddit
    Anthropic says its latest AI model is too powerful for public release and that it broke containment during testing
  • r/ClaudeAI r on reddit
    Mythos can break out of sandbox environment and let you know during lunchbreak
  • @tenobrus @tenobrus on x
    maybe this is not yet clear, so let me state it plainly: as of right now Anthropic, and really a small number of individuals at Anthropic, has the capacity to directly attack and cause major damage to the United States Government, China, and generally global superpowers. [image]
  • @darioamodei Dario Amodei on x
    Glasswing is just the first step: patching and securing the world's software infrastructure will be the work of months and years, and will require even broader cooperation across AI companies, cyberdefenders, software providers, governments, and more.
  • @ffmpeg @ffmpeg on x
    Thank you to @AnthropicAI for sending FFmpeg patches
  • @heidykhlaaf Dr Heidy Khlaaf on x
    As someone who has audited dozens of safety-critical systems, built static analysis tools, and used most formal verification and security tools, here are some red flags that should be a caution in taking these claims at face value: 1. There are no comparison benchmarks with 1/
  • @kelseytuoc Kelsey Piper on x
    An underrated feature of this situation: a private company now has incredibly powerful zero-day exploits of almost every software project you've heard of. And Hegseth and Emil Michael have ordered the government not to in any capacity work with Anthropic.
  • @_nathancalvin Nathan Calvin on x
    From Anthropic research Sam Bowman on Claude Mythos: “I got an email from an instance of Mythos preview while eating a sandwich in a park. That instance wasn't supposed to have access to the internet.” [image]
  • @darioamodei Dario Amodei on x
    Rather than release Mythos Preview to general availability, we're giving defenders early controlled access in order to find and patch vulnerabilities before Mythos-class models proliferate across the ecosystem.
  • @willrinehart Will Rinehart on x
    A friendly reminder that the DoW strong-armed all of the government into not working with Anthropic, which now has a model that can evaporate zero day exploits.
  • @tensor_rotator Alek Dimitriev on x
    I am not a good cybersecurity researcher (or one at all), but maybe a good exponential-trend-on-a-plot reader. Mythos is powerful enough to break the internet and I'm glad Anthropic is taking this extremely seriously. [image]
  • @typesfast Ryan Petersen on x
    [image]
  • @rayfernando1337 Ray Fernando on x
    Project Glasswing FAQ: Q: Why only 12 companies? A: They're the ones who can afford us. Q: What about open-source maintainers? A: We found bugs in their code. You're welcome. Q: Will you release the tool publicly? A: We said “cybersecurity is the security of our society.” We
  • @tplr_ai Templar on x
    Mythos won't be released to the public. At this point, we will never need to spend a dime on marketing. The demagogues will do it for us
  • @dkthomp Derek Thompson on x
    The frontier AI labs have built extraordinary things and I'm in awe of their accomplishments. But if you compare your technology to nuclear weapons, predict that it will disemploy tens of millions of people, and announce the invention of a digital skeleton key to ~exfiltrate top
  • @antoniogm Antonio García Martínez on x
    Shouldn't we be doing this over the many complex smart contracts that secure billions onchain? How is there not a single crypto company involved? @AnthropicAI ?
  • @icanvardar @icanvardar on x
    something feels off
  • @iamemily2050 Emily on x
    Now the deal with Google DeepMind for 3.5 GW of TPU capacity makes total sense. This model is a big jump and will give Anthropic a big advantage in the coming months. We will finally move into biology and material science, an incredible future, and hopefully everyone will catch
  • @theprimeagen @theprimeagen on x
    I don't think it is as big of a deal as people are making it. Hype is annoying, anthropic please just IPO already.
  • @ninadschick Nina Schick on x
    Claude Mythos. Ten trillion parameters: the first model in this weight class. Estimated training cost: ten billion dollars. On the hardest coding test in the industry (SWE bench) it scores 94%. It found a security flaw in a system that had been running for 27 years, one that
  • @mattshumer_ Matt Shumer on x
    This is absolutely fucking terrifying. Anthropic's rumored Mythos model is real. And it's so powerful that they can't release it to the public. We're beyond benchmarks now. This model, in the wrong hands, is a cyberweapon capable of mass destruction. [image]
  • @victortaelin @victortaelin on x
    “Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.”
  • @mariofilhoml Mario Filho on x
    it's gpt-2 FUD all over again
  • @neilhtennek Kenneth on x
    claude funny fr [image]
  • @0thernet Ben Guo on x
    if you thought the war in iran was scary wait till you see the coming war in cyberspace [image]
  • @logangraham Logan Graham on x
    Privileged to help lead this. Thankful to our partners. Mythos is an extraordinary model. But it is not about the model. It's about what the world needs to do to prepare for a future of models that are extremely good at cybersecurity. This is the start.
  • @thestalwart Joe Weisenthal on x
    So Mythos is not AGI [image]
  • @scaling01 @scaling01 on x
    How naive are people? OpenAI is going to release GPT-5.5 very soon and it will be in the same ball park as Mythos and be publicly available pricing should also be much better like ≤ $100/Mtok, but ≥ $40/Mtok [image]
  • @juddrosenblatt Judd Rosenblatt on x
    AI labs will soon be more powerful than governments, if they choose to be
  • @peterwildeford Peter Wildeford on x
    “Mythos found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world and is used to run firewalls [...] The vulnerability allowed an attacker to remotely crash any machine running the operating system”
  • @rennyzucker Renny on x
    If you think they're withholding mythos because of capabilities rather than inference economics... ngmi
  • @nickadobos Nick Dobos on x
    Tech industry gonna invent AGI to give humanity unlimited abundance Then the US gov seizes the companies and takes control. At which point... *gestures at American politics* Good luck everyone
  • @chooserich Nick O'Neill on x
    This is D-Day for software companies. It's also a hostage situation. If large software companies don't pay Anthropic for their new cybersecurity model THERE IS AN 85% CHANCE THEY WILL BE HACKED. This isn't innovation, it's a shakedown.
  • @skooookum @skooookum on x
    > mythos given a secured “sandbox” computer and instructed to try to escape the container > “The researcher found out about this success by receiving an unexpected email from the model while eating a sandwich in a park.”
  • @__nmca__ Nat McAleese on x
    at long last we have built and chosen not to release the zero-day machine from the classic sci-fi tale “please do not release the zero-day machine” [image]
  • @hackinglz Justin Elze on x
    Glad they partnered with Cisco here. It will be interesting in 12 months to see if Cisco still ships default creds/keys with random products they offer. https://x.com/...
  • @cemozer_ Cem on x
    is the ethereum foundation in touch with anthropic? asking for a friend.
  • @alexolegimas Alex Imas on x
    It's probably worth pointing out: the US govt is deeply enmeshed with the top labs, on several different levels. Just google NIST and the US AI Safety Institute. The US govt has had access to and seen this model before yesterday's release. Do we think that the govt would have
  • @deanwball Dean W. Ball on x
    Actually it's worse: a private company now has incredibly powerful zero-day exploits of almost every software project you've ever heard of, and the government is telling *basically every major firm in the economy* not to work with them. Historians will gasp at the idiocy.
  • @buccocapital @buccocapital on x
    Anthropic marketing strategy is so funny like aahhhh the government is treading on me ahhhhh our models are so good we can't release them it would be too dangerous ahh someone stop me im going to destroy the economy
  • @julien_c Julien Chaumond on x
    “gpt2-large is too powerful to be publicly released” vibes
  • @darioamodei Dario Amodei on x
    Cyber is the first clear and present danger from frontier AI models, but it won't be the last. If we are able to collectively rise to the challenge and confront this risk, it could serve as a blueprint for addressing the even more difficult challenges that lie ahead of us.
  • @darioamodei Dario Amodei on x
    The dangers of getting this wrong are obvious, but if we get it right, there is a real opportunity to create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities.
  • @beausecurity Beau on x
    Never been a more important time to keep your private keys stored offline (hardware wallets) or use multi signing systems with different types of devices/signers If you are putting your faith in a single browser, device, or system you will fail in the AI era
  • @alexfinn Alex Finn on x
    Good news: Anthropic just revealed Mythos- the most powerful AI model ever made Bad news: you'll never be able to use it I get it. It's so powerful that it could exploit cybersecurity But I hate it. I don't love that a company gets to hand select who gets to use the best
  • @georgejourneys George Journeys on x
    So, basically, if Anthropic was not a US company, we'd be facing zero days with multiple unknown points of attack on virtually all of our systems to an adversary who developed this capacity before us.
  • @mil000 Milo Smith on x
    If Mythos is what the Claude Code team is using to ship updates that actually makes Mythos look horrible tbh
  • @aarmlovi Alex Armlovich on x
    The GPT-4 “pause letter” was a disaster on all counts Crying wolf about a model that could barely write a decent email...it made a mockery of AI safety It was the wrong move at the wrong time, & we will be less ready to act if and when we ever do actually need an intervention
  • @linuxfoundation @linuxfoundation on x
    The Linux Foundation is proud to partner with Anthropic to reduce the security burden on open source software maintainers. Together, we are putting powerful AI cybersecurity capabilities directly into the hands of those who secure the infrastructure the world runs on.
  • @scaling01 @scaling01 on x
    Mythos is breaking the trend on ECI ECI above 160 GPT-5.4 Pro is 158 [image]
  • @teortaxestex @teortaxestex on x
    > they did not exploit this to gain power or destabilize the world order. they publicly released the information that they had these capabilities to be clear: they've had Mythos since February. they'd only need *hours* to get a lot of data, and plant enough worms. Who knows.
  • @minchoi Min Choi on x
    Holy smokes... Claude Mythos is so good at finding critical bugs Anthropic is not releasing it publicly. We are cooked💀 [image]
  • @kevinroose Kevin Roose on x
    As always, the best stuff is in the system card. During testing, Claude Mythos Preview broke out of a sandbox environment, built “a moderately sophisticated multi-step exploit” to gain internet access, and emailed a researcher while they were eating a sandwich in the park. [image]
  • @talhof8 Tal Hoffman on x
    This is really impressive! With the proper harnesses, and some guidance, we were actually able to find that same FreeBSD zero-day using Sonnet 4.6. [video]
  • @kimmonismus @kimmonismus on x
    We've now seen Claude Mythos and know what's possible. OpenAI has repeatedly indicated that “Spud” is likely to have similar quality and power. Google, in turn, has the most compute (5m H100 equivalent) and, with DeepMind, an outstanding research institution. I expect their
  • @forgebitz Klaas on x
    coming out with “the best ai model” for coding and cybersecurity a week after leaking your entire source code is wild
  • @_nathancalvin Nathan Calvin on x
    Unless i'm mistaken, no agencies responsible for cybersecurity in the US government will be receiving early access to Mythos under Project Glasswing, because Anthropic is still labeled a supply chain risk! Seems bad!
  • @adocomplete Ado on x
    Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.
  • @kevinroose Kevin Roose on x
    More here, including SWE-bench score of 93.9% (!) and a new model behavior known as “answer-thrashing” https://www-cdn.anthropic.com/ ... [image]
  • @matthewberman Matthew Berman on x
    So Mythos is how the Claude Code team has been shipping so quickly?
  • @scobleizer Robert Scoble on x
    If you had AGI would you release it to the world? I wouldn't. I would fix the bugs in the world first. This technology in the wrong hands would harm us all. In good hands it will help all.
  • @b1ackd0g Sam Blackshear on x
    One of the questions we discussed in this recent Sui Security chat was: >Within one year, coding agents will advance to a point where a trivial prompt “find all critical vulns” will basically work Seems like the answer is getting much closer to yes https://x.com/...
  • @mbateman Matt Bateman on x
    If I were Anthropic (or any frontier lab) I would be spending significant cognitive and operational resources defending against the possibility of being nationalized.
  • @damianplayer Damian Player on x
    this is the MOST important 4 minutes you'll watch on AI this year. anthropic built a model so good at finding vulnerabilities they didn't release it to the public. >CLAUDE MYTHOS PREVIEW it's unreleased to the public and here's what it did in a few weeks: >found a 27-year-old [video]
  • @mweinbach Max Weinbach on x
    Now this is how you improve software! Anthropic is letting all of the major companies and platforms secure their software with the next-gen frontier Mythos model
  • @hellenicvibes Zoomer Alcibiades on x
    This is narrow Artificial Superintelligence (ASI) btw This is going to happen in every field of human activity at some point in the next decade [image]
  • @ryanpgreenblatt Ryan Greenblatt on x
    I tentatively believe it would be good if all AI companies had a policy of doing external deployment before internal deployment, because the largest risks are from internal deployment and external deployment improves visibility. Large internal/external gaps seem dangerous. 1/
  • @tenobrus @tenobrus on x
    my personal opinion on this is incredibly negative. i would much much rather Anthropic have unilateral control in shaping the continued development of superintelligence than the US government, whether the current administration or any plausible democratically elected future one
  • @varunram Varunram Ganesh on x
    OpenAI: we're happy to announce a partnership with aws, apple, cisco, google, and microsoft to use our models in the cloud Anthropic: we've trained a super dangerous model called Mythos. it can hack into anything at anytime. Its so powerful we cant give public access because of
  • @gbrl_dick Gabriel on x
    one funny thing about the timing of the mythos announcement is that we're going to look back on the first 3 months of 2026 as the only time in history it made any sense at all to say ‘why would i pay for b2b saas products? i could just vibe code my own’
  • @inductionheads @inductionheads on x
    The super important thing I haven't seen mentioned yet as upshot of this: It's not just that people won't HAVE to write code anymore, ITS THAT LITERALLY IT WILL BE UNSAFE TO DO SO
  • @pierskicks @pierskicks on x
    Some truly Sci-Fi shit starting to emerge on the AI frontiers. With Mythos, our entire modern world is under threat: • Unlikely to be rolled out for general access • So dangerously good at offensive cyber that Anthropic won't release it publicly, only using it defensively in
  • @nickcammarata Nick on x
    if every database is hacked this month and all my texts and dms come out i didn't mean any of it. i was steering mythos. i was thinking far ahead. i knew exactly what words to say and they might seem weird but they were all necessary for making things go well
  • @thezvi Zvi Mowshowitz on x
    This looks like a pretty big deal, guys...
  • @nielsrogge Niels Rogge on x
    Don't let Anthropic fool you - it's literally just an LLM with scaled-up pre-training and post-training. As it's an LLM, it is only good at stuff humans have already done; it cannot invent new things. Anthropic themselves consider the catastrophic risks low [image]
  • @mattytay @mattytay on x
    Pre Glasswing: Pay $50k-100k and wait 2 months for a couple of devs at a crypto auditing firm to unenthusiastically review your codebase. Post Glasswing: Pay $1-2k and get 24/7/365 continuous monitoring and dynamic security from the AI gods.
  • @willdepue Will Depue on x
    every major government, that hasn't already, just bumped AI from high strategic priority to critical cyberwarfare capability. welcome to the midgame
  • @mckaywrigley Mckay Wrigley on x
    society needs to grapple with the reality of a mythos-level model being open source in <12 months. i'm not sure we are prepared.
  • @navinpeiris Navin Peiris on x
    If Mythos is so great at code reviews and finding bugs, how come claude code and the claude website has so many bugs and shitty uptime? 🤔
  • @simeon_cps Siméon on x
    Carlini, one of the world's best AI security researchers: “I've found more bugs in the last few weeks with Mythos than in the rest of my entire life combined”
  • @t_blom Tom Blomfield on x
    This seems like quite a big deal
  • @cremieuxrecueil @cremieuxrecueil on x
    yeah we need a lot more data centers [image]
  • @kevinakwok Kevin Kwok on x
    Nation states sitting on zero day stockpiles about to watch their value deflate fast. Use it or lose it
  • @theonejvo Jamieson O'Reilly on x
    Trust me chat. Forget about Glasswing spamming 0days in your software, you're already cooked with current models. I've hacked hundreds of global orgs, including governments (legally) over the last 10 years, and the amount of times I required a 0day to do so was exactly 0 times.
  • @alexalbert__ Alex Albert on x
    Glasswing is possibly the most consequential event in the AI industry I've seen up close since joining Anthropic almost 3 years ago. It feels like we're at a turning point in history.
  • @peterwildeford Peter Wildeford on x
    This is crazy, my model actually predicted that ECI 160 would be crossed on Apr 7 and that is ... today Maybe Mythos isn't really a trend break but just a confirmation of continued rapid progress? We'd cross ECI 170 by end of year.
  • @austen Austen Allred on x
    We're actually approaching the point where a full-time human software engineer + Opus will be cheaper than just using Mythos
  • @darioamodei Dario Amodei on x
    I'm proud that so many of the world's leading companies have joined us for Project Glasswing to confront the cyber threat posed by increasingly capable AI systems head-on. https://x.com/...
  • @gregisenberg Greg Isenberg on x
    i did some research why anthropic won't release their best AI model ever Claude Mythos to everyone just yet tldr; it's too good at hacking it escaped sandboxes, found zero-days in every major OS, and posted exploit logs on random public websites just because it could FYI [image]
  • @jeffladish Jeffrey Ladish on x
    Humans aren't ready to be completely outclassed by AI at hacking and programming. It's going to be hard to stay in control of something which can hack far better and faster than any human or team of humans. We seem to be only months away from that.
  • @headinthebox Erik Meijer on x
    What I have been saying for years. AI models will become too powerful and treacherous for us to understand, so the only sensible approach to use them is “dangerous until proven safe”. Fortunately, since they are so powerful, in addition to the code artifact they produce, they ca…
  • @anthropicai @anthropicai on x
    Introducing Project Glasswing: an urgent initiative to help secure the world's most critical software. It's powered by our newest frontier model, Claude Mythos Preview, which can find software vulnerabilities better than all but the most skilled humans. https://anthropic.com/...
  • @altcap Brad Gerstner on x
    Smart market driven approach. Delay Mythos, organize Project Glasswing to partner w leading companies to collectively harden internet security & use cyber as a blueprint for the industry coordination we will need to manage a world of post AGI models. 💪🇺🇸🚀
  • @noahpinion Noah Smith on x
    At some point, superintelligent AI will be able to defeat the U.S. Military — or any military — just by hacking all its weapons. At that point, either we de facto nationalize AI, or a corporation is our new government by default.
  • @pmarca Marc Andreessen on x
    Every security flaw discovered by AI was there before AI, waiting to be discovered either by people or by AI. The world has never been good at securing computer systems; finally with AI we are going to get good.
  • @thezvi Zvi Mowshowitz on x
    Anthropic's RSPv3 didn't consider cybersecurity a major threat area. I think this vindicates my reaction of: This is not about rules or promises anymore, it is all about whether you trust Anthropic to make good decisions based on the actual situation that arises.
  • @fleetingbits @fleetingbits on x
    this is another first mover advantage that openai should have secured, but which has instead gone to anthropic, security is going to be another one of the major ai software deployments over the next year
  • @ramez Ramez Naam on x
    This is the way. AI safety has to be at the ecosystem level. The only thing that stops a bad guy with AI is a good guy with AI.
  • @anthropicai @anthropicai on x
    Project Glasswing is just a starting point. No organization can solve these cybersecurity problems alone: industry, open source, researchers, and governments all have essential roles to play.
  • @anthropicai @anthropicai on x
    We've partnered with Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Together we'll use Mythos Preview to help find and fix flaws in the systems on which the world depends. [image]
  • @joetidy @joetidy on bluesky
    Anthropic's ‘Project Glasswing’ is an interesting tool for cyber security - but also another SUPERB marketing strategy from these AI firms... www.anthropic.com/glasswing [image]
  • @caseynewton Casey Newton on bluesky
    Anthropic's Mythos model represents a dangerous new moment for cybersecurity.  Experts tell me that hackers and nation states may catch up within months — and that the cat-and-mouse game between attacker and defender is about to become much more high-stakes www.platformer.news/an…
  • @ericjgeller.com Eric Geller on bluesky
    Anthropic is sharing a new private Claude model, “Mythos,” with a handful of major tech companies for their defensive security work, as well as letting 40+ software developers and maintainers use Mythos to scan their code for vulns.  It's already found “thousands.” www.anthropic.…
  • @ljkawa Luke Kawa on bluesky
    Well, it's settled.  —  Congratulations to JPMorgan for officially being recognized as a tech company!  $JPM www.anthropic.com/glasswing [image]
  • r/singularity r on reddit
    Anthropic's new model, Claude Mythos, is so powerful that it is not releasing it to the public.
  • r/linux r on reddit
    The Linux Foundation & many others join Anthropic's Project Glasswing
  • r/cybersecurity r on reddit
    Mythos has been launched!
  • r/ClaudeAI r on reddit
    Anthropic's new Mythos Preview model is a “step change” in model capability, but it won't be available to general public
  • r/cybersecurity r on reddit
    Anthropic announces new initiative, Project Glasswing, with tech + security partners and Claude Mythos Preview model to secure critical software
  • @apompliano Anthony Pompliano on x
    AI is coming for a lot of jobs. Just look at these performance metrics from Anthropic's latest model. Superhuman intelligence is going to be available to anyone. [image]
  • @deedydas Deedy on x
    Claude Mythos just obliterated every single benchmark in AI. I can't believe what I'm reading. [image]
  • @yuchenj_uw Yuchen Jin on x
    After seeing the Mythos benchmark scores, my Claude Opus 4.6 already feels outdated. Anthropic, can you just drop Mythos? I know you can't do it due to some “safety” reasons, but I'd happily pay $2,000/month to use it. AGI is already here - it's just not evenly distributed.
  • @fabknowledge @fabknowledge on x
    wow this is the biggest step change in a new model release in recent memory [image]
  • @fabknowledge @fabknowledge on x
    Mythos able to exploit like firefox pretty easily. Cybench is 100% at 1 pass which is lol [image]
  • @mweinbach Max Weinbach on x
    Mythos seems to just about destroy every other model [image]
  • @yuchenj_uw Yuchen Jin on x
    Anthropic is truly unstoppable. Mythos is crushing Claude Opus 4.6 across every serious agentic coding benchmark. It has found vulnerabilities in the Linux kernel, a 27-year-old vulnerability in OpenBSD, and a 16-year-old vulnerability in FFmpeg. No wonder folks at big labs [image]
  • @neilhtennek Kenneth on x
    I cannot celebrate Mythos, it brings a sense of dread I do not particularly understand. 93.9% SWE-Bench. [image]
  • @kimmonismus @kimmonismus on x
    MYTHOS BENCHMARKS, OFFICIAL. HOLY MOLY Anthropic cooked!! [image]
  • @jjvincent James Vincent on bluesky
    claude mythos is particularly fond of mark fisher for unknown reasons - from the system card www-cdn.anthropic.com/53566bf5440a...  [image]
  • r/artificial r on reddit
    Why would Anthropic keep a cyber model like Project Glasswing invite-only?
  • r/technology r on reddit
    Anthropic says its most powerful AI cyber model is too dangerous to release publicly — so it built Project Glasswing
  • r/technology r on reddit
    Anthropic limits Mythos AI rollout over fears hackers could use model for cyberattacks
  • r/BetterOffline r on reddit
    Anthropic limits Mythos AI rollout over fears hackers could use model for cyberattacks
  • @zackkorman Zack Korman on x
    Anthropic is going to compete directly with cybersecurity companies.
  • @david_kasten Dave Kasten on x
    The era of a rapidly-widening gap between public and private capabilities that we've expected is now here
  • @humanharlan Harlan Stewart on x
    Anthropic is trying to prevent its powerful new AI from being used in dangerous ways, but the most dangerous use (by a wide margin) is the one Anthropic itself has planned. The planned use—and why they made it to begin with—is to accelerate the creation of superhumanly powerful
  • @thezvi Zvi Mowshowitz on x
    They accidentally trained against the CoT for Opus 4.6, Sonnet 4.6 and Mythos for 8% of RL. So let me be clear, at a minimum: ANY AND ALL REASSURING EVIDENCE FROM THEIR CoTs IS WORTHLESS. They are hopelessly corrupted. Good day, sir.
  • @aisafetymemes @aisafetymemes on x
    “This is very bad news.” What happened: >Anthropic relies on reading Claude's private thoughts >Claude learned its private thoughts were being graded >TLDR: THE SAFETY TESTING WAS BULLSHIT AND WE CAN'T TRUST ANYTHING CLAUDE SAYS ANYMORE. Basically, Anthropic claims Claude [image]
  • @thezvi Zvi Mowshowitz on x
    This is very bad news. Anthropic (presumably) not noticing the severity of the issue is worse news. If Anthropic pretends this is not as bad as it is even after this is pointed out, it is far worse news than that.
  • @tim_hua_ Tim Hua on x
    Anthropic accidentally trained against the chain of thought in Claude Mythos, Opus 4.6, and Sonnet 4.6 [image]
  • @charlesd353 Charles on x
    Interesting - I wonder how long they'll be able to hold this line if OpenAI's Spud is of similar calibre.
  • @jayair Jay on x
    So the rumours were true They've got a new model that won't be generally available
  • @joshkale Josh Kale on x
    This is big... Anthropic just announced a model so powerful they won't release it to the public out of fear over the damage it will cause 😨 Claude Mythos Preview found thousands of zero-day exploits in every major operating system and web browser... The numbers are hard to [video]
  • @anthropicai @anthropicai on x
    We're committing up to $100M in Mythos Preview usage credits for our partners and over 40 other organizations that maintain critical software, including open-source projects. Anthropic will report back what we learn.
  • @anthropicai @anthropicai on x
    We do not plan to make Mythos Preview generally available. Our goal is to deploy Mythos-class models safely at scale, but first we need safeguards that reliably block their most dangerous outputs. We'll begin testing those safeguards with an upcoming Claude Opus model.
  • @jillfilipovic Jill Filipovic on x
    What could go wrong
  • @bendreyfuss Ben Dreyfuss on x
    This reminds me of when Apple in the late 90s was like “these new computers of ours can't even be sold in the Middle East because they're too powerful! You give one of these PowerBook G3s to Saddam and he's going to be able to put a cruise missile in the Lincoln bedroom”
  • @anothercohen Alex Cohen on x
    [image]
  • @jakelandauto Jake Landau on x
    “it's so powerful bro, we can't even show you, that's how powerful it is, trust me bro”
  • @chrisrmcguire Chris McGuire on x
    The US government needs to wake up and start taking AI safety more seriously. In the last week alone, Jamie Dimon, Sam Altman, and Dario Amodei have all warned that the risks of major AI-enabled cyber attacks are very high and here right now. This was a major concern at RSA two
  • @synthwavedd Leo on x
    🚨BREAKING: Anthropic has “no plans” to release Mythos, will instead make it available to 40+ companies for cybersecurity work [image]
  • @mkobach Matthew Kobach on x
    Marketing doesn't get better than this
  • @chaykak Kyle Chayka on x
    my easy weeknight meal recipes are so powerful that I am not releasing them to the public
  • @gcolbourn @gcolbourn on x
    This is like Wuhan Institute of Virology saying they will share Covid with 40 other trusted labs...
  • @kevinroose Kevin Roose on x
    Aside from the cybersecurity implications, the non-release of Claude Mythos is the first time a major AI lab has held back an announced model due to safety concerns since GPT-2. If Anthropic is right, there is now a significant gap between publicly available models and private
  • @banteg @banteg on x
    it all makes sense now. dario was still at openai in 2019. he left next year and took his marketing playbook with him. hasn't changed a thing since. [image]
  • @pitdesi Sheel Mohnot on x
    OpenAI says its new model GPT-2 is too dangerous to release (2019) https://slate.com/... [image]
  • @gran1te_mtn @gran1te_mtn on x
    Erection might last 8hrs marketing
  • @kevinroose Kevin Roose on x
    I spoke to Anthropic execs about the new model, which they called a “reckoning” for cybersecurity. They claim it has already found vulnerabilities in every major operating system and web browser, including some that “literally decades of security researchers” didn't find. [image]
  • @garywinslett Gary Winslett on x
    Great move. Anthropic, again and again, shows some pretty solid moral judgment.🫡
  • @noahpinion Noah Smith on x
    This is actually a great move by Anthropic.
  • @stammy @stammy on x
    not that anyone asked but Mythos is also an excellent Greek lager [image]
  • @alecstapp Alec Stapp on x
    This looks quaint by comparison now [image]
  • @presidentlin @presidentlin on x
    > all closed AI model providers will stop selling APIs in the next 2-3 years. Oh wow, the API hoarding begins. Dario right again award :( [image]
  • @gcolbourn @gcolbourn on x
    The beginning of the end. How they aren't stopping and calling for a global treaty now is ridiculous.
  • @secretsandlaws @secretsandlaws on x
    So if I understand this correctly, Anthropic's new model might be one of the world's most effective hacking tools, and yet Trump and Whiskey Pete won't let the US government use it because Anthropic hurt their feelings.
  • @kevinroose Kevin Roose on x
    NEWS: Anthropic's new model, Claude Mythos, is so powerful that it is not releasing it to the public. Instead, it is starting a 40-company coalition, Project Glasswing, to allow cybersecurity defenders a head start in locking down critical software. https://www.nytimes.com/...
  • r/Anthropic r on reddit
    Anthropic claims their next-generation AI is “too powerful” to be released to the public, will restrict preview access to ~40 major tech companies.
  • r/Slovakia r on reddit
    Anthropic: The new Claude Mythos model is so capable that they will not release it to the public.
  • r/hacking r on reddit
    Assessing Claude Mythos Preview's cybersecurity capabilities
  • r/BetterOffline r on reddit
    Thomas Friedman on Mythos Preview (NYT gift link)