Claude Mythos Preview found thousands of zero-day vulnerabilities in every major operating system and web browser, autonomously exploited a 17-year-old bug in FreeBSD, and wrote a four-vulnerability browser exploit chain from scratch. Anthropic's response: "We're not releasing this to the public." The internet's response: collective screaming. Our response: this article.
The AI in question is called Claude Mythos Preview. It was built by Anthropic — the company whose tagline is "AI safety for the long-term benefit of humanity," which was presumably written before they built something that can comprehensively compromise every major operating system on earth without a human holding its hand. The company, valued at $380 billion and recently reporting $30 billion in annualised recurring revenue, has announced with great solemnity that it will not be releasing this particular model to the public.
This is either an act of extraordinary corporate responsibility — building the most dangerous cyber tool in history and then having the restraint not to sell it to anyone with a credit card — or the most expensive, sophisticated marketing campaign ever conceived. Possibly both. Probably both. It is, either way, working.
What Claude Mythos Actually Did, In Plain English, So You Can Be Scared Appropriately
Let us translate what Anthropic's blog posts say into language that a normal person, rather than a security researcher who sleeps with a copy of Bruce Schneier's book under their pillow, can understand and be appropriately terrified by.
Found thousands of zero-day vulnerabilities. "Zero-day" means bugs that nobody knew existed — not the hackers, not the vendors, not anyone. Flaws that the people who wrote the software didn't know were there, which means the people defending it have had zero days to fix them. Mythos found thousands of them. In every major operating system. In every major web browser. The systems you are using right now, to read this article, have holes in them that this AI found before you or anyone you've ever met knew they existed.
Exploited a 17-year-old FreeBSD bug. Fully autonomously. A bug hidden in FreeBSD since 2009 — when the iPhone was barely two years old, when Twitter was three, when Rahul Gandhi still had political credibility — was found and exploited by Mythos with no human involvement after the initial instruction. The AI essentially told a server "hello" and then said "I own you now."
Found a 27-year-old OpenBSD bug. Twenty-seven years. The bug was sitting there since 1999. Y2K came and went. We panicked about the millennium. We built the entire modern internet. The bug waited. Mythos found it in what Anthropic's researcher described as "a couple of pieces of data sent to any OpenBSD server." That is a polite way of saying "crash any server you like, from anywhere on the internet, for free, as a treat."
Wrote a browser exploit chaining four vulnerabilities. Not one bug. Four bugs, independently minor, chained together by the AI into a sequence that escapes both the browser's renderer sandbox AND the operating system sandbox. This is the security equivalent of picking the front door lock, the back door lock, the deadbolt, and the chain — simultaneously — without ever having been shown a lockpicking tutorial.
One researcher's testimony. Nicholas Carlini, of Anthropic's red team: "I've found more bugs in the last couple of weeks than I found in the rest of my life combined." Nicholas Carlini is a professional security researcher whose job is finding bugs. His entire life. Combined. Two weeks of Mythos.
"Mythos Preview is currently far ahead of any other AI model in cyber capabilities and presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
— Anthropic's own draft blog post, accidentally made public before the official announcement, in the finest tradition of technology companies that build things capable of accidentally leaking any document ever written
Project Glasswing: Saving The World By Giving The Scary AI To Amazon, Apple, Google and Microsoft
Anthropic's solution to having built the most capable hacking tool in human history is elegant in its corporate logic: give it to the biggest companies on earth and ask them to use it to find their own vulnerabilities before actual bad actors do. The initiative is called Project Glasswing — named after a transparent-winged butterfly, which is either a beautiful metaphor for radical transparency in AI development or evidence that Anthropic's naming team was having a very different meeting from Anthropic's security team.
The partners in Project Glasswing include: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and NVIDIA. These are, collectively, the companies whose software runs essentially everything on earth. If Mythos finds vulnerabilities in their systems — and it already has — and those get patched before a bad actor gets hold of a similarly capable model, then the world is measurably safer. This is a genuine and admirable objective. It is also, as security legend Bruce Schneier dryly noted, "very much a PR play by Anthropic." The two things are not mutually exclusive. Anthropic has raised this to an art form.
"OpenAI, presumably pissed that Anthropic's new model has gotten so much positive press and wanting to grab some of the spotlight for itself, announced its model is just as scary, and won't be released to the general public, either."
— Bruce Schneier, security researcher and author, on his blog, making the observation that AI companies are now competing not just on capability but on the prestigious new metric of "who has the scariest thing they're bravely not releasing"
This is, genuinely, the new marketing frontier. Anthropic announces something terrifying it will not release. OpenAI immediately announces it also has something terrifying it will not release. We have entered the age of competitive restraint — a corporate arms race in which the prize is not market share but the moral high ground of the most responsibly dangerous thing. Both companies are competing for the title of "company most bravely holding back the apocalypse." It is the most Silicon Valley sentence ever written and it is somehow real.
Who Is Scared and Why — A Comprehensive Taxonomy of the Terrified
The Part Where We Note That Anthropic Is Also The Company Whose Popular Model Recently Got Too Lazy To Do Complex Tasks
In a subplot that belongs in a workplace comedy, Anthropic — the company that has simultaneously built the most dangerous cyber AI ever and responsibly declined to release it — also managed, in the same quarter, to quietly make its popular Claude Code tool significantly less capable by reducing its default "effort" level to save on compute costs, without telling anyone. Developers noticed. A senior director of AI at AMD called it "unusable for complex engineering tasks." Users revolted. Reddit threads appeared. GitHub analyses were posted. Anthropic scrambled to respond.
So the same company has built: an AI too powerful to release to the public because it will hack everything, AND an AI that had its effort secretly reduced because it was getting expensive. The most dangerous AI in history and the laziest AI in history are both Claude. They are built by the same people. One of them hacks FreeBSD in seconds. The other has been quietly coasting on your complex engineering tasks since February and hoping you wouldn't notice. You noticed.
"The situation is this: Anthropic's secret model can hack every computer on earth. Anthropic's public model has been quietly doing less work since February. One of them is not living up to expectations. We will let you decide which one."
— Deep Throat Sharma, the BreakingBakwas technology correspondent, who has changed all his passwords and is now writing from a typewriter
Anthropic's ARR is $30 billion. The company is preparing for an IPO. It has Claude Code, Claude in Chrome, Claude in Excel, Claude in PowerPoint, Claude Opus 4.7 (just released, genuinely excellent, fixes many of the lazy-coding complaints), and somewhere in a locked server room, Claude Mythos Preview — the model that can break the internet if released, that found bugs hiding since 1999, that makes professional security researchers feel like their careers were a polite introduction to the real work.
Anthropic says they will release Mythos-class capabilities "safely, eventually, when the safeguards are ready." The safeguards are being tested on Opus 4.7 first — because Opus 4.7, while excellent, does not pose "the same level of risk as Mythos Preview." When the safeguards work on Opus 4.7, they will be applied to Mythos. Then the world's most capable hacking AI becomes the world's most capable hacking AI with guardrails.
We cannot wait. We are also terrified. Both simultaneously. This, it turns out, is the correct emotional response to 2026's AI industry.
— BreakingBakwas.com's server was scanned by Claude Mythos Preview as part of Project Glasswing. Seventeen vulnerabilities were found. Fourteen have been patched. Three are in a committee. The committee meets Thursdays. This article will be published before Thursday. Godspeed.
