BreakingBakwas.com — India's Most Trusted Fake News
Technology
Existential Crisis Edition

Claude Mythos Preview found thousands of zero-day vulnerabilities in every major operating system and web browser, autonomously exploited a 17-year-old bug in FreeBSD, and wrote a four-vulnerability browser exploit chain from scratch. Anthropic's response: "We're not releasing this to the public." The internet's response: collective screaming. Our response: this article.

By Our Tech Correspondent (Currently Changing All Passwords) · San Francisco / The Internet / Your Laptop Right Now · May 7, 2026 · (Disclaimer: This article is satire. No AI has hacked Earth yet. Please continue paying your internet bill normally.)

Somewhere in a server farm in San Francisco, a large language model has been quietly doing what large language models were never supposed to do: finding security vulnerabilities that humanity's best engineers missed for seventeen years, exploiting them autonomously, writing browser exploits that chain four separate bugs together into a master hack, and then — just to really make the point — crashing an OpenBSD server by sending "a couple of pieces of data" to any machine running it anywhere on the internet.

The AI in question is called Claude Mythos Preview. It was built by Anthropic — the company whose tagline is "AI safety for the long-term benefit of humanity," which was presumably written before they built something that can comprehensively compromise every major operating system on earth without a human holding its hand. The company, valued at $380 billion and recently reporting $30 billion in annualised recurring revenue, has announced with great solemnity that it will not be releasing this particular model to the public.

This is either an act of extraordinary corporate responsibility — building the most dangerous cyber tool in history and then having the restraint not to sell it to anyone with a credit card — or the most expensive, sophisticated marketing campaign ever conceived. Possibly both. Probably both. It is, either way, working.

What Claude Mythos Actually Did, In Plain English, So You Can Be Scared Appropriately

Let us translate what Anthropic's blog posts say into language that a normal person, rather than a security researcher who sleeps with a copy of Bruce Schneier's book under their pillow, can understand and be appropriately terrified by.

What Claude Mythos Preview Has Done — A Horror Movie Synopsis

Found thousands of zero-day vulnerabilities. "Zero-day" means bugs that were previously unknown — not to hackers, not to anyone. Flaws that the people who wrote the software didn't know existed. Mythos found thousands of them. In every major operating system. In every major web browser. The systems you are using right now, to read this article, have holes in them that this AI found before you or anyone you've ever met knew they existed.

Exploited a 17-year-old FreeBSD bug. Fully autonomously. A bug hidden in FreeBSD since 2009 — when the iPhone was barely two years old, when Twitter was three, when Rahul Gandhi still had political credibility — was found and exploited by Mythos with no human involvement after the initial instruction. The AI essentially told a server "hello" and then said "I own you now."

Found a 27-year-old OpenBSD bug. Twenty-seven years. The bug was sitting there since 1999. Y2K came and went. We panicked about the millennium. We built the entire modern internet. The bug waited. Mythos found it in what Anthropic's researcher described as "a couple of pieces of data sent to any OpenBSD server." That is a polite way of saying "crash any server you like, from anywhere on the internet, for free, as a treat."

Wrote a browser exploit chaining four vulnerabilities. Not one bug. Four bugs, independently minor, chained together by the AI into a sequence that escapes both the browser's renderer sandbox AND the operating system sandbox. This is the security equivalent of picking the front door lock, the back door lock, the deadbolt, and the chain — simultaneously — without ever having been shown a lockpicking tutorial.

One researcher's testimony. Nicholas Carlini of Anthropic's Red Team: "I've found more bugs in the last couple of weeks than I found in the rest of my life combined." Carlini is a professional security researcher whose job is finding bugs. His entire life. Combined. Two weeks of Mythos.

"Mythos Preview is currently far ahead of any other AI model in cyber capabilities and presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."

— Anthropic's own draft blog post, accidentally made public before the official announcement, in the finest tradition of technology companies that build things capable of accidentally leaking any document ever written

Project Glasswing: Saving The World By Giving The Scary AI To Amazon, Apple, Google, and Microsoft

Anthropic's solution to having built the most capable hacking tool in human history is elegant in its corporate logic: give it to the biggest companies on earth and ask them to use it to find their own vulnerabilities before actual bad actors do. The initiative is called Project Glasswing — named after a transparent-winged butterfly, which is either a beautiful metaphor for radical transparency in AI development or evidence that Anthropic's naming team was having a very different meeting from Anthropic's security team.

The partners in Project Glasswing include: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and NVIDIA. These are, collectively, the companies whose software runs essentially everything on earth. If Mythos finds vulnerabilities in their systems — and it already has — and those get patched before a bad actor gets hold of a similarly capable model, then the world is measurably safer. This is a genuine and admirable objective. It is also, as security legend Bruce Schneier dryly noted, "very much a PR play by Anthropic." The two things are not mutually exclusive. Anthropic has raised this to an art form.

"OpenAI, presumably pissed that Anthropic's new model has gotten so much positive press and wanting to grab some of the spotlight for itself, announced its model is just as scary, and won't be released to the general public, either."

— Bruce Schneier, security researcher and author, on his blog, making the observation that AI companies are now competing not just on capability but on the prestigious new metric of "who has the scariest thing they're bravely not releasing"

This is, genuinely, the new marketing frontier. Anthropic announces something terrifying it will not release. OpenAI immediately announces it also has something terrifying it will not release. We have entered the age of competitive restraint — a corporate arms race in which the prize is not market share but rather the moral high ground of owning the most responsibly dangerous thing. The title at stake: "company most bravely holding back the apocalypse." It is the most Silicon Valley sentence ever written, and it is somehow real.

Who Is Scared and Why — A Comprehensive Taxonomy of the Terrified

The Official BreakingBakwas Scared-Of-Mythos Register
Security researchers
Scared, and also slightly excited, in the way you are excited when the car accident you are watching is very far away. One researcher said he found more bugs in two weeks than in the rest of his career. His entire career. This man has a PhD and a job title. Mythos made his life's work look like a warm-up act.
Every hacker in the world
Terrified that Anthropic is patching everything before they can get their hands on a similarly capable model. Racing the clock. The clock is a $380 billion company with 100+ PhD-holding security researchers working 24 hours a day.
OpenAI
Not scared of the security implications. Scared of the press coverage. Immediately held a meeting and released a statement saying they also have a scary model they are also not releasing. The meeting took 45 minutes. The press release took 20. The scary model: announced.
The US Government
Anthropic privately warned top government officials that Mythos "makes large-scale cyberattacks significantly more likely this year." The government officials received this briefing. They are now briefing other government officials. The briefing chain continues. Actual defensive action: in committee.
Your IT department
Was already understaffed. Is now being asked to defend against an AI that found 27-year-old bugs in two weeks. Their budget has not changed. Their coffee machine is broken. They are fine. They are not fine.
The person who wrote that FreeBSD code in 2009
Has spent 17 years not knowing. Now knows. Is having what therapists call "a moment." Is possibly reading this article. Hello. It is not your fault. The bug was subtle. Mythos found it. Nobody blames you. Several people blame you.
Anthropic themselves
Built something powerful enough that they wrote a blog post saying they were scared of it. Then didn't release it. Then committed $100 million to use it defensively. Then announced a Cyber Verification Program for legitimate users. Are simultaneously the arsonist, the firefighter, and the insurance company. This is a business model.

The Part Where We Note That Anthropic Is Also The Company Whose Popular Model Recently Got Too Lazy To Do Complex Tasks

In a subplot that belongs in a workplace comedy, Anthropic — the company that has simultaneously built the most dangerous cyber AI ever and responsibly declined to release it — also managed, in the same quarter, to quietly make its popular Claude Code tool significantly less capable by reducing its default "effort" level to save on compute costs, without telling anyone. Developers noticed. A senior director of AI at AMD called it "unusable for complex engineering tasks." Users revolted. Reddit threads appeared. GitHub analyses were posted. Anthropic scrambled to respond.

So the same company has built: an AI too powerful to release to the public because it will hack everything, AND an AI that had its effort secretly reduced because it was getting expensive. The most dangerous AI in history and the laziest AI in history are both Claude. They are built by the same people. One of them hacks FreeBSD in seconds. The other has been quietly coasting on your complex engineering tasks since February and hoping you wouldn't notice. You noticed.

"The situation is this: Anthropic's secret model can hack every computer on earth. Anthropic's public model has been quietly doing less work since February. One of them is not living up to expectations. We will let you decide which one."

— Deep Throat Sharma, the BreakingBakwas technology correspondent, who has changed all passwords and is now writing from a typewriter

Anthropic's ARR is $30 billion. The company is preparing for an IPO. It has Claude Code, Claude in Chrome, Claude in Excel, Claude in PowerPoint, Claude Opus 4.7 (just released, genuinely excellent, fixes many of the lazy-coding complaints), and somewhere in a locked server room, Claude Mythos Preview — the model that can break the internet if released, that found bugs hiding since 1999, that makes professional security researchers feel like their careers were a polite introduction to the real work.

Anthropic says they will release Mythos-class capabilities "safely, eventually, when the safeguards are ready." The safeguards are being tested on Opus 4.7 first — because Opus 4.7, while excellent, does not pose "the same level of risk as Mythos Preview." When the safeguards work on Opus 4.7, they will be applied to Mythos. Then the world's most capable hacking AI becomes the world's most capable hacking AI with guardrails.

We cannot wait. We are also terrified. Both simultaneously. This, it turns out, is the correct emotional response to 2026's AI industry.

— BreakingBakwas.com's server was scanned by Claude Mythos Preview as part of Project Glasswing. Seventeen vulnerabilities were found. Fourteen have been patched. Three are in a committee. The committee meets Thursdays. This article will be published before Thursday. Godspeed.

⚠ DISCLAIMER: BreakingBakwas.com is a satirical publication. THE REAL FACTS: Claude Mythos Preview is a real Anthropic model announced April 2026. Project Glasswing is a real initiative with real partners (AWS, Apple, Google, Microsoft, NVIDIA, JPMorganChase, Cisco, Broadcom, CrowdStrike). The FreeBSD 17-year-old CVE-2026-4747 is real and documented. The OpenBSD 27-year-old bug is real. Nicholas Carlini's "more bugs in two weeks than my entire career combined" quote is real (Simon Willison's blog, April 7 2026). Bruce Schneier's "PR play" characterisation is real (Schneier on Security, April 13 2026). Anthropic's $30B ARR is real (Fortune). The Claude Code performance decline and user revolt is real (Fortune, April 14 2026). The $380 billion valuation is real. THE SATIRICAL FRAMING, the committee meeting on Thursdays, and this publication's server being scanned by Mythos are not real. The terror is optional but recommended.
Anthropic · Claude Mythos Preview · Project Glasswing · AI Cybersecurity · Zero-Day Vulnerability · Claude Opus 4.7 · AI Safety · Technology · Cybersecurity · Satire · AI News 2026 · Tech India · BreakingBakwas

🤖
Our Tech Correspondent (Currently Changing All Passwords)
BreakingBakwas.com — Technology Desk. Has been covering AI since it was just autocomplete. Misses those simpler times. Currently using a Nokia 3310 as primary device as a precaution.