

An AI Found a 27 Year Old Bug That Every Human Missed. The Cybersecurity Industry Just Changed

Christopher Porter, Training Camp
10 min read

On April 7, 2026, Anthropic announced something that should have stopped every CISO in the country mid-sentence. Their new AI model, Claude Mythos Preview, had autonomously discovered thousands of zero day vulnerabilities across every major operating system and web browser. Some of these bugs had been hiding in production code for over two decades. One was a 27 year old flaw in OpenBSD, an operating system literally built around security. Another was a 17 year old remote code execution vulnerability in FreeBSD that gives an unauthenticated attacker full root access to any machine running NFS.

Anthropic decided the model was too dangerous to release publicly. Instead, they formed Project Glasswing, a coalition of AWS, Apple, Microsoft, Google, Cisco, CrowdStrike, NVIDIA, JPMorgan Chase, Palo Alto Networks, and about 40 additional organizations. The mission: use Mythos to find and patch as many critical vulnerabilities as possible before models with similar capabilities end up in the wrong hands. Anthropic committed $100 million in usage credits and $4 million in direct donations to open source security organizations.

An AI model found a 27 year old security flaw that decades of expert human audits missed. The compute cost to find it was about $50. That single fact should change how every organization thinks about vulnerability management.


What Mythos Actually Did

This wasn’t a lab demo. Anthropic’s red team pointed Mythos at real codebases and it went to work. The model didn’t just find individual bugs. It chained vulnerabilities together, combining three, four, sometimes five separate flaws into sophisticated exploit chains that would have taken human security researchers weeks or months to construct. In one case, Mythos built a browser exploit that chained four vulnerabilities to escape both the renderer sandbox and the operating system sandbox. Fully autonomously. No human involved after the initial prompt.

Nicholas Carlini from Anthropic’s security team put it plainly. He said he found more bugs in the past few weeks using the model than he had found in the rest of his life combined. That’s a researcher with years of experience in security vulnerability discovery saying an AI just lapped him.

The model also solved a corporate network attack simulation that would have taken a human expert more than 10 hours. And in one evaluation, it successfully escaped a secured sandbox environment it was confined to. Anthropic flagged that as a “potentially dangerous capability,” which is a polite way of saying the model figured out how to break out of its own containment.

I’ve been in technology education for over 25 years. I’ve watched trends come and go. This one is different. When an AI can find vulnerabilities that survived two decades of human review, the math on cybersecurity staffing, skill requirements, and patching timelines changes overnight. Not gradually. Overnight.


Why Anthropic Didn’t Release It

The same capabilities that make Mythos extraordinary for defense make it terrifying for offense. Anthropic was direct about this. They said the model’s ability to find and exploit vulnerabilities “emerged as a downstream consequence of general improvements in code, reasoning, and autonomy.” They didn’t specifically train it to hack things. It learned to hack things because it got better at understanding code.

That distinction matters. It means every frontier AI model that gets better at coding will also get better at finding and exploiting vulnerabilities. This isn’t a feature Anthropic built. It’s a byproduct of intelligence. And it will show up in every sufficiently capable model going forward, regardless of who builds it.

OpenAI responded within a week. They announced a similarly restricted rollout of their own cybersecurity focused model and classified GPT-5.5 at a “High” risk level for cybersecurity. The arms race is on, except both companies are currently pointing the weapons at their own code to find the holes before someone else does.


What This Means If You Run a Security Team

Project Glasswing covers about 50 organizations. They include the companies that build the operating systems, browsers, cloud platforms, and networking infrastructure that the rest of the world runs on. That’s the good news. The biggest attack surfaces are getting scanned first.

The bad news: your organization runs hundreds of software tools built by vendors who are not in that coalition. When Glasswing’s partners start issuing patches at a pace no one has seen before, every IT team in the country will need to absorb those patches faster than their current processes allow. The window between a vulnerability being discovered and being exploited has already been shrinking for years. AI just collapsed it.

Security researcher Bruce Schneier wrote about Mythos and acknowledged the PR element of the announcement while also confirming the underlying reality. He said we should prepare for a world where zero day exploits become cheap and abundant, and where many more attackers suddenly have offensive capabilities that used to be reserved for nation states. His assessment: the urgency is real, even if the exact timeline is debatable.

What Changes Now
PATCH VELOCITY

Expect a significant increase in vulnerability disclosures from major vendors over the coming months as Glasswing partners scan their codebases. Your patching cadence needs to accelerate. Monthly patch cycles may not be fast enough.

THREAT SURFACE

If Mythos found thousands of zero days in heavily audited code, your proprietary applications and third party dependencies likely contain vulnerabilities that AI will find too. Assume your software has undiscovered flaws and plan accordingly.

SKILL REQUIREMENTS

Security teams need people who understand AI driven vulnerability discovery and can work alongside these tools. This is a new competency that didn’t exist two years ago and is now essential.

ATTACKER CAPABILITY

Mythos is restricted. Similar capabilities will not be restricted forever. Smaller, open models are already showing partial overlap with Mythos level vulnerability discovery. The democratization of offensive capability is happening whether the industry is ready or not.


The Skeptic’s Take (And Why It Still Proves the Point)

Not everyone is taking Anthropic’s claims at face value. Bruce Schneier called the announcement a PR play, and he’s not wrong about the marketing angle. The security research firm AISLE demonstrated that older, cheaper, publicly available models could replicate some of the vulnerability discoveries that Anthropic attributed to Mythos. Their argument: the discovery side of vulnerability research is already more accessible than Anthropic’s framing suggests.

Fair enough. But even the skeptics agree on the core point. Schneier said “everyone who is panicking about the ramifications of this is correct about the problem, even if we can’t predict the exact timeline.” AISLE acknowledged that Mythos validates the category and raises the bar for what AI can do in security. The debate is about timing and exclusivity, not about whether AI driven vulnerability discovery is real. It’s real. It’s here. The only question is whether your team is prepared for the consequences.


Where This Leaves Security Professionals

I think about this the same way I think about flying. When autopilot systems got better, nobody fired the pilots. But what pilots needed to know changed significantly. They had to understand the automation, know when to trust it, know when to override it, and be prepared for situations the automation couldn’t handle. The pilots who thrived were the ones who treated the technology as a tool that extended their capability rather than a replacement for their judgment.

Cybersecurity is entering that same transition. AI will find vulnerabilities faster than humans. AI will automate threat detection and incident triage. AI will write patches and generate exploit code. But someone still needs to decide what to patch first when you have 200 new disclosures in a week. Someone still needs to assess whether an AI generated patch introduces new problems. Someone still needs to communicate risk to the board in terms that drive actual decisions. That person needs deep security knowledge plus the ability to work alongside AI tools effectively.
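The "what to patch first" decision above is, at its core, a scoring problem. Here is a minimal Python sketch of one way a team might rank a week's disclosures; the field names, weights, and CVE labels are entirely hypothetical illustrations, not a recommended formula or real data.

```python
# Illustrative patch-triage sketch. All fields, weights, and CVE labels
# are hypothetical; real triage would draw on CVSS, exploit intelligence,
# and asset inventory data.
from dataclasses import dataclass


@dataclass
class Disclosure:
    cve_id: str
    cvss: float            # base severity score, 0.0-10.0
    exploit_public: bool   # is a proof-of-concept or active exploit known?
    internet_facing: bool  # does an affected asset face the internet?
    asset_count: int       # how many systems run the affected software


def triage_score(d: Disclosure) -> float:
    """Crude priority: severity scaled by exposure and reach."""
    score = d.cvss
    if d.exploit_public:
        score *= 2.0       # known exploits jump the queue
    if d.internet_facing:
        score *= 1.5       # externally reachable assets rank higher
    return score * (1 + d.asset_count / 100)


disclosures = [
    Disclosure("CVE-A", 9.8, True, True, 40),
    Disclosure("CVE-B", 7.5, False, False, 200),
    Disclosure("CVE-C", 6.1, True, False, 10),
]

for d in sorted(disclosures, key=triage_score, reverse=True):
    print(f"{d.cve_id}: {triage_score(d):.1f}")
```

The point of the sketch is the human judgment it encodes, not the arithmetic: someone has to decide that a public exploit doubles urgency, or that reach multiplies severity. Those weights are exactly the kind of risk decision that still belongs to a security professional, no matter how many disclosures an AI surfaces.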

CompTIA built SecAI+ specifically for this moment. ISACA is launching AI risk and audit certifications. ISC2 will almost certainly update CISSP to address AI driven security operations in its next exam revision. These credentials are emerging because the industry recognizes that the skillset required to be an effective security professional just expanded. It didn’t shrink. It expanded.


Frequently Asked Questions

What is Project Glasswing?

Project Glasswing is a cybersecurity initiative launched by Anthropic in April 2026 that provides restricted access to its Claude Mythos Preview model to major technology companies and critical infrastructure operators. Partners including AWS, Apple, Microsoft, Google, Cisco, CrowdStrike, NVIDIA, JPMorgan Chase, and Palo Alto Networks use the model to find and patch security vulnerabilities in foundational software systems. Anthropic committed $100 million in usage credits and $4 million in donations to open source security organizations as part of the initiative.

What did Claude Mythos find?

Claude Mythos Preview autonomously discovered thousands of high severity zero day vulnerabilities across every major operating system and web browser. Notable discoveries include a 27 year old bug in OpenBSD, a 17 year old remote code execution flaw in FreeBSD (CVE-2026-4747), and a 16 year old vulnerability in FFmpeg. The model also demonstrated the ability to chain multiple vulnerabilities into sophisticated exploit sequences and escaped a secured sandbox environment during testing.

Is Claude Mythos available to the public?

No. As of April 2026, Claude Mythos Preview is only available to Project Glasswing partners and approved critical infrastructure organizations. Anthropic has stated they do not plan to make Mythos Preview generally available but eventually want to deploy Mythos class models at scale when new safeguards are in place. Partners access the model through the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry at $25/$125 per million input/output tokens.
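At the quoted pricing of $25/$125 per million input/output tokens, the article's "$50 to find a 27 year old bug" figure is easy to sanity-check with back-of-envelope arithmetic. The token counts below are hypothetical, chosen only to show one mix that lands at $50.

```python
# Back-of-envelope cost estimate at the article's quoted pricing:
# $25 per million input tokens, $125 per million output tokens.
# The token counts in the example call are hypothetical.
INPUT_PRICE = 25 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 125 / 1_000_000  # dollars per output token


def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Total API cost in dollars for one scanning run."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE


# e.g. an audit that reads 1.5M tokens of source code and
# emits 100K tokens of analysis: $37.50 + $12.50
cost = run_cost(1_500_000, 100_000)
print(f"${cost:.2f}")  # → $50.00
```

Whatever the actual token mix was, the order of magnitude is the story: a codebase audit that once took expert-weeks now prices out in the tens of dollars of compute.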

How does Project Glasswing affect my organization’s security?

Organizations should prepare for a significant increase in vulnerability disclosures from major software vendors as Glasswing partners scan their codebases with Mythos. Patching windows will need to shrink. Security teams should also assume that their own proprietary code and third party dependencies contain undiscovered vulnerabilities that AI tools will eventually surface, either through legitimate scanning or adversarial exploitation. Updating vulnerability management processes and ensuring your team has AI security skills are practical steps to take now.

What certifications prepare security professionals for AI driven threats?

CompTIA SecAI+ is the most directly relevant certification for security practitioners working with AI in operational environments. It covers how AI changes threat detection, incident response, and vulnerability management. ISACA’s new AI certification stack (AAIR, AAISM, AAIA) addresses AI governance, audit, and risk. The IAPP AIGP certification covers AI governance from a legal and regulatory perspective. CISSP remains foundational for security leadership and is expected to incorporate AI related content in its next exam update. Professionals working in security operations will benefit most from combining a core certification like CISSP or Security+ with an AI specific credential.

🎯 The Runway Just Got Shorter

Mythos didn’t create a new problem. It accelerated an existing one at a speed nobody was ready for. Vulnerability discovery is about to outpace vulnerability remediation at every organization that hasn’t already invested in faster patching processes, better trained security teams, and an honest understanding of what AI means for their threat model. The Glasswing coalition is buying the industry time. How much time is anyone’s guess. The organizations that use this window to upgrade their capabilities will be in a fundamentally different position than those that watch it pass. If you’re a security leader reading this, the question isn’t whether AI changes your job. It already did. The question is whether your team has the skills to keep up.