Project Glasswing and Claude Mythos Changed the Patching Math. Here's What You Do Now.

Anthropic's Mythos AI can find and exploit zero-day vulnerabilities autonomously overnight. Here's what the N-day window collapse means for your patch cadence and vulnerability program.

The announcement came on a Tuesday morning. Anthropic published the technical details of a new model called Claude Mythos Preview alongside the launch of Project Glasswing—an emergency coalition of AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Twelve organizations, many of them direct competitors, joining the same defensive initiative on the same day tells you something about what the research showed.

What the research showed: Mythos Preview found and exploited a 27-year-old vulnerability in OpenBSD, a 16-year-old bug in FFmpeg, and a 17-year-old remote code execution flaw in FreeBSD—fully autonomously, without human steering after an initial prompt. It turned a CVE number and a git commit hash into a working privilege escalation exploit in under half a day for under $1,000 at API pricing. It achieved full exploit chains on every major web browser. The vulnerabilities it identified had survived decades of manual review and, in one case, five million automated fuzzing passes without detection.

Anthropic does not plan to release Mythos Preview to the general public. But the capabilities it demonstrated will proliferate. Competitor labs are building toward the same frontier, and some may not apply the same brakes. The window between now and that moment is exactly what Project Glasswing is trying to use productively.

The question for your security team is not whether this changes things. It already has. The question is whether your vulnerability program is built for the speed this environment now demands.

The N-day window has collapsed

For the past decade, the patching window for most organizations operated on a rough assumption: once a vulnerability is publicly disclosed, you have days to weeks before an attacker can weaponize it. That window gave security teams time to triage, test, schedule maintenance windows, and deploy fixes in a reasonably controlled way.

Mythos Preview eliminates that assumption. In Anthropic's own testing, the model turned a list of CVE identifiers for known Linux kernel vulnerabilities filed in 2024 and 2025 into working exploit code. The process—which would have taken a skilled penetration tester days to weeks per bug—now happens autonomously in hours for under $2,000 per exploit. More than half of the attempts succeeded.

As CrowdStrike's CTO Elia Zaitsev put it in the Glasswing announcement: "The window between a vulnerability being discovered and being exploited by an adversary has collapsed—what once took months now happens in minutes with AI."

That is not hyperbole. It is Anthropic's own published benchmark data, validated by a coalition that includes the world's largest cloud providers, cybersecurity platforms, and financial institutions. When organizations of that caliber treat something as an emergency, the framing is worth taking seriously.

What this means for your current patch backlog

Most vulnerability programs carry a backlog. Findings come in faster than they get fixed, especially at the medium and high severity tiers. That backlog has always represented risk. The Mythos announcement quantifies exactly how much that risk has grown.

Every unpatched N-day in your environment—any publicly disclosed vulnerability with a CVE and an available patch that has not been deployed—is now a potential target for AI-assisted exploit development measured in hours, not weeks. The order of magnitude difference is what matters. A one-week patching window was defensible when exploit development took two weeks. It is much harder to defend when that same process runs overnight.

Forrester analysts reviewing the Glasswing announcement were blunt about the insurance implications: they expect insurers to introduce exclusions that explicitly target AI-discovered vulnerabilities that are not remediated within defined timeframes. "When they do incorporate Mythos verification into insureds' control profiles, repricing will be abrupt, not gradual."

Regulators are similarly recalibrating. The EU AI Act, NIST AI RMF, and SEC cyber rules were written before autonomous zero-day discovery at this scale existed publicly. Mythos effectively resets what "reasonable care" means for vulnerability management. For SOC 2, PCI DSS, and similar compliance frameworks, auditors asking about your patching cadence will have an increasingly clear baseline to compare against.

The specific things Anthropic told defenders to do

Anthropic published a detailed "Suggestions for defenders today" section alongside the Mythos technical analysis. It deserves to be read in full, but the patch-specific guidance is direct:

"This means that software users and administrators will need to drive down the time-to-deploy for security updates, including by tightening the patching enforcement window, enabling auto-update wherever possible, and treating dependency bumps that carry CVE fixes as urgent, rather than routine maintenance."

Three things in that sentence matter for your operations:

Tighten the enforcement window. If your current policy gives teams 30 days for high-severity findings and 90 days for mediums, you need to reassess whether those windows still reflect the actual exploitation timeline. The guidance from Anthropic is explicit: these windows need to come down.

Enable auto-update wherever possible. Patch deployment is often delayed not because security teams are slow to identify the need but because change control processes treat all updates the same way. Security patches that carry CVE fixes are not routine maintenance. They need a faster track.

Treat CVE-carrying dependency bumps as urgent. This is aimed directly at development teams who are managing third-party libraries and packages. A version bump that patches a known vulnerability is not optional housekeeping. It is a time-sensitive remediation.
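As a minimal sketch of what treating CVE-carrying bumps as urgent looks like in tooling terms, the check below flags pinned dependencies whose installed version predates a known fix. The advisory map and package name here are hypothetical placeholders; in practice this data would come from an advisory feed such as OSV or the GitHub Advisory Database.

```python
# Flag pinned dependencies whose installed version carries a known CVE.
# ADVISORIES is an illustrative stand-in for a real advisory feed.

ADVISORIES = {
    # package: (first fixed version, CVE id) -- hypothetical values
    "examplelib": ((2, 4, 1), "CVE-2025-0001"),
}

def parse_pin(line: str):
    """Parse a 'name==x.y.z' requirements pin into (name, version tuple)."""
    name, _, version = line.strip().partition("==")
    return name, tuple(int(p) for p in version.split("."))

def urgent_bumps(requirements: list[str]) -> list[str]:
    """Return upgrades that carry CVE fixes and should jump the queue."""
    urgent = []
    for line in requirements:
        name, version = parse_pin(line)
        if name in ADVISORIES:
            fixed, cve = ADVISORIES[name]
            if version < fixed:
                fixed_str = ".".join(map(str, fixed))
                urgent.append(f"{name}: upgrade to {fixed_str} ({cve})")
    return urgent
```

The point of the sketch is the routing decision: anything this function returns belongs on the security fast track, not in the next scheduled sprint.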

Anthropic also noted that software distributors will need to ship patches faster, and that "out-of-band releases" may need to become standard for significant vulnerabilities rather than reserved for in-the-wild exploits. This is a structural change to the patching process, not a one-time adjustment.

Why your current vulnerability tracking probably is not built for this

Most vulnerability programs fail under this kind of acceleration not for lack of effort from the security team but because of how findings are managed between discovery and closure.

The common failure pattern: a Tenable or Qualys scan produces findings. Those findings get exported to a spreadsheet or ticketing system. Someone manually triages them, roughly by CVSS score. High-severity items get tracked. Medium and low items accumulate. There is no automated assignment, no SLA enforcement, no verification that fixes were actually applied, and no audit trail that proves closure.

That process might have been tolerable when the exploitation window was measured in months. When it is measured in hours, the gaps in that process—manual triage, unassigned findings, no SLA tracking, no rescan-based closure verification—are exactly the gaps that get exploited.

There is also a prioritization problem that the Mythos research highlights directly. A CVSS score alone does not tell you whether a vulnerability is being actively exploited right now. A 7.1-rated finding that appears on CISA's Known Exploited Vulnerabilities list is more urgent than a 9.8-rated finding with no active exploit and no internet-exposed attack surface. Humans cannot efficiently make that calculation at scale across hundreds of findings per scan cycle. AI-driven prioritization can.
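The KEV-versus-CVSS comparison above can be sketched as a simple scoring rule. The weights below are illustrative assumptions, not a published standard; the point is only that a confirmed-exploitation signal should dominate a raw CVSS score.

```python
# Sketch of exploit-aware prioritization: a KEV-listed, internet-exposed
# finding outranks a higher-CVSS finding with no active exploitation.
# The weights are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float            # CVSS base score, 0-10
    on_kev: bool           # on CISA's Known Exploited Vulnerabilities list
    internet_exposed: bool

def priority(f: Finding) -> float:
    """Blend CVSS with exploitation and exposure signals."""
    score = f.cvss
    if f.on_kev:
        score += 10.0      # confirmed active exploitation dominates
    if f.internet_exposed:
        score += 3.0
    return score

def triage(findings: list[Finding]) -> list[Finding]:
    """Order findings by exploit-aware priority, highest first."""
    return sorted(findings, key=priority, reverse=True)
```

Under this rule, the 7.1-rated KEV-listed finding from the example above scores 20.1 and sorts ahead of the 9.8-rated finding with no active exploit.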

Where Scan Ninja AI fits into this response

Project Glasswing's core thesis is that defenders need to adopt AI-powered tooling now—not to wait for Mythos-class models to arrive for defenders, but to build the operational muscle and workflows while the window still exists. Anthropic explicitly called this out: "We believe that starting early, such as by designing the appropriate scaffolds and procedures with current models, will be valuable preparation."

For most security teams that translates to one concrete operational question: do you have a vulnerability program that can actually run at the speed this environment now requires?

Scan Ninja AI is built for exactly this operational gap. When Tenable findings come in, they are automatically imported, deduplicated, and enriched with CISA KEV data, active exploit availability, and asset criticality context. Every finding is scored by actual exploitability and business impact—not CVSS alone—and assigned to an owner with an enforceable SLA. Critical vulnerabilities: 7-day window. High: 30 days. Medium: 90 days. The timer starts the moment the finding is assigned, not when someone eventually notices it.
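The mechanics of severity-tiered SLA timers are simple enough to sketch. This is a minimal illustration of the 7/30/90-day tiers described above, not Scan Ninja AI's actual implementation; the key design choice it captures is that the clock starts at assignment.

```python
# Minimal sketch of severity-tiered SLA tracking. The 7/30/90-day windows
# mirror the tiers described in the text; this is not a vendor implementation.

from datetime import datetime, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

def sla_due(severity: str, assigned_at: datetime) -> datetime:
    """Compute the remediation deadline. The timer starts at assignment."""
    return assigned_at + timedelta(days=SLA_DAYS[severity])

def is_breached(severity: str, assigned_at: datetime, now: datetime) -> bool:
    """True once the finding is past its SLA window."""
    return now > sla_due(severity, assigned_at)
```

Starting the timer at assignment rather than discovery keeps the SLA honest: a finding that sits untriaged for a week should show up as a triage failure, not as a quietly shortened fix window.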

Closure is verified through rescan, not self-certification. Every remediation generates a timestamped record—when it was found, when it was assigned, when it was fixed, and proof that the fix was confirmed. That audit trail is what SOC 2 auditors reviewing CC7.1 and CC7.2 controls want to see. It is also what cyber insurance underwriters will increasingly require as they adjust policies in the post-Mythos environment.

Dark web monitoring runs in parallel. If credentials associated with your organization's domains appear in breach data—the kind of credential theft that often accompanies the lateral movement phase of an advanced attack—that surfaces in the same workflow rather than in a separate tool that nobody checks.

What to actually do this week

The Glasswing announcement is worth treating as a forcing function for an honest audit of your current program. Here is a concrete starting point:

Look at your current unpatched critical and high-severity findings. For each one, how long has it been open? If any critical finding is older than seven days or any high finding is older than 30 days, you have specific gaps to close—not in theory, but in the findings you already know about.

Check whether any open findings appear on CISA's KEV list. These are vulnerabilities with confirmed active exploitation. If you have known exploited vulnerabilities sitting unpatched, that is first priority regardless of CVSS score.

Audit your closure process. For any finding that has been marked as resolved in your tracking system in the past 90 days, is there verification that the fix was actually applied? A ticket marked closed is not the same as a rescan confirming remediation.

Assess your dependency update cadence. If your development teams are sitting on third-party library updates that carry CVE fixes because "it's not the right sprint," that process needs a different policy. CVE-carrying updates are security patches, not feature work.
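The closure audit in the third step can be sketched in a few lines: a finding counts as remediated only if a later rescan no longer reports it, not merely because its ticket was closed. The data shapes here are illustrative assumptions about what your tracker and scanner export.

```python
# Sketch of rescan-based closure verification: cross-check tickets marked
# "closed" in the tracker against what the latest rescan still reports open.

def unverified_closures(tickets: dict[str, str],
                        rescan_open: set[str]) -> list[str]:
    """Return findings closed in the tracker but still present in a rescan."""
    return sorted(
        finding
        for finding, status in tickets.items()
        if status == "closed" and finding in rescan_open
    )
```

Anything this returns is a finding where the paperwork and the environment disagree, and the environment wins.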

None of this requires Mythos-class AI tools. It requires operational discipline applied to the vulnerability data you are probably already collecting. The defense gap that Glasswing is trying to close is not primarily a discovery gap—it is a remediation and speed gap.

The longer view

Anthropic's conclusion in their technical writeup is worth quoting directly:

"In the long run, we expect that defense capabilities will dominate: that the world will emerge more secure, with software better hardened—in large part by code written by these models. But the transitional period will be fraught. We therefore need to begin taking action now."

Their expectation is that AI ultimately benefits defenders more than attackers—once the tools are in the hands of teams with the operational discipline to use them. The uncertainty is the transitional period: the time between AI-assisted exploit development becoming cheap and AI-assisted defense becoming standard.

We are in that transitional period now. The organizations that come through it in better shape will be the ones that used it to build actual remediation workflows—not just better scanners, but better processes for acting on what scanners find.

If your current program can tell you, right now, which critical vulnerabilities you have open, who owns each one, when they were assigned, and whether your SLA targets are being met—you are better positioned than most. If it cannot, that is the gap to close first.

Build the vulnerability program this environment requires

Scan Ninja AI connects to your existing Tenable scans, enriches findings with CISA KEV and exploitability data, enforces SLA timers, and generates timestamped closure evidence—so your remediation cadence matches the speed the threat landscape now demands.

Request a demo.