The world was watching Iran. The actual inflection point arrived in a press release.
In April 2026, Anthropic announced Claude Mythos Preview — a frontier AI model so capable of identifying and exploiting software vulnerabilities that the company decided it was too dangerous to release publicly. It can autonomously find zero-day flaws across every major operating system and web browser. In internal tests, 99% of the vulnerabilities it discovered were unpatched. The UK’s AI Security Institute gave it expert-level hacking tasks and it succeeded 73% of the time.
Anthropic’s response was to create Project Glasswing: a controlled-access program giving Mythos to a select group of companies for “defensive purposes only.” Launch partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, Microsoft, NVIDIA, and Palo Alto Networks. Forty more organizations have since been added, and Anthropic has committed $100M in usage credits across the program.
Every single launch partner is either a US company or a company deeply embedded in US commercial and government infrastructure.
This is not a coincidence. This is a policy.
## The “Defensive Use Only” Fiction
Let’s be precise about what Mythos can do. It can autonomously identify previously unknown vulnerabilities, generate working exploits, and carry out complex cyber operations with minimal human input. It found critical flaws in every widely used operating system and browser — flaws that survived decades of human review and millions of automated security tests.
Anthropic says Glasswing is for defense. The NSA is using Mythos anyway.
Despite the Pentagon designating Anthropic as a “supply chain risk” in March 2026, and Trump ordering federal agencies to stop using its products, the NSA is reportedly running Mythos. Several other federal agencies — including Commerce’s CAISI — are circumventing the formal ban to test the model. The White House is simultaneously negotiating to give all federal agencies official access.
Let that sentence sit for a moment. An intelligence agency is using a tool that its own government officially banned, while the executive branch is quietly negotiating to un-ban it for national security purposes.
The defensive framing is doing a lot of work here. A tool that can find every zero-day in every major OS is not only a scanner. It is — by definition — also a targeting system. The line between “we found a vulnerability to patch it” and “we found a vulnerability to exploit it” is a policy decision, not a technical constraint. And those policy decisions are now being made unilaterally by a private US company and a US government that cannot agree with each other in public but are apparently aligned in private.
## Who Gets Protected. Who Doesn’t.
Here is the geopolitical math that the Atlantic Council, quite correctly, flagged as more significant than the Iran war.
Mythos knows where every critical vulnerability is. Project Glasswing tells you who gets to use that knowledge.
The UK got evaluation access — full hands-on review via the AI Security Institute. The special relationship, in AI form.
The EU has been denied access. Not delayed. Denied. Anthropic skipped a European Parliament hearing on Mythos’s cyber risks. OpenAI, facing the same pressure, moved to give European cybersecurity teams access to its own cyber model. Anthropic held out. The result: European banks, governments, and infrastructure operators are running systems with vulnerabilities that Mythos has already identified — and they’re not in the room.
China is explicitly shut out. Chinese entities have sought Glasswing access and been refused. This is the least surprising part of the story, and the most consequential. China has significant AI capability and is developing its own equivalent. When that model reaches parity with Mythos, the asymmetry the US enjoys today disappears, and the precedent of exclusion cuts the other way.
Japan is scrambling. Prime Minister Sanae Takaichi ordered an emergency cabinet-level cybersecurity review specifically citing Mythos. The message to Japan’s government was clear: a private American company has a model that can compromise your infrastructure, you have no access to it, and you need to figure that out on your own.
This is access-as-geopolitics. It is not subtle. The decisions Anthropic is making about who can use Mythos are the functional equivalent of an arms export policy — except there is no Arms Export Control Act governing it, no State Department license required, no congressional oversight, and no international treaty framework in play.
A private company incorporated in Delaware is deciding which nation-states get to defend their digital infrastructure.
## The Weaponization Is Already Done
The question “will the US use AI as a weapon?” has been answered. The next question is “against whom and when?”
Consider the position this creates. The US government has — through a mix of formal and informal channels — effective access to a tool that can identify exploitable vulnerabilities in the digital infrastructure of any country on earth. The offensive applications are not theoretical. They are the same capabilities that the defensive framing describes, run in the other direction.
Stuxnet was a cyberweapon that required years of development and targeted a single facility. Mythos can autonomously identify attack surfaces across global critical infrastructure. The delta between those two capabilities is not incremental. It is categorical.
The Trump administration’s behavior tells you what the posture actually is. Publicly: ban Anthropic as a supply chain risk. Privately: have the NSA run Mythos, negotiate White House access, and reconsider the relationship entirely once the national security implications became clear. This is not incoherence. This is a government that understood what it had and immediately moved to control it — not to constrain it, but to own it.
## The Governance Gap Is Structural
The Lawfare Institute’s framing of Mythos as exposing a “governance gap” is accurate but understates the problem. The gap is not just regulatory. It is architectural.
The existing international frameworks for controlling dangerous technologies — the Nuclear Non-Proliferation Treaty, the Chemical Weapons Convention, the Wassenaar Arrangement for export controls on dual-use technology — were all built after the technologies existed, after their destructive potential was demonstrated, and often after they were already used. AI is on the same trajectory, moving faster.
What makes Mythos different from, say, a previous-generation hacking tool is scale and autonomy. A team of skilled human hackers can compromise some systems. Mythos can find vulnerabilities in all systems, automatically, continuously, and at a pace that no human security team can match. The offense-defense balance in cyberspace has shifted permanently, and it has shifted toward whoever holds the best model.
Right now, that is the United States. But “right now” is doing enormous work in that sentence.
## What Happens When China Catches Up
CSO Online asked the obvious question: what happens when China’s AI catches up to Mythos?
The answer depends entirely on whether some international framework exists by then to constrain its use. Currently, no such framework exists. The trajectory of US behavior — racing to deploy Mythos capabilities across government while engaging in nominal safety theater — does not suggest that the US will be the party pushing for multilateral constraints.
There are reports that Mythos may be restarting US-China AI safety dialogue. That would be good. But AI safety dialogue between great powers typically produces the same outcome as nuclear non-proliferation diplomacy: agreements that constrain declared capabilities while both sides develop undeclared ones. And unlike warheads, the capability is not sitting in a silo in Nevada. It is a set of neural-network weights, and an equivalent can be trained by any sufficiently capitalized lab with enough compute.
The asymmetry the US currently holds is real, but it has a shelf life. The strategic window is probably measured in months, not years.
## The Anthropic Paradox
There is an uncomfortable irony sitting at the center of this story.
Anthropic was founded explicitly on the premise that advanced AI is dangerous and that building it carefully, with safety as a first principle, is the only responsible path. Constitutional AI, the Responsible Scaling Policy, the decision to restrict Mythos rather than release it publicly — these are genuine expressions of that philosophy.
And yet Anthropic has built the most capable offensive cyberweapon in existence, restricted access to US-aligned entities, quietly allowed the NSA to run it despite the official ban, and is now in negotiations to put it in the hands of the full US federal government.
You can believe that Anthropic made the least-bad choices available to it given the technology’s capabilities. You can also observe that the outcome — a US AI company holding a cyberweapon under informal US government control, with no international oversight, no treaty framework, and selective access based on geopolitical alignment — is precisely the outcome that a naive reading of “safety-focused AI development” was supposed to prevent.
Both things can be true simultaneously. They are.
## What This Means for the Rest of the World
If you are a software engineer, CTO, or policymaker outside the US-UK axis, the practical implications are these:
Your infrastructure is exposed. Mythos has likely already catalogued vulnerabilities in the systems you run. You do not have access to the patch list. Whether those vulnerabilities get exploited depends on political decisions made in Washington, not technical decisions made by your security team.
“Too dangerous to release” means “released selectively.” Anthropic’s public position is that Mythos is too dangerous for general release. The actual release policy is: available to US companies, US government agencies, and US-aligned intelligence services. The danger is not being constrained. It is being channeled.
AI access is now diplomatic currency. OpenAI gave the EU its cyber model. Anthropic withheld Mythos from the EU. Japan is in emergency session over it. The decisions about which countries get access to frontier AI capabilities are being made by private US companies, with the same geopolitical consequences as arms sales — without any of the legal framework that governs arms sales.
The clock is running. Every month that passes without an international framework for governing dual-use AI capabilities is a month in which the US exploits its first-mover advantage and other powers race to close the gap. The window for establishing norms before the capabilities proliferate is narrow and closing.
The Atlantic Council is right. Mythos is more consequential than the Iran war. Wars end. This shift in the offense-defense balance in cyberspace is permanent, and the norms — or lack thereof — established in the next twelve months will define the terrain for decades.
The US just turned AI into a weapon. The question is what the rest of the world does about it.
Sources: Atlantic Council · Just Security · Axios / NSA · Axios / White House · Bloomberg · CNBC / EU · The Register / Japan · CSO Online · Anthropic Glasswing · Lawfare · Rest of World · Schneier on Security