The AI startup Anthropic has partnered with tech giants including Apple, Amazon, and Microsoft to deploy its unreleased Claude Mythos Preview model for defensive cybersecurity. This initiative aims to identify and patch critical software vulnerabilities that have remained hidden for decades across major operating systems. The move comes as the company faces a legal battle with the Pentagon over its refusal to allow AI use in autonomous weaponry.

Discovery of Legacy Vulnerabilities

Claude Mythos Preview identified a 27-year-old security flaw in OpenBSD and a 16-year-old bug in FFmpeg, demonstrating capabilities that outperform previous models in detecting complex software vulnerabilities.

Pentagon Supply Chain Conflict

The U.S. Department of Defense designated Anthropic a supply chain risk in early 2026 after the firm refused to remove guardrails barring autonomous-weapons and mass-surveillance uses, leading to a federal court injunction against the Pentagon.

Open-Source Security Funding

Anthropic is committing $100 million in credits and $4 million in donations to open-source security organizations to help maintain critical digital infrastructure.

Defensive Response to Cyber Threats

The project follows a February 2026 incident where a hacker allegedly used an earlier Claude version against Mexican government agencies, highlighting the need for controlled AI releases.

Anthropic announced Project Glasswing on April 7, 2026, a cybersecurity initiative that gives a select group of major technology companies access to its unreleased artificial intelligence model, Claude Mythos Preview, for defensive security work. The consortium includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks as inaugural partners. Anthropic stated that Claude Mythos Preview has already identified thousands of high-severity vulnerabilities across every major operating system and web browser. The company said it will not release the model to the general public, citing the risk that it could be used to launch cyberattacks rather than defend against them. Anthropic added that it is in parallel discussions with the United States government about the model's capabilities.

Decades-old bugs surface in OpenBSD and FFmpeg

Among the most striking findings Anthropic disclosed, Claude Mythos Preview uncovered a 27-year-old security flaw in OpenBSD, an operating system widely regarded for its strong security record. The model also detected a 16-year-old vulnerability in FFmpeg, a widely used video software library, in a line of code that automated tests had reportedly passed over more than five million times without detection. According to Gizmodo, Mythos Preview also identified a chain of vulnerabilities in Linux that could be exploited to achieve complete machine takeover. Anthropic described the model as outperforming all but the most skilled human security researchers at finding software flaws. In benchmark testing, Mythos Preview consistently outperformed Claude Opus 4.6, including on CyberGym, a benchmark that measures how well AI agents can detect and reproduce real-world software vulnerabilities, according to Gizmodo.

27 years — age of the oldest vulnerability found by Claude Mythos Preview

Anthropic was founded in 2021 by former members of OpenAI, including siblings Dario Amodei and Daniela Amodei, who serve as chief executive officer and president respectively. The company develops the Claude series of large language models, which compete directly with OpenAI's ChatGPT. Anthropic previously disclosed that hackers exploited vulnerabilities in its Claude AI to attack approximately 30 global organizations. The 2026 RSA Conference on cybersecurity in San Francisco was dominated by discussions about the rise of AI-assisted cyberattacks and whether conventional security tools remain adequate to counter them.

Up to $100 million in credits pledged for open-source security

Anthropic committed up to $100 million in usage credits to Project Glasswing partners and an additional $4 million in direct donations to organizations working on open-source security. Beyond the inaugural launch partners, the company said it is extending access to approximately 40 additional organizations responsible for critical software infrastructure. Anthropic framed the initiative as a race against time, warning that the window between the discovery of a vulnerability and its active exploitation has narrowed sharply: what previously required months may now be achievable in minutes. The company stated its ultimate goal is to enable users to safely deploy models of the Mythos class at scale. A report by IBM and Palo Alto Networks cited in the Reuters article found that 67 percent of 1,000 surveyed executives said their organizations had been victims of AI-assisted attacks in the past year.

Usage credits for partners: $100 million; donations to open-source security groups: $4 million

Pentagon dispute shadows Anthropic's security ambitions

Project Glasswing's launch comes against the backdrop of a legal confrontation between Anthropic and the United States Department of Defense. Anthropic refused to remove guardrails that prevent its AI services from being used in autonomous weapons systems or for mass surveillance, according to reporting by Handelsblatt and Engadget. The Pentagon responded by designating Anthropic a "supply chain risk," a classification that would largely block the company from doing business with the US government. Anthropic challenged the designation in court, and a federal judge issued a temporary injunction against the Department of Defense's ruling on March 26, 2026. The dispute has drawn attention to the broader tension between AI companies that impose ethical restrictions on their models and government agencies seeking unrestricted access to frontier AI capabilities. Gizmodo noted that Anthropic's shift from treating Claude Mythos Preview as too dangerous for public release to deploying it across critical technology infrastructure represents a significant change in the company's posture toward the model.

Sources: 19 articles