On April 7, Anthropic announced Project Glasswing, a restricted-access program giving vetted enterprise partners: Microsoft, Google, Cisco, CrowdStrike, Apple, NVIDIA, JPMorgan Chase, and Palo Alto Networks, early access to Claude Mythos Preview. The model has autonomously discovered thousands of previously unknown vulnerabilities across major operating systems and browsers, including a 17-year-old remote code execution flaw in FreeBSD that grants unauthenticated root access over the network, and a 27-year-old bug in OpenBSD. Anthropic committed $100 million in usage credits to the program. In the announcement, the company acknowledged something worth reading twice: “We did not explicitly train Mythos Preview to have these capabilities. Rather, they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy.”
Six days earlier, a researcher noticed that Claude Code version 2.1.88 had shipped a 59.8 MB source map file as a debugging artifact in its npm package. The file mapped the minified production bundle back to its full TypeScript source. Any developer who had run npm install @anthropic-ai/claude-code got Anthropic’s complete CLI codebase alongside the binary. That source code contained the patterns that became CVE-2026-35022, CVE-2026-35021, and CVE-2026-35020. Researchers at Phoenix Security published a working exploit chain within two weeks of the leak.
The juxtaposition is worth sitting with. The company building an AI that autonomously discovers zero-days in 27-year-old operating system code shipped its own source to the public in a debugging artifact, and the bugs that source exposed scored CVSS 9.8.
What the Bugs Actually Do
All three CVEs share one root cause: unsanitized string interpolation into shell-evaluated execution. The surface is Claude Code’s authentication helper layer. The apiKeyHelper, awsAuthRefresh, awsCredentialExport, and gcpAuthRefresh configuration parameters in .claude/settings.json are passed directly to a shell using shell=True with no input validation. An attacker who controls any of those values can inject shell metacharacters and execute arbitrary commands with the privileges of the running process. CVE-2026-35020 reaches the same underlying pattern through the terminal launcher and TERMINAL environment variable. CVE-2026-35021 exploits the prompt editor invocation utility via malformed file paths.
Three CVEs, one fix, one failure mode: the code that handles authentication helper execution treats configuration values as trusted input.
This matters more than a typical CLI bug because of how developers run Claude Code. It is not a tool you open like a text editor. It runs in CI/CD pipelines. It executes with cloud credentials attached. In automated environments, it authenticates against AWS, GCP, and other services on a schedule. The awsAuthRefresh and awsCredentialExport helpers exist precisely because CI/CD systems need the CLI to refresh credentials without human interaction. A malicious .claude/settings.json planted by a poisoned npm package, a compromised repository, or a rogue open-source dependency executes attacker-controlled shell commands at the next authentication event, inside the same process that holds your cloud credentials. The bug does not need to be delivered over a network. A crafted workspace configuration file is sufficient.
The patch is available: Claude Code 2.1.92+ and Claude Agent SDK for Python 0.1.56+ address all three CVEs. The immediate action is to update and audit every apiKeyHelper value across developer endpoints and CI/CD environments.
The Structural Problem the Source Map Exposed
The source map leak is the part that deserves more attention than it has received. A source map is a development artifact, the file that lets you step through readable source code in a debugger even when production ships a minified bundle. It is included during development and excluded before publishing to production registries. That exclusion is a deliberate step. It failed here.
The result was that Anthropic shipped full TypeScript source to the npm registry by accident, and the chain from that accident to a public CVSS 9.8 exploit took less than two weeks. That timeline is not an indictment of the researchers who found the bugs: finding injection patterns in readable TypeScript is straightforward work. It is an indictment of a shipping process that did not catch a 59.8 MB artifact that had no business in a production package.
This is where the Glasswing contrast becomes analytically useful rather than just ironic. Mythos exists because Anthropic made a sustained, resourced investment in applying reasoning at machine speed to the problem of finding vulnerabilities. The Glasswing announcement shows what that investment looks like when it works. The source map failure shows what happens at the other end of the same organization, where a developer tooling product is shipping at consumer software velocity without the security checkpoints that would have caught the artifact. Both outcomes reflect real organizational priorities. They just happen to be pointed in opposite directions.
Anthropic is not uniquely culpable here. Every AI lab shipping agentic developer tooling is operating under the same pressure: release fast, iterate fast, capture developer adoption before the next competitor does. Claude Code, Cursor, GitHub Copilot, Windsurf, and their successors are all competing for the same developer workflows. The security review process for a product in that race is fighting against a clock that does not slow down for a source map audit.
Where This Goes
The Glasswing announcement closes with an explicit acknowledgment that the capability gradient runs both ways: a model good enough to find 17-year-old FreeBSD RCEs is, by definition, capable of writing working exploits. Anthropic’s decision to restrict access is the right call. Dark Reading’s coverage of the access control question frames it correctly: the real concern is not whether Anthropic can keep Mythos contained, but how long before a comparable model appears without those controls. Several frontier labs are working on analogous capabilities. The access restrictions Anthropic has imposed are a policy decision, not a technical constraint, and policy decisions do not survive competitive proliferation intact.
The near-term implications are narrower and more actionable. The CVE-2026-35022 watch item from the external intelligence report for this week asks a specific question: are other AI coding assistant CLIs using analogous shell=True authentication patterns? The answer is probably yes. Claude Code is not the only agentic CLI that handles cloud credential refresh. Any tool that executes authentication helpers via subprocess and consumes configuration values from workspace files should be audited for the same class of bug. That work should happen before someone else publishes the source map.
For security teams, the practical consequence is simpler than it sounds. AI coding assistants are not just productivity tools. They run with privileged credentials, execute in CI/CD, and read configuration from repositories. The trust model developers apply to them, which is roughly “this is a tool from a company I have heard of, so I will run it with my AWS credentials,” is not calibrated to the attack surface. CVE-2026-35022 is a reminder that the pipeline trusting the assistant is the attack surface.
Audit the .claude/settings.json files in your CI/CD environments. Check what runs in your auth helper fields. Verify that your packaging pipelines exclude development artifacts before publishing. These are not novel security controls. They are the controls that should have been in place before the source map went out.
Security Unlocked publishes weekly threat intelligence and strategic analysis. This post is based on intelligence collected April 6-12, 2026.
Security