AI infrastructure platforms are being weaponized within a single working shift of vulnerability disclosure, exposing a structural incompatibility between how organizations govern AI patching and the actual window available to them.
MCP's trust architecture makes any exposed management interface a pre-authenticated command shell by design, not by accident. Two RCE vulnerabilities in the same week reveal a deployment curve that has outrun both audit methodology and detection playbooks.
In the same week that Anthropic unveiled an AI that autonomously finds zero-days, its own CLI shipped a CVSS 9.8 command-injection flaw, exposed by a debugging artifact that had been sitting in an npm package since March 31.
From a six-month DPRK social engineering operation to mass exploitation of developer ecosystems, this week's threat landscape reveals that the most reliable attack surface is the trust we extend by default.
Five AI infrastructure disclosures in one day share the same root cause: the gap between what users believe their security settings do and what the framework actually executes.
Every major incident this week exploited institutional or interpersonal trust rather than a technical vulnerability. The adversary's target is not the system. It is the relationship.