Quick Answer. Large language models and agent-mediated tooling collapsed per-incident exfiltration cost to a prompt while defender cost-per-prevented-event has not moved. Six weeks of threat data document five exfiltration channels (AI proxies, agent platforms, supply chain credential harvesting, RAG knowledge base poisoning, and communication SDK compromise) that legacy content-inspection DLP cannot see. DLP programs that have not restructured around context-aware policy are now structurally underwater.

The economic case for data loss prevention rested on a simple ratio. Attacker cost per exfiltration event was meaningful: staging, encoding, channel selection, often weeks of preparation against a hardened target. Defender cost per prevented event was meaningful too, but it was paid against a finite stream of inspectable events at known chokepoints. As long as both sides of the ratio held, the program made sense.

Six weeks of threat intelligence data ending this week show that ratio has fully inverted. Large language models and agent-mediated tooling collapsed attacker cost per exfiltration event to a prompt and a paste. Defender cost has not moved. The inspection points DLP was designed to cover (file uploads, email attachments, web form submissions) are no longer where the data is actually leaving the organization. DLP programs that have not restructured their architecture are now structurally underwater, and the pipeline evidence is what makes the claim concrete rather than speculative.

The Cost Curve Inverted

Pre-LLM, an exfiltration operation against a hardened enterprise required real attacker investment. The work was not just technical but operational: select a channel that would not trigger inspection, encode the data so it would not match content signatures, stage the exfiltration in volumes small enough to avoid behavioral alerts, and maintain the channel long enough to extract the payload. The literature on advanced persistent threat operations is largely a literature on this cost structure. Mandiant’s reports, the Verizon DBIR, and decade-long retrospectives on named threat actor groups all describe campaigns where the exfiltration phase was a meaningful fraction of the attacker’s total time on target.

The post-LLM cost structure does not look like that. A compromised AI proxy, a compromised agent framework, or a poisoned RAG knowledge base produces exfiltration as a side effect of normal AI workload patterns. The attacker does not need to encode, segment, or schedule. The data leaves through the same channel the AI tool was authorized to use. The marginal cost to the attacker, once the foothold exists, approaches zero.

Defender cost on the prevent side has not fallen proportionally. Legacy DLP programs spend between sixty and eighty percent of analyst hours on false positives, a figure that has been stable across every survey of practitioner workload since the category emerged. The mitigations that DLP vendors have shipped over the last three years (better content classifiers, tighter regex libraries, more granular policy engines) are improvements at the margins of an architecture designed for a different threat surface. They reduce the false-positive rate on inspectable content. They do not extend visibility into uninspectable channels.

When attacker cost falls by orders of magnitude and defender cost stays flat, the ROI of the program inverts. That is the underwater condition. It is not a matter of better tuning or more aggressive policy. It is an architectural mismatch between where the data is moving and where the controls are looking.

Five Channels Your DLP Cannot See

The evidence for the cost inversion is not theoretical. Across the six weeks ending May 11, the threat intelligence pipeline that supports this site logged five structurally distinct exfiltration channels, all active in production environments, all invisible to content-inspection DLP, all linked to AI tooling or supply chain trust failures.

AI proxy layer. The Hacker News reported in early April that LiteLLM, the open-source proxy that sits between enterprise applications and upstream LLM APIs, had turned developer machines into credential vaults. Five weeks later, on May 8, CISA added CVE-2026-42208, a CVSS 9.8 SQL injection vulnerability in BerriAI’s LiteLLM, to the Known Exploited Vulnerabilities catalog with confirmed active exploitation against U.S. financial services and healthcare organizations. This is the first AI proxy infrastructure to appear in the KEV catalog. LiteLLM does what its category does: it brokers queries between business applications and language models, and in doing so it accumulates conversation data, API keys, and routing configuration. A SQL injection against that layer provides direct query access to all of it. Traditional DLP, watching for file uploads at the egress gateway, sees none of this. The data leaves through a SQL response.

Agent platform RCE. Flowise, an AI agent builder with over 12,000 internet-exposed instances, came under active exploitation of a CVSS 10.0 RCE vulnerability in early April. The same week, security researchers and the broader pipeline data showed this was not an isolated event. By W20, eight separate AI agent frameworks had disclosed remote code execution vulnerabilities tracing to the same root cause: agent-controlled inputs reaching shell execution, eval(), or privileged filesystem operations without sanitization. Microsoft’s Semantic Kernel, LangChain, Gemini CLI, Paperclip, PPTAgent, and Open WebUI all landed in the same fourteen-day window. When an agent is compromised, any data the agent had access to becomes exfiltrable through the agent’s own authorized network egress. There is no content boundary for DLP to inspect because the agent’s normal operation includes both data access and outbound network traffic.
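The shared root cause is concrete enough to sketch. The following is a minimal, hypothetical Python illustration (not code from any of the frameworks named above): the unsafe shape routes agent output straight into eval(), while the safer shape confines the agent to an allowlisted tool table with arguments treated as data.

```python
import os

def run_tool_unsafe(agent_output: str):
    # The vulnerable pattern the disclosures share (do not ship this):
    # model-controlled output flows straight into an interpreter, so a
    # prompt-injected "tool call" becomes remote code execution.
    return eval(agent_output)

# Safer shape: the agent may only *select* from a fixed tool table, and
# arguments are passed as data, never evaluated or spliced into a shell.
ALLOWED_TOOLS = {
    "list_dir": lambda path: sorted(os.listdir(path)),
}

def run_tool(name: str, arg: str):
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} not in allowlist")
    return ALLOWED_TOOLS[name](arg)
```

The allowlist does not make the agent's data access safe, but it removes the interpreter hop that turned eight separate frameworks into RCE targets.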

Supply chain credential harvest. Five concurrent supply chain operations were active in W20, all targeting developer trust infrastructure. The TeamPCP campaign cluster, now in its sixth consecutive week, has moved from Trivy in mid-March (covered in Defenders Under Siege) through Checkmarx KICS and the Bitwarden CLI, LiteLLM, the Telnyx Python SDK, and the DAEMON Tools commercial installer. The North Korea-attributed Contagious Interview campaign published over 1,700 malicious packages across five open-source ecosystems simultaneously. The Axios npm compromise pulled credentials from every downstream CI/CD pipeline that built with it. Credentials harvested through these channels never touch user keystrokes and never traverse an inspectable content channel. The user never copies the credential out of a document; the build system reads it from the environment, the malicious package exfiltrates it to attacker infrastructure, and the file gateway sees nothing.
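One context-aware control that does see this channel is egress policy on the build job itself rather than content inspection at a gateway. A minimal sketch, with hypothetical hostnames:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the registries and artifact stores this build
# job legitimately talks to. Any other destination contacted during a
# build is a harvest signal, regardless of payload content.
BUILD_EGRESS_ALLOWLIST = {
    "registry.npmjs.org",
    "pypi.org",
    "files.pythonhosted.org",
}

def egress_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in BUILD_EGRESS_ALLOWLIST
```

The check keys on destination, not content, which is exactly why it catches exfiltration that never matches a content signature.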

RAG knowledge base poisoning. Open WebUI disclosed CVE-2026-44554 in W20, a vulnerability that allows unauthenticated overwrite of the retrieval-augmented generation knowledge base. RAG poisoning is a category that has no analog in classic DLP threat models. Legacy DLP asks whether sensitive data is leaving the organization; RAG poisoning operates by altering what the organization’s own AI systems retrieve and present back to internal users. The exfiltration is indirect: poisoned knowledge bases can be primed to return sensitive content as part of model output to external queries, can leak internal data through prompt-injected response patterns, and can be used to manipulate downstream decisions in ways that the affected organization cannot easily trace. There is no payload to inspect because the data is leaving through generated output, not file transfer.
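A context-aware mitigation for the overwrite vector is to bind knowledge base entries to the authenticated ingest path, so a write that bypassed that path fails verification at retrieval time. A minimal sketch, assuming a single ingest-held key and an in-memory store (both hypothetical):

```python
import hashlib
import hmac

INGEST_KEY = b"kb-signing-key"  # hypothetical; held by the ingest service only

def sign(doc: str) -> str:
    return hmac.new(INGEST_KEY, doc.encode(), hashlib.sha256).hexdigest()

knowledge_base: dict[str, tuple[str, str]] = {}  # doc_id -> (text, signature)

def ingest(doc_id: str, text: str) -> None:
    # Only the authenticated ingest path can produce a valid signature.
    knowledge_base[doc_id] = (text, sign(text))

def retrieve(doc_id: str) -> str:
    text, sig = knowledge_base[doc_id]
    # An unauthenticated overwrite of the stored text will not carry a
    # matching signature and is refused before it reaches the model.
    if not hmac.compare_digest(sig, sign(text)):
        raise ValueError(f"knowledge base entry {doc_id!r} failed integrity check")
    return text
```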

Communication SDK compromise. The Telnyx Python SDK, compromised by TeamPCP on April 8, gives the attacker authenticated access to whatever phone and messaging data the SDK was authorized to handle. Communication metadata, message contents, and routing information leave through the same legitimate APIs the application was built around. Traditional DLP, optimized for keyword-based inspection of outbound email and file transfers, has no signature for this channel.

Each of these channels breaks a different DLP assumption, but they share one property: none of them is a user-keystroke event, and none can be inspected at the file or email gateway. The traditional DLP control points are not where the data is moving.

The Defender Math

The economic ratio defenders need to invert is prevent-cost divided by attack-cost. Pre-LLM, that ratio was poor but the program was still defensible because the prevent layer caught enough of the inspectable channels to justify the operational tax. Post-LLM, the ratio is fully underwater because the attack cost has collapsed against channels the prevent layer cannot see.
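The ratio is simple enough to state as arithmetic. A sketch with purely illustrative numbers (the shape of the inversion is the point, not the values):

```python
def cost_ratio(prevent_cost_per_event: float, attack_cost_per_event: float) -> float:
    """Defender prevent-cost divided by attacker attack-cost.

    Above ~1.0 the program pays more per prevented event than the
    attacker pays per attempt: the underwater condition.
    """
    return prevent_cost_per_event / attack_cost_per_event

# Pre-LLM: exfiltration took staging, encoding, and weeks of operator time.
pre_llm = cost_ratio(prevent_cost_per_event=5_000, attack_cost_per_event=20_000)

# Post-LLM: the same foothold exfiltrates as a side effect of a prompt.
post_llm = cost_ratio(prevent_cost_per_event=5_000, attack_cost_per_event=50)

print(f"pre-LLM ratio:  {pre_llm:.2f}")   # 0.25: defensible
print(f"post-LLM ratio: {post_llm:.2f}")  # 100.00: underwater
```

Note that the defender's numerator never moved; the inversion comes entirely from the collapse of the denominator.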

There are two ways to invert the ratio back above water. The first is to make attacks expensive again. That is what watermarking and provenance metadata are about: not preventing the exfiltration, but raising the attacker's risk by making stolen data traceable to its breach source. Watermarking is the first new DLP control category in a decade that imposes cost on the attacker rather than only on the defender. The deterrence economics work only if the watermark survives reasonable transformations, including paste-through-LLM, which the academic literature on cryptographic and steganographic watermarking is now demonstrating at production scale.

The second way is to make defender cost-per-prevented-event fall faster than attacker cost-per-event has fallen. That is the architectural shift the DLP category is currently mid-flight: from content-inspection chokepoints to context-aware policy that follows data through transformations. Several concrete moves are visible in vendor roadmaps and acquisition activity.

Embedding-based detection replaces exact-data-match (EDM) and indexed-data-match (IDM) with vector similarity to fingerprinted corpora. Instead of asking “does this string match our customer list,” the system asks “does this content resemble our customer list, source code, M&A workpapers, or HR file.” This is the first credible mechanism for cutting the false-positive tax that has killed every legacy DLP rollout in living memory.

Prompt-channel inspection extends visibility into the user-to-LLM and application-to-LLM channels that current gateway architectures cannot see. Browser-layer and endpoint-layer prompt inspection becomes the only path to detection at the moment of exfiltration when the data is leaving through a prompt rather than a file transfer.

Agent-aware policy treats the AI agent as a new principal type with its own action space, applying policy to the agent’s planned tool calls rather than to the human user’s keystrokes. This is where the next two years of attack surface lives, and it is also where the defender side has the longest cost-reduction runway.
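The embedding-based mechanism can be sketched with a toy stand-in: character-trigram vectors in place of a real embedding model, with cosine similarity scored against a fingerprinted corpus. The corpus, names, and threshold below are illustrative, not any vendor's implementation.

```python
import math
from collections import Counter

def trigram_vector(text: str) -> Counter:
    # Toy stand-in for an embedding: character trigram counts.
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Fingerprint the protected corpus once; score outbound content against it.
fingerprint = trigram_vector("customer_id,name,email,account_balance")

def resembles_protected(content: str, threshold: float = 0.6) -> bool:
    # "Does this resemble our customer list," not "does it string-match."
    return cosine(trigram_vector(content), fingerprint) >= threshold
```

The resemblance question is what lets the control fire on a reworded or LLM-paraphrased copy that an exact-match rule would miss.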

The Market Consolidation This Forces

Standalone DLP as a vendor category does not survive this transition intact. The data flow has moved to the browser and the prompt. The standalone DLP box, watching the file gateway and the email egress, has lost the structural position from which the category was built. What replaces it is a policy plane spanning the security service edge, the enterprise browser, data security posture management, and insider risk management. The vendors who currently dominate standalone DLP have two paths: reinvent around the AI interface and absorb the new visibility layers, or get eaten by enterprise browser and SSE platforms that already sit closer to where the data is moving.

The acquisition activity over the past year is consistent with this thesis. Browser-native security vendors and SSE platforms are buying or building the visibility layer that standalone DLP cannot extend into. DSPM vendors are absorbing the data-classification layer that was historically the DLP product’s core IP. Insider risk management is converging with DLP into a single behavior-and-content policy plane. The CISO making a 2026 DLP purchasing decision is implicitly making an architectural bet about which of these layers will own the policy plane in 2028. The conservative bet is to extend the current DLP relationship with content-classifier upgrades and prompt-inspection bolt-ons. The aggressive bet is to treat the underwater condition as a forcing function and replatform on the visibility layer that actually sees the new exfiltration channels.

The Question for CISOs

The strategic question is not whether the cost curve has inverted. The pipeline data answers that. The question is how long the program can run underwater before the cost compounds beyond what tactical mitigations can absorb. Each week the program stays in the legacy architecture is a week of exfiltration events through channels the program cannot see. The compounding is happening now, not in some hypothetical future.

Three years of CISO-side DLP procurement decisions are about to be measured against the architecture they bet on. The CISOs who recognize the cost inversion early and replatform are the ones whose programs survive the next budget cycle with their credibility intact. The CISOs who wait for vendor maturity will be replatforming under incident pressure rather than strategy, which is more expensive in every way that matters.

The data is in the pipeline. The mechanism is in the economics. The decision is in the RFP.