<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Prompt-Injection on Security Unlocked</title>
    <link>https://securityunlocked.com/tags/prompt-injection/</link>
    <description>Recent content in Prompt-Injection on Security Unlocked</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 11 May 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://securityunlocked.com/tags/prompt-injection/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>The Agent Trusts the Output</title>
      <link>https://securityunlocked.com/weekly-intelligence/the-agent-trusts-the-output/</link>
      <pubDate>Mon, 11 May 2026 00:00:00 +0000</pubDate>
      <guid>https://securityunlocked.com/weekly-intelligence/the-agent-trusts-the-output/</guid>
      <description>Eight AI agent frameworks disclosed the same class of remote code execution vulnerability in a single week because the entire ecosystem shares a cognitive failure: treating LLM output as trusted data rather than untrusted instructions.</description>
    </item>
    <item>
      <title>The Mental Model Is the Vulnerability</title>
      <link>https://securityunlocked.com/weekly-intelligence/the-mental-model-is-the-vulnerability/</link>
      <pubDate>Fri, 27 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://securityunlocked.com/weekly-intelligence/the-mental-model-is-the-vulnerability/</guid>
      <description>Five AI infrastructure disclosures in one day share the same root cause: the gap between what users believe their security settings do and what the framework actually executes.</description>
    </item>
  </channel>
</rss>