Home / Companies / WorkOS / Blog / Post Details
Content Deep Dive

Securing agentic apps: How to contain AI agent prompt injection

Blog post from WorkOS

Post Details
Company
Date Published
Author
Maria Paktiti
Word Count
2,257
Language
English
Hacker News Points
-
Summary

In June 2025, researchers at Aim Security revealed EchoLeak, a zero-click vulnerability in Microsoft 365 Copilot that allowed remote attackers to steal confidential data through hidden instructions in documents or emails, which Copilot processed to query internal files and exfiltrate results. This incident highlighted the broader risk of prompt injection in agentic systems, where attackers can manipulate AI agents into performing unauthorized actions beyond merely outputting text, such as querying databases or executing code. Unlike chatbots, agents have multiple output channels, increasing the attack surface. EchoLeak demonstrated the challenge of defending against indirect injection attacks, where attackers embed malicious instructions in content the agent is meant to process, posing a threat due to the trusted nature of such input channels. This vulnerability in agentic systems is further compounded by the ability of injected agents to propagate instructions across multi-agent systems, potentially leading to system-wide compromises. A meta-analysis found high success rates of adaptive attacks against best defenses, emphasizing the need for a layered defense strategy that contains rather than completely prevents prompt injection. Key defensive measures include scoped credentials, supply chain verification, invocation policy, and input-level defenses, all aimed at limiting the attack's blast radius and ensuring that any successful injection remains a contained incident.