Prompt injection attacks pose a significant threat to AI systems, particularly indirect attacks in which an adversary embeds malicious instructions in external content (web pages, documents, emails) that a GenAI tool later ingests. Because the injected instructions ride in on the system's own inputs, they can manipulate the model's behavior while remaining invisible to the end user. Growing AI adoption, including unsanctioned "shadow AI" tools used by employees, widens this attack surface and leaves security teams with limited visibility into where GenAI is actually in use. Defending against prompt injection therefore requires a multi-layered strategy: prompt-injection detection, input validation, content security policies, privilege separation between trusted instructions and untrusted data, monitoring of AI usage, and user education. CrowdStrike's Falcon platform offers protection against these threats, leveraging AI detection and response capabilities to mitigate prompt injection attacks.
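Two of the layers above, input validation and privilege separation, can be sketched in a few lines. The following Python is an illustrative assumption, not CrowdStrike's implementation: the regex patterns, the `scan_untrusted_content` and `wrap_for_model` names, and the `<untrusted_data>` tag are all hypothetical choices for the sketch. Real defenses combine such heuristics with model-based classifiers, since regexes alone are easy to evade.

```python
import re

# Hypothetical heuristic ruleset; a production filter would be far
# broader and paired with a trained injection classifier.
INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"ignore (all )?(previous|prior|above) instructions",
        r"disregard (your|the) (system )?prompt",
        r"reveal (your|the) (system|hidden) prompt",
    )
]


def scan_untrusted_content(text: str) -> list[str]:
    """Return suspicious phrases found in externally sourced content."""
    hits = []
    for pattern in INJECTION_PATTERNS:
        match = pattern.search(text)
        if match:
            hits.append(match.group(0))
    return hits


def wrap_for_model(text: str) -> str:
    """Privilege separation: reject flagged content, and label what
    remains as data (not instructions) before it reaches the prompt."""
    if scan_untrusted_content(text):
        raise ValueError("possible prompt injection in external content")
    return f"<untrusted_data>\n{text}\n</untrusted_data>"
```

The design point is the separation itself: external content is either rejected or delivered inside an explicit data envelope, so the system prompt can instruct the model to treat anything within that envelope as inert text.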