Understanding LLM Security Risks: OWASP Top 10 for LLMs (2025)
Blog post from StackHawk
The rapid adoption of Large Language Models (LLMs) since the launch of ChatGPT has exposed significant security challenges: traditional application security frameworks were not designed for the probabilistic behavior of LLMs. In response, OWASP published its Top 10 list for LLM applications, identifying critical risks:

- Prompt injection
- Sensitive information disclosure
- Supply chain vulnerabilities
- Data and model poisoning
- Improper output handling
- Excessive agency
- System prompt leakage
- Vector and embedding weaknesses
- Misinformation
- Unbounded consumption

These risks reflect attack surfaces unique to LLMs: unpredictable outputs, the value of training data, and the complexity of context-aware systems such as RAG architectures. The 2025 update to the OWASP list captures vulnerabilities that have emerged as LLMs become more integrated into production systems, and it emphasizes the need for AI-specific defenses alongside traditional security practices. Securing AI-powered applications therefore calls for a defense-in-depth approach: adversarial testing, input validation, output sanitization, and proper authorization for AI agents all help mitigate these emerging threats.
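As a minimal sketch of two of the mitigations named above, the snippet below pairs basic input validation (screening for common prompt-injection phrasing) with output sanitization (HTML-escaping model text before rendering). The helper names and regex patterns are illustrative assumptions, not a complete or production-grade defense; real deployments layer these checks with model-side guardrails and policy enforcement.

```python
import html
import re

# Illustrative patterns for common prompt-injection phrasing.
# Assumption: a small denylist like this only catches naive attempts;
# it is one layer in a defense-in-depth strategy, not a full solution.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
    re.compile(r"you are now", re.I),
]


def validate_input(user_text: str, max_len: int = 2000) -> str:
    """Reject oversized input and flag common injection phrasing."""
    if len(user_text) > max_len:
        raise ValueError("input exceeds length limit")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("possible prompt-injection attempt")
    return user_text


def sanitize_output(model_text: str) -> str:
    """HTML-escape model output before rendering, so a response that
    happens to contain markup cannot execute in the user's browser
    (guards against improper output handling)."""
    return html.escape(model_text)
```

In practice a benign question passes `validate_input` unchanged, while an input containing a phrase like "ignore previous instructions" raises a `ValueError`; `sanitize_output` turns `<script>` into inert `&lt;script&gt;` text before it reaches a web page.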