Understanding and Protecting Against LLM10: Unbounded Consumption
Blog post from StackHawk
Unbounded consumption in large language model (LLM) applications poses significant security and financial risks because of the heavy computational demands of these models. Attackers can exploit LLMs by submitting resource-intensive queries or crafting inputs designed to maximize computational load, leading to service disruptions, runaway cloud costs, or intellectual property theft. This type of attack, identified as LLM10: Unbounded Consumption in the OWASP Top 10 for Large Language Model Applications (2025), underscores the need for comprehensive resource controls, such as input validation and cost monitoring, to prevent unauthorized resource use and maintain service integrity.

Developers often mismanage LLMs by treating them like traditional APIs without accounting for their resource intensity, leaving vulnerabilities that attackers can exploit through a variety of vectors. Safeguarding LLM applications against these threats requires layered protection: pre-ingress controls, gateway controls, inference controls, and post-inference monitoring. Additionally, tools like StackHawk can help developers test for and mitigate vulnerabilities specific to AI applications, providing protection against unbounded consumption and the other OWASP LLM Top 10 risks.
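To make the layered-protection idea concrete, here is a minimal sketch of two of those layers in Python: a pre-ingress check that rejects oversized prompts, and a gateway-style per-client token-bucket rate limit. The function and constant names (`guard_request`, `MAX_PROMPT_CHARS`, `REQUESTS_PER_MINUTE`) and the specific limits are illustrative assumptions, not an API from OWASP or StackHawk.

```python
import time
from dataclasses import dataclass, field

# Illustrative limits -- tune these to your model's cost profile.
MAX_PROMPT_CHARS = 4000       # pre-ingress control: reject oversized inputs
REQUESTS_PER_MINUTE = 30      # gateway control: per-client rate limit

@dataclass
class TokenBucket:
    """Simple token-bucket rate limiter, one bucket per client."""
    capacity: int
    refill_per_sec: float
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        # Refill based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

_buckets: dict[str, TokenBucket] = {}

def guard_request(client_id: str, prompt: str) -> tuple[bool, str]:
    """Apply layered checks before the prompt ever reaches the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "prompt too large"
    bucket = _buckets.setdefault(
        client_id,
        TokenBucket(capacity=REQUESTS_PER_MINUTE,
                    refill_per_sec=REQUESTS_PER_MINUTE / 60,
                    tokens=REQUESTS_PER_MINUTE))
    if not bucket.allow():
        return False, "rate limit exceeded"
    return True, "ok"
```

In a real deployment these checks would sit in an API gateway or middleware, combined with inference-time caps (e.g. a maximum output token count) and post-inference cost monitoring, so no single layer has to catch every abusive request.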