The 2026 State of AI-Era AppSec: Key Findings from Our Survey
Blog post from StackHawk
AI-driven development has rapidly become mainstream, forcing application security (AppSec) teams to adapt: AI coding assistants such as GitHub Copilot are now used at 87% of organizations, yet perceptions of the risk posed by AI-generated code remain mixed.

The AppSec teams surveyed, often four or more members strong and spanning a range of industries, are struggling to balance development velocity against security. AI-assisted coding has sharply increased code volume without a corresponding increase in security resources.

Despite widespread use of security testing tools such as Software Composition Analysis (SCA) and API protection, teams are struggling to address new risks like AI-specific vulnerabilities, and they spend substantial time triaging findings, which leads to alert fatigue.

The growing speed and complexity of development have also opened visibility and accountability gaps. Boards increasingly demand insight into risk posture, yet teams often report activity metrics (scans run, findings closed) rather than risk-oriented ones.

To close these gaps, organizations are investing in AI/LLM security strategies, with an emphasis on better visibility, runtime testing, and a shift toward risk-based metrics that give a clearer picture of security posture and program effectiveness in the evolving AppSec landscape.
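To make the activity-versus-risk distinction concrete, here is a minimal sketch, not drawn from the survey itself: the `Finding` record, the SLA windows, and both metric functions are hypothetical illustrations, assuming a team tracks findings with a severity, an open date, and an optional close date.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical finding record; field names are illustrative, not from the survey.
@dataclass
class Finding:
    severity: str             # "critical", "high", "medium", or "low"
    opened: datetime
    closed: datetime | None   # None while the finding is still open

# Remediation SLA windows per severity -- assumed values, for illustration only.
SLA = {
    "critical": timedelta(days=7),
    "high": timedelta(days=30),
    "medium": timedelta(days=90),
    "low": timedelta(days=180),
}

def activity_metric(findings: list[Finding]) -> int:
    """Activity metric: raw count of findings handled. Says nothing about risk."""
    return len(findings)

def risk_metric(findings: list[Finding], now: datetime) -> float:
    """Risk metric: share of open critical/high findings past their SLA window."""
    at_risk = [f for f in findings
               if f.severity in ("critical", "high") and f.closed is None]
    breached = [f for f in at_risk if now - f.opened > SLA[f.severity]]
    return len(breached) / len(at_risk) if at_risk else 0.0

# Usage: the risk metric weights findings by exposure instead of counting work.
now = datetime(2026, 1, 15)
findings = [
    Finding("critical", opened=datetime(2025, 12, 1), closed=None),
    Finding("high", opened=datetime(2026, 1, 10), closed=None),
    Finding("low", opened=datetime(2025, 6, 1), closed=datetime(2025, 7, 1)),
]
print(activity_metric(findings))   # 3   -- counts every finding equally
print(risk_metric(findings, now))  # 0.5 -- one of two open crit/high past SLA
```

The design point is that a risk metric carries a denominator tied to exposure (open high-severity findings), while an activity metric only counts work performed, which is why the latter tells a board little about actual posture.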