The State of AI-Native Application Security 2025 report, based on a survey of 500 security practitioners across the US, UK, Germany, and France, examines the rapid integration of AI into software development and the security challenges that follow. In 2025, AI-generated code and workflows have become integral to delivery, sharply increasing the volume of code shipped and, with it, the potential for vulnerabilities. AI has not inherently made coding less secure, but the sheer amount of code creates more opportunities for security issues, particularly because traditional security tools cannot keep pace with constantly evolving AI models.

A further concern is the lack of visibility into where AI is deployed: 63% of practitioners say they do not know where AI is being used in their organizations, exposing a gap in governance and oversight.

The report argues that security must be built into the development pipeline, with continuous testing and validation to keep up with AI's speed. Developers need supportive measures, such as automated security checks, so they can innovate safely without slowing down. Handled this way, AI adoption becomes an opportunity to address existing chaos by improving visibility and governance throughout the workflow, ultimately aligning security and development around shared objectives.
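To make the idea of an automated security check in the pipeline concrete, here is a minimal sketch of a CI gate that runs a static security scanner over a repository and fails the build when findings are reported. The report does not prescribe specific tooling; the use of the open-source Python scanner Bandit, the `src` directory, and the zero-findings threshold are assumptions for illustration.

```python
"""Minimal CI security-gate sketch (assumes Bandit is installed: `pip install bandit`).

Scans the repository for common security issues so that insecure code,
including AI-generated code, is flagged before it is merged.
"""
import json
import subprocess
import sys


def run_security_scan(target_dir: str = "src") -> int:
    """Run Bandit recursively over target_dir and return the number of findings."""
    # -r: recurse into the directory; -f json: machine-readable output.
    # Bandit exits non-zero when it finds issues, so we inspect its output
    # instead of treating a non-zero exit code as a subprocess error.
    result = subprocess.run(
        ["bandit", "-r", target_dir, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout or "{}")
    return len(report.get("results", []))


if __name__ == "__main__":
    findings = run_security_scan()
    if findings > 0:
        print(f"Security gate failed: {findings} potential issue(s) found.")
        sys.exit(1)  # non-zero exit fails the CI job
    print("Security gate passed: no findings.")
```

Run as a step in an existing CI job, a gate like this adds continuous validation without requiring developers to change how they work, which is the kind of supportive, non-blocking measure the report calls for.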