The Bugcrowd Platform has announced updates to its Vulnerability Rating Taxonomy (VRT) to address emerging security challenges in Large Language Models (LLMs). The new version 1.12 introduces a new "AI Application Security" category and "Large Language Model (LLM) Security" subcategory, which provide definitions for vulnerabilities specific to LLMs, such as prompt injection, output handling, training data poisoning, excessive agency/permission manipulation, and more. These updates aim to align with industry-standard definitions, including the OWASP Top 10 for Large Language Model Applications, and will enable hackers to focus on hunting for specific vulns and creating targeted POCs, while program owners can design scope and rewards that produce the best outcomes. The updates are part of Bugcrowd's efforts to meet AI security goals in a scalable, impactful, and economically sensitive way.