CodeQL team uses AI to power vulnerability detection in code
Blog post from GitHub
Artificial intelligence (AI) is transforming both technology and security, and GitHub applies it on two fronts: GitHub Copilot ships with a security filter that blocks suggestions containing common coding vulnerabilities, while the CodeQL team uses AI to speed up the modeling work behind vulnerability detection.

CodeQL's coverage depends on models of the APIs that analyzed code calls: an unmodeled API means untracked data flow, and untracked data flow means false negatives. By using large language models (LLMs) to generate candidate API models automatically, the team has significantly reduced false negatives and broadened CodeQL's detection capabilities. This approach recently led to the discovery of a new CVE in Gradle.

GitHub continues to integrate AI into security testing, aiming both to improve its security offerings and to enable variant analysis at scale with tools like Multi-Repository Variant Analysis (MRVA), which lets a single CodeQL query scan across many repositories at once.
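To make the "API modeling" idea concrete: CodeQL API models are commonly expressed in its "models as data" format as small YAML data extensions. The fragment below is an illustrative sketch, not taken from the post; the pack name, method, and provenance value are assumptions.

```yaml
# Illustrative CodeQL "models as data" extension (assumed example, not from the post).
# It marks the first argument of java.sql.Statement.executeQuery as a SQL-injection sink,
# so tainted data reaching that argument is flagged by SQL-injection queries.
extensions:
  - addsTo:
      pack: codeql/java-all        # target CodeQL pack (assumed)
      extensible: sinkModel
    data:
      # columns: package, type, subtypes, name, signature, ext, input, kind, provenance
      - ["java.sql", "Statement", True, "executeQuery", "(String)", "",
         "Argument[0]", "sql-injection", "ai-generated"]
```

Generating rows like this for unmodeled APIs is exactly the kind of repetitive, pattern-heavy task that an LLM pipeline can automate; the provenance column (here the assumed value "ai-generated") lets machine-produced models be distinguished from hand-written ones.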