Fixing security vulnerabilities with AI
Blog post from GitHub
Copilot Autofix is now generally available, offering AI-powered suggestions for fixing security vulnerabilities detected by GitHub's code scanning, starting with JavaScript and TypeScript. The tool pairs CodeQL, GitHub's semantic code analysis engine, with a large language model (LLM): CodeQL identifies the vulnerability, and the LLM generates a suggested code edit.

Autofix describes each problem and its fix strategy in natural language, and developers can commit, dismiss, or modify the suggestion directly within the pull request; a hypothetical example of the kind of fix involved appears below.

Under the hood, an LLM prompt is constructed from each CodeQL alert, combining the relevant code snippets with detailed instructions for fixing the vulnerability. The model's responses are then run through post-processing heuristics that check the suggested edits for accuracy before they are shown to the developer. Sketches of both steps follow the example below.

GitHub has built a robust evaluation framework to test and improve autofix, reporting significant success rates while reducing computational costs. Because suggested fixes appear alongside code scanning alerts, users get an integrated experience without needing to adjust their workflows. GitHub also emphasizes security and privacy, with safeguards against potential AI-related risks, and collects anonymized telemetry to refine autofix's utility as it expands to more languages and use cases.
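To make the scenario concrete, here is a hypothetical TypeScript example of the kind of alert-and-fix pair involved: a reflected XSS flaw of the sort CodeQL's js/reflected-xss query flags, followed by the kind of minimal edit an autofix suggestion might propose. The route paths and the escapeHtml helper are illustrative, not GitHub's actual output.

```typescript
// A hypothetical Express route of the kind CodeQL's reflected-XSS query flags.
import express from "express";

const app = express();

// BEFORE: user-controlled input flows directly into the HTML response
// (reflected cross-site scripting).
app.get("/greet", (req, res) => {
  res.send(`<h1>Hello, ${req.query.name}</h1>`);
});

// AFTER: the kind of minimal edit an autofix suggestion might propose --
// HTML-escape the untrusted value before interpolating it.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

app.get("/greet-fixed", (req, res) => {
  res.send(`<h1>Hello, ${escapeHtml(String(req.query.name ?? ""))}</h1>`);
});

app.listen(3000);
```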
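The prompt-construction step can be sketched as follows, assuming a simplified alert shape; the CodeQLAlert interface and buildPrompt function are hypothetical names for illustration, not GitHub's internal API.

```typescript
// A minimal sketch of assembling an LLM prompt from a code scanning alert.
// CodeQLAlert and buildPrompt are hypothetical; GitHub's real prompt format
// is not public.
interface CodeQLAlert {
  ruleId: string;     // e.g. "js/sql-injection"
  message: string;    // the alert's natural-language description
  file: string;
  startLine: number;
  endLine: number;
  snippet: string;    // the flagged source region plus surrounding context
}

function buildPrompt(alert: CodeQLAlert): string {
  return [
    `A CodeQL scan reported ${alert.ruleId} in ${alert.file}` +
      ` (lines ${alert.startLine}-${alert.endLine}).`,
    `Alert message: ${alert.message}`,
    "Relevant code:",
    alert.snippet,
    "Explain the vulnerability and propose a minimal code edit that fixes it",
    "without changing the program's intended behavior.",
  ].join("\n");
}
```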
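Likewise, one plausible post-processing heuristic is a syntactic sanity check: reject any suggested edit that no longer parses. The sketch below assumes the candidate edit has already been applied to produce patchedSource, and uses the TypeScript compiler's transpileModule, which reports syntactic diagnostics for a standalone file; the surrounding shape is illustrative, since GitHub has not published its exact heuristics.

```typescript
// A minimal sketch of one possible post-processing check: drop any candidate
// fix whose patched source fails to parse. GitHub's actual heuristics are
// not public; this illustrates the general idea only.
import * as ts from "typescript";

function suggestionParses(patchedSource: string): boolean {
  // transpileModule reports syntactic (not type-level) diagnostics when
  // reportDiagnostics is set, which is enough to catch malformed edits.
  const result = ts.transpileModule(patchedSource, {
    reportDiagnostics: true,
    compilerOptions: { target: ts.ScriptTarget.ES2020 },
  });
  return (result.diagnostics ?? []).length === 0;
}

// Usage: filter candidate fixes before surfacing them to the developer.
console.log(suggestionParses("const x: number = 1;")); // true
console.log(suggestionParses("const x: = ;"));         // false
```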