GitHub has integrated machine learning (ML) into its code scanning capabilities to improve the detection of security vulnerabilities. The feature builds on the CodeQL analysis engine, which compiles a codebase into a relational database and runs specialized queries against it to surface potential issues. Many serious vulnerabilities arise when untrusted user data flows into a sensitive operation, and CodeQL's security queries catch these flows by explicitly modeling known patterns, sources, and sinks in widely used libraries; a sketch of such a flow appears in the first example below.

Manually modeling libraries is labor-intensive, however, and cannot cover the long tail of open-source packages. To close that gap, GitHub trains ML models on a substantial dataset of examples labeled by earlier versions of the manual queries: snippets the queries flag (or decline to flag) become training data, with features extracted from the surrounding code. Because the models learn general characteristics of vulnerable code rather than a fixed list of known APIs, they can predict vulnerabilities even in unfamiliar libraries that the manual queries have never modeled, as the second example below illustrates.

Repository owners can opt in to the ML-generated alerts, which are marked "Experimental" in the code scanning interface and can be filtered on that label. In initial evaluations, the models achieved approximately 80% recall and 60% precision when identifying true positives missed by the manual queries (a worked reading of those figures follows the examples below), and work is ongoing to extend the models to more programming languages and to improve both numbers.
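To make the source-to-sink pattern concrete, here is a minimal Python sketch of the kind of flow a taint-tracking query is written to flag. The route, database, and schema are invented for illustration, and GitHub's ML alerts initially targeted other language ecosystems; the shape of the flow is what matters.

```python
# Hypothetical Flask route: untrusted input (a "source") reaches a SQL
# statement (a "sink") with no sanitization, a classic injection pattern.
import sqlite3
from flask import Flask, request

app = Flask(__name__)

@app.route("/user")
def get_user():
    name = request.args.get("name", "")  # source: attacker-controlled input
    conn = sqlite3.connect("app.db")
    # Sink: the input is interpolated directly into the SQL string, so a
    # value like "x' OR '1'='1" changes the meaning of the query.
    rows = conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()
    # Safe form, which breaks the taint flow and would not be flagged:
    # rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
    return str(rows)
```

Note that a manually written query can only flag this if someone has modeled `conn.execute` as a SQL sink; that per-library modeling effort is exactly the bottleneck the ML models are meant to ease.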
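The following sketch shows the shape of the labeling-and-training loop described above. It is not GitHub's actual pipeline: the toy corpus, the token-level features, and the classifier are all stand-ins, chosen only to show how labels produced by a manual query can train a model that then scores calls into libraries the query never modeled.

```python
# Minimal sketch (under assumed data and features, not GitHub's pipeline):
# snippets labeled by an older, manually written query become training data
# for a classifier that scores unseen snippets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical corpus: (code snippet, label assigned by the manual query).
labeled_snippets = [
    ('db.query("SELECT * FROM t WHERE id=" + req.params.id)', 1),   # flagged
    ('db.query("SELECT * FROM t WHERE id=?", [req.params.id])', 0),  # not flagged
    ('res.send("<h1>" + req.query.name + "</h1>")', 1),
    ('res.send(escapeHtml(req.query.name))', 0),
]
snippets, labels = zip(*labeled_snippets)

# Simple token n-grams stand in for the richer features (enclosing function
# names, parameter names, API shapes) extracted from real code snippets.
model = make_pipeline(
    TfidfVectorizer(token_pattern=r"[A-Za-z_]+", ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(snippets, labels)

# Score a snippet that uses an unfamiliar library the manual query does not
# model; a high score would become an "Experimental" alert candidate.
candidate = 'customOrm.rawQuery("SELECT * FROM t WHERE id=" + req.params.id)'
print(model.predict_proba([candidate])[0][1])
```

The design point this illustrates is that the label source is the existing query suite itself, so no separate hand-annotated training set is needed, at the cost of inheriting any blind spots of the older queries.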
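Finally, a quick worked reading of the reported metrics; the counts below are invented purely to reproduce the quoted figures.

```python
# Back-of-the-envelope computation: counts are illustrative, not GitHub's data.
true_positives = 80   # real vulnerabilities the model correctly flags
false_negatives = 20  # real vulnerabilities the model misses
false_positives = 53  # model alerts that turn out to be benign

recall = true_positives / (true_positives + false_negatives)
precision = true_positives / (true_positives + false_positives)
print(f"recall = {recall:.0%}, precision = {precision:.0%}")  # 80%, 60%
```

In other words, at the reported operating point the models recover roughly four in five of the relevant vulnerabilities, and roughly three in five of the alerts they raise are genuine, which is why the alerts ship under an "Experimental" label while precision is improved.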