Identify untested code across every level of your codebase
Blog post from Datadog
As organizations adopt AI-assisted coding and code changes accelerate, maintaining test coverage becomes harder, and untested code is more likely to slip into repositories. Datadog Code Coverage addresses this by providing multi-level visibility into test coverage, letting teams identify untested additions at every level, from whole repositories down to individual lines of code.

The tool offers a unified view of coverage across platforms such as GitHub and GitLab, making it easy to spot consistently untested areas and revealing gaps at both the file and service level. Because it integrates directly with source code repositories, developers can see line-level annotations marking untested sections and address those gaps efficiently.

This visibility matters especially for AI-generated code, which may be syntactically correct yet lack supporting tests. By surfacing coverage gaps early, Datadog Code Coverage reduces the risk of reliability issues as the codebase evolves.
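To make the idea of line-level coverage concrete, here is a minimal sketch of how a coverage tool can record which lines of a function actually execute under test and flag the rest as untested. This is not Datadog's implementation; it is a self-contained illustration using Python's standard-library tracing hook, with a hypothetical `tax` function standing in for application code:

```python
import inspect
import sys

def tax(amount, exempt=False):
    if exempt:
        return 0.0              # this branch is never exercised below
    return amount * 0.2

executed = set()

def tracer(frame, event, arg):
    # Record the line number of every line executed inside tax()
    if event == "line" and frame.f_code is tax.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
tax(100)                        # exercises only the non-exempt path
sys.settrace(None)

# Compare executed lines against the function's body lines
source_lines, start = inspect.getsourcelines(tax)
body = set(range(start + 1, start + len(source_lines)))
untested = sorted(body - executed)
print("untested lines:", untested)   # the `return 0.0` line
```

Real coverage tools such as coverage.py do essentially this (with far more care around branches, threads, and reporting formats), producing the per-line data that a service like Datadog Code Coverage can then aggregate and annotate in pull requests.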