The AI trust gap: Why code verification matters
Blog post from Sonar
AI tools are being adopted across the software development lifecycle, speeding up coding and boosting productivity. But that acceleration has outpaced trust in the code AI produces: although 82% of developers say AI helps them code faster and 71% say it helps them solve complex problems, a staggering 96% do not fully trust the functional correctness of AI-generated code.

That distrust has not translated into rigorous verification. Only 48% of developers consistently check AI-assisted code before committing it, often because of the complexity and effort such reviews demand. Large Language Models (LLMs) compound the problem: they can produce plausible-looking but unreliable code, turning review into a bottleneck and requiring developers to build new skills for evaluating AI-generated code.

The State of Code Developer Survey report explores the implications of this trust gap further, including its impact on technical debt and the different ways junior and senior developers are adapting.
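One lightweight way to narrow the gap between distrust and actual verification is to make checks run automatically before code is committed. The survey does not prescribe any tooling; as a minimal illustrative sketch, a git pre-commit hook could gate every commit, AI-assisted or not, on a test run and a static analysis pass. The specific commands below are placeholders, not part of the report:

```shell
#!/bin/sh
# Illustrative pre-commit hook sketch (save as .git/hooks/pre-commit).
# The actual check commands are placeholders you would swap for your
# own test runner and analyzer (e.g. "npm test", "sonar-scanner").
set -e  # abort the commit as soon as any check fails

run_check() {
  # Print which check is running, then execute it; fail loudly otherwise.
  label="$1"
  shift
  echo "pre-commit: $label"
  "$@" || { echo "pre-commit: '$label' failed, aborting commit" >&2; exit 1; }
}

run_check "unit tests" true       # placeholder: replace with your test command
run_check "static analysis" true  # placeholder: replace with your analyzer

echo "all checks passed"
```

Automating the check removes the per-commit judgment call that, per the survey, more than half of developers currently skip.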