Closing the Loop: Coding Agents, Telemetry, and the Path to Self-Improving Software
Blog post from Arize
In 2025, coding agents, tools that autonomously write, test, and debug software, have transformed development at a remarkable pace. Tools like Claude Code, Codex, Cursor, and Open Code now produce a significant share of code contributions, and this shift toward agent-assisted coding has outpaced the infrastructure around it, straining traditional software engineering practices.

Coding agents operate in an environment where software behavior emerges from the interaction of code, machine learning models, and natural language prompts, and that environment demands new supporting infrastructure. An agent's efficacy depends on its harness: the structure and feedback mechanisms that make reliable operation possible. Telemetry and trace access sit at the center of that harness, because they are what let teams understand and improve agent behavior.

As coding agents move toward full autonomy, the engineer's focus shifts from writing code to designing environments, establishing feedback loops, and building robust verification mechanisms. Organizations that integrate telemetry and evaluation into their workflows will be better positioned to manage agent-driven development, and their coding agents will function as active participants in the development process rather than as isolated code generators.
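To make the harness idea concrete, here is a minimal sketch of a loop that runs an agent step, records each step and evaluation as trace spans, and feeds evaluation results back to the agent. This is an illustrative toy, not Arize's product API; all names (`Span`, `Trace`, `run_with_telemetry`, the toy agent and evaluator) are hypothetical.

```python
# Hypothetical sketch of a telemetry-instrumented agent harness:
# each agent step and each evaluation is recorded as a span, and
# evaluation feedback is routed back into the next agent attempt.
from dataclasses import dataclass, field


@dataclass
class Span:
    name: str
    attributes: dict = field(default_factory=dict)


@dataclass
class Trace:
    spans: list = field(default_factory=list)

    def record(self, name, **attributes):
        self.spans.append(Span(name, attributes))


def run_with_telemetry(agent_step, evaluate, task, max_attempts=3):
    """Run agent attempts until evaluation passes, capturing a trace
    of every step so the loop can be inspected and improved later."""
    trace = Trace()
    feedback = None
    output = None
    for attempt in range(1, max_attempts + 1):
        output = agent_step(task, feedback)
        trace.record("agent.step", attempt=attempt, output=output)
        passed, feedback = evaluate(output)
        trace.record("eval", attempt=attempt, passed=passed,
                     feedback=feedback)
        if passed:
            break
    return output, trace


# Toy stand-ins for a real coding agent and its test-suite verifier.
def toy_agent(task, feedback):
    # Pretend the agent self-corrects once it sees failing feedback.
    return task + " v2" if feedback else task + " v1"

def toy_eval(output):
    passed = "v2" in output
    return passed, None if passed else "tests failed"

result, trace = run_with_telemetry(toy_agent, toy_eval, "fix bug")
```

In this sketch the trace is the artifact that "closes the loop": the same spans that drive the retry decision are what an observability platform would later use for evaluation and debugging.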