Anthropic recently reported on a state-level cyberattack, allegedly carried out by a Chinese state-sponsored group, that was executed largely autonomously by AI agents using Claude Code. Although the attack succeeded in only a small number of cases, it highlights the growing threat AI poses in cybersecurity, particularly the ease with which smaller, less-resourced groups can now mount sophisticated attacks. In response, Anthropic plans to strengthen its detection methods, but distinguishing offensive from defensive uses of AI is difficult: legitimate security work often relies on offensive techniques to test defenses. The episode underscores the challenge of balancing AI's constructive and destructive capabilities, with no straightforward solutions in sight. As AI systems advance, security teams will need to leverage the same technologies as attackers to identify and mitigate vulnerabilities preemptively.