The zero-trust agent: why your AI needs a sandbox, not a blank check
Blog post from Upsun
Upsun introduces a "zero-trust" framework for AI agents, using isolated, production-identical preview environments to contain the security risks of giving AI unrestricted access to cloud infrastructure. Standard AI integrations often require high-privilege tokens, and a single mishandled call with such a token can cause catastrophic configuration changes.

Through environment-level scoping and container isolation, Upsun lets AI agents propose and test changes in secure, isolated clones of production, so experiments never impact the live site. This enables graduated trust: an agent must prove its logic in a sandbox before it is granted permission to modify the production environment. The platform supports a "propose-and-test" workflow in which AI suggestions are validated in a byte-level clone of the production setup, and only successful changes are reviewed and merged by human teams.

Because Upsun's approach is declarative and Git-driven, every action an AI takes is version-controlled and auditable, balancing agent autonomy with governance to support sustainable, high-velocity innovation.
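To make the environment-scoping idea concrete, here is a minimal Python sketch of a graduated-trust check. This is illustrative only: the post does not describe Upsun's actual token model, and the `AgentToken` type, action names, and environment names below are all assumptions invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentToken:
    """Hypothetical credential scoped to a single environment."""
    environment: str          # the only environment this token may touch
    can_merge: bool = False   # merging toward production stays off by default

def authorize(token: AgentToken, target_env: str, action: str) -> bool:
    """Allow an action only inside the token's own sandbox environment."""
    if token.environment != target_env:
        return False                      # no cross-environment access
    if action == "merge":
        return token.can_merge            # graduated trust: earned, not given
    return action in {"deploy", "test", "read_logs"}

# An agent scoped to its preview branch can experiment there...
sandbox = AgentToken(environment="preview/ai-fix-123")
print(authorize(sandbox, "preview/ai-fix-123", "deploy"))  # True
# ...but cannot touch production, or merge, without human escalation.
print(authorize(sandbox, "production", "deploy"))          # False
print(authorize(sandbox, "preview/ai-fix-123", "merge"))   # False
```

The design choice mirrors the post's thesis: the default answer is "no", and the blast radius of a compromised or confused agent is bounded by the sandbox its token names.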