AI governance has become crucial as enterprises increasingly deploy AI systems, including models, applications, and agents, all of which require policies, processes, and controls to ensure safe and compliant operation. AI agents in particular are autonomous and configurable, which introduces distinct risks: exposure of sensitive data and unintended mutations (actions that change systems of record), so a robust governance framework is needed to manage access and accountability. Enterprises can mitigate these risks by adopting the three tenets of governance: access controls, auditing, and human-in-the-loop review, with human-in-the-loop oversight being essential for high-risk actions. Credal is an AI governance and orchestration platform that offers managed agents with built-in auditing and permissions inheritance, but it remains each organization's responsibility to define and manage the risk levels of agent actions. By structuring governance frameworks around read-only, low-risk, and high-risk actions, organizations can align oversight with risk level, ensuring both security and operational efficiency.
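To make the tiered model concrete, here is a minimal sketch of how an organization might encode the read-only / low-risk / high-risk classification and gate high-risk actions behind human approval while auditing everything. This is an illustrative example, not Credal's API; all class names, functions, and tier labels are hypothetical, and the specific tier assignments are assumptions an organization would define for itself.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")  # tenet: auditing


class RiskTier(Enum):
    READ_ONLY = "read_only"   # e.g. searching an internal knowledge base
    LOW_RISK = "low_risk"     # e.g. drafting a reply for later review
    HIGH_RISK = "high_risk"   # e.g. deleting records or emailing externally


@dataclass
class AgentAction:
    name: str
    tier: RiskTier
    actor: str        # user on whose behalf the agent acts (tenet: access)
    payload: dict


def requires_human_approval(action: AgentAction) -> bool:
    # Human-in-the-loop is mandatory only for high-risk actions;
    # read-only and low-risk actions proceed with auditing alone.
    return action.tier is RiskTier.HIGH_RISK


def execute_with_governance(
    action: AgentAction,
    run: Callable[[AgentAction], str],
    approve: Callable[[AgentAction], bool],
) -> Optional[str]:
    # Every request is logged regardless of tier.
    audit_log.info("requested action=%s tier=%s actor=%s",
                   action.name, action.tier.value, action.actor)

    if requires_human_approval(action) and not approve(action):
        audit_log.info("rejected action=%s by human reviewer", action.name)
        return None

    result = run(action)
    audit_log.info("executed action=%s", action.name)
    return result


# Hypothetical usage: a high-risk action that pauses for human approval.
if __name__ == "__main__":
    action = AgentAction(
        name="delete_customer_record",
        tier=RiskTier.HIGH_RISK,
        actor="alice@example.com",
        payload={"record_id": 42},
    )
    execute_with_governance(
        action,
        run=lambda a: f"deleted {a.payload['record_id']}",
        approve=lambda a: input(f"Approve {a.name}? [y/N] ").strip().lower() == "y",
    )
```

The key design point is that the approval hook is injected rather than hard-coded, so each organization can decide what counts as high-risk and who the reviewer is without changing the enforcement path.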