The confused deputy problem is a significant risk in agentic AI systems, where multiple agents interact to produce a result. It occurs when a lower-privileged user or machine tricks a higher-privileged entity (the "deputy") into exposing sensitive data or performing an unauthorized action on its behalf. In multi-agent generative AI workflows, the interconnected and complex nature of these systems increases the opportunities for such attacks: each agent-to-agent call is a boundary where the original requester's identity can be lost.

To mitigate this risk, organizations should run these systems in dynamic environments built on automated workflows, infrastructure as code, and identity-based security, so that the identity and permissions of the original requester are enforced at every privileged boundary. This approach also lets teams respond quickly when a problem is found: tear down and destroy a compromised environment, then rebuild it with tighter controls. Automation of this kind can significantly improve mean time to resolve (MTTR), reduce the attack surface, and improve the cost and risk profile of AI deployments.
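The failure mode and its identity-based fix can be sketched in a few lines. This is a minimal, hypothetical example: the function names, the in-memory `SECRETS` store, and the `PERMISSIONS` table are all illustrative, not a real agent framework API. The vulnerable deputy acts with its own full privileges, while the safe version propagates and checks the original requester's identity.

```python
# Hypothetical privileged "deputy" tool that agents call on behalf of users.
# SECRETS and PERMISSIONS stand in for a real data store and policy engine.
SECRETS = {"payroll.csv": "salary data"}
PERMISSIONS = {"alice": {"payroll.csv"}, "bob": set()}

def read_document_vulnerable(doc: str) -> str:
    # The deputy runs with full privileges and never asks *who* originally
    # requested the document -- the classic confused deputy flaw.
    return SECRETS[doc]

def read_document_safe(doc: str, on_behalf_of: str) -> str:
    # Identity-based security: the end user's identity is propagated through
    # the agent chain and enforced at the privileged boundary.
    if doc not in PERMISSIONS.get(on_behalf_of, set()):
        raise PermissionError(f"{on_behalf_of} may not read {doc}")
    return SECRETS[doc]

# A low-privileged caller ("bob") can trick the vulnerable deputy:
print(read_document_vulnerable("payroll.csv"))  # leaks salary data

# With identity propagation, the same request on bob's behalf is denied:
try:
    read_document_safe("payroll.csv", on_behalf_of="bob")
except PermissionError as err:
    print(err)
```

The key design choice is that authorization is checked against the original requester, not against the privileges of the agent making the call, so inserting more agents between the user and the resource does not widen access.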