AI agents are increasingly used in enterprise settings for automation and decision-making, but their autonomous nature introduces security challenges that traditional systems are not equipped to handle. Communication protocols such as the Model Context Protocol (MCP) and Agent-to-Agent (A2A) have been developed to standardize interactions, yet they largely leave authorization as an implementation-specific concern, relying on broad token mechanisms that lack granularity. This can lead to security weaknesses such as insufficient granularity of access control, privilege persistence, and unauthorized access stemming from inadequate token management and poor propagation of revocations. The A2A protocol, for example, relies on broad JSON-RPC scopes and defines no mechanism for user consent, which can result in unauthorized data propagation and consent fatigue. While existing protocols recommend best practices for authorization, they do not enforce them, leaving agentic systems potentially unsafe without a centralized authorization layer that manages permissions dynamically and securely.
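To make the contrast with broad bearer tokens concrete, the sketch below shows one possible shape for such a centralized authorization layer: a policy decision point that issues short-lived, per-resource grants to agents and is consulted before every tool call, so revocation takes effect immediately. This is a minimal illustration, not part of the MCP or A2A specifications; the `AuthorizationService` and `Grant` names, the resource strings, and the 15-minute default lifetime are all hypothetical choices made for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical centralized authorization layer for agent tool calls.
# Instead of a single broad bearer token, each agent holds short-lived,
# per-resource grants that can be revoked centrally.

@dataclass
class Grant:
    agent_id: str
    resource: str               # e.g. "crm.contacts" (illustrative name)
    actions: frozenset          # e.g. {"read"}
    expires_at: datetime
    revoked: bool = False

class AuthorizationService:
    """Central policy decision point consulted before every tool invocation."""

    def __init__(self) -> None:
        self._grants: list[Grant] = []

    def issue(self, agent_id: str, resource: str, actions: set,
              ttl: timedelta = timedelta(minutes=15)) -> Grant:
        # Grants are scoped to one resource and a small action set, and expire quickly.
        grant = Grant(agent_id, resource, frozenset(actions),
                      datetime.now(timezone.utc) + ttl)
        self._grants.append(grant)
        return grant

    def revoke_agent(self, agent_id: str) -> None:
        # Revocation is visible to all subsequent checks, avoiding the
        # stale-credential problem of long-lived bearer tokens.
        for g in self._grants:
            if g.agent_id == agent_id:
                g.revoked = True

    def is_allowed(self, agent_id: str, resource: str, action: str) -> bool:
        now = datetime.now(timezone.utc)
        return any(
            g.agent_id == agent_id
            and g.resource == resource
            and action in g.actions
            and not g.revoked
            and g.expires_at > now
            for g in self._grants
        )

# Usage: the agent runtime asks the service before forwarding a tool call.
authz = AuthorizationService()
authz.issue("sales-agent", "crm.contacts", {"read"})

print(authz.is_allowed("sales-agent", "crm.contacts", "read"))    # True
print(authz.is_allowed("sales-agent", "crm.contacts", "delete"))  # False: action not granted
authz.revoke_agent("sales-agent")
print(authz.is_allowed("sales-agent", "crm.contacts", "read"))    # False: grant revoked
```

The key design point is that the permission decision is externalized from the agent and from the transport protocol, so granularity (per resource, per action) and revocation semantics are enforced in one place rather than recommended to each implementer.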