Why Model-Level Defenses Aren’t Enough for MCP
Blog post from Descope
The Model Context Protocol (MCP), a standard for connecting AI systems with external tools and data, has rapidly gained popularity but faces security challenges reminiscent of the early internet. At the Descope Global MCP Hackathon, Andre Landgraf of Databricks demonstrated attacks such as instruction drift, context poisoning, and prompt injection. These succeed because language models receive instructions and data through the same natural-language channel and cannot reliably tell them apart, which is why model-level protections alone cannot close the gap. Landgraf instead argues for architectural defenses, in particular authorization layers that govern tool access as well as data access.

While MCP promises to transform workflows, its security architecture, including OAuth 2.1 and fine-grained access controls, remains complex and calls for specialized expertise before production deployment. Landgraf's conclusion is that security must be inherent to the architecture: developers should anticipate sophisticated attacks and lean on emerging specifications to build robust AI systems.
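To make the authorization-layer idea concrete, here is a minimal sketch in TypeScript. The tool names, scope strings, and the `authorizeToolCall` helper are hypothetical illustrations, not part of the MCP specification; a real deployment would validate OAuth 2.1 access tokens and map their granted scopes to tool permissions in the same spirit.

```typescript
// Minimal sketch of a tool-level authorization gate for an MCP server.
// ToolCall, AccessToken, requiredScopes, and authorizeToolCall are
// illustrative names, not part of the MCP spec or any SDK.

type ToolCall = { name: string; arguments: Record<string, unknown> };
type AccessToken = { subject: string; scopes: string[] };

// Fine-grained policy: each tool declares the OAuth scopes it requires.
const requiredScopes: Record<string, string[]> = {
  read_document: ["docs:read"],
  send_email: ["mail:send"],
  delete_record: ["db:write", "db:delete"],
};

class AuthorizationError extends Error {}

// Gate every tool invocation on the caller's token, independent of what
// the model "wants" to do. A prompt-injected instruction to call
// delete_record fails here unless the token already carries the scopes.
function authorizeToolCall(token: AccessToken, call: ToolCall): void {
  const needed = requiredScopes[call.name];
  if (needed === undefined) {
    throw new AuthorizationError(`Unknown tool: ${call.name}`);
  }
  const missing = needed.filter((scope) => !token.scopes.includes(scope));
  if (missing.length > 0) {
    throw new AuthorizationError(
      `Token for ${token.subject} lacks scopes: ${missing.join(", ")}`
    );
  }
}

// Example: a model compromised by context poisoning requests a
// destructive tool, but the user's token only permits reading.
const token: AccessToken = { subject: "user@example.com", scopes: ["docs:read"] };
try {
  authorizeToolCall(token, { name: "delete_record", arguments: { id: "42" } });
} catch (err) {
  console.error((err as Error).message); // blocked before the tool runs
}
```

The point of this design is that enforcement happens outside the model: even a fully hijacked prompt can never exceed what the user's token was granted, which is exactly the kind of architectural defense the talk calls for.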