Smithery AI provides a centralized registry for Model Context Protocol (MCP) servers, letting large language model clients discover, install, and manage external servers that expose new tools and APIs. MCP servers can be deployed in two modes: hosted/remote, where they run on Smithery's infrastructure and are reached over the web, and local, where the server is installed on the user's machine via the Smithery CLI. For local installations, users supply API tokens through the install command, and the server is configured to read them from environment variables or a configuration file. Security is emphasized: tokens are stored locally even when hosted MCPs are used, and credentials should not be entered into untrusted configuration fields. The platform's data policy keeps storage of usage data on Smithery's side to a minimum, with tokens treated as ephemeral in both local and hosted scenarios.
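
To make the local configuration step concrete, the sketch below shows one way a locally installed MCP server might resolve its API token, preferring an environment variable and falling back to a configuration file. The names used here (`EXAMPLE_API_TOKEN`, `config.json`, `apiToken`) are illustrative assumptions rather than Smithery-defined identifiers; the actual variable and file names are chosen by each individual server.

```typescript
// Minimal sketch: resolve a locally supplied API token for an MCP server.
// EXAMPLE_API_TOKEN and config.json are hypothetical names used for illustration.
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

interface LocalConfig {
  apiToken?: string;
}

export function loadToken(configDir: string = process.cwd()): string {
  // Prefer an environment variable, which an installer or client launcher can set.
  const fromEnv = process.env.EXAMPLE_API_TOKEN;
  if (fromEnv) {
    return fromEnv;
  }

  // Fall back to a local configuration file written at install time.
  const configPath = join(configDir, "config.json");
  if (existsSync(configPath)) {
    const config: LocalConfig = JSON.parse(readFileSync(configPath, "utf-8"));
    if (config.apiToken) {
      return config.apiToken;
    }
  }

  // The token never leaves the machine; failing loudly avoids silent misconfiguration.
  throw new Error("No API token found: set EXAMPLE_API_TOKEN or add apiToken to config.json");
}
```

Reading the token once at startup, and keeping it out of command-line arguments and logs, fits the ephemeral-token posture described above.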
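
On the server side, a locally installed MCP server launched by a client usually communicates over stdio. The following sketch assumes the `@modelcontextprotocol/sdk` TypeScript package (part of the MCP ecosystem, not Smithery-specific) together with a hypothetical upstream API; the tool name, URL, and environment variable are illustrative only.

```typescript
// Sketch of a local MCP server exposing one tool that calls an upstream API
// with a locally supplied token. The upstream URL and tool name are made up.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const API_TOKEN = process.env.EXAMPLE_API_TOKEN ?? "";

const server = new McpServer({ name: "example-local-server", version: "0.1.0" });

// Register a single tool; the client discovers it through the MCP handshake.
server.tool(
  "whoami",
  { verbose: z.boolean().optional() },
  async ({ verbose }) => {
    const res = await fetch("https://api.example.com/me", {
      headers: { Authorization: `Bearer ${API_TOKEN}` },
    });
    const body = await res.text();
    return {
      content: [{ type: "text", text: verbose ? body : `HTTP ${res.status}` }],
    };
  },
);

// Local servers started by an LLM client typically talk over stdio,
// while hosted/remote servers are reached over the web instead.
await server.connect(new StdioServerTransport());
```

The same server code can sit behind either deployment mode; what changes is whether the token is injected by the user's local environment or passed through per request when the server is hosted.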