Building the MCP Search Tool for any model
Blog post from Warp
Warp has optimized its agent to use the MCP protocol more efficiently, cutting token usage by 26% on tasks that rely on MCP context and by 10% when MCP context is configured but goes unused, without sacrificing quality across supported AI models. MCP is a standard that lets LLM agents interact with user-configurable servers, giving them access to external tools and resources such as issue tracking or database querying.

Previously, Warp preloaded every MCP tool definition into the conversation, which could bloat the context window with data the agent never used. The new approach replaces this with a model-agnostic MCP search subagent that dynamically identifies and loads only the tools and resources relevant to the task at hand. This reduces conversation cost while preserving effectiveness, as verified by an LLM-as-judge evaluation, and is automatically available to all MCP users in Warp.

The improvements, spearheaded by the AI Quality Team, are designed to minimize user costs and maximize conversation quality, reinforcing MCP's role as a standard protocol for retrieving external context.
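The post does not publish Warp's implementation, but the "search instead of preload" pattern can be illustrated with a minimal sketch. Here the agent's context holds only a single search tool; a stand-in relevance function (naive keyword matching, in place of whatever the real subagent does) returns just the tool definitions that match the current task. All names and the catalog below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ToolDef:
    name: str
    description: str

# Hypothetical catalog of tools exposed by the user's configured MCP servers.
CATALOG = [
    ToolDef("jira.create_issue", "Create an issue in an issue tracker"),
    ToolDef("jira.search_issues", "Search issues by keyword or assignee"),
    ToolDef("postgres.run_query", "Run a read-only SQL query against a database"),
    ToolDef("github.open_pr", "Open a pull request on a repository"),
]

def search_tools(query: str, limit: int = 2) -> list[ToolDef]:
    """Naive keyword match standing in for the subagent's relevance search."""
    terms = query.lower().split()
    scored = [
        (sum(t in (d.name + " " + d.description).lower() for t in terms), d)
        for d in CATALOG
    ]
    matches = [(score, d) for score, d in scored if score > 0]
    matches.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in matches[:limit]]

# Only the matching definitions enter the agent's context, instead of
# all four (or, in practice, dozens across many MCP servers).
relevant = search_tools("query the database")
print([d.name for d in relevant])  # → ['postgres.run_query']
```

The token savings come from the gap between the full catalog and the handful of definitions a given task actually needs; when no MCP context is needed at all, only the lightweight search tool occupies the window, which is consistent with the smaller 10% reduction the post reports for that case.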