Codex Models Now Available in Warp
Blog post from Warp
Warp has integrated the latest OpenAI Codex models, which achieve state-of-the-art coding performance, roughly a 3-5% boost over the corresponding GPT models on coding tasks. Earlier Codex releases had not worked well inside Warp's environment, but strong user demand and recent model improvements, combined with integration work on Warp's side, made the pairing effective.

Central to that work was support for the `apply_patch` tool's V4A patch format, which improves file-editing reliability, especially in complex refactors spanning multiple files. Supporting V4A required changes throughout Warp's stack to handle operations such as file deletions and renames; along the way, Warp uncovered a bug in Codex CLI's V4A patch parser and reported it for correction.

Warp also made prompt-level adjustments, including renaming tools to better align with what the models expect and removing preamble prompts. These changes stabilized tool invocation and reduced error rates in long coding sessions.

Warp continues to support alternative models for seamless fallback and invites user feedback to further optimize how Codex is used within its workflows.
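To illustrate why deletions and renames forced stack-wide changes, consider how a V4A patch marks each file operation with a header line. The sketch below is a hypothetical illustration based on the publicly documented `apply_patch` header markers (`*** Add File:`, `*** Update File:`, `*** Move to:`, `*** Delete File:`); it is not Warp's or Codex CLI's actual parser, and the helper name `list_operations` is invented for this example:

```python
# Hypothetical sketch: enumerate file operations in a V4A-style patch.
# Header markers follow the publicly documented apply_patch format; the
# parsing logic here is illustrative, not Warp's real implementation.

def list_operations(patch: str):
    """Return (op, path, new_path) tuples for each file the patch touches."""
    ops = []
    lines = patch.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("*** Add File: "):
            ops.append(("add", line[len("*** Add File: "):], None))
        elif line.startswith("*** Delete File: "):
            ops.append(("delete", line[len("*** Delete File: "):], None))
        elif line.startswith("*** Update File: "):
            path = line[len("*** Update File: "):]
            new_path = None
            # A rename is an update whose header is followed by "Move to".
            if i + 1 < len(lines) and lines[i + 1].startswith("*** Move to: "):
                new_path = lines[i + 1][len("*** Move to: "):]
            ops.append(("rename" if new_path else "update", path, new_path))
    return ops

patch = """*** Begin Patch
*** Update File: src/old_name.rs
*** Move to: src/new_name.rs
@@ fn main()
-    println!("hi");
+    println!("hello");
*** Delete File: src/unused.rs
*** End Patch"""

for op in list_operations(patch):
    print(op)
```

Because a single patch can rename one file and delete another, any tool consuming it has to track filesystem-level operations, not just line edits, which is why Warp's whole stack needed updating.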