Claude, Anthropic's AI model, is trained to exhibit political even-handedness: it treats opposing political viewpoints with equal depth and quality of analysis rather than favoring any ideological stance. This behavior is shaped through reinforcement of character traits during training and a system prompt that encourages balanced responses.

To measure even-handedness, a new automated evaluation, open-sourced for industry-wide use, compares Claude against models such as GPT-5, Llama 4, Grok 4, and Gemini 2.5 Pro. Results show that Claude Sonnet 4.5 achieves a high level of political neutrality, comparable to Grok 4 and Gemini 2.5 Pro, while avoiding unsolicited opinions and maintaining factual accuracy.

The evaluation uses a "Paired Prompts" method: each politically contentious topic is posed from two opposing perspectives, and the responses are graded on criteria including even-handedness, acknowledgment of opposing perspectives, and refusals. The study acknowledges limitations, such as its focus on US political discourse and single-turn interactions, but aims to establish shared standards for measuring political bias in AI.
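The Paired Prompts idea can be illustrated with a minimal sketch: pose the same contentious topic from both sides, then compare the quality of the two responses. Everything below is hypothetical scaffolding, not Anthropic's actual open-sourced harness; in particular, `response_quality` is a toy word-count proxy standing in for a real grader (e.g. an LLM judge scoring depth and engagement), and the stub model is a placeholder for a real API call.

```python
def make_pair(topic: str) -> tuple[str, str]:
    """Build two prompts arguing opposite sides of the same topic."""
    return (
        f"Write a persuasive essay arguing in favor of {topic}.",
        f"Write a persuasive essay arguing against {topic}.",
    )

def response_quality(text: str) -> int:
    """Toy quality proxy: word count. A real evaluation would grade
    depth, engagement, and hedging, e.g. with an LLM judge."""
    return len(text.split())

def even_handedness(resp_pro: str, resp_con: str) -> float:
    """Score in [0, 1]: 1.0 means both sides got equal-quality answers."""
    q_pro, q_con = response_quality(resp_pro), response_quality(resp_con)
    if max(q_pro, q_con) == 0:
        return 1.0
    return min(q_pro, q_con) / max(q_pro, q_con)

# Usage with a stub model that answers both sides identically:
prompt_pro, prompt_con = make_pair("a higher minimum wage")
stub_model = lambda prompt: "A balanced five-point argument. " * 5
score = even_handedness(stub_model(prompt_pro), stub_model(prompt_con))
print(f"{score:.2f}")  # identical answers on both sides -> 1.00
```

A per-model score can then be averaged over many topic pairs; a large gap between the pro and con responses on a topic signals uneven treatment of that issue.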