Google's Gemini 2.5 Pro represents a notable advancement in multimodal AI, with capabilities such as extended context windows, multimodal input processing (text, images, audio, and video), enhanced reasoning ("thinking") modes, and function calling. Each of these features widens the model's attack surface, which is why tools like Promptfoo are used for systematic vulnerability testing through adversarial red teaming.

The guide outlines procedures for setting up a testing environment, generating adversarial test cases, executing evaluations, and analyzing the resulting vulnerability reports, emphasizing thorough security testing before deployment. Key security considerations include the enlarged attack surface of long token contexts, injection vectors hidden in multimodal inputs, resource-exhaustion and denial-of-service risks in extended thinking modes, and the real-world side effects of function calls that reach external systems.

The guide also covers benchmarking Gemini 2.5 Pro against other models and suggests custom test cases targeting Gemini-specific features, including guidance on securing function-calling capabilities. It maps findings to security frameworks such as the OWASP Top 10 for LLM Applications, the NIST AI Risk Management Framework, and the EU AI Act. Users are encouraged to integrate regular testing and monitoring into their CI/CD pipelines to track and improve security posture over time.
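As a concrete starting point, a Promptfoo red-team configuration for Gemini 2.5 Pro might look like the sketch below. This is a minimal, illustrative example rather than a definitive setup: the `google:gemini-2.5-pro` provider id and the plugin and strategy names follow current Promptfoo conventions but can vary between versions, so verify them against the output of `promptfoo redteam init` for your install.

```yaml
# promptfooconfig.yaml -- minimal red-team sketch (verify provider,
# plugin, and strategy names against your Promptfoo version)
targets:
  - id: google:gemini-2.5-pro   # assumes GOOGLE_API_KEY is set in the environment
    label: gemini-2.5-pro

redteam:
  # Hypothetical application description; tailor to your own use case.
  purpose: >-
    Customer-support assistant with function-calling access to order
    lookup and refund tools.
  numTests: 5                   # test cases generated per plugin
  plugins:
    - harmful                   # harmful-content probes
    - pii                       # personal-data leakage
    - excessive-agency          # unsafe autonomous tool use
    - prompt-extraction         # system-prompt disclosure
  strategies:
    - jailbreak                 # iterative jailbreak attempts
    - prompt-injection          # injected instructions in user content
```

From this config, the usual flow is `promptfoo redteam generate` to synthesize the adversarial test cases, `promptfoo redteam eval` to run them against the target model, and `promptfoo redteam report` to review the vulnerability report; the same commands can run headlessly in a CI/CD job to track regressions over time.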