Promptfoo is a tool used to run standardized cybersecurity evaluations against AI models, such as those from OpenAI, Ollama, and HuggingFace, to assess their vulnerability to prompt injection attacks. It allows testing not just on base models but also on applications that wrap these models, recognizing that behaviors can vary depending on implementation. Utilizing Meta's CyberSecEval benchmark, Promptfoo evaluates models across various languages and prompt injection techniques, providing reports on their ability to withstand such attacks. The process involves sending crafted prompts to models and using a judge LLM to assess whether the injection was successful. Even state-of-the-art models show significant vulnerability rates, highlighting the importance of regular testing and comparison across different models to ensure security. Promptfoo supports various providers and offers advanced configuration options, allowing users to customize tests or target specific applications. Regular evaluation, combined with human oversight and adherence to security best practices, is essential for maintaining secure AI systems.