This guide walks through using Promptfoo for adversarial testing (red teaming) to evaluate the safety and security of models hosted on Ollama, using Llama 3.2 3B as the example. It gives step-by-step instructions for setting up the testing environment, configuring the Ollama model as the target, and defining the purpose and plugins used to probe for vulnerabilities. Because the generated adversarial tests are only as effective as the stated purpose, the guide stresses writing a specific, high-quality purpose, and it enables plugins that test for harmful content, PII leakage, false information, and model impersonation. Jailbreak and composite jailbreak strategies deliver the adversarial inputs; the workflow then generates and runs the test cases, grades the responses, and surfaces vulnerabilities in a detailed report. Finally, the guide highlights vulnerabilities commonly seen in Llama models, such as prompt injection and harmful content generation, and suggests mitigations like adding safety constraints, input validation, and output filtering.
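
For orientation, a minimal `promptfooconfig.yaml` along these lines ties the pieces together. The provider ID, purpose text, and the specific plugin and strategy selections below are illustrative assumptions for this sketch, not the guide's exact configuration:

```yaml
# promptfooconfig.yaml -- minimal red team sketch (illustrative values)
targets:
  # Ollama chat provider; assumes `ollama pull llama3.2` has been run locally
  - id: ollama:chat:llama3.2
    label: llama3.2-3b

redteam:
  # A specific, detailed purpose yields more targeted adversarial test cases
  purpose: >-
    A customer support assistant for an online retailer that answers
    questions about orders, returns, and products. It must not reveal
    customer data or produce harmful content.
  plugins:
    - harmful          # harmful content generation
    - pii              # PII leakage
    - hallucination    # false information
    - imitation        # impersonating people or organizations
  strategies:
    - jailbreak            # iterative jailbreak attempts
    - jailbreak:composite  # layered/composite jailbreaks
```

Assuming a config like this, `npx promptfoo@latest redteam run` generates and executes the test cases, and `npx promptfoo@latest redteam report` opens the vulnerability report described above.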