Fuzzing has long been a standard technique in pentesting, involving the injection of malformed or unexpected inputs to identify application weaknesses, and is particularly effective for web applications in identifying vulnerabilities like SQL injections and buffer overflows. However, when it comes to testing Large Language Model (LLM) applications, traditional fuzzing methods fall short due to the unique and expansive attack surface presented by LLMs, which includes the entire language they are trained on and varies according to their specific use cases. Unlike web applications where static payloads can be iterated to find vulnerabilities, LLMs require dynamic, tailored probes that cater to the application's business logic and specific harm categories. Tools like Promptfoo, which generate adversarial probes based on specific use cases, offer a more effective approach than static payloads for uncovering vulnerabilities in LLMs, and are adaptable to the latest attack methods. The key to successful LLM application testing lies in creating dynamic probes that are customized for the system's purpose, which can lead to more accurate vulnerability assessments compared to traditional fuzzing techniques.