The Gandalf project is a gamified approach to AI security that exposed the vulnerabilities of large language models (LLMs) through prompt injection. The game, designed by Lakera AI's Max Mathys, attracted hundreds of users and generated over 40 million prompts, revealing how easily LLMs can be manipulated through cleverly crafted text. The project found that many user prompts were successful attacks that bypassed the LLM's defenses, highlighting the critical need for powerful AI security measures. Vector databases play a crucial role in improving AI security by providing efficient storage, indexing, and retrieval of vector embeddings, which enable various security applications such as analyzing attack patterns, detecting anomalies, and improving the performance of security models. The project showed that basic security measures like simple prompt engineering are not enough to stop these attacks, even more advanced defenses like using an LLM judge proved vulnerable.