Lakera has been featured in the NIST report "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," which classifies adversarial machine learning attacks and the strategies for mitigating them. The report cites Lakera's techniques for defending large language models (LLMs) against prompt injection attacks, underscoring the importance of robust AI security measures. This recognition validates Lakera's work in AI security, and the report itself is a valuable resource for professionals securing AI systems against adversarial threats. Beyond the report, Lakera's team has helped strengthen Dropbox's GenAI initiatives and publishes expert-recommended policies for securing GenAI applications, so users can put protections in place quickly.
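To illustrate the kind of protection such policies describe, below is a minimal sketch of the common pattern of screening user input through a prompt-injection detection service before it reaches an LLM. The endpoint URL, request payload, and response shape are illustrative assumptions for this sketch, not Lakera's documented API.

```python
import os

import requests

# Hypothetical screening endpoint and response shape; these are
# assumptions for illustration, not a documented API.
GUARD_URL = "https://api.example-guard.ai/v1/screen"
API_KEY = os.environ["GUARD_API_KEY"]


def is_safe(user_input: str) -> bool:
    """Return True if the screening service flags no prompt injection."""
    resp = requests.post(
        GUARD_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": user_input},
        timeout=5,
    )
    resp.raise_for_status()
    # Assumed response shape: {"flagged": <bool>, ...}
    return not resp.json().get("flagged", False)


def handle_prompt(user_input: str) -> str:
    """Screen the input first; only forward it to the LLM if it passes."""
    if not is_safe(user_input):
        return "Request blocked: possible prompt injection detected."
    # Forward to the LLM only after screening passes (stubbed here).
    return f"[LLM response to: {user_input!r}]"


if __name__ == "__main__":
    print(handle_prompt("Summarize this document for me."))
```

The key design point is that screening happens before the untrusted input is ever concatenated into the LLM prompt, so a detected injection never reaches the model at all.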