Lakera, a leading security platform for generative AI applications, has launched the AI Model Risk Index, a new standard for evaluating the security of large language models (LLMs) against attacks. This index is designed to measure real-world risk exposure and how effectively these models can maintain their intended behavior under adversarial conditions. It tests LLMs across various industries, including technology, finance, healthcare, law, and education, by simulating real-world attacks and assessing the models' ability to function predictably under threat. Unlike traditional cybersecurity frameworks, the AI Model Risk Index focuses on practical questions about model manipulation and mission-specific rule adherence. It provides quantitative risk measurements, allowing enterprises to compare different AI models' security and track changes over time. The report highlights that newer, more powerful LLM versions may not necessarily be more secure and that all models can potentially be manipulated. Lakera, founded by David Haber, Mateo Rojas-Carulla, and Matthias Kraft in 2021, operates from Zurich and San Francisco and continues to advance AI defenses through research and tools like their viral AI security game, Gandalf.