Lakera and Cohere Set the Bar for New Enterprise LLM Security Standards
Blog post from Lakera
Lakera and Cohere have partnered to establish new security standards for Large Language Models (LLMs) amid growing concern about threats such as prompt injection attacks, data leakage, and toxic output. Recognizing that these risks affect not only LLM providers but also application developers and end users, the two companies have collaborated on resources including the LLM Security Playbook and the Prompt Injection Attacks Cheat Sheet.

Their joint work also includes red-teaming exercises to surface vulnerabilities in LLMs, which inform concrete mitigation strategies. Both teams have taken part in community initiatives such as the Generative Red Teaming AI Challenge at DEF CON 31, underscoring that securing AI applications is a community-wide effort.

Cohere, known for its enterprise AI platform, and Lakera, a leader in AI security solutions, are committed to refining security practices and tracking emerging threats so that LLMs can be deployed safely at scale.