Author
Lakera Team
Word count
534

Summary

Lakera has announced updates to Lakera Guard, its content moderation tool, improving its ability to detect and block violent, dangerous, and illicit content so that AI applications remain secure and compliant. The update strengthens detection across categories such as violence, self-harm, illicit activities, and discussions of firearms and other dangerous weapons, while adding minimal latency. The new detectors are customizable, letting users tailor moderation to their specific needs and providing a robust safety net for both public-facing and enterprise AI platforms. These advancements are part of Lakera's broader effort to enhance AI security; the company also offers expert-recommended policies to further protect AI applications.