138 blog posts published by month since the start of 2022.

Posts year-to-date: 138 (0 posts by this month last year)
Average posts per month since 2022: 2.9
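(The average is presumably 138 posts spread over the roughly 47 months from January 2022 through November 2025, i.e. 138 / 47 ≈ 2.9; the exact month count depends on how the reporting window is cut off.)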

Post details (2022 to today)

Title Author Date Word count HN points
Outsmarting the Smart: Intro to Adversarial Machine Learning Brain John Aboze Nov 13, 2025 2481 -
Data Loss Prevention (DLP): A Complete Guide for the GenAI Era Lakera Team Nov 13, 2025 1700 -
What is In-context Learning, and how does it work: The Beginner’s Guide Deval Shah Nov 13, 2025 3442 -
Lakera’s Prompt Injection Test (PINT)—A New Benchmark for Evaluating Prompt Injection Solutions Lakera Team Nov 13, 2025 1225 -
Agentic AI Threats: Memory Poisoning & Long-Horizon Goal Hijacks (Part 1) Lakera Team Nov 13, 2025 1877 -
The Ultimate Guide to Deploying Large Language Models Safely and Securely Deval Shah Nov 13, 2025 4272 -
ML Model Monitoring 101: A Guide to Operational Success Armin Norouzi Nov 13, 2025 3452 -
Decoding AI Alignment: From Goals and Threats to Practical Techniques Haziqa Sajid Nov 13, 2025 1904 -
The Expanding Attack Surface of Multimodal LLMs and How to Secure It Pablo Mainar Nov 13, 2025 1223 -
Navigating the AI Regulatory Landscape: An Overview, Highlights, and Key Considerations for Businesses Lakera Team Nov 13, 2025 1589 -
Remote Code Execution: A Guide to RCE Attacks & Prevention Strategies Deval Shah Nov 13, 2025 4013 -
Shadow AI: Harnessing and Securing Unsanctioned AI Use in Organizations Haziqa Sajid Nov 13, 2025 2880 -
Securing AI Agents in Production: A Practical Guide - Nov 13, 2025 386 -
Evaluating Large Language Models: Methods, Best Practices & Tools Armin Norouzi Nov 13, 2025 4592 -
AI Observability: Key to Reliable, Ethical, and Trustworthy AI Brain John Aboze Nov 13, 2025 4573 -
What the AI Past Teaches Us About the Future of AI Security Mateo Rojas-Carulla Nov 13, 2025 901 -
The ELI5 Guide to Retrieval Augmented Generation Blessin Varkey Nov 13, 2025 2595 -
Investing in Lakera to help protect GenAI apps from malicious prompts Lakera Team Nov 14, 2025 229 -
Daniel Graf Joins Lakera as President Lakera Team Nov 14, 2025 520 -
Lakera Earns a Spot on the Financial Times' Tech Champions List for IT & Cyber Security Lakera Team Nov 14, 2025 540 -
Measuring What Matters: How the Lakera AI Model Risk Index Redefines GenAI Security Lakera Team Nov 14, 2025 1640 -
Introducing Lakera Chrome Extension - Privacy Guard for Your Conversations with ChatGPT Lakera Team Nov 14, 2025 915 -
Lakera is heading to Black Hat 2025 David Haber Nov 14, 2025 391 -
Lakera, which protects enterprises from LLM vulnerabilities, raises $20M Lakera Team Nov 14, 2025 228 -
Lakera Featured in 2025 Gartner Market Guide for AI Trust, Risk and Security Management (AI TRiSM) Lakera Team Nov 14, 2025 499 -
While GenAI Adoption Surges, Report Shows Security Preparedness Lags - Nov 14, 2025 900 -
Day Zero: Building a Superhuman AI Red Teamer From Scratch Mateo Rojas-Carulla Nov 14, 2025 1476 -
Lakera Named as Europe's Leader in AI Security by Sifted Lakera Team Nov 14, 2025 408 -
Comprehensive Guide to Large Language Model (LLM) Security Rohit Kundu Nov 14, 2025 6407 -
AI Red Teaming: Securing Unpredictable Systems Lakera Team Nov 14, 2025 2512 -
Lakera Wins the "Startups" Category at the DEKRA Award 2021 Lakera Team Nov 14, 2025 503 -
What Are AI Agents, and How Do They Work? Haziqa Sajid Nov 14, 2025 1733 -
Lakera Guard Expands Enterprise-Grade Content Moderation Capabilities for GenAI Applications Lakera Team Nov 14, 2025 573 -
Help Net Security Names Lakera as One of 2024's Cybersecurity Companies to Watch Lakera Team Nov 14, 2025 466 -
Claude 4 Sonnet: A New Standard for Secure Enterprise LLMs? Rob Parrish Nov 14, 2025 1248 -
The computer vision bias trilogy: Data representativity. Lakera Team Nov 14, 2025 840 -
Reinforcement Learning from Human Feedback (RLHF): Bridging AI and Human Expertise Deval Shah Nov 14, 2025 5584 -
AI Risks: Exploring the Critical Challenges of Artificial Intelligence Rohit Kundu Nov 14, 2025 8245 -
Releasing Canica: A Text Dataset Viewer Lakera Team Nov 14, 2025 918 -
Agentic AI Threats: Over-Privileged Tools & Uncontrolled Browsing (Part 2) Lakera Team Nov 14, 2025 2604 -
Stress-test your models to avoid bad surprises. Mateo Rojas-Carulla Nov 14, 2025 709 -
How Dropbox Uses Lakera Guard to Secure Their LLMs Lakera Team Nov 14, 2025 228 -
What Is AI Security? A Practical Guide to Securing the Future of AI Systems Lakera Team Nov 14, 2025 4258 -
The computer vision bias trilogy: Shortcut learning. Lakera Team Nov 14, 2025 743 -
Free of bias? We need to change how we build ML systems. Lakera Team Nov 14, 2025 1130 -
Why ML testing is crucial for reliable computer vision. Matthias Kraft Nov 14, 2025 1165 -
Why We Need OWASP's AIVSS: Extending CVSS for the Agentic AI Era Steve Giguere Nov 14, 2025 1300 -
Chatbot Security Essentials: Safeguarding LLM-Powered Conversations Emeka Boris Ama Nov 14, 2025 2282 -
What Is Content Moderation for GenAI? A New Layer of Defense Lakera Team Nov 14, 2025 2240 -
Lakera Featured in a NIST Report on AI Security Lakera Team Nov 14, 2025 359 -
Test machine learning the right way: Metamorphic relations. Lakera Team Nov 14, 2025 1069 -
Lakera's CEO Joins the Datadog Cloud Security Lounge Podcast to Talk about LLM security Lakera Team Nov 14, 2025 235 -
Social Engineering: Traditional Tactics and the Emerging Role of AI Rohit Kundu Nov 14, 2025 4838 -
The EU AI Act: A Stepping Stone Towards Safe and Secure AI Lakera Team Nov 14, 2025 646 -
Always active. All ways secure. Lakera unveils new branding. Lakera Team Nov 14, 2025 952 -
Introduction to Large Language Models: Everything You Need to Know for 2025 [+Resources] Avi Bewtra Nov 14, 2025 3853 -
Who Is Gandalf? The AI Challenge That Tests Your Prompting Skills Max Mathys Nov 14, 2025 2759 -
The computer vision bias trilogy: Drift and monitoring. Lakera Team Nov 14, 2025 604 -
AI Security by Design: Lakera's Alignment with MITRE ATLAS Lakera Team Nov 14, 2025 1986 -
Life vs. ImageNet Webinar: Lessons Learnt From Bringing Computer Vision to the Real World Lakera Team Nov 14, 2025 1920 -
From Regex to Reasoning: Why Your Data Leakage Prevention Doesn't Speak the Language of GenAI Lakera Team Nov 14, 2025 1943 -
Prompt Attacks: What They Are and What They're Not - Nov 14, 2025 335 -
Generative AI: An In-Depth Introduction Deval Shah Nov 14, 2025 3343 -
Introduction to Data Poisoning: A 2025 Perspective Lakera Team Nov 14, 2025 3108 -
Lakera Recognized in Gartner's GenAI Security Risks Report Lakera Team Nov 14, 2025 365 -
Test machine learning the right way: Detecting data bugs. Mateo Rojas-Carulla Nov 14, 2025 1197 -
Top 12 LLM Security Tools: Paid & Free (Overview) Deval Shah Nov 14, 2025 3984 -
Foundation Models Explained: Everything You Need to Know Deval Shah Nov 14, 2025 3581 -
Lakera Report: AI Adoption Surges, Security Preparedness Lags Behind David Haber Nov 14, 2025 1085 -
Gandalf the Red: Rethinking LLM Security with Adaptive Defenses Lakera Team Nov 14, 2025 1426 -
Microsoft Features Gandalf in Their Latest AI Security Toolkit Announcement Lakera Team Nov 14, 2025 567 -
The List of 11 Most Popular Open Source LLMs [2025] Armin Norouzi Nov 14, 2025 3549 -
The Ultimate Guide to Prompt Engineering in 2025 Lakera Team Nov 14, 2025 9147 -
Language Is All You Need: The Hidden AI Security Risk Lakera Team Nov 14, 2025 1945 -
The Backbone Breaker Benchmark: Testing the Real Security of AI Agents Lakera Team Nov 14, 2025 2245 -
Lakera Selected as a Swiss Startup to Keep an Eye on in 2024 Lakera Team Nov 14, 2025 514 -
LLM Monitoring: The Beginner's Guide Emeka Boris Ama Nov 14, 2025 3226 -
Reinforcement Learning: The Path to Advanced AI Solutions Deval Shah Nov 14, 2025 5054 -
Lakera Launches the AI Model Risk Index: A New Standard for Evaluating LLM Security Lakera Team Nov 14, 2025 770 -
Lakera at DEFCON31: Trends, Highlights & the State of AI Security Lakera Team Nov 14, 2025 1380 -
AI Risk Management: Frameworks and Strategies for the Evolving Landscape Lakera Team Nov 14, 2025 2375 -
Lakera Guard — Fall '25: Adaptive at Scale Lakera Team Nov 14, 2025 1002 -
AI Safety Unplugged: Key Takeaways and Highlights from the World Economic Forum Lakera Team Nov 14, 2025 898 -
LLM Vulnerability Series: Direct Prompt Injections and Jailbreaks Daniel Timbrell Nov 14, 2025 1349 -
AI Security Trends 2025: Market Overview & Statistics Haziqa Sajid Nov 14, 2025 2515 -
Why testing should be at the core of machine learning development. Lakera Team Nov 14, 2025 906 -
Embracing the Future: A Comprehensive Guide to Responsible AI Deval Shah Nov 14, 2025 3351 -
Regression Testing for Machine Learning: How to Do It Right Lakera Team Nov 14, 2025 1045 -
OpenAI's CLIP in production Daniel Timbrell Nov 14, 2025 494 -
Lakera Guard Enhances PII Detection and Data Loss Prevention for Enterprise Applications Lakera Team Nov 14, 2025 705 -
Jailbreaking Large Language Models: Techniques, Examples, Prevention Methods Blessin Varkey Nov 14, 2025 3414 -
Prompt Injection & the Rise of Prompt Attacks: All You Need to Know Sam Watts Nov 14, 2025 4181 -
Gandalf: Introducing a Sleek New UI and Enhanced AI Security Education Lakera Team Nov 14, 2025 1207 -
Gandalf: Agent Breaker—Think Like a Hacker, Prompt Like a Pro Lakera Team Nov 14, 2025 1289 -
How to Secure MCPs with Lakera Guard Santiago Arias Nov 14, 2025 1435 -
Medical imaging as a serious prospect: Where are we at? Lakera Team Nov 16, 2025 1376 -
Lakera snags $20 million to prevent business Gen AI apps from going haywire and revealing sensitive data Lakera Team Nov 16, 2025 236 -
LLM Hallucinations in 2025: How to Understand and Tackle AI's Most Persistent Quirk Lakera Team Nov 16, 2025 2342 -
Inside Agent Breaker: Building a Real-World GenAI Security Playground Lakera Team Nov 16, 2025 2261 -
Advancing AI Security With Insights From The World's Largest AI Red Team David Haber Nov 28, 2025 441 -
The AI Risk Map: A Practical Guide to Frameworks, Threats, and GenAI Lifecycle Risks - Nov 16, 2025 407 -
How to select the best machine learning models for computer vision? Matthias Kraft Nov 16, 2025 1412 -
Lakera Guard Expands Content Moderation Capabilities to Protect Your AI Applications and Users Lakera Team Nov 16, 2025 534 -
Zero-Click Remote Code Execution: Exploiting MCP & Agentic IDEs Lakera Team Nov 16, 2025 2431 -
Yahoo Finance Highlights Lakera's AI Model Risk Index Launch Lakera Team Nov 16, 2025 278 -
Aligning with the OWASP Top 10 for LLMs (2025): How Lakera Secures GenAI Applications Lakera Team Nov 16, 2025 2050 -
Fuzz Testing for Machine Learning: How to Do It Right Lakera Team Nov 16, 2025 1856 -
Securing the Future: Lakera Raises $20M Series A to Deliver Real-Time GenAI Security David Haber Nov 16, 2025 909 -
Introducing Custom Detectors: Tailor Your AI Security with Precision Lakera Team Nov 16, 2025 789 -
The Expanding Use of AI Chatbots in Business: Opportunities and Risks Haziqa Sajid Nov 16, 2025 2261 -
Announcing Lakera's SOC 2 Compliance Lakera Team Nov 16, 2025 643 -
No-Code GenAI Security with Lakera Policy Control Center Lakera Team Nov 16, 2025 937 -
Lakera Co-publishes Article in a Nature Journal on Testing Medical Imaging Systems Lakera Team Nov 16, 2025 543 -
Exploring the World of Large Language Models: Overview and List Brain John Aboze Nov 16, 2025 4694 -
Introducing Lakera Guard – Bringing Enterprise-Grade Security to LLMs with One Line of Code David Haber Nov 16, 2025 1232 -
Continuous testing and model selection with Lakera and Voxel51 Santiago Arias Nov 16, 2025 652 -
The Security Company of the Future Will Look Like OpenAI Mateo Rojas-Carulla Nov 16, 2025 1076 -
Before scaling GenAI, map your LLM usage and risk zones Lakera Team Nov 16, 2025 356 -
The Beginner's Guide to Visual Prompt Injections: Invisibility Cloaks, Cannibalistic Adverts, and Robot Women Daniel Timbrell Nov 16, 2025 1470 -
Not All mAPs are Equal and How to Test Model Robustness Mateo Rojas-Carulla Nov 16, 2025 1906 -
How to Secure Your GenAI App When You Don't Know Where to Start Lakera Team Nov 16, 2025 1019 -
The Rise of the Internet of Agents: A New Era of Cybersecurity David Haber Nov 16, 2025 1553 -
A Comprehensive Guide to Data Exfiltration Brain John Aboze Nov 16, 2025 5643 -
Cursor Vulnerability (CVE-2025-59944): How a Case-Sensitivity Bug Exposed the Risks of Agentic Developer Tools Lakera Team Nov 16, 2025 1231 -
3 Strategies for Making Your ML Testing Mission-Critical Now Lakera Team Nov 16, 2025 715 -
Lakera CEO Joins Leaders from Meta, Cohere and MIT for AI Safety Session at AI House Davos Lakera Team Nov 16, 2025 602 -
What Is Personally Identifiable Information (PII)? And Why It's Getting Harder to Protect Lakera Team Nov 16, 2025 2215 -
2025 GenAI Security Readiness Report: A Clearer Picture of Where Enterprises Stand Lakera Team Nov 16, 2025 750 -
Lakera Raises $20M Series A to Secure Generative AI Applications Lakera Team Nov 16, 2025 1273 -
GamesBeat: Lakera launches hacking sim Gandalf: Agent Breaker Lakera Team Nov 16, 2025 281 -
Your validation set won't tell you if a model generalizes. Here's what will. Václav Volhejn Nov 16, 2025 1454 -
Lakera and Cohere Set the Bar for New Enterprise LLM Security Standards Lakera Team Nov 16, 2025 839 -
DEFCON Welcomes Mosscap: Lakera's AI Security Game to Tackle Top LLM Vulnerabilities Lakera Team Nov 16, 2025 562 -
The Ultimate Guide to LLM Fine Tuning: Best Practices & Tools Armin Norouzi Nov 16, 2025 4066 -
David Haber, Lakera's CEO, and Elias Groll from CyberScoop Discuss AI Security in a Safe Mode Podcast Episode Lakera Team Nov 16, 2025 297 -
OWASP Global AppSec DC 2025: Notes From the Breaker Track Steve Giguere Nov 28, 2025 1502 -
What the New MCP Specification Means to You, and Your Agents Steve Giguere Nov 28, 2025 1882 -
Indirect Prompt Injection: The Hidden Threat Breaking Modern AI Systems Lakera Team Nov 28, 2025 4189 -