What Is Personally Identifiable Information (PII)? And Why It's Getting Harder to Protect
Blog post from Lakera
Personally Identifiable Information (PII) is becoming increasingly difficult to protect in the era of Generative AI (GenAI), as traditional data protection methods prove insufficient. Whereas past approaches focused on encrypting and securing stored data, GenAI systems can inadvertently generate, infer, or expose PII in their outputs, creating new vulnerabilities. This shift challenges legacy Data Loss Prevention (DLP) tools, which rely on pattern matching and therefore miss nuanced, multilingual, or contextual disclosures.

Lakera Guard addresses this gap in real time by analyzing model interactions to identify both direct and indirect PII across languages, adapting to the complexities of natural language. Incidents like the Samsung source code leak, and findings that a significant share of AI prompts contain sensitive information, underline the need for real-time, context-aware defenses. As PII increasingly lives in language rather than in fixed formats, security approaches must evolve to monitor and mitigate these new risks.
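To make the DLP limitation concrete, here is a minimal, hypothetical sketch of pattern-based PII scanning in Python. The regular expressions, the example texts, and the `scan` helper are illustrative assumptions, not Lakera Guard's actual detection logic; they simply show how fixed-format matching catches a well-formed SSN or email address but misses the same information when it is spelled out or implied by context.

```python
import re

# Toy pattern-based detector, loosely modeled on how legacy DLP tools flag PII:
# fixed-format identifiers are matched with regular expressions.
# These patterns and examples are illustrative only.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of any fixed-format PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# A direct, well-formatted disclosure is caught...
print(scan("My SSN is 123-45-6789 and my email is jane.doe@example.com"))
# -> ['ssn', 'email']

# ...but spelled-out or contextual disclosures slip straight through,
# even though a human (or a language model) would recognize them as PII.
print(scan("My social is one two three, forty-five, sixty-seven eighty-nine."))
# -> []
print(scan("I'm the only cardiologist at St. Mary's in Dover, born March 3rd, 1989."))
# -> []
```

The gap is that the second and third prompts identify a person just as surely as the first, only in natural language rather than in a recognizable format, which is why context-aware analysis of model inputs and outputs matters.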