Data breaches have significant repercussions for organizations, as demonstrated by high-profile incidents involving Equifax and Meta, where personally identifiable information (PII) was exposed, leading to financial losses and erosion of public trust. PII includes data such as names, addresses, email addresses, and biometric information, which is prevalent in enterprise datasets and critical for AI solutions. Securing PII is therefore vital for regulatory compliance and for maintaining customer trust.

Traditional methods such as regular expressions often fall short at detecting PII because they cannot understand context or language nuances, producing both false positives and false negatives. Large language models (LLMs), in contrast, can understand context, learn from diverse data, and adapt to new forms of PII, making detection more accurate and efficient. By leveraging LLMs, organizations can strengthen PII detection and management, ensuring privacy compliance while maximizing the utility of their data.
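To make the limitation concrete, here is a minimal sketch of a regex-based PII detector. The patterns and the `find_pii` helper are hypothetical examples, not a production ruleset; they illustrate how context-blind matching yields a false positive (a part number that merely looks like a US Social Security number) and a false negative (an obfuscated email address).

```python
import re

# Hypothetical, deliberately naive PII patterns (for illustration only).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Any ddd-dd-dddd sequence is flagged as an SSN, regardless of context.
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text: str) -> list[tuple[str, str]]:
    """Return (label, match) pairs for every pattern hit, context-blind."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        hits.extend((label, m) for m in pattern.findall(text))
    return hits

# False positive: a catalog part number matches the SSN pattern.
print(find_pii("Order part 123-45-6789 from the catalog."))
# False negative: an obfuscated email slips past the email pattern.
print(find_pii("Contact me at jane dot doe at example dot com."))
```

An LLM-based detector can use the surrounding words ("Order part", "Contact me at") to resolve both cases, which is precisely the contextual understanding that fixed patterns lack.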