As enterprise AI teams increasingly rely on customer data, biometrics, and user-generated content to enhance business processes, effectively identifying and removing personally identifiable information (PII) from datasets becomes crucial to preventing data leaks and protecting privacy. Traditional methods such as regular expressions can be unreliable, particularly on unstructured text; large language models (LLMs) offer a more accurate and efficient alternative for PII detection and extraction.

The approach is straightforward: craft prompts that instruct the LLM to identify specific types of PII, such as names, email addresses, and social security numbers. Compared with maintaining a library of complex regex patterns, this greatly simplifies the task. Models like GPT-4 show promise for advancing data privacy without compromising the value of the data, although integrating these models into existing AI infrastructure remains a challenge.

Labelbox offers tools that streamline this process, letting AI teams explore, compare, and fine-tune foundation models for efficient PII management. This approach marks a significant step forward at the intersection of AI and data governance, fostering innovation while ensuring ethical and legal compliance in data management.
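The prompt-driven workflow described above can be sketched as follows. This is a minimal illustration, not a Labelbox or OpenAI API: the prompt template, the JSON schema, and the canned model response are all assumptions standing in for a real LLM call.

```python
import json

# Hypothetical prompt template asking the model to return PII as structured
# JSON; the field names ("names", "emails", "ssns") are an assumption.
PII_PROMPT = (
    "Identify all personally identifiable information (PII) in the text "
    'below. Return a JSON object with the keys "names", "emails", and '
    '"ssns", each mapping to a list of the exact strings found.\n\n'
    "Text:\n{text}"
)

def build_prompt(text: str) -> str:
    """Fill the template with the document to be scanned."""
    return PII_PROMPT.format(text=text)

def redact(text: str, llm_response: str) -> str:
    """Replace every PII string the model reported with a [REDACTED] marker."""
    findings = json.loads(llm_response)
    for values in findings.values():
        for value in values:
            text = text.replace(value, "[REDACTED]")
    return text

# A canned response stands in for an actual model call here:
doc = "Contact Jane Doe at jane.doe@example.com regarding SSN 123-45-6789."
response = (
    '{"names": ["Jane Doe"], "emails": ["jane.doe@example.com"], '
    '"ssns": ["123-45-6789"]}'
)
print(redact(doc, response))
# → Contact [REDACTED] at [REDACTED] regarding SSN [REDACTED].
```

In a production pipeline, `build_prompt(doc)` would be sent to the chosen model and the JSON reply fed to `redact`; the point is that adding a new PII category means editing one sentence of the prompt rather than writing and testing a new regex.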