Content moderation is evolving with the rise of generative AI (GenAI), which shifts the focus from post-publication policing of user-generated content to real-time moderation of AI-generated content. Traditional methods that rely on keyword lists and human review are inadequate for the nuanced, multilingual outputs of large language models (LLMs), which can produce unpredictable and potentially harmful content. This new paradigm requires proactively intercepting inappropriate or biased material at the point of generation, using tools like Lakera Guard, which provides real-time, policy-driven moderation designed specifically for LLMs. This approach mitigates risks such as prompt injection and evasive phrasing that slips past keyword filters, and it supports compliance with content standards, helping keep AI-powered applications safe and reliable. As companies like Dropbox integrate these solutions, embedding moderation within the generation layer itself is becoming essential for maintaining trust and accelerating innovation in AI products.
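To make the idea of interception at the generation layer concrete, the sketch below shows the basic pattern: screen the incoming prompt, generate a response, then screen the output before it ever reaches the user. The `generate` and `policy_check` callables and the `ModerationResult` type are hypothetical placeholders for illustration only, not the API of Lakera Guard or any other product.

```python
# Minimal sketch of generation-layer moderation: policy checks run both on the
# user's prompt and on the model's output, before anything is shown to the user.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ModerationResult:
    """Hypothetical verdict returned by a policy check."""
    allowed: bool
    categories: list[str] = field(default_factory=list)  # e.g. ["hate", "pii"] when flagged


def moderated_generate(
    prompt: str,
    generate: Callable[[str], str],
    policy_check: Callable[[str], ModerationResult],
    fallback: str = "Sorry, I can't help with that.",
) -> str:
    """Generate a response while intercepting content at the point of generation."""
    # 1. Screen the incoming prompt (catches prompt-injection attempts early).
    prompt_verdict = policy_check(prompt)
    if not prompt_verdict.allowed:
        return fallback

    # 2. Generate, then screen the draft output before returning it to the user.
    draft = generate(prompt)
    output_verdict = policy_check(draft)
    return draft if output_verdict.allowed else fallback
```

The key design point is that both checks sit inside the generation path rather than running after publication, so a flagged draft is replaced before the user ever sees it.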