Security teams are rapidly adapting to secure AI-powered applications, such as chatbots and search assistants, which, while enhancing customer experiences, also introduce new risks like data exfiltration and model poisoning. Cloudflare has responded to these challenges by enhancing its AI security offerings with a new unsafe content moderation feature integrated directly into its Firewall for AI, utilizing Llama Guard for real-time protection of Large Language Models (LLMs) at the network level. This feature enables security teams to block harmful prompts or topics without altering application code or infrastructure, addressing top risks like prompt injection and PII disclosure. The Firewall for AI is model-agnostic, providing consistent protection across any model, whether third-party, in-house, or custom-built, by applying unified security policies. The system uses advanced topic detection and classification techniques to moderate AI interactions, preventing issues such as misinformation and offensive content, while maintaining the model's utility and performance. Cloudflare's scalable architecture ensures minimal latency and reliable performance, even as new detection models are added. The company plans to expand its offerings with additional controls and capabilities in the future, and the Firewall for AI is currently available in beta for interested users.