Online communication platforms such as Telegram, Reddit, and Discord have made global conversation easy, but they also contend with toxic language that can drive users away. To address this, Jigsaw and Google's Counter Abuse Technology team developed the Perspective API, a machine learning service that scores comments in multiple languages for attributes such as profanity, threats, and insults. The API supports moderation rather than replacing it: human oversight is still needed because the model can misclassify comments. Its adoption by major platforms and publishers, including Reddit and The New York Times, shows its practical value in large-scale moderation.

The n8n community applies the same API to help keep communication respectful by building it into workflows, for example a Telegram bot that detects toxic messages and responds to them. Using the Telegram and Google Perspective nodes, a workflow can score incoming messages and take actions such as sending a warning or escalating the issue. This tutorial walks through building that bot and shows how the pattern can be adapted to other platforms, illustrating the broader role of automated moderation tools in keeping online communities healthy.
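To make the moderation logic concrete, here is a minimal TypeScript sketch of what the workflow automates: score a message with the Perspective API's `comments:analyze` endpoint and, if the TOXICITY score crosses a threshold, reply with a warning via the Telegram Bot API. The function names (`scoreToxicity`, `warnUser`, `moderateMessage`), the environment variable names, the 0.7 threshold, and the warning text are illustrative assumptions; in the actual n8n workflow the same steps are handled by the Telegram Trigger, Google Perspective, and conditional nodes rather than hand-written code.

```typescript
// Sketch of the moderation logic under the assumptions above (Node 18+ for global fetch).

const PERSPECTIVE_URL =
  "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze";
const TOXICITY_THRESHOLD = 0.7; // assumed cut-off; tune for your community

interface PerspectiveResponse {
  attributeScores: {
    TOXICITY: { summaryScore: { value: number } };
  };
}

// Ask the Perspective API for a TOXICITY score between 0 and 1.
async function scoreToxicity(text: string): Promise<number> {
  const res = await fetch(
    `${PERSPECTIVE_URL}?key=${process.env.PERSPECTIVE_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        comment: { text },
        requestedAttributes: { TOXICITY: {} },
      }),
    },
  );
  if (!res.ok) throw new Error(`Perspective API error: ${res.status}`);
  const data = (await res.json()) as PerspectiveResponse;
  return data.attributeScores.TOXICITY.summaryScore.value;
}

// Send a warning back to the chat via the Telegram Bot API; the wording is illustrative.
async function warnUser(chatId: number, firstName: string): Promise<void> {
  await fetch(
    `https://api.telegram.org/bot${process.env.TELEGRAM_BOT_TOKEN}/sendMessage`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        chat_id: chatId,
        text: `${firstName}, please keep the conversation respectful.`,
      }),
    },
  );
}

// Moderation step for one incoming message: score it, warn if it crosses the threshold.
async function moderateMessage(chatId: number, firstName: string, text: string) {
  const score = await scoreToxicity(text);
  if (score >= TOXICITY_THRESHOLD) {
    await warnUser(chatId, firstName);
  }
}
```

The threshold is the main design choice: a lower value catches more borderline messages at the cost of more false positives, which is why the tutorial pairs automated warnings with escalation to a human rather than automatic removal.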