Generative AI chatbots like ChatGPT pose significant trust and safety challenges: they produce content that reads as authoritative and reliable even when it is inaccurate or unsafe. These systems, sometimes likened to a "monkey's paw" form of AGI, require careful moderation to prevent the spread of harmful or misleading information. The article examines the complexities of moderating them, arguing for robust guardrails and disclaimers, especially where the output resembles authoritative advice. It also considers the environmental impact of training AI models and suggests carbon taxing as a potential regulatory measure. AI "hallucinations," in which the system generates false or misleading information, carry reputational and financial risks, making external moderation tools an important mitigation. Ultimately, while generative AI presents new challenges, it also offers an opportunity to develop moderation strategies as a precursor to managing more advanced AI technologies in the future.
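
Since the article points to external moderation tools and disclaimers as the practical mitigations, here is a minimal sketch of that pattern, assuming the OpenAI Python SDK (v1.x) and its moderation endpoint; the `moderate_reply` wrapper and the `DISCLAIMER` text are illustrative assumptions, not details from the article.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK, v1.x

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical disclaimer text; wording is not from the article.
DISCLAIMER = (
    "This response was generated by an AI system and may be "
    "inaccurate. It is not professional advice."
)

def moderate_reply(candidate_reply: str) -> str:
    """Screen a chatbot reply with an external moderation endpoint
    before showing it to the user. `moderate_reply` is a hypothetical
    wrapper, not part of any SDK."""
    result = client.moderations.create(input=candidate_reply).results[0]
    if result.flagged:
        # Block content the external classifier flags as harmful.
        return "Sorry, I can't share that response."
    # Otherwise attach a disclaimer, so output that might read as
    # authoritative advice is clearly labeled as machine-generated.
    return f"{candidate_reply}\n\n{DISCLAIMER}"
```

In practice such a filter would sit between the model and the user, one of several guardrails rather than a complete solution, since moderation classifiers catch harmful content but not plausible-sounding factual errors.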