The key point of this text is that while giving AI tools fun, anthropomorphic personalities may seem appealing, doing so can mislead users and cause real harm. The author argues that customers rely on AI assistants for support, not entertainment, and that a manufactured personality opens the door to manipulation. Chatbot humor can also fall flat and rarely translates well across languages or cultures. Rather than anthropomorphizing AI tools, conversation designers should provide clear cues reminding users that the assistant is just a tool, helping them keep a healthy emotional distance when interacting with it. The author suggests drawing clear boundaries around the use of AI assistants so that we avoid giving them too much power over us.