Release: Editable LLM Prompts
Blog post from Stream
Stream has introduced AI Text Harm Detection, a feature that lets users define, edit, and manage moderation guidelines directly from the Stream dashboard. It offers full flexibility in specifying harms such as hate speech and self-harm, and improves detection accuracy through LLM-powered classification.

The feature is rolling out to existing customers in phases and is enabled automatically for new users, allowing faster responses to harmful content and reducing dependency on backend updates. By putting moderation control in users' hands, Stream aims to align moderation practices with each community's specific needs and enable quick adaptation to emerging risks. This release marks the beginning of Stream's journey in AI moderation, with continued expansion and performance refinement planned, guided by user feedback.
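To make the idea of editable, dashboard-managed guidelines concrete, here is a minimal conceptual sketch of how user-edited harm definitions could be composed into an LLM classification prompt. This is purely illustrative: the `HarmGuideline` type and `build_moderation_prompt` function are hypothetical and do not represent Stream's actual SDK or implementation.

```python
from dataclasses import dataclass

@dataclass
class HarmGuideline:
    label: str
    definition: str  # the part a moderator can edit from a dashboard

def build_moderation_prompt(guidelines: list[HarmGuideline], message: str) -> str:
    # Compose the classification prompt from the *current* guideline text,
    # so editing a definition changes detection behavior immediately,
    # with no backend code deployment required.
    rules = "\n".join(f"- {g.label}: {g.definition}" for g in guidelines)
    return (
        "Classify the message against these harm categories:\n"
        f"{rules}\n"
        f"Message: {message}\n"
        "Answer with the matching category labels, or NONE."
    )

guidelines = [
    HarmGuideline("hate_speech",
                  "Attacks a person or group based on protected attributes."),
    HarmGuideline("self_harm",
                  "Encourages or expresses intent to self-injure."),
]
prompt = build_moderation_prompt(guidelines, "example user message")
```

The resulting `prompt` string would then be sent to an LLM; the key point mirrored from the announcement is that the harm definitions live in editable configuration rather than in deployed code.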