We're excited to announce our partnership with Meta to make Llama Guard available through the Together Platform, allowing users to apply an LLM-based input-output safeguard model to moderate any open model hosted on the platform. Llama Guard performs competitively with existing moderation tools and gives developers a pretrained defense against generating potentially risky outputs. It can run as a standalone classifier or as a filter safeguarding responses from the 100+ models on the platform, and it is also available in our playground for testing and experimentation.
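As a rough illustration of the standalone-classifier pattern, the sketch below shows how an application might interpret Llama Guard's verdict before passing a response to the user. Llama Guard emits "safe", or "unsafe" followed by the violated category codes (e.g. "O3"); the endpoint URL, model identifier, and helper names here are assumptions for illustration, not a definitive integration.

```python
import os
import json
import urllib.request


def parse_guard_verdict(text: str) -> dict:
    """Parse Llama Guard's raw output into a structured verdict.

    Llama Guard replies with "safe", or with "unsafe" followed on the
    next line by a comma-separated list of violated category codes.
    """
    lines = text.strip().splitlines()
    verdict = lines[0].strip().lower() if lines else ""
    if verdict == "safe":
        return {"safe": True, "categories": []}
    categories = []
    if len(lines) > 1:
        categories = [c.strip() for c in lines[1].split(",") if c.strip()]
    return {"safe": False, "categories": categories}


def moderate(prompt: str) -> dict:
    """Hypothetical call to a hosted Llama Guard model.

    The URL and model name below are illustrative assumptions; consult
    the Together API docs for the actual endpoint and model identifier.
    """
    req = urllib.request.Request(
        "https://api.together.xyz/v1/completions",  # assumed endpoint
        data=json.dumps({
            "model": "Meta-Llama/Llama-Guard-7b",  # assumed model id
            "prompt": prompt,
        }).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return parse_guard_verdict(body["choices"][0]["text"])
```

In the filter pattern, the same parsing step sits between the generating model and the user: if `parse_guard_verdict` reports unsafe, the application withholds or rewrites the response instead of returning it.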