At MIT's EmTech Digital conference, a panel on AI safety strategies brought together leaders from ElevenLabs, the Alan Turing Institute, the Ada Lovelace Institute, and BT. ElevenLabs described a three-pronged approach to managing AI-generated content: provenance, traceability, and moderation. Provenance distinguishes AI-generated content from authentic content, using classifiers and industry standards such as C2PA. Traceability ensures accountability by linking AI-generated content back to the individual user who created it. Moderation combines automated systems with human oversight to enforce content policies. ElevenLabs said it collaborates with a range of stakeholders to address AI safety challenges and foster a secure digital future, advocating responsible AI development that balances safety with creative applications.
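To make the traceability idea concrete, here is a minimal sketch of how a platform could bind generated content to the requesting user so the link cannot be forged or silently altered. This is an illustrative assumption, not ElevenLabs' actual implementation: the record fields, the `tag_content`/`verify_tag` helpers, and the server-side key are all hypothetical, and the sketch uses only Python's standard library (HMAC over a content hash plus a user ID).

```python
import hashlib
import hmac
import json

# Hypothetical server-side secret held by the platform; in practice this
# would live in a key-management system, never in source code.
SERVICE_KEY = b"server-side-secret"

def tag_content(content: bytes, user_id: str) -> dict:
    """Attach a traceability record to generated content.

    The record links the content's hash to the user who requested it,
    and an HMAC signature prevents either field from being tampered with.
    """
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "user_id": user_id,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_tag(content: bytes, record: dict) -> bool:
    """Check that the record matches the content and was issued by the service."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if claimed.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was swapped or modified after tagging
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

audio = b"\x00\x01fake-audio-bytes"
record = tag_content(audio, user_id="user-42")
print(verify_tag(audio, record))           # True: content and user link intact
print(verify_tag(b"other bytes", record))  # False: content does not match record
```

A real deployment would pair a record like this with provenance metadata (for example, a C2PA manifest embedded in the media file) so that both "this is AI-generated" and "this account generated it" survive downstream distribution.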