Ads Dawson, a member of the OWASP team, is working on updating the OWASP list to address vulnerabilities associated with generative AI, emphasizing the significance of red teaming and cross-functional collaboration in enhancing AI security. His career journey from network apprentice to application security expert highlights his dedication to open-source contributions and continuous learning. Red teaming, a practice derived from military strategies, involves attacking one's own systems to identify weaknesses, which is crucial for defending applications, particularly when integrating generative AI. Dawson advocates for diverse red teams, comprising experts from various fields, to address the unique attack vectors in machine learning and generative AI. He suggests that companies should build internal frameworks for large language model (LLM) security, drawing from established models like threat modeling and OWASP, to identify and mitigate risks. At Cohere, deploying specific security controls based on application types, such as SaaS or on-premises, is essential, and Dawson stresses the importance of involving model developers and security engineers early in the product development process to ensure robust security measures.