Ken Huang, Chief AI Officer at DistributedApps and a leading figure in tech security, discusses the challenges and strategies associated with securing large language models (LLMs) and generative AI technologies. With over two decades of experience in AI and security, Huang emphasizes the importance of adopting a systematic, principle-based approach to AI security, given the rapid pace of technological advancements and the increased attack surfaces posed by innovations like retrieval-augmented generation (RAG). He highlights the critical need for securing data, models, and applications, and points out the specific security challenges related to vector databases and API calls in RAG systems. Huang also stresses the necessity of executive leadership, particularly from CEOs, in prioritizing security to ensure the successful integration of AI in enterprises, while cautioning against sensationalist fears about AI's potential threats to humanity. His extensive involvement in industry working groups and his contributions to publications on AI and security underline his authority in the field.