JFrog's security research team actively monitors open-source software registries to identify and address potential vulnerabilities, with a particular focus on Machine Learning (ML) tooling, a category that tends to show higher vulnerability rates than more established software. The team's research has uncovered multiple security flaws in ML-related projects, including critical CVEs in tools such as mlflow, WANDB Weave, ZenML Cloud, Deep Lake, Vanna.AI, and Mage AI. These vulnerabilities range from server-side issues like directory traversal and command injection to improper access controls, and can allow attackers to escalate privileges, execute arbitrary code, or exfiltrate sensitive data.

Exploiting such flaws can lead to significant security breaches, including the hijacking of ML model registries and pipelines, which in turn could let attackers introduce backdoors into models or perform data poisoning. JFrog's findings emphasize the importance of addressing these vulnerabilities to maintain the integrity and security of ML systems; client-side vulnerabilities will be explored in an upcoming blog series.
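To make the directory-traversal class mentioned above concrete, here is a minimal, generic sketch in Python. It is not the code of any specific CVE listed here; the helper names (`resolve_download_path`, `safe_resolve`) and the base directory are hypothetical. The first function shows the classic bug, where a user-supplied path containing `../` escapes the intended directory; the second shows one common mitigation, resolving the path and verifying it stays inside the base directory.

```python
import os


def resolve_download_path(base_dir: str, user_path: str) -> str:
    """Vulnerable pattern: naively joining a user-controlled path.

    A user_path like '../../etc/passwd' escapes base_dir entirely,
    which is the essence of a directory-traversal flaw.
    """
    return os.path.normpath(os.path.join(base_dir, user_path))


def safe_resolve(base_dir: str, user_path: str) -> str:
    """Mitigated pattern: resolve, then confirm containment in base_dir."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, user_path))
    # commonpath equals base only when candidate is inside base_dir
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path traversal attempt: {user_path!r}")
    return candidate
```

For example, `resolve_download_path("/srv/artifacts", "../../etc/passwd")` silently yields `/etc/passwd`, while `safe_resolve` raises on the same input and only returns paths under the base directory.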