In a recent exploration of vulnerabilities in machine learning (ML) frameworks, significant security flaws were identified in ML clients and libraries that handle ostensibly safe model formats. These vulnerabilities allow attackers to execute arbitrary code on ML platforms by exploiting weaknesses in the model loading process. For instance, a crafted recipe.yaml file can abuse MLflow's recipe handling to execute code inside JupyterLab, and H2O's model deserialization can be exploited for code execution when an untrusted model is imported. Flaws were also uncovered in loading mechanisms perceived as safe, such as PyTorch's weights_only loading mode and MLeap's handling of zipped TensorFlow models, allowing arbitrary file overwrites and, from there, potential code execution.

These flaws could facilitate extensive lateral movement within an organization: ML clients typically hold credentials for model registries, datastores, and other internal services, so a single compromised client or service can cascade into a broader breach. The findings underscore the importance of treating every model as untrusted input, even when it arrives in a format perceived as safe, and of enforcing robust security practices across ML environments.
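To make the file-overwrite class concrete: the MLeap and PyTorch issues above belong to the path-traversal-during-extraction family often called "ZipSlip". The Python sketch below is our own illustration of the defensive check a model loader should perform, not code from any of the affected libraries; `safe_extract` is a hypothetical helper name.

```python
import os
import zipfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a zipped model archive, rejecting path-traversal ("ZipSlip")
    entries that would let the archive write files outside dest_dir."""
    dest_root = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for member in zf.infolist():
            # Resolve where this entry would actually land on disk.
            target = os.path.realpath(os.path.join(dest_root, member.filename))
            # Refuse any entry that escapes the destination directory,
            # e.g. "../../home/user/.bashrc" or an absolute path.
            if os.path.commonpath([dest_root, target]) != dest_root:
                raise ValueError(f"blocked traversal entry: {member.filename!r}")
        zf.extractall(dest_root)
```

The essential design choice is resolving every archive entry to a real on-disk path and verifying it stays inside the destination directory before anything is written.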
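On the consumer side, a minimal sketch of hardened PyTorch loading, assuming a recent torch version where torch.load accepts weights_only; `load_untrusted_checkpoint` is a hypothetical helper, and per the findings above this flag should be treated as defense in depth, not a substitute for verifying where a model came from.

```python
import torch

def load_untrusted_checkpoint(path: str):
    # weights_only=True restricts unpickling to tensors and simple
    # containers, blocking the classic pickle code-execution vector.
    # Per the research summarized above, this narrows the attack surface
    # but is not an absolute guarantee, so provenance checks (signatures,
    # checksums, trusted registries) should still gate what reaches here.
    return torch.load(path, map_location="cpu", weights_only=True)
```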