
Snyk Code now secures AI builds with support for LLM sources

Blog post from Snyk

Post Details
Company: Snyk
Date Published:
Author: Liqian Lim (林利蒨), Ranko Cupovic
Word Count: 571
Language: English
Hacker News Points: -
Summary

Snyk has updated Snyk Code to address the security risks that arise when Large Language Models (LLMs) are used in software development. The update extends vulnerability scanning to LLM libraries from providers including OpenAI, Hugging Face, Anthropic, and Google. Snyk Code now performs taint analysis on code that calls these libraries, tracing untrusted data through the program and raising alerts when it reaches a sensitive sink. By covering AI-generated code, human-written code, and third-party LLM integrations at the source-code level, the update reflects Snyk's commitment to making AI safe and trustworthy, and aims to let developers build AI capabilities into their applications without compromising security.
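To make the idea concrete, here is a minimal sketch of source-to-sink taint tracking, the kind of analysis the summary describes. All names here (the source/sink lists, the trace format) are illustrative assumptions, not Snyk's actual engine or rule set.

```python
# Hypothetical untrusted sources and LLM-related sinks for illustration only.
UNTRUSTED_SOURCES = {"input", "request.args.get"}
SENSITIVE_SINKS = {"llm.complete", "openai.chat"}


def analyze(trace):
    """Flag sink calls reached by untrusted data.

    `trace` is a simplified program trace of (op, target, source) tuples:
    an "assign" copies `source` into variable `target`; a "call" passes
    variable `source` into function `target`.
    """
    tainted = set()  # variables known to carry untrusted data
    findings = []
    for op, target, source in trace:
        if op == "assign":
            # Taint propagates from untrusted sources and tainted variables.
            if source in UNTRUSTED_SOURCES or source in tainted:
                tainted.add(target)
        elif op == "call" and target in SENSITIVE_SINKS:
            if source in tainted or source in UNTRUSTED_SOURCES:
                findings.append(
                    f"tainted value '{source}' flows into sink '{target}'"
                )
    return findings


trace = [
    ("assign", "user_msg", "input"),     # raw user input: untrusted
    ("assign", "prompt", "user_msg"),    # taint propagates via assignment
    ("call", "llm.complete", "prompt"),  # tainted data reaches an LLM sink
]
```

Running `analyze(trace)` on this trace yields a single finding for the `llm.complete` call; a real scanner works on actual data-flow graphs rather than a linear trace, but the source-propagation-sink shape is the same.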