Home / Companies / StackHawk / Blog / Post Details
Content Deep Dive

Introducing StackHawk’s LLM Security Testing: Find LLM Risks Pre-Production

Blog post from StackHawk

Post Details
Company
Date Published
Author
Scott Gerlach
Word Count
747
Language
English
Hacker News Points
-
Summary

StackHawk has introduced new plugins within its runtime testing engine to detect five critical large language model (LLM) security risks from the OWASP LLM Top 10, including prompt injection, sensitive data disclosure, improper output handling, system prompt leakage, and unbound consumption. These risks have emerged as AI technology rapidly transforms application development, with LLM capabilities increasingly integrated into applications without traditional security reviews, creating new attack vectors. Unlike traditional application security (AppSec) tools, which are not equipped to address LLM-specific vulnerabilities, StackHawk's approach integrates into developer workflows to identify these risks early in the development process, thus teaching developers best practices for secure LLM integration. By focusing on runtime testing, StackHawk aims to address the unique security challenges posed by LLMs, ensuring that applications are protected in real-time and that security measures evolve alongside AI-driven development.