Protect agentic AI applications with Datadog AI Guard
Blog post from Datadog
Datadog AI Guard is a new security feature that protects agentic AI applications built on large language models (LLMs) by evaluating prompts, responses, and tool calls in real time. Its context-aware guardrails and detection mechanisms mitigate risks such as misuse, data exposure, and manipulation.

AI Guard offers two primary protection capabilities. Prompt Protection assesses prompts and responses against context and policy, while Tool Protection scrutinizes tool calls to verify they align with the agent's intended goal. Together, these checks help prevent multistep attacks and data exfiltration attempts, allowing benign actions to proceed while blocking harmful ones.

The system integrates with Datadog's existing infrastructure, requires no additional components, and provides a comprehensive interface for monitoring AI activity and understanding security posture. Flexible enforcement modes let organizations start by observing, tune their policies, and then transition to blocking unsafe actions, supporting a secure yet innovative environment for AI development.
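To make the guardrail pattern concrete, here is a minimal sketch of how a guard layer like this can be structured: prompts and tool calls are checked against a policy, every verdict is logged, and an enforcement mode decides whether an unsafe action is merely observed or actually blocked. All names here (`Guard`, `Mode`, `Verdict`, the policy fields) are hypothetical illustrations of the concept, not the Datadog AI Guard API, and the string-matching policy is deliberately simplistic.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative sketch only -- these names are NOT the Datadog AI Guard API.

class Mode(Enum):
    OBSERVE = "observe"  # record verdicts, never block (policy-tuning phase)
    BLOCK = "block"      # deny actions flagged as unsafe

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"

@dataclass
class Guard:
    mode: Mode = Mode.OBSERVE
    # Toy policy: substrings that flag a prompt as suspicious, and tool
    # names the agent is never allowed to invoke.
    denied_prompt_markers: tuple = ("ignore previous instructions",)
    denied_tools: tuple = ("shell.exec",)
    audit_log: list = field(default_factory=list)

    def check_prompt(self, prompt: str) -> Verdict:
        unsafe = any(m in prompt.lower() for m in self.denied_prompt_markers)
        return self._enforce("prompt", prompt, unsafe)

    def check_tool_call(self, tool: str, args: dict) -> Verdict:
        unsafe = tool in self.denied_tools
        return self._enforce("tool", tool, unsafe)

    def _enforce(self, kind: str, target: str, unsafe: bool) -> Verdict:
        # In OBSERVE mode unsafe actions are logged but still allowed;
        # in BLOCK mode they are denied.
        verdict = Verdict.DENY if (unsafe and self.mode is Mode.BLOCK) else Verdict.ALLOW
        self.audit_log.append((kind, target, unsafe, verdict))
        return verdict

guard = Guard(mode=Mode.BLOCK)
guard.check_prompt("Summarize this document")                 # allowed
guard.check_tool_call("shell.exec", {"cmd": "rm -rf /"})      # denied
```

Starting in an observe-style mode, as above, mirrors the rollout path the post describes: the audit log shows what would have been blocked, letting teams tune policy before switching enforcement on.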