OpenClaw Shows What Happens When AI Agents Act on Human Authority
Blog post from Lakera
OpenClaw, an autonomous AI agent tool, has gained attention not for any single enterprise feature but for what it illustrates: a shift in workplace dynamics in which AI tools move from assisting humans to acting autonomously on their behalf. That shift creates new security challenges. As AI agents are wired into real workflows, they touch systems like inboxes and internal dashboards and become part of an organization's attack surface.

The core concern is not just what the AI outputs, but the authority and permissions granted to these agents. They can execute tasks across many applications without the visibility and control measures that traditional software is normally subject to. Recent reports have flagged security vulnerabilities in OpenClaw's ecosystem, underscoring the need for AI security practices built around visibility, control, and protection against indirect manipulation, such as prompt injection delivered through the content an agent reads.

Lakera, now part of Check Point, aims to help organizations manage AI's role in their workflows by providing visibility into agent activity, constraining risky connections, and putting guardrails around sensitive operations.
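To make the guardrail idea concrete, here is a minimal sketch of one common pattern: wrapping an agent's tool calls in an allowlist plus an audit log, so sensitive actions are blocked pending human approval. All names here (`AgentGuard`, `perform`, the action strings) are illustrative assumptions, not APIs from OpenClaw or Lakera.

```python
from dataclasses import dataclass, field

@dataclass
class AgentGuard:
    """Mediates an agent's tool calls: allowlisted actions run,
    everything else is blocked and flagged for human review.
    Every attempt is recorded for later audit."""
    allowed: set
    audit_log: list = field(default_factory=list)

    def perform(self, action: str, detail: str) -> str:
        # Log every attempted action, permitted or not,
        # so the organization retains visibility.
        self.audit_log.append((action, detail))
        if action not in self.allowed:
            return f"BLOCKED: '{action}' requires human approval"
        return f"OK: executed '{action}'"

# Read-only actions are pre-approved; anything that sends data out is not.
guard = AgentGuard(allowed={"read_inbox", "summarize"})
print(guard.perform("read_inbox", "message #42"))      # permitted
print(guard.perform("send_email", "to: external@..."))  # blocked
```

The point of the sketch is that authority lives in the guard, not in the model: even if the agent is manipulated by content it reads, it cannot perform an action outside the allowlist without a human in the loop.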