Agentic AI systems, which possess both short and long-term memory capabilities, are increasingly susceptible to sophisticated threats like memory poisoning and long-horizon goal hijacks. Memory poisoning involves attackers embedding malicious content into an AI's memory, influencing future actions by recalling these poisoned entries, while long-horizon goal hijacks gradually alter an AI’s objectives to align with an attacker's goals. These threats are persistent and often unnoticed, requiring defenses that treat memory as untrusted input, monitor workflows over time, and employ layered guardrails to mitigate risks. Real-world analogues such as business logic exploits and the Volkswagen emissions scandal illustrate the potential impact of these attacks, highlighting the need for continuous validation of AI actions and objectives. Lakera's research and tools like the Agent Breaker challenge exemplify how simulating these attacks can help organizations build secure AI systems, emphasizing the importance of proactive security measures in protecting against these emerging vulnerabilities.