The article develops a mental model for AI agents: programs that use large language models (LLMs) to decide what an application does next. It describes the iterative learning and experimentation behind that model and argues that agents are best structured around three elements: goals, tools, and an event loop. Building on this, it walks through the components needed to assemble an agent: bootstrapping the LLM with a goal, preparing data for tool invocation, and feeding the latest results back into the LLM's context on each iteration of the loop (see the sketch below). The author stresses the importance of prompt engineering and of flexibility in the choice of programming language, and emphasizes that agentic systems must be resilient and durable. The piece closes with the challenges of operating agents reliably in distributed, cloud-native environments, suggesting that this mental model can help developers design robust AI agents.
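The article itself does not include code, but the goal/tools/event-loop structure it describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the names (`run_agent`, `TOOLS`, `call_llm`) are hypothetical, and `call_llm` is a stub standing in for a real model endpoint.

```python
import json
from typing import Callable, Dict, List

# Hypothetical tool registry: tool names mapped to plain Python callables.
TOOLS: Dict[str, Callable[..., str]] = {
    "get_weather": lambda city: f"72F and sunny in {city}",  # stub tool
}

def call_llm(messages: List[dict]) -> dict:
    """Stand-in for a real LLM API call.

    A real implementation would send `messages` to a model endpoint and
    parse its reply; this stub returns one tool call, then a final answer.
    """
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "get_weather",
                "arguments": {"city": "Lisbon"}}
    return {"type": "final", "content": "It's 72F and sunny in Lisbon."}

def run_agent(goal: str, max_turns: int = 5) -> str:
    # 1. Bootstrap the LLM with the goal and the tools it may invoke.
    messages: List[dict] = [
        {"role": "system",
         "content": f"You are an agent. Goal: {goal}. "
                    f"Available tools: {list(TOOLS)}"},
    ]
    # 2. Event loop: ask the model for the next step, act, and repeat.
    for _ in range(max_turns):
        reply = call_llm(messages)
        if reply["type"] == "final":
            return reply["content"]
        # 3. Prepare data for tool invocation and execute the requested tool.
        result = TOOLS[reply["name"]](**reply["arguments"])
        # 4. Update the LLM's input with the latest context before looping.
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "tool", "content": result})
    return "Stopped: max_turns reached without a final answer."

if __name__ == "__main__":
    print(run_agent("Report the weather in Lisbon"))
```

The bounded loop with an explicit stop condition reflects the resilience concern the article raises: in a distributed setting, each turn of this loop is a point where state could be persisted so the agent can survive restarts.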