Large Language Models (LLMs) have reshaped machine interaction, making conversations feel intuitive and powering everything from simple chat interfaces to complex AI agents. Their effectiveness lies not only in model scale but in how interactions are structured, particularly through role-based formatting. The basic roles (system, user, and assistant) guide the model's behavior and maintain context in everyday use. In more advanced agent-based systems, additional roles such as tool_use, tool_result, and planner organize reasoning and decision-making, letting the model perform tasks beyond plain text generation. Together, these roles preserve context, constrain behavior, and make task execution explicit, which is essential for handling complex workflows. Integrating roles with memory, tools, and planning mechanisms is central to building effective agents, and frameworks such as Google's Agent Development Kit (ADK) streamline the construction and management of such LLM applications.
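To make role-based formatting concrete, here is a minimal sketch of how a role-tagged conversation is typically represented. The system/user/assistant roles follow the widely used chat-message schema (as in the OpenAI Chat Completions API); the "tool" entry and the render helper are illustrative assumptions, since exact field names and tool-role conventions vary by provider.

```python
# Illustrative role-tagged conversation. The system/user/assistant roles are
# standard in chat-style APIs; the "tool" message is a simplified stand-in
# for provider-specific tool_use / tool_result formats.
messages = [
    {"role": "system", "content": "You are a concise weather assistant."},
    {"role": "user", "content": "What's the weather in Paris?"},
    # In an agent loop, the model may request a tool call; the tool's output
    # is fed back as its own role-tagged message before the final answer.
    {"role": "tool", "content": '{"city": "Paris", "temp_c": 18}'},
    {"role": "assistant", "content": "It's about 18 C in Paris right now."},
]

def render(msgs):
    """Flatten role-tagged messages into a single prompt string,
    preserving the order and role labels that give the model context."""
    return "\n".join(f"{m['role'].upper()}: {m['content']}" for m in msgs)

print(render(messages))
```

Because each turn carries an explicit role, the model can distinguish instructions (system) from queries (user), its own prior output (assistant), and external data (tool), which is what makes multi-step agent behavior tractable.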