Towards AGI: [Part 2] Multiverse of Actions
Blog post from SuperAGI
The evolution of agent architectures in artificial intelligence can be traced through the growth of an Agentic Action Space: each phase adds a new type of action to agent design.

Early agents, such as those described in the SayCan paper, were limited to external actions ("Grounding"), which interact directly with the external world. The ReAct paper introduced "Reasoning" as an internal action, expanding the action space to include both external and internal actions. Subsequent work incorporated Long-Term Memory, giving agents additional internal actions such as "Retrieval" and "Learning," which operate on memory rather than on the external environment.

These fundamental actions can be combined into composite actions; "Planning," for example, uses both reasoning and retrieval. Introducing a new action also requires changes to the agent's execution flow, as seen in the transition from ReAct agents to Planner agents.

Finally, parallel actions, inspired by how humans act, let agents perform multiple actions simultaneously, enhancing their decision-making capabilities. MemGPT exemplifies this: it supports both parallel function calling and long-term memory, offering a framework for further exploration of memory-related actions.
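The taxonomy above — external "Grounding" actions versus internal actions like "Reasoning" and "Retrieval," plus composite actions built from them — can be sketched in code. This is a minimal illustration, not any framework's real API; all names and handlers here are hypothetical placeholders.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict, List

class ActionKind(Enum):
    EXTERNAL = auto()  # "Grounding": interacts with the outside world (SayCan-style)
    INTERNAL = auto()  # operates on the agent's own state/memory (ReAct's "Reasoning")

@dataclass
class Action:
    name: str
    kind: ActionKind
    run: Callable[[str], str]

# Hypothetical handlers standing in for real reasoning/retrieval components.
def reason(ctx: str) -> str:
    return f"thought about: {ctx}"

def retrieve(ctx: str) -> str:
    return f"memories matching: {ctx}"

ACTIONS: Dict[str, Action] = {
    "reasoning": Action("reasoning", ActionKind.INTERNAL, reason),
    "retrieval": Action("retrieval", ActionKind.INTERNAL, retrieve),
    "grounding": Action("grounding", ActionKind.EXTERNAL, lambda ctx: f"acted on: {ctx}"),
}

def plan(ctx: str) -> List[str]:
    # Composite action: "Planning" combines retrieval and reasoning, as in the post.
    return [ACTIONS["retrieval"].run(ctx), ACTIONS["reasoning"].run(ctx)]
```

The key design point is that internal and external actions share one interface, so extending the action space means registering a new `Action` rather than rewriting the agent loop.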
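The parallel actions described above can be sketched with Python's standard `concurrent.futures` module: independent actions are dispatched at once and their results gathered together, loosely mirroring parallel function calling. The two example actions (`search_memory`, `call_tool`) are hypothetical stand-ins, not MemGPT's actual functions.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def run_parallel(action_fns: List[Callable[[str], str]], ctx: str) -> List[str]:
    # Fan out independent actions and collect results in submission order.
    with ThreadPoolExecutor(max_workers=len(action_fns)) as pool:
        futures = [pool.submit(fn, ctx) for fn in action_fns]
        return [f.result() for f in futures]

# Hypothetical actions: a memory lookup and an external tool call.
def search_memory(ctx: str) -> str:
    return f"memory hits for {ctx}"

def call_tool(ctx: str) -> str:
    return f"tool output for {ctx}"
```

Because the actions are independent, an agent can overlap an internal retrieval with an external tool call instead of serializing them, which is the decision-making benefit the post attributes to parallel actions.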