The Tree-of-Thoughts (ToT) prompting framework is an approach designed to enable large language models (LLMs) to make decisions through a tree-structured search process, allowing them to explore multiple reasoning paths, self-evaluate intermediate steps, and backtrack when a path proves unpromising. Whereas Chain-of-Thought prompting commits the model to a single linear sequence of reasoning steps, ToT treats each coherent token sequence (a "thought") as a node in a tree and uses search algorithms to generate, evaluate, and prune candidate thoughts. This approach has demonstrated significant improvements in LLMs' ability to solve complex tasks such as mathematical puzzles, creative writing, and mini crosswords, with success rates of up to 74% on the Game of 24 compared to just 4% with Chain-of-Thought prompting. ToT shows promise for a wide range of tasks requiring mathematical, symbolic, commonsense, and knowledge reasoning, including supply chain optimization and similar processes, where deliberate search can help reduce costs, identify bottlenecks, and evaluate efficient routes. Research on ToT is ongoing, with studies exploring different implementations of the approach, including reinforcement learning and custom search code.
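To make the search mechanics concrete, here is a minimal Python sketch of the breadth-first variant of ToT. The function names (`generate_thoughts`, `score_thought`, `tree_of_thought_bfs`) and the `depth`/`branch`/`beam` parameters are illustrative assumptions rather than any published API, and the two LLM calls are stubbed with placeholders so the example runs on its own.

```python
# Minimal sketch of breadth-first Tree-of-Thought search.
# The two LLM calls below are placeholders; a real implementation would
# replace them with actual model requests (roughly the "propose" and
# "value" prompts in the original paper's terminology).

def generate_thoughts(state: str, k: int) -> list[str]:
    """Propose k candidate next thoughts extending the partial solution.
    Placeholder: a real version would query an LLM with a propose prompt."""
    return [f"{state} -> candidate step {i}" for i in range(k)]

def score_thought(state: str) -> float:
    """Rate how promising a partial solution is, on a 0-1 scale.
    Placeholder: a real version would query an LLM with a value prompt."""
    return 1.0 / (1 + len(state))  # dummy heuristic for illustration only

def tree_of_thought_bfs(problem: str, depth: int = 3,
                        branch: int = 5, beam: int = 3) -> str:
    """Breadth-first ToT: expand each kept state into `branch` candidate
    thoughts, score every candidate, and keep only the `beam` best."""
    frontier = [problem]
    for _ in range(depth):
        candidates = []
        for state in frontier:
            for thought in generate_thoughts(state, branch):
                candidates.append((score_thought(thought), thought))
        # Prune the tree: keep only the highest-scoring partial solutions.
        candidates.sort(key=lambda c: c[0], reverse=True)
        frontier = [s for _, s in candidates[:beam]]
    return frontier[0]  # the most promising reasoning path found
```

The beam width is what keeps the search tractable: instead of expanding every branch exhaustively, only the few most promising partial solutions survive each level, so the number of model calls grows linearly with depth rather than exponentially.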