The paper by Jie Huang and Kevin Chen-Chuan Chang surveys the current understanding of reasoning in Large Language Models (LLMs), covering techniques for improving these models' reasoning capabilities and methods for evaluating them. While LLMs perform well across natural language processing tasks and exhibit reasoning-like behavior when scaled up, it remains unclear to what extent they genuinely reason. The paper reviews approaches such as fully supervised finetuning and prompting with in-context learning, including chain-of-thought prompting, and points to directions for future work such as hybrid methods and reasoning-enhanced training. It frames reasoning as a fundamental human capability central to problem-solving and decision-making, and argues that strengthening LLMs' reasoning abilities would significantly broaden their usefulness on complex tasks.
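
To make one of the named techniques concrete, below is a minimal sketch of chain-of-thought prompting, one of the in-context learning methods the survey covers: a worked example containing intermediate reasoning steps is prepended to the question, encouraging the model to reason step by step before answering. The `call_llm` helper and the exemplar text are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of chain-of-thought (CoT) prompting: a few-shot exemplar
# with intermediate reasoning steps is placed before the target question.
# `call_llm` is a hypothetical stand-in for any text-completion API.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked example so the model is nudged to show its reasoning."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM completion call (e.g., an HTTP API)."""
    raise NotImplementedError("Wire this to your model of choice.")

if __name__ == "__main__":
    question = (
        "The cafeteria had 23 apples. They used 20 to make lunch and bought "
        "6 more. How many apples do they have?"
    )
    prompt = build_cot_prompt(question)
    print(prompt)               # Inspect the constructed CoT prompt
    # answer = call_llm(prompt)  # Uncomment once connected to a real model
```

In contrast to standard few-shot prompting, where the exemplar would show only the final answer, the CoT exemplar demonstrates the intermediate arithmetic, which is what tends to elicit step-by-step reasoning in large models.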