Advancing Large Language Model Reasoning Techniques: Methods Enabling LLMs to 'Think' Beyond Text Generation for Reliable and Explainable AI
Abstract
Large Language Models (LLMs) have revolutionized artificial intelligence applications, ranging from writing assistants to Retrieval-Augmented Generation (RAG) systems. However, understanding how LLMs "reason", that is, how they process complex queries and generate reliable results beyond mere text generation, has become a pivotal research focus. This paper surveys the core reasoning techniques that empower LLMs to simulate logical thinking: Chain-of-Thought (CoT), Self-Consistency, ReAct (Reason + Act), and Plan-and-Solve Reasoning. We discuss the architectural innovations, learning paradigms, and evaluation benchmarks that support these techniques, highlighting their significance in advancing trustworthy AI. Challenges such as hallucination, robustness, and interpretability are examined, providing directions for future research to enhance LLM reasoning capabilities.