Graph-Based Chain-of-Thought Pruning for Reducing Redundant Reflections in Reasoning LLMs
Abstract
A graph-based framework optimizes chain-of-thought reasoning in large language models by identifying and eliminating redundant thinking patterns through structured pruning and reinforcement learning techniques.
Extending chain-of-thought (CoT) reasoning through reinforcement learning (RL) has been widely used to enhance the reasoning capabilities of LLMs. However, because reward signals are sparse, it can also induce undesirable thinking patterns such as overthinking, i.e., generating redundant intermediate reasoning content. In this work, we argue that a major source of such redundancy is inefficient reflection, which manifests in two problematic patterns: Indiscriminate Reflection, where the model performs broad, low-impact checks throughout reasoning, and Repetitive Reflection, where it repeatedly re-verifies an already established conclusion. To address this, we introduce a graph-based CoT optimization framework. Specifically, we convert each linear CoT into a directed acyclic graph (DAG) with explicit dependency edges and design a dual pruning strategy: branch-level pruning removes weakly contributing reflection branches, while depth-level pruning eliminates late-stage re-verification. We distill this behavior via a three-stage pipeline: (1) SFT to initialize the policy on pruned, concise traces; (2) DPO to prefer correct but less redundant trajectories; and (3) GRPO with a length penalty to jointly optimize answer correctness and efficiency. Experiments show that our approach reduces average reasoning tokens by 42% while maintaining or improving accuracy.
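The dual pruning strategy described above can be sketched over a toy DAG representation. This is a minimal illustration, not the paper's implementation: the `Node` structure, the `reflect`/`target` fields, and the use of backward reachability from the answer as a proxy for "weakly contributing" are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    nid: int
    kind: str                        # "step", "reflect", or "answer"
    target: Optional[int] = None     # for "reflect": id of the step it re-checks
    deps: List[int] = field(default_factory=list)

def prune(nodes):
    """Apply branch-level and depth-level pruning to a CoT DAG (sketch)."""
    by_id = {n.nid: n for n in nodes}
    answer = next(n for n in nodes if n.kind == "answer")

    # Backward reachability from the answer along dependency edges.
    needed, stack = set(), [answer.nid]
    while stack:
        nid = stack.pop()
        if nid not in needed:
            needed.add(nid)
            stack.extend(by_id[nid].deps)

    kept, verified = [], set()
    for n in nodes:
        if n.kind == "reflect":
            if n.nid not in needed:   # branch-level: no path into the answer
                continue
            if n.target in verified:  # depth-level: conclusion already re-verified
                continue
            verified.add(n.target)
        kept.append(n)

    # Drop dangling edges to pruned nodes.
    kept_ids = {n.nid for n in kept}
    for n in kept:
        n.deps = [d for d in n.deps if d in kept_ids]
    return kept

nodes = [
    Node(0, "step"),
    Node(1, "step", deps=[0]),
    Node(2, "reflect", target=0, deps=[0]),   # low-impact check, unused by the answer
    Node(3, "reflect", target=1, deps=[1]),   # useful verification of step 1
    Node(4, "reflect", target=1, deps=[1]),   # repeats the verification of step 1
    Node(5, "answer", deps=[1, 3, 4]),
]
pruned = prune(nodes)
print([n.nid for n in pruned])   # → [0, 1, 3, 5]
```

Node 2 is removed by branch-level pruning (no dependency path into the answer), and node 4 by depth-level pruning (step 1 was already re-verified by node 3).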
Community
RL-enhanced Chain-of-Thought (CoT) reasoning often suffers from redundancy and "overthinking" due to sparse rewards. We identify two main culprits: Indiscriminate Reflection and Repetitive Reflection. To address them, we introduce a graph-based optimization framework that converts the linear CoT into a directed acyclic graph (DAG).
Using a dual pruning strategy (branch-level and depth-level), we eliminate low-impact and late-stage redundant reasoning. This behavior is distilled through a three-stage pipeline: SFT for initialization, DPO for conciseness preference, and GRPO with length penalties. Results show a 42% reduction in reasoning tokens without sacrificing accuracy.
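The paper's exact reward shaping for the GRPO stage is not reproduced on this page; as a hedged sketch, one common way to fold a length penalty into a GRPO-style group-relative advantage is shown below. The penalty weight `alpha` and the 0/1 correctness reward are illustrative assumptions, not values from the paper.

```python
def group_advantages(rewards, lengths, alpha=0.001):
    """GRPO-style group-relative advantages with a length penalty (sketch).

    rewards: 1.0 for a correct rollout, 0.0 otherwise.
    lengths: token counts of the rollouts in the same group.
    alpha:   hypothetical penalty weight, not taken from the paper.
    """
    # Shape each reward by subtracting a per-token penalty.
    shaped = [r - alpha * l for r, l in zip(rewards, lengths)]
    # Normalize within the group (mean-zero, unit variance).
    mean = sum(shaped) / len(shaped)
    var = sum((s - mean) ** 2 for s in shaped) / len(shaped)
    std = max(var ** 0.5, 1e-8)
    return [(s - mean) / std for s in shaped]

# Two correct rollouts (100 vs. 300 tokens) and two incorrect ones:
advs = group_advantages([1.0, 1.0, 0.0, 0.0], [100, 300, 200, 200])
```

With this shaping, the shorter correct rollout receives the largest advantage, so the policy is pushed toward answers that are both correct and concise.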
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- ETR: Entropy Trend Reward for Efficient Chain-of-Thought Reasoning (2026)
- Bridging Efficiency and Transparency: Explainable CoT Compression in Multimodal Large Reasoning Models (2026)
- Draft-Thinking: Learning Efficient Reasoning in Long Chain-of-Thought LLMs (2026)
- WebClipper: Efficient Evolution of Web Agents with Graph-based Trajectory Pruning (2026)
- TRiMS: Real-Time Tracking of Minimal Sufficient Length for Efficient Reasoning via RL (2026)
- PACE: Prefix-Protected and Difficulty-Aware Compression for Efficient Reasoning (2026)
- Long Chain-of-Thought Compression via Fine-Grained Group Policy Optimization (2026)