RDP LoRA: Geometry-Driven Identification for Parameter-Efficient Adaptation in Large Language Models
Abstract
Geometric trajectory analysis with the Ramer-Douglas-Peucker algorithm selects which layers to adapt during parameter-efficient fine-tuning of large language models, outperforming both full and random layer selection.
Fine-tuning Large Language Models (LLMs) remains structurally uncertain despite parameter-efficient methods such as Low-Rank Adaptation (LoRA), as the layer-specific roles of internal representations are poorly understood, leading to heuristic decisions about where adaptation should be applied. We model the evolution of hidden states as a high-dimensional geometric trajectory and propose using the Ramer-Douglas-Peucker (RDP) algorithm, a parameter-free and training-free polygon simplification method that preserves global structural transitions while eliminating locally redundant changes, to identify critical breakpoints along the representation path. Crucially, we use these geometric pivots not merely for analysis, but as a direct decision signal for determining which layers should be adapted during parameter-efficient fine-tuning. By integrating this geometry-aware layer selection strategy into LoRA fine-tuning of Qwen3-8B-Base, we achieve superior performance on MMLU-Math using only 13 RDP-selected layers (81.67%), significantly outperforming both full 36-layer adaptation (79.32%) and random 13-layer selection (75.56%), as well as the baseline Qwen3-8B-Base model (74.25%). These results demonstrate that leveraging the intrinsic geometry of representation trajectories provides a robust, interpretable, and training-free signal for optimizing layer selection during model adaptation.
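The RDP procedure described above can be sketched in a few lines: treat the per-layer hidden states as an ordered polyline in high-dimensional space, then recursively keep the point farthest from the chord between the endpoints whenever that distance exceeds a tolerance. This is a minimal, self-contained illustration of the standard RDP algorithm applied to a toy trajectory, not the authors' implementation; the tolerance `epsilon` and the toy data are assumptions for demonstration.

```python
import numpy as np

def rdp_indices(points, epsilon):
    """Indices kept by Ramer-Douglas-Peucker simplification.

    points: (n, d) array, e.g. one pooled hidden state per layer.
    epsilon: distance tolerance; smaller values keep more breakpoints.
    """
    points = np.asarray(points, dtype=float)

    def _rdp(lo, hi):
        if hi - lo < 2:
            return [lo, hi]
        # Perpendicular distance of interior points to chord lo -> hi.
        chord = points[hi] - points[lo]
        rel = points[lo + 1:hi] - points[lo]
        norm = np.linalg.norm(chord)
        if norm == 0.0:
            dists = np.linalg.norm(rel, axis=1)
        else:
            u = chord / norm
            proj = rel @ u
            dists = np.linalg.norm(rel - np.outer(proj, u), axis=1)
        k = int(np.argmax(dists))
        if dists[k] > epsilon:
            split = lo + 1 + k          # farthest point becomes a breakpoint
            return _rdp(lo, split)[:-1] + _rdp(split, hi)
        return [lo, hi]                 # segment is locally straight: drop interior

    return _rdp(0, len(points) - 1)

# Toy "layer trajectory": three straight segments in 4-D with sharp turns.
traj = np.zeros((10, 4))
traj[:4, 0] = np.arange(4)        # move along axis 0
traj[4:7, 0] = 3
traj[4:7, 1] = np.arange(1, 4)    # turn onto axis 1
traj[7:, :2] = traj[6, :2]
traj[7:, 2] = np.arange(1, 4)     # turn onto axis 2
kept = rdp_indices(traj, epsilon=0.5)
print(kept)  # → [0, 3, 6, 9]: the endpoints plus the two turning points
```

In the paper's setting, the surviving indices mark layers where the representation trajectory bends sharply, and those layers are the ones selected for adaptation.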
Community
This work presents a compelling and principled approach to layer selection in parameter-efficient fine-tuning. By modeling hidden state evolution as a geometric trajectory and leveraging the Ramer-Douglas-Peucker algorithm, the authors introduce a novel, training-free mechanism for identifying structurally significant transition points across layers.
The integration of this geometry-aware signal into Low-Rank Adaptation is particularly noteworthy, as it addresses a well-known limitation of LoRA—namely, the reliance on heuristic or uniform layer selection. The reported results, where a subset of RDP-selected layers outperforms both full-layer adaptation and random selection, provide strong empirical support for the hypothesis that layer-wise contributions to adaptation are highly non-uniform and can be systematically characterized.
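The integration step amounts to restricting the adapter to the selected layer indices. A minimal configuration sketch, assuming the Hugging Face `peft` library and using hypothetical layer indices (the paper's 13 selected layers are not listed here):

```python
# Sketch only: wiring RDP-selected layers into LoRA via peft's LoraConfig.
from peft import LoraConfig

rdp_layers = [0, 3, 7, 11, 14, 17, 20, 23, 26, 29, 31, 33, 35]  # hypothetical

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Restrict adaptation to the RDP-selected decoder layers only.
    layers_to_transform=rdp_layers,
    task_type="CAUSAL_LM",
)
```

The `layers_to_transform` argument confines LoRA modules to the chosen layers; all other layers remain frozen with no adapter parameters at all.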
From a research perspective, this work contributes to a growing body of literature seeking to improve the interpretability and efficiency of fine-tuning strategies by grounding them in the intrinsic structure of learned representations.
A natural direction for future investigation would be to assess the robustness and transferability of the selected layers across tasks, domains, and model scales, as well as to better understand the theoretical properties linking trajectory geometry to functional adaptation capacity.
Overall, this is a well-motivated and methodologically elegant contribution with meaningful implications for scalable and interpretable LLM adaptation.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning (2026)
- A Layer-wise Analysis of Supervised Fine-Tuning (2026)
- Not All Directions Matter: Toward Structured and Task-Aware Low-Rank Adaptation (2026)
- Expert Pyramid Tuning: Efficient Parameter Fine-Tuning for Expertise-Driven Task Allocation (2026)
- CoMoL: Efficient Mixture of LoRA Experts via Dynamic Core Space Merging (2026)
- ID-LoRA: Efficient Low-Rank Adaptation Inspired by Matrix Interpolative Decomposition (2026)
- Parameter Importance is Not Static: Evolving Parameter Isolation for Supervised Fine-Tuning (2026)