AdaptVision: Efficient Vision-Language Models via Adaptive Visual Acquisition
Abstract
AdaptVision, a vision-language model, dynamically adjusts visual token usage through a reinforcement learning framework to balance accuracy and efficiency in visual question answering tasks.
Vision-Language Models (VLMs) have achieved remarkable success in visual question answering tasks, but their reliance on large numbers of visual tokens introduces significant computational overhead. While existing efficient VLM approaches reduce visual tokens through fixed-ratio compression, they operate passively and lack the ability to adapt to varying task requirements. This motivates a fundamental question: Can VLMs autonomously determine the minimum number of visual tokens required for each sample? Inspired by human active vision mechanisms, we introduce AdaptVision, an efficient VLM paradigm that enables adaptive visual token acquisition through a coarse-to-fine approach. Our model initially processes compressed visual tokens from low-resolution images and selectively acquires additional visual information by invoking a bounding box tool to crop key regions when necessary. We train AdaptVision using a reinforcement learning framework that carefully balances accuracy and efficiency. Central to our approach is Decoupled Turn Policy Optimization (DTPO), which decouples the learning objective into two components: (1) tool learning, which optimizes correct tool utilization, and (2) accuracy improvement, which refines the generated responses to improve answer correctness. Building on this decomposition, we further decouple advantage estimation by computing separate advantages for the tokens associated with each objective, which enables more effective optimization than vanilla GRPO. Comprehensive experiments across multiple VQA benchmarks demonstrate that AdaptVision achieves superior performance while consuming substantially fewer visual tokens than state-of-the-art efficient VLM methods.
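To make the DTPO idea concrete, below is a minimal Python sketch of how GRPO-style group-normalized advantages could be computed separately for tool-call tokens and answer tokens. This is not the authors' released implementation; the reward names (`tool_reward`, `answer_reward`), the token masks, and the rollout layout are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's official code): decoupled advantage
# assignment for tool-call tokens vs. answer tokens, assuming a GRPO-style
# group of rollouts sampled for the same question.
import numpy as np

def group_advantages(rewards):
    """GRPO-style advantage: reward normalized within the rollout group."""
    rewards = np.asarray(rewards, dtype=np.float32)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-6)

def decoupled_advantages(rollouts):
    """Assign separate advantages to tool-call tokens and answer tokens.

    Each rollout is assumed to be a dict with:
      - "tool_reward":   reward for invoking the crop (bounding-box) tool well
      - "answer_reward": reward for final answer correctness
      - "tool_token_mask" / "answer_token_mask": boolean arrays over the
        generated sequence marking which tokens belong to each part
    """
    adv_tool = group_advantages([r["tool_reward"] for r in rollouts])
    adv_ans = group_advantages([r["answer_reward"] for r in rollouts])

    per_token = []
    for r, a_t, a_a in zip(rollouts, adv_tool, adv_ans):
        # Tokens that emit the bounding-box tool call receive the tool-learning
        # advantage; tokens that form the answer receive the accuracy advantage.
        adv = a_t * r["tool_token_mask"] + a_a * r["answer_token_mask"]
        per_token.append(adv)
    return per_token
```

In a full training loop these per-token advantages would feed a clipped policy-gradient objective as in GRPO; the sketch only illustrates the decoupled advantage assignment described in the abstract.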
Community
AdaptVision is an open-source model that leverages agentic visual tool use for dynamic visual token reduction, achieving a state-of-the-art accuracy-efficiency trade-off across multiple VQA benchmarks.
Code: https://github.com/AdaptVision/AdaptVision
Model: https://huggingface.co/AdaptVision/AdaptVision-7B
Librarian Bot (automated): the following similar papers were recommended by the Semantic Scholar API.
- CropVLM: Learning to Zoom for Fine-Grained Vision-Language Perception (2025)
- Parallel Vision Token Scheduling for Fast and Accurate Multimodal LMMs Inference (2025)
- ALDEN: Reinforcement Learning for Active Navigation and Evidence Gathering in Long Documents (2025)
- VisionSelector: End-to-End Learnable Visual Token Compression for Efficient Multimodal LLMs (2025)
- Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens (2025)
- VisRAG 2.0: Evidence-Guided Multi-Image Reasoning in Visual Retrieval-Augmented Generation (2025)
- Think Twice to See More: Iterative Visual Reasoning in Medical VLMs (2025)
