Concrete Jungle: Towards Concreteness Paved Contrastive Negative Mining for Compositional Understanding
Abstract
Vision-language models struggle with compositional reasoning because contrastive pretraining provides too few samples for distinguishing subtle semantics; this work addresses the gap with lexical concreteness-based negative sample selection and a novel margin-based loss function.
Vision-Language Models demonstrate remarkable capabilities but often struggle with compositional reasoning, exhibiting vulnerabilities regarding word order and attribute binding. This limitation arises from a scarcity of informative samples needed to differentiate subtle semantic variations during contrastive pretraining. Although hard negative mining offers a promising remedy, existing methods lack explicit mechanisms to dictate which linguistic elements undergo modification. Instead of engineering generative architectures, this study establishes lexical concreteness as a fundamental determinant of negative sample efficacy. Modifying highly concrete terms generates more pronounced structural and visual discrepancies, providing a substantially stronger learning signal. Leveraging this principle, ConcretePlant is proposed to systematically isolate and manipulate perceptually grounded concepts. Analysis of the InfoNCE objective further reveals a severe gradient imbalance, where easily distinguishable pairs disproportionately dominate the optimization process and restrict the bandwidth available for nuanced learning. To resolve this degradation, the Cement loss is formulated using a margin-based approach. By correlating psycholinguistic concreteness scores with sample difficulty, this objective dynamically calibrates the penalization applied to individual training pairs. Comprehensive evaluations substantiate these theoretical claims. The integrated framework, designated Slipform, achieves state-of-the-art accuracy across diverse compositional evaluation benchmarks, general cross-modal retrieval, and single- and multi-label linear probing.
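The abstract does not spell out the Cement loss, but a margin-based modification of InfoNCE typically subtracts a per-pair margin from the positive logit, forcing harder pairs to be separated by a wider gap. The sketch below illustrates that idea; the function name `margin_infonce` and the specific margin scheme (e.g., margins scaled from concreteness scores) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def margin_infonce(sim, margins, temperature=0.07):
    """Margin-augmented InfoNCE over a similarity matrix.

    sim:     (N, N) cosine similarities; the diagonal holds positive pairs.
    margins: (N,) per-pair margins -- in a concreteness-aware setup these
             could be derived from psycholinguistic concreteness scores
             (an assumption; the paper's Cement loss may differ).
    """
    idx = np.arange(len(sim))
    logits = sim / temperature
    # Subtract the margin from each positive logit, so a pair only scores
    # well if its positive similarity exceeds the negatives by the margin.
    logits = logits.copy()
    logits[idx, idx] -= margins / temperature
    # Standard cross-entropy with the diagonal as the target class.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[idx, idx].mean()
```

Because the margin lowers only the positive logit, pairs assigned larger margins contribute a larger loss (and gradient), which is one simple way to re-balance optimization toward harder, more informative pairs.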
Community
This paper studies data quality factors in compositional understanding and identifies an optimization issue in contrastive learning. We propose a simple margin-based improvement to InfoNCE, and open-source both a concreteness-aware training dataset and a model fine-tuned with the new loss. It may be of interest to the community, especially those connecting data quality, objective design, and open resources for better compositional understanding research.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- No Hard Negatives Required: Concept Centric Learning Leads to Compositionality without Degrading Zero-shot Capabilities of Contrastive Models (2026)
- SLQ: Bridging Modalities via Shared Latent Queries for Retrieval with Frozen MLLMs (2026)
- Inference-Time Structural Reasoning for Compositional Vision-Language Understanding (2026)
- Visual Enhanced Depth Scaling for Multimodal Latent Reasoning (2026)
- Caption First, VQA Second: Knowledge Density, Not Task Format, Drives Multimodal Scaling (2026)
- Retrieving Counterfactuals Improves Visual In-Context Learning (2026)
- RubiCap: Rubric-Guided Reinforcement Learning for Dense Image Captioning (2026)