MultiVerse: A Multi-Turn Conversation Benchmark for Evaluating Large Vision and Language Models Paper • 2510.16641 • Published Oct 18 • 4
RefineBench: Evaluating Refinement Capability of Language Models via Checklists Paper • 2511.22173 • Published 19 days ago • 12
Unified Reinforcement and Imitation Learning for Vision-Language Models Paper • 2510.19307 • Published Oct 22 • 29
GenRecal: Generation after Recalibration from Large to Small Vision-Language Models Paper • 2506.15681 • Published Jun 18 • 39
Are Vision-Language Models Truly Understanding Multi-vision Sensor? Paper • 2412.20750 • Published Dec 30, 2024 • 20
Datasheets Aren't Enough: DataRubrics for Automated Quality Metrics and Accountability Paper • 2506.01789 • Published Jun 2 • 14
FREESON: Retriever-Free Retrieval-Augmented Reasoning via Corpus-Traversing MCTS Paper • 2505.16409 • Published May 22 • 2
The CoT Encyclopedia: Analyzing, Predicting, and Controlling how a Reasoning Model will Think Paper • 2505.10185 • Published May 15 • 26
Scaling Evaluation-time Compute with Reasoning Models as Process Evaluators Paper • 2503.19877 • Published Mar 25 • 1
LLM-as-an-Interviewer: Beyond Static Testing Through Dynamic LLM Evaluation Paper • 2412.10424 • Published Dec 10, 2024 • 2