Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency
Paper: arXiv 2504.18589
Dataset preview (omitted): each row pairs an image (width 20–5.51k px) with a class label (2 classes).
VCBench provides a standardized framework for evaluating vision-language models. This document outlines the procedures for both standard evaluation and GPT-assisted evaluation of your model's outputs.
Models must produce outputs in JSONL format with the following structure:
{"id": <int>, "pred_answer": "<answer_letter>"}
{"id": <int>, "pred_answer": "<answer_letter>"}
...
Example File (submit.jsonl):
{"id": 1, "pred_answer": "A"}
{"id": 2, "pred_answer": "B"}
{"id": 3, "pred_answer": "C"}
Run the standard evaluation script:
python evaluate_vcbench.py -p ./path/to/predictions.jsonl -g ./path/to/VCBench_with_answer.json
VCBench_with_answer.json is the ground-truth file, which can be downloaded here.
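For intuition, a simplified version of the letter-matching check might look like the sketch below. This is not the actual evaluate_vcbench.py; it assumes the ground-truth JSON is a list of records with "id" and "answer" keys, which may differ from the real field names.

import json

# Load predictions from the JSONL submission file.
preds = {}
with open("submit.jsonl", encoding="utf-8") as f:
    for line in f:
        item = json.loads(line)
        preds[item["id"]] = item["pred_answer"].strip().upper()

# Load ground truth (assumed structure: [{"id": ..., "answer": ...}, ...]).
with open("VCBench_with_answer.json", encoding="utf-8") as f:
    ground_truth = json.load(f)

# Count exact matches between predicted and reference option letters.
correct = sum(
    1 for gt in ground_truth
    if preds.get(gt["id"], "") == gt["answer"].strip().upper()
)
print(f"Accuracy: {correct / len(ground_truth):.4f}")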
For GPT-assisted evaluation of natural-language responses, use this JSONL format:
{"id": <int>, "pred_answer": "<natural_language_response>"}
{"id": <int>, "pred_answer": "<natural_language_response>"}
...
Example File (nl_predictions.jsonl):
{"id": 1, "pred_answer": "The correct answer is A"}
{"id": 2, "pred_answer": "After careful analysis, option B appears correct"}
{"id": 3, "pred_answer": "C is the right choice"}
Set your Dashscope API key:
export DASHSCOPE_KEY="your_api_key_here"
Then run the GPT-assisted evaluation script:
python evaluate_vcbench_by_gpt.py -p ./path/to/nl_predictions.jsonl -g ./path/to/VCBench_with_answer.json
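GPT-assisted evaluation asks an LLM judge to map each free-form response to an option letter before scoring. The sketch below illustrates the idea using the Dashscope Python SDK; the prompt wording, the qwen-max model choice, and the single-question helper are assumptions for illustration, not the exact logic of evaluate_vcbench_by_gpt.py.

import os
import dashscope

# Read the key exported as DASHSCOPE_KEY above.
dashscope.api_key = os.environ["DASHSCOPE_KEY"]

def extract_choice(response_text: str) -> str:
    # Ask the judge model to reduce a free-form answer to a single option letter.
    prompt = (
        "Given this answer to a multiple-choice question, reply with only the "
        f"chosen option letter (A, B, C, or D):\n{response_text}"
    )
    result = dashscope.Generation.call(model="qwen-max", prompt=prompt)
    return result.output.text.strip().upper()[:1]

print(extract_choice("After careful analysis, option B appears correct"))  # expected: B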
Both evaluation scripts report accuracy statistics for the submitted predictions.
BibTeX:
@misc{wang2025vcbench,
  author = {Zhikai Wang and Jiashuo Sun and Wenqi Zhang and Zhiqiang Hu and Xin Li and Fan Wang and Deli Zhao},
  title = {Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency},
  year = {2025},
  eprint = {2504.18589},
  archivePrefix = {arXiv},
  primaryClass = {cs.CV},
  url = {https://arxiv.org/abs/2504.18589}
}