Modalities: Text, Video · Formats: JSON · Languages: English · Libraries: Datasets, pandas
root committed · Commit 83b1e26 · 1 parent: df9c699
Files changed (1): README.md (+8 −3)
README.md CHANGED
@@ -10,8 +10,8 @@ configs:
     path: VKnowU.json
 ---
 
-# VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs <a href="https://arxiv.org/abs/2511.20272">ArXiv</a>
-![Overview of ExpVid](figs/VKnowU.png)
+# VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs <a href="https://arxiv.org/abs/2511.20272">📖ArXiv</a>
+![📖](figs/VKnowU.png)
 
 While Multimodal Large Language Models (MLLMs) have become adept at recognizing objects, they often lack the intuitive, human-like understanding of the world's underlying physical and social principles. This high-level vision-grounded semantics, which we term visual knowledge, forms a bridge between perception and reasoning, yet remains an underexplored area in current MLLMs.
 
@@ -38,5 +38,10 @@ To systematically evaluate this capability, we present [📊VKnowU](https://hugg
 If you find this work useful for your research, please consider citing VKnowU. Your acknowledgement would greatly help us in continuing to contribute resources to the research community.
 
 ```
-coming soon
+@article{jiang2025vknowu,
+  title={VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs},
+  author={Jiang, Tianxiang and Xia, Sheng and Xu, Yicheng and Wu, Linquan and Zeng, Xiangyu and Wang, Limin and Qiao, Yu and Wang, Yi},
+  journal={arXiv preprint arXiv:2511.20272},
+  year={2025}
+}
 ```
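For quick inspection, the JSON file named in the `configs` block above can be loaded with the 🤗 Datasets library. This is a minimal sketch, not taken from the dataset card: it assumes `VKnowU.json` has been downloaded locally (the dataset's full Hub repo ID is truncated above), and the printed fields depend on the file's actual schema.

```python
# Minimal sketch: load the VKnowU benchmark file with 🤗 Datasets.
# Assumes VKnowU.json is in the working directory; the dataset's Hub
# repo ID is truncated above, so we load the raw JSON file directly.
from datasets import load_dataset

ds = load_dataset("json", data_files="VKnowU.json", split="train")

print(ds)     # schema and number of examples
print(ds[0])  # first record; field names depend on the actual file
```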