    path: VKnowU.json
---
# VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs <a href="https://arxiv.org/abs/2511.20272">ArXiv</a>

While Multimodal Large Language Models (MLLMs) have become adept at recognizing objects, they often lack an intuitive, human-like understanding of the world's underlying physical and social principles. This high-level, vision-grounded semantics, which we term visual knowledge, forms a bridge between perception and reasoning, yet it remains underexplored in current MLLMs.
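
The dataset card's config points at a single `VKnowU.json` file, so the benchmark should be loadable with the Hugging Face `datasets` library. The sketch below is illustrative rather than an official loader: the repo id is a placeholder for this dataset's actual Hub id, and the split and field names are whatever `VKnowU.json` defines.

```python
from datasets import load_dataset

# Placeholder repo id -- replace "<org>/VKnowU" with this dataset's
# actual Hugging Face Hub id. The card's config resolves to VKnowU.json.
dataset = load_dataset("<org>/VKnowU")

# Inspect the available splits and one raw example; no field names
# are assumed here beyond what VKnowU.json actually contains.
for split_name, split in dataset.items():
    print(split_name, len(split))
first_split = next(iter(dataset.values()))
print(first_split[0])
```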

If you find this work useful for your research, please consider citing VKnowU. Your acknowledgement would greatly help us continue contributing resources to the research community.

```
@article{jiang2025vknowu,
  title={VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs},
  author={Jiang, Tianxiang and Xia, Sheng and Xu, Yicheng and Wu, Linquan and Zeng, Xiangyu and Wang, Limin and Qiao, Yu and Wang, Yi},
  journal={arXiv preprint arXiv:2511.20272},
  year={2025}
}
```