Update README.md
#2
by bfshi - opened
README.md CHANGED

```diff
@@ -5,9 +5,9 @@ license_link: LICENSE
 ---
 
 
-#
+# AutoGaze
 
-
+[Project Page](https://autogaze.github.io/) | [Paper](https://huggingface.co/papers/2603.12254) | [GitHub](https://github.com/NVlabs/AutoGaze) | [Models & Data & Benchmark](https://huggingface.co/collections/bfshi/autogaze) | [Demo](https://huggingface.co/spaces/bfshi/AutoGaze)
 
 AutoGaze is an ultra-lightweight model that automatically removes redundant patches in a video before passing it to any Vision Transformer (ViT) or Multi-modal Large Language Model (MLLM).
 
@@ -19,7 +19,7 @@ This model is for research and development only. <br>
 
 ### Quick Start:
 
-See [our GitHub repo](https://github.com/NVlabs/AutoGaze/QUICK_START.md) for instructions on how to use AutoGaze.
+See [our GitHub repo](https://github.com/NVlabs/AutoGaze/blob/main/QUICK_START.md) for instructions on how to use AutoGaze.
 
 ### License/Terms of Use:
 
```
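For readers skimming this card, here is a minimal sketch of the usage pattern the description above implies: score each video patch token with a lightweight model, keep only the most informative fraction, and pass the survivors to a ViT or MLLM. This is not the AutoGaze API; the function name `prune_redundant_patches`, the `keep_ratio` parameter, and the random stand-in scores are all illustrative assumptions, and the linked Quick Start guide is the authoritative reference.

```python
# Hypothetical sketch of patch pruning before a ViT/MLLM, NOT the real
# AutoGaze API. All names here are assumptions for illustration only.
import torch

def prune_redundant_patches(patches: torch.Tensor,
                            scores: torch.Tensor,
                            keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the top-scoring fraction of patch tokens.

    patches: (batch, num_patches, dim) patch embeddings for a video clip
    scores:  (batch, num_patches) per-patch importance scores
    """
    num_keep = max(1, int(patches.shape[1] * keep_ratio))
    # Indices of the highest-scoring patches in each sample
    top_idx = scores.topk(num_keep, dim=1).indices            # (batch, num_keep)
    # Gather the surviving patch embeddings along the token dimension
    idx = top_idx.unsqueeze(-1).expand(-1, -1, patches.shape[-1])
    return patches.gather(1, idx)                             # (batch, num_keep, dim)

# Example: 8 frames x 196 patches = 1568 tokens; keep half before the ViT.
patches = torch.randn(2, 1568, 768)
scores = torch.rand(2, 1568)   # stand-in for a lightweight scorer's output
kept = prune_redundant_patches(patches, scores, keep_ratio=0.5)
print(kept.shape)              # torch.Size([2, 784, 768])
```

The kept tokens can then be fed to any patch-token model unchanged, which is what makes a drop-in pruning front-end attractive: the downstream ViT or MLLM simply sees a shorter token sequence.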