Tags: Video-Text-to-Text · Transformers · Safetensors · English · llava · text-generation · multimodal · vision-language · video understanding · spatial reasoning · visuospatial cognition · qwen · llava-video
Instructions for using nkkbr/ViCA-thinking with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use nkkbr/ViCA-thinking with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("nkkbr/ViCA-thinking")
model = AutoModelForCausalLM.from_pretrained("nkkbr/ViCA-thinking")
```
- Notebooks
- Google Colab
- Kaggle
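Once the processor and model are loaded, inference follows the usual Transformers pattern: build a prompt, tokenize it with the processor, call `generate`, and decode. A minimal text-only sketch is below; the `USER: … ASSISTANT:` prompt format and the `build_prompt` helper are assumptions (LLaVA-family models often use such templates), so check the model card for the exact chat template and for how video frames are passed to the processor.

```python
def build_prompt(question: str) -> str:
    # Assumed LLaVA-style template; the model's actual chat template may differ.
    return f"USER: {question} ASSISTANT:"

def main() -> None:
    # Imports kept inside main so the helper above can be used without
    # downloading the model weights.
    from transformers import AutoProcessor, AutoModelForCausalLM

    processor = AutoProcessor.from_pretrained("nkkbr/ViCA-thinking")
    model = AutoModelForCausalLM.from_pretrained("nkkbr/ViCA-thinking")

    inputs = processor(
        text=build_prompt("How many chairs are in the scene?"),
        return_tensors="pt",
    )
    output_ids = model.generate(**inputs, max_new_tokens=128)
    print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])

if __name__ == "__main__":
    main()
```

For video inputs, frames would additionally be passed to the processor alongside the text; the exact argument name depends on the processor class this checkpoint ships with.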
- Xet hash: 41ae30463e43da8917fa52f3eef3656bb8aebb560a3417a81886cebf4dba06d8
- Size of remote file: 7.99 kB
- SHA256: d81c35c917e33c937f3f68b2a957923a9823f5f76811c9b2657cfe5a76b126ff
Xet efficiently stores large files inside Git by splitting them into unique chunks, accelerating uploads and downloads.