Strange behaviour of Llama3.2-vision - it behaves like a text model
1 · #9 opened 12 months ago by jirkazcech
How to use it in ollama
1 · #8 opened 12 months ago by vejahetobeu
Exporting to GGUF
5 · #7 opened about 1 year ago by krasivayakoshka
Training with images
4 · #6 opened about 1 year ago by Khawn2u
AttributeError: Model MllamaForConditionalGeneration does not support BitsAndBytes quantization yet.
1 · #5 opened about 1 year ago by luizhsalazar
How much VRAM is needed?
3 · #4 opened about 1 year ago by Dizzl500
How to load this model?
3 · #3 opened about 1 year ago by benTow07
Can you post the script that was used to quantize this model, please?
10 · #2 opened about 1 year ago by ctranslate2-4you