Instructions to use mlx-community/Qwen3-VL-Reranker-2B-4bit with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mlx-community/Qwen3-VL-Reranker-2B-4bit with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("mlx-community/Qwen3-VL-Reranker-2B-4bit")
model = AutoModelForImageTextToText.from_pretrained("mlx-community/Qwen3-VL-Reranker-2B-4bit")
```
- MLX
How to use mlx-community/Qwen3-VL-Reranker-2B-4bit with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir Qwen3-VL-Reranker-2B-4bit mlx-community/Qwen3-VL-Reranker-2B-4bit
```
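The snippets above load or download the checkpoint but stop short of inference. Since the Transformers snippet loads the model with the generic image-text-to-text classes, a minimal inference sketch following that standard chat-template flow is shown below. The image URL, prompt, and generation settings are illustrative assumptions, and a reranker checkpoint may expose its own scoring interface, so check the base model's documentation for the intended usage.

```python
# Hedged sketch of the standard Transformers image-text-to-text chat flow.
# Assumptions: placeholder image URL and prompt; max_new_tokens chosen
# arbitrarily; the reranker's real scoring API may differ.

def build_messages(image_url: str, text: str) -> list:
    """Build the chat-template message list: one user turn with image + text."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": text},
            ],
        }
    ]


def run_inference(image_url: str, text: str) -> str:
    # Imports kept local so this file stays importable without transformers.
    from transformers import AutoProcessor, AutoModelForImageTextToText

    model_id = "mlx-community/Qwen3-VL-Reranker-2B-4bit"
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(model_id)

    inputs = processor.apply_chat_template(
        build_messages(image_url, text),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    )
    output_ids = model.generate(**inputs, max_new_tokens=64)
    # Decode only the tokens generated after the prompt.
    generated = output_ids[:, inputs["input_ids"].shape[1]:]
    return processor.batch_decode(generated, skip_special_tokens=True)[0]


# Example call (downloads the model weights on first run):
# print(run_inference("https://example.com/image.png", "Describe this image."))
```

For the MLX path, after downloading with `huggingface-cli` as shown above, the community `mlx-vlm` package (if installed) provides an analogous generation entry point for MLX vision-language checkpoints.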
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio