Instructions for using mlx-community/Qwen3-VL-Reranker-2B-4bit with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use mlx-community/Qwen3-VL-Reranker-2B-4bit with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("mlx-community/Qwen3-VL-Reranker-2B-4bit")
model = AutoModelForImageTextToText.from_pretrained("mlx-community/Qwen3-VL-Reranker-2B-4bit")
```

- MLX
How to use mlx-community/Qwen3-VL-Reranker-2B-4bit with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download mlx-community/Qwen3-VL-Reranker-2B-4bit --local-dir Qwen3-VL-Reranker-2B-4bit
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
Preprocessor configuration for the model (`preprocessor_config.json`):

```json
{
  "size": {
    "longest_edge": 16777216,
    "shortest_edge": 65536
  },
  "patch_size": 16,
  "temporal_patch_size": 2,
  "image_mean": [0.5, 0.5, 0.5],
  "image_std": [0.5, 0.5, 0.5],
  "processor_class": "Qwen3VLProcessor",
  "image_processor_type": "Qwen2VLImageProcessorFast",
  "input_data_format": null,
  "max_pixels": 1310720,
  "merge_size": 2,
  "min_pixels": 4095,
  "pad_size": null,
  "resample": 3,
  "rescale_factor": 0.00392156862745098,
  "return_tensors": null
}
```
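The configuration above determines how input images are prepared before they reach the vision encoder: each channel value is rescaled by `rescale_factor` (1/255) and normalized with mean/std 0.5, and image dimensions are snapped to multiples of `patch_size * merge_size` while keeping the total pixel count between `min_pixels` and `max_pixels`. The sketch below illustrates that arithmetic using the values from this config; the `smart_resize` function follows the Qwen2-VL-style rounding scheme and is an assumption for illustration, not the library's exact implementation.

```python
import math

# Values taken from the preprocessor config above.
PATCH_SIZE = 16
MERGE_SIZE = 2
MIN_PIXELS = 4095
MAX_PIXELS = 1310720
IMAGE_MEAN = IMAGE_STD = 0.5
RESCALE_FACTOR = 0.00392156862745098  # 1 / 255

def normalize(pixel: int) -> float:
    """Rescale a 0-255 channel value, then normalize with mean/std 0.5,
    mapping the range [0, 255] onto [-1.0, 1.0]."""
    return (pixel * RESCALE_FACTOR - IMAGE_MEAN) / IMAGE_STD

def smart_resize(height: int, width: int) -> tuple[int, int]:
    """Round dimensions to multiples of patch_size * merge_size and clamp
    the pixel count into [MIN_PIXELS, MAX_PIXELS].

    Hypothetical helper sketching the Qwen2-VL-style resize logic implied
    by the config; the shipped image processor may differ in details.
    """
    factor = PATCH_SIZE * MERGE_SIZE  # 32
    h = max(factor, round(height / factor) * factor)
    w = max(factor, round(width / factor) * factor)
    if h * w > MAX_PIXELS:
        # Shrink proportionally, then round down to the patch grid.
        beta = math.sqrt(height * width / MAX_PIXELS)
        h = math.floor(height / beta / factor) * factor
        w = math.floor(width / beta / factor) * factor
    elif h * w < MIN_PIXELS:
        # Grow proportionally, then round up to the patch grid.
        beta = math.sqrt(MIN_PIXELS / (height * width))
        h = math.ceil(height * beta / factor) * factor
        w = math.ceil(width * beta / factor) * factor
    return h, w

h, w = smart_resize(1080, 1920)
print(h, w)                                   # both multiples of 32
print((h // PATCH_SIZE) * (w // PATCH_SIZE))  # number of vision patches
```

A 1080x1920 frame exceeds `max_pixels` (1,310,720), so it is scaled down before being cut into 16x16 patches; `merge_size: 2` means each 2x2 group of patches is later merged into one visual token.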