Tags: Feature Extraction · Transformers · Safetensors · MLX · qwen3_vl · image-text-to-text · multimodal embedding · qwen · embedding
Instructions to use mlx-community/Qwen3-VL-Embedding-2B-4bit with libraries, notebooks, and local apps.
How to use mlx-community/Qwen3-VL-Embedding-2B-4bit with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="mlx-community/Qwen3-VL-Embedding-2B-4bit")

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("mlx-community/Qwen3-VL-Embedding-2B-4bit")
model = AutoModelForImageTextToText.from_pretrained("mlx-community/Qwen3-VL-Embedding-2B-4bit")
```
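The feature-extraction pipeline returns per-token feature vectors; to get a single embedding for a whole input, a common recipe is to mean-pool over the token axis. A minimal pure-Python sketch of that pooling step (shapes and values are illustrative, not actual pipeline output):

```python
# Hypothetical sketch: mean-pool per-token feature vectors into one embedding.
# Real pipeline output is a nested list shaped [batch][tokens][hidden_dim].
def mean_pool(token_features):
    """Average a list of equal-length token vectors into one embedding."""
    n = len(token_features)
    dim = len(token_features[0])
    return [sum(vec[d] for vec in token_features) / n for d in range(dim)]

# Example: three tokens, each with a 4-dim hidden state
tokens = [
    [1.0, 2.0, 3.0, 4.0],
    [3.0, 2.0, 1.0, 0.0],
    [2.0, 2.0, 2.0, 2.0],
]
embedding = mean_pool(tokens)
print(embedding)  # [2.0, 2.0, 2.0, 2.0]
```

Other pooling choices (last-token or CLS pooling) exist; check the upstream model card for the pooling the model was trained with.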
How to use mlx-community/Qwen3-VL-Embedding-2B-4bit with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir Qwen3-VL-Embedding-2B-4bit mlx-community/Qwen3-VL-Embedding-2B-4bit
```
mlx-community/Qwen3-VL-Embedding-2B-4bit
The model mlx-community/Qwen3-VL-Embedding-2B-4bit was converted to MLX format from Qwen/Qwen3-VL-Embedding-2B using mlx-lm version 0.1.0.
Use with mlx

```shell
pip install mlx-embeddings
```

```python
from mlx_embeddings import load, generate
import mlx.core as mx

# load returns the model and its processor
model, processor = load("mlx-community/Qwen3-VL-Embedding-2B-4bit")

# For image-text embeddings
images = [
    "./images/cats.jpg",  # cats
]
texts = ["a photo of cats", "a photo of a desktop setup", "a photo of a person"]

# Process all image-text pairs
outputs = generate(model, processor, texts, images=images)
logits_per_image = outputs.logits_per_image
probs = mx.sigmoid(logits_per_image)  # probabilities for this image

for i, image in enumerate(images):
    print(f"Image {i+1}:")
    for j, text in enumerate(texts):
        print(f"  {probs[i][j]:.1%} match with '{text}'")
    print()
```
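The sigmoid step above maps each raw image-text logit to an independent match probability in (0, 1). A minimal pure-Python sketch of that mapping, with made-up logits rather than real model outputs:

```python
import math

def sigmoid(x):
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative logits for one image scored against three texts
# (hypothetical numbers, not actual model outputs).
logits = [4.0, -2.0, -3.0]
probs = [sigmoid(x) for x in logits]
for text, p in zip(["cats", "desktop setup", "person"], probs):
    print(f"{p:.1%} match with 'a photo of {text}'")
```

Because each pair is scored independently, the probabilities for one image need not sum to 1, unlike a softmax over all texts.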
Downloads last month: 244
Model size: 0.7B params
Tensor types: F16 · U32
Quantized
Model tree for mlx-community/Qwen3-VL-Embedding-2B-4bit
Base model: Qwen/Qwen3-VL-2B-Instruct