Tags: Text Ranking · Transformers · Safetensors · sentence-transformers · qwen3_vl · image-text-to-text · multimodal rerank · text rerank
Instructions for using Qwen/Qwen3-VL-Reranker-8B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Qwen/Qwen3-VL-Reranker-8B with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-Reranker-8B")
model = AutoModelForImageTextToText.from_pretrained("Qwen/Qwen3-VL-Reranker-8B")
```
- sentence-transformers
How to use Qwen/Qwen3-VL-Reranker-8B with sentence-transformers:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("Qwen/Qwen3-VL-Reranker-8B")

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
]

scores = model.predict([(query, passage) for passage in passages])
print(scores)
```
- Notebooks
- Google Colab
- Kaggle
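The scores returned by `model.predict` in the sentence-transformers snippet are per-pair relevance estimates; turning them into a reranked passage list is just a sort. A minimal sketch, using hypothetical placeholder scores in place of real model output:

```python
query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

# Placeholder relevance scores, one per passage; actual values
# would come from CrossEncoder.predict on a loaded model.
scores = [0.03, 0.97, 0.12, 0.08]

# Sort passages by score, highest first, to get the reranked order.
ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.2f}  {passage}")
```

With real scores from the model, the same sort yields the final ranking to return to the user or feed downstream.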
Add pipeline tag, library name, and paper link
#6 opened by nielsr (HF Staff)
Hi! I'm Niels, part of the community science team at Hugging Face.
This PR improves the model card for Qwen3-VL-Reranker-8B by:
- Adding `pipeline_tag: text-ranking` to improve discoverability for retrieval and ranking tasks.
- Specifying `library_name: transformers` to enable the automated code snippet widget.
- Adding a link to the official research paper: Qwen3-VL-Embedding and Qwen3-VL-Reranker: A Unified Framework for State-of-the-Art Multimodal Retrieval and Ranking.
These additions help users better understand the model's capabilities and how to integrate it into their workflows.
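For reference, these fields live in the YAML front matter at the top of the model card's README.md. A sketch of what the relevant section might look like (the `tags` entries are illustrative, drawn from the tags shown on this page):

```yaml
---
pipeline_tag: text-ranking
library_name: transformers
tags:
  - sentence-transformers
  - multimodal rerank
  - text rerank
---
```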