
This model was released on 2024-06-27 and added to Hugging Face Transformers on 2025-06-02.

PyTorch

ColQwen2

ColQwen2 is a variant of the ColPali model designed to retrieve documents by analyzing their visual features. Unlike traditional systems that rely heavily on text extraction and OCR, ColQwen2 treats each page as an image. It uses the Qwen2-VL backbone to capture not only text, but also the layout, tables, charts, and other visual elements to create detailed multi-vector embeddings that can be used for retrieval by computing pairwise late interaction similarity scores. This offers a more comprehensive understanding of documents and enables more efficient and accurate retrieval.

This model was contributed by @tonywu71 (ILLUIN Technology) and @yonigozlan (HuggingFace).

You can find all the original ColQwen2 checkpoints under Vidore’s Hf-native ColVision Models collection.

Click on the ColQwen2 models in the right sidebar for more examples of how to use ColQwen2 for image retrieval.

The example below shows image retrieval with ColQwen2:
import requests
import torch
from PIL import Image

from transformers import ColQwen2ForRetrieval, ColQwen2Processor
from transformers.utils.import_utils import is_flash_attn_2_available


# Load the model and the processor
model_name = "vidore/colqwen2-v1.0-hf"

model = ColQwen2ForRetrieval.from_pretrained(
    model_name,
    dtype=torch.bfloat16,
    device_map="auto",  # "cpu", "cuda", "xpu" or "mps" for Apple Silicon
    attn_implementation="flash_attention_2" if is_flash_attn_2_available() else "sdpa",
)
processor = ColQwen2Processor.from_pretrained(model_name)

# The document page screenshots from your corpus
url1 = "https://upload.wikimedia.org/wikipedia/commons/8/89/US-original-Declaration-1776.jpg"
url2 = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Romeoandjuliet1597.jpg/500px-Romeoandjuliet1597.jpg"

images = [
    Image.open(requests.get(url1, stream=True).raw),
    Image.open(requests.get(url2, stream=True).raw),
]

# The queries you want to retrieve documents for
queries = [
    "When was the United States Declaration of Independence proclaimed?",
    "Who printed the edition of Romeo and Juliet?",
]

# Process the inputs
inputs_images = processor(images=images).to(model.device)
inputs_text = processor(text=queries).to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**inputs_images).embeddings
    query_embeddings = model(**inputs_text).embeddings

# Score the queries against the images
scores = processor.score_retrieval(query_embeddings, image_embeddings)

print("Retrieval scores (query x image):")
print(scores)

If you have issues loading the images with PIL, you can use the following code to create dummy images:

images = [
    Image.new("RGB", (128, 128), color="white"),
    Image.new("RGB", (64, 32), color="black"),
]

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize the weights to int4.

import requests
import torch
from PIL import Image

from transformers import BitsAndBytesConfig, ColQwen2ForRetrieval, ColQwen2Processor
from accelerate import Accelerator

model_name = "vidore/colqwen2-v1.0-hf"
device = Accelerator().device

# 4-bit quantization configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = ColQwen2ForRetrieval.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map=device,
).eval()

processor = ColQwen2Processor.from_pretrained(model_name)

url1 = "https://upload.wikimedia.org/wikipedia/commons/8/89/US-original-Declaration-1776.jpg"
url2 = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Romeoandjuliet1597.jpg/500px-Romeoandjuliet1597.jpg"

images = [
    Image.open(requests.get(url1, stream=True).raw),
    Image.open(requests.get(url2, stream=True).raw),
]

queries = [
    "When was the United States Declaration of Independence proclaimed?",
    "Who printed the edition of Romeo and Juliet?",
]

# Process the inputs
inputs_images = processor(images=images, return_tensors="pt").to(model.device)
inputs_text = processor(text=queries, return_tensors="pt").to(model.device)

# Forward pass
with torch.no_grad():
    image_embeddings = model(**inputs_images).embeddings
    query_embeddings = model(**inputs_text).embeddings

# Score the queries against the images
scores = processor.score_retrieval(query_embeddings, image_embeddings)

print("Retrieval scores (query x image):")
print(scores)

You can also use checkpoints for ColQwen2.5 that are compatible with the ColQwen2 architecture. This version of the model uses Qwen2_5_VL as the backbone.

import torch
from transformers import ColQwen2ForRetrieval, ColQwen2Processor
from transformers.utils.import_utils import is_flash_attn_2_available

model_name = "Sahil-Kabir/colqwen2.5-v0.2-hf" # An existing compatible checkpoint

model = ColQwen2ForRetrieval.from_pretrained(
    model_name,
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2" if is_flash_attn_2_available() else "sdpa"
)
processor = ColQwen2Processor.from_pretrained(model_name)

Notes

  • score_retrieval() returns a 2D tensor where the first dimension is the number of queries and the second dimension is the number of images. A higher score indicates more similarity between the query and image (see the sketch after this list for how these late-interaction scores are computed).
  • Unlike ColPali, ColQwen2 supports arbitrary image resolutions and aspect ratios, which means images are not resized into fixed-size squares. This preserves more of the original input signal.
  • Larger input images generate longer multi-vector embeddings, allowing users to adjust image resolution to balance performance and memory usage.
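
The late-interaction (MaxSim) scoring used by score_retrieval() can be reproduced manually. The snippet below is a minimal, illustrative sketch, assuming query_embeddings and image_embeddings are sequences of per-token embedding matrices like the ones returned by the forward passes above; it is not the processor's actual implementation.

import torch

def maxsim_scores(query_embeddings, image_embeddings):
    """Compute ColBERT-style late-interaction scores.

    Each element of the inputs is a (num_tokens, embedding_dim) tensor for one
    query or one image. Returns a (num_queries, num_images) score matrix.
    """
    scores = torch.zeros(len(query_embeddings), len(image_embeddings))
    for i, q in enumerate(query_embeddings):
        for j, d in enumerate(image_embeddings):
            # Token-level similarity matrix: (num_query_tokens, num_image_tokens)
            sim = q.float() @ d.float().T
            # MaxSim: best-matching image token for each query token, summed over query tokens
            scores[i, j] = sim.max(dim=1).values.sum()
    return scores

With the embeddings from the example above, maxsim_scores(list(query_embeddings), list(image_embeddings)) should closely match processor.score_retrieval(query_embeddings, image_embeddings), up to how padding tokens are handled.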

ColQwen2Config

class transformers.ColQwen2Config


( vlm_config = None embedding_dim: int = 128 initializer_range: float = 0.02 **kwargs )

Parameters

  • vlm_config (PreTrainedConfig, optional) — Configuration of the VLM backbone model.
  • embedding_dim (int, optional, defaults to 128) — Dimension of the multi-vector embeddings produced by the model.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

Configuration class to store the configuration of a ColQwen2ForRetrieval. It is used to instantiate a ColQwen2ForRetrieval model according to the specified arguments, defining the model architecture following the methodology from the “ColPali: Efficient Document Retrieval with Vision Language Models” paper.

Instantiating a configuration with the defaults will yield a configuration similar to that of the pre-trained ColQwen2-v1.0 model, e.g. vidore/colqwen2-v1.0-hf.

Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.

Example:

from transformers.models.colqwen2 import ColQwen2Config, ColQwen2ForRetrieval

config = ColQwen2Config()
model = ColQwen2ForRetrieval(config)

ColQwen2Processor

class transformers.ColQwen2Processor


( image_processor = None tokenizer = None chat_template = None visual_prompt_prefix: str | None = None query_prefix: str | None = None **kwargs )

Parameters

  • image_processor (Qwen2VLImageProcessorFast) — The image processor is a required input.
  • tokenizer (Qwen2Tokenizer) — The tokenizer is a required input.
  • chat_template (str) — A Jinja template to convert lists of messages in a chat into a tokenizable string.
  • visual_prompt_prefix (str, optional, defaults to "<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe the image.<|im_end|><|endoftext|>") — A string that gets tokenized and prepended to the image tokens.
  • query_prefix (str, optional, defaults to "Query -- ") — A prefix to be used for the query.

Constructs a ColQwen2Processor which wraps an image processor and a tokenizer into a single processor.

ColQwen2Processor offers all the functionalities of Qwen2VLImageProcessorFast and Qwen2Tokenizer. See Qwen2VLImageProcessorFast and Qwen2Tokenizer for more information.

__call__


( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor'], NoneType] = None text: str | list[str] | list[list[str]] = None **kwargs: typing_extensions.Unpack[transformers.models.colqwen2.processing_colqwen2.ColQwen2ProcessorKwargs] ) BatchFeature

Parameters

  • images (Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, list[PIL.Image.Image], list[numpy.ndarray], list[torch.Tensor]], optional) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
  • text (Union[str, list[str], list[list[str]]], optional) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If you pass a pretokenized input, set is_split_into_words=True to avoid ambiguity with batched inputs.
  • return_tensors (str or TensorType, optional) — If set, will return tensors of a particular framework. Acceptable values are:

    • 'pt': Return PyTorch torch.Tensor objects.
    • 'np': Return NumPy np.ndarray objects.

Returns

BatchFeature

A BatchFeature with the following fields:

  • input_ids — List of token ids to be fed to a model.
  • attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names and if text is not None).
  • pixel_values — Pixel values to be fed to a model. Returned when images is not None.
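
As an illustrative sketch (reusing the vidore/colqwen2-v1.0-hf checkpoint from the examples above), the processor can be called separately on images and on queries, and the returned BatchFeature exposes the fields listed above:

from PIL import Image

from transformers import ColQwen2Processor

processor = ColQwen2Processor.from_pretrained("vidore/colqwen2-v1.0-hf")

# Image inputs yield input_ids, attention_mask and pixel_values
# (plus the image grid metadata produced by the Qwen2-VL image processor)
image_batch = processor(images=[Image.new("RGB", (128, 128), color="white")])
print(image_batch.keys())

# Text inputs yield input_ids and attention_mask only
text_batch = processor(text=["When was the Declaration of Independence proclaimed?"])
print(text_batch.keys())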

ColQwen2ForRetrieval

class transformers.ColQwen2ForRetrieval


( config: ColQwen2Config )

Parameters

  • config (ColQwen2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Following the ColPali approach, ColQwen2 leverages VLMs to construct efficient multi-vector embeddings directly from document images (“screenshots”) for document retrieval. The model is trained to maximize the similarity between these document embeddings and the corresponding query embeddings, using the late interaction method introduced in ColBERT.

ColQwen2 replaces potentially complex and brittle layout recognition and OCR pipelines with a single model that takes into account both the textual and visual content (layout, charts, etc.) of a document.

ColQwen2 is part of the ColVision model family, which was introduced with ColPali in the following paper: ColPali: Efficient Document Retrieval with Vision Language Models.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward


( input_ids: torch.LongTensor | None = None attention_mask: torch.Tensor | None = None position_ids: torch.LongTensor | None = None past_key_values: transformers.cache_utils.Cache | None = None labels: torch.LongTensor | None = None inputs_embeds: torch.FloatTensor | None = None use_cache: bool | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None pixel_values: torch.Tensor | None = None image_grid_thw: torch.LongTensor | None = None cache_position: torch.LongTensor | None = None **kwargs ) transformers.models.colqwen2.modeling_colqwen2.ColQwen2ForRetrievalOutput or tuple(torch.FloatTensor)

Parameters

  • input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default.

    Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:

    • 1 for tokens that are not masked,
    • 0 for tokens that are masked.

    What are attention masks?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1].

    What are position IDs?

  • past_key_values (~cache_utils.Cache, optional) — Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used to speed up sequential decoding. This typically consists in the past_key_values returned by the model at a previous stage of decoding, when use_cache=True or config.use_cache=True.

    Only a Cache instance is allowed as input; see our kv cache guide. If no past_key_values are passed, DynamicCache will be initialized by default.

    The model will output the same cache format that is fed as input.

    If past_key_values are used, the user is expected to input only unprocessed input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, unprocessed_length) instead of all input_ids of shape (batch_size, sequence_length).

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size].
  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
  • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
  • pixel_values (torch.Tensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using Qwen2VLImageProcessorFast. See Qwen2VLImageProcessorFast.__call__() for details (ColQwen2Processor uses Qwen2VLImageProcessorFast for processing images).
  • image_grid_thw (torch.LongTensor of shape (num_images, 3), optional) — The temporal, height, and width of the feature grid for each image fed to the LLM.
  • cache_position (torch.LongTensor of shape (sequence_length), optional) — Indices depicting the position of the input sequence tokens in the sequence. Contrarily to position_ids, this tensor is not affected by padding. It is used to update the cache in the correct position and to infer the complete sequence length.

Returns

transformers.models.colqwen2.modeling_colqwen2.ColQwen2ForRetrievalOutput or tuple(torch.FloatTensor)

A transformers.models.colqwen2.modeling_colqwen2.ColQwen2ForRetrievalOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ColQwen2Config) and inputs.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).

  • embeddings (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — The embeddings of the model.

  • past_key_values (Cache, optional, returned when use_cache=True is passed or when config.use_cache=True) — It is a Cache instance. For more details, see our kv cache guide.

    Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.

  • hidden_states (tuple[torch.FloatTensor] | None, optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple[torch.FloatTensor] | None, optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The ColQwen2ForRetrieval forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:
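
A minimal sketch, reusing the vidore/colqwen2-v1.0-hf checkpoint from the examples above with a dummy image and a single query:

import torch
from PIL import Image

from transformers import ColQwen2ForRetrieval, ColQwen2Processor

model_name = "vidore/colqwen2-v1.0-hf"
model = ColQwen2ForRetrieval.from_pretrained(model_name, dtype=torch.bfloat16, device_map="auto")
processor = ColQwen2Processor.from_pretrained(model_name)

# Encode one document page image and one query
image_inputs = processor(images=[Image.new("RGB", (128, 128), color="white")]).to(model.device)
query_inputs = processor(text=["What is shown on this page?"]).to(model.device)

with torch.no_grad():
    image_embeddings = model(**image_inputs).embeddings  # (batch_size, num_image_tokens, embedding_dim)
    query_embeddings = model(**query_inputs).embeddings  # (batch_size, num_query_tokens, embedding_dim)

scores = processor.score_retrieval(query_embeddings, image_embeddings)
print(scores)  # (num_queries, num_images) similarity matrix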
