vqwen-qformer-stage1 (feature-alignment foundation)

Stage-1 checkpoint. Only the Linear projector has been trained. The model can emit plain BLIP-style captions for images, but has not been instruction-tuned and is not intended as a deployable chat model — it is the pretraining foundation for downstream stage-2 fine-tunes.

If you want a ready-to-use TikTok sludge classifier built on top of this foundation, see alpharomercoma/vqwen-qformer-tiktok.

Loads as a stock Blip2ForConditionalGeneration — no trust_remote_code.

Architecture

  • Vision tower: Salesforce/blip2-opt-2.7b ViT-G/14-224 — frozen
  • Q-Former: Salesforce/blip2-opt-2.7b 32 pretrained query tokens — frozen
  • Linear projector: 768 → 2560 — trained (this is the only delta)
  • LLM: Qwen/Qwen3-4B — frozen

Trainable parameter count: ~2 M (one Linear(768, 2560)). Everything else is loaded unchanged from its base checkpoint.
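
For reference, the ~2 M figure is just the projector's weight matrix plus its bias: 768 × 2560 + 2560 = 1,968,640 parameters. A quick sanity check against the checkpoint (a minimal sketch, mirroring the Quick start load below):

import torch
from transformers import Blip2ForConditionalGeneration

model = Blip2ForConditionalGeneration.from_pretrained(
    "alpharomercoma/vqwen-qformer-stage1", dtype=torch.bfloat16
)

# The trained delta is a single nn.Linear mapping Q-Former features (768-d)
# into Qwen3-4B's embedding space (2560-d).
proj = model.language_projection
print(proj)                                        # Linear(in_features=768, out_features=2560, bias=True)
print(sum(p.numel() for p in proj.parameters()))   # 1968640, i.e. ~2 M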

Training recipe

Dataset: liuhaotian/LLaVA-Pretrain — 558 K BLIP-captioned image–text pairs from LAION/CC/SBU. Conversation format is plain (no chat template, no system prompt): <image> on the human turn, caption on the assistant turn, loss masked on the human side.
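
As an illustration of how one pair becomes a training example under the plain format (a sketch, not the exact collator; the caption below is a placeholder, real ones come from LLaVA-Pretrain):

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("alpharomercoma/vqwen-qformer-stage1")
tok = processor.tokenizer

human = "<image>\nDescribe the image briefly."   # <image> marks where the visual tokens are spliced in
caption = "a dog playing fetch in a park"        # placeholder caption

human_ids = tok(human, add_special_tokens=False).input_ids
caption_ids = tok(caption + tok.eos_token, add_special_tokens=False).input_ids

input_ids = human_ids + caption_ids
labels = [-100] * len(human_ids) + caption_ids   # human turn masked out; loss only on the caption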

Hyperparameters (single NVIDIA H200, bf16, SDPA, Liger-Kernel):

  • Global batch size: 256 (per_device=128 × grad_accum=2)
  • Learning rate: 1e-4, cosine schedule, warmup ratio 0.03
  • Weight decay: 0.05
  • Optimizer: fused AdamW
  • Epochs: 1 (2181 steps)
  • Max sequence length: 2048
  • Precision: bf16
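
For orientation, the same settings expressed as transformers.TrainingArguments (a sketch; output_dir is a placeholder, and the 2048-token cap lives in the data pipeline rather than here):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="stage1-out",            # placeholder
    per_device_train_batch_size=128,
    gradient_accumulation_steps=2,      # 128 × 2 = 256 global batch on one GPU
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    weight_decay=0.05,
    optim="adamw_torch_fused",          # fused AdamW
    num_train_epochs=1,                 # 2181 optimizer steps over the 558 K pairs
    bf16=True,
)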

Loss curve: ~7.5 → ~3.58 over 2181 steps (typical MiniGPT-4/LLaVA stage-1 trajectory).

What this model can do

  • Emit short captions for images when prompted in the training-time "plain" format.
  • Serve as a starting point for stage-2 instruction tuning (LoRA on the LLM + continued projector training) — the frozen CLIP/Q-Former features are already aligned to Qwen3's embedding space.

What this model is NOT

  • Not an instruction-following model. It has never seen chat-formatted supervision, so it will not reliably follow "Is this X? Answer yes or no."-style prompts. Expect caption-like free-form output.
  • Not specialized for any domain — this is the generic alignment checkpoint, pre-any task fine-tuning.

Quick start

import torch
from PIL import Image
from transformers import Blip2ForConditionalGeneration, AutoProcessor

model_id = "alpharomercoma/vqwen-qformer-stage1"
# Load the full stack (frozen ViT + Q-Former, trained projector, frozen Qwen3 LM) in bf16.
model = Blip2ForConditionalGeneration.from_pretrained(
    model_id, dtype=torch.bfloat16, device_map="auto"
).eval()
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("my_image.jpg").convert("RGB")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
inputs["pixel_values"] = inputs["pixel_values"].to(torch.bfloat16)  # match the bf16 weights
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens (everything after the prompt).
print(processor.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Using this as a stage-2 starting point

The projector state lives at model.language_projection (a Linear(768, 2560)). To start a stage-2 run from these weights, load this checkpoint as your base Blip2ForConditionalGeneration, attach a LoRA adapter to the language model, and continue training on your instruction dataset. The vision tower and Q-Former can (and should) stay frozen.
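
A minimal sketch of that setup with peft (the target module names q_proj/k_proj/v_proj/o_proj are an assumption about Qwen3's attention-layer naming, and the rank/alpha values are illustrative):

import torch
from peft import LoraConfig, get_peft_model
from transformers import Blip2ForConditionalGeneration

model = Blip2ForConditionalGeneration.from_pretrained(
    "alpharomercoma/vqwen-qformer-stage1", dtype=torch.bfloat16
)

lora_cfg = LoraConfig(
    r=16,                                                      # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # Qwen3 attention projections (assumed names)
    modules_to_save=["language_projection"],                   # keep the stage-1 projector trainable
)
model = get_peft_model(model, lora_cfg)   # everything not targeted (ViT, Q-Former) stays frozen
model.print_trainable_parameters()
# ...hand the wrapped model to your instruction-tuning Trainer as usual.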

See the training code at github.com/alpharomercoma/vqwen-qformer — the TikTok specialization pipeline (scripts 11–30) is a worked example.

Credits

  • Base vision: Salesforce/blip2-opt-2.7b (ViT-G + Q-Former)
  • Base LLM: Qwen/Qwen3-4B
  • Dataset: liuhaotian/LLaVA-Pretrain
  • Recipe: follows MiniGPT-4 / LLaVA-1.5 stage-1 alignment with a single Linear (768→2560) projector instead of an MLP, because the Q-Former is already a trained adapter.

License

Apache 2.0 for the trained projector. Base models retain original licenses: Salesforce/blip2-opt-2.7b (BSD-3), Qwen/Qwen3-4B (Apache 2.0).
