Instructions for using aitf-its-tim3-dfk/ckpt-ws2 with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- PEFT
How to use aitf-its-tim3-dfk/ckpt-ws2 with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3.5-0.8B")
model = PeftModel.from_pretrained(base_model, "aitf-its-tim3-dfk/ckpt-ws2")
```

- Transformers
How to use aitf-its-tim3-dfk/ckpt-ws2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="aitf-its-tim3-dfk/ckpt-ws2")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("aitf-its-tim3-dfk/ckpt-ws2", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use aitf-its-tim3-dfk/ckpt-ws2 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "aitf-its-tim3-dfk/ckpt-ws2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "aitf-its-tim3-dfk/ckpt-ws2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/aitf-its-tim3-dfk/ckpt-ws2
```
- SGLang
How to use aitf-its-tim3-dfk/ckpt-ws2 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "aitf-its-tim3-dfk/ckpt-ws2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "aitf-its-tim3-dfk/ckpt-ws2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "aitf-its-tim3-dfk/ckpt-ws2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "aitf-its-tim3-dfk/ckpt-ws2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Unsloth Studio
How to use aitf-its-tim3-dfk/ckpt-ws2 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
# Install Unsloth Studio:
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for aitf-its-tim3-dfk/ckpt-ws2 to start chatting
```
Install Unsloth Studio (Windows)
```powershell
# Install Unsloth Studio:
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for aitf-its-tim3-dfk/ckpt-ws2 to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for aitf-its-tim3-dfk/ckpt-ws2 to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="aitf-its-tim3-dfk/ckpt-ws2",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use aitf-its-tim3-dfk/ckpt-ws2 with Docker Model Runner:
```shell
docker model run hf.co/aitf-its-tim3-dfk/ckpt-ws2
```
unsloth-sft-vlm-qwen35-final
LoRA adapter fine-tuned from Qwen3.5-0.8B for visual-language DFK image classification. Trained with the SITA framework (https://github.com/aitf-its-tim3-dfk/SITA).
Note: This is the checkpoint from Workshop 2. The model has changed since, so this is not the final checkpoint; we recommend loading the final checkpoint once it becomes available. This one is published so that DFK-2 can trial the integration.
Model Details
Model Description
This is a LoRA adapter for Qwen3.5-0.8B, fine-tuned as a Vision-Language Model (VLM) using Unsloth's SFT pipeline. The model is trained to analyze images and classify them for DFK detection tasks in Indonesian.
- Developed by: DFK Tim 3 ITS
- Model type: Vision-Language Model (VLM) — LoRA adapter
- Language(s): Indonesian
- Finetuned from: unsloth/Qwen3.5-0.8B
Model Sources
- Repository: SITA (https://github.com/aitf-its-tim3-dfk/SITA)
Uses
Direct Use
Image-based content-moderation classification: given an image of a social media post, the model produces a structured analysis together with a classification label.
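The structured response follows the `Label:` / `Analisis:` layout of the assistant turn in the prompt template shown later in this card, so it can be split back into fields with simple string handling. The following is an illustrative sketch, not part of the SITA codebase; the field names and the example label `judi_online` are hypothetical:

```python
import re

def parse_model_output(text: str) -> dict:
    """Split a 'Label: ... / Analisis: ...' response into its two fields.

    Illustrative helper only; assumes the assistant-turn layout from the
    prompt template in this card.
    """
    label_match = re.search(r"Label:\s*(.+)", text)
    analysis_match = re.search(r"Analisis:\s*(.+)", text, re.DOTALL)
    return {
        "label": label_match.group(1).strip() if label_match else None,
        "analisis": analysis_match.group(1).strip() if analysis_match else None,
    }

# Hypothetical model response
output = "Label: judi_online\nAnalisis: Unggahan ini mempromosikan situs judi."
print(parse_model_output(output))
# → {'label': 'judi_online', 'analisis': 'Unggahan ini mempromosikan situs judi.'}
```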
Out-of-Scope Use
This model is not intended for general-purpose vision-language tasks. It is specialized for the DFK disinformation detection pipeline.
Training Details
Training Data
Custom DFK VLM dataset (dfk_vlm_dataset_v1) with a 90/10 train/eval split, loaded from CSV (images_v2.csv).
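The 90/10 split can be sketched as below. The column names stand in for those of images_v2.csv (they are not documented in this card), and the shuffle uses the seed 3407 from the training config; the exact split procedure used by SITA may differ:

```python
import csv
import io
import random

# Hypothetical rows standing in for images_v2.csv
csv_text = "image,title,text,label,analisis\n" + "\n".join(
    f"img_{i}.png,judul {i},teks {i},label_{i % 3},analisis {i}" for i in range(100)
)

rows = list(csv.DictReader(io.StringIO(csv_text)))
random.seed(3407)  # seed from the training config
random.shuffle(rows)

# 90/10 train/eval split
split = int(0.9 * len(rows))
train_rows, eval_rows = rows[:split], rows[split:]
print(len(train_rows), len(eval_rows))  # → 90 10
```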
Prompt Template
Each sample is formatted as a multi-turn conversation using qwen3.5_chatml:
```
<|im_start|>user
Anda adalah seorang analis konten media sosial ahli. Diberikan tangkapan layar dari sebuah unggahan media sosial, tentukan label kategori pelanggaran dan berikan analisis detail mengenai pelanggaran yang ditemukan.
Judul: {title}
Konteks: {text}
<image>
<|im_end|>
<|im_start|>assistant
Label: {label}
Analisis: {analisis}
<|im_end|>
```

(The Indonesian instruction translates to: "You are an expert social media content analyst. Given a screenshot of a social media post, determine the violation category label and provide a detailed analysis of the violations found." "Judul" is "Title" and "Konteks" is "Context".)
The model is trained on responses only (`train_on_responses_only: true`).
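Rendering one sample into this template can be sketched as follows; `format_sample` is an illustrative helper, not part of SITA, and the `<image>` placeholder marks where the processor inserts image tokens:

```python
def format_sample(title: str, text: str, label: str, analisis: str) -> str:
    """Render one sample into the qwen3.5_chatml-style conversation used by
    this card's prompt template. Illustrative helper only."""
    instruction = (
        "Anda adalah seorang analis konten media sosial ahli. "
        "Diberikan tangkapan layar dari sebuah unggahan media sosial, "
        "tentukan label kategori pelanggaran dan berikan analisis detail "
        "mengenai pelanggaran yang ditemukan."
    )
    return (
        f"<|im_start|>user\n{instruction}\n"
        f"Judul: {title}\nKonteks: {text}\n<image>\n<|im_end|>\n"
        # With train_on_responses_only, the loss covers only this turn:
        f"<|im_start|>assistant\nLabel: {label}\nAnalisis: {analisis}\n<|im_end|>"
    )

sample = format_sample("Contoh judul", "Contoh teks", "judi_online", "Analisis singkat.")
print(sample)
```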
Training Procedure
Trained with the SITA framework using the following config (`configs/vlmconf.yaml`):
Training Hyperparameters
| Parameter | Value |
|---|---|
| Training regime | fp32 (4-bit quantization disabled) |
| LoRA r | 16 |
| LoRA alpha | 16 |
| LoRA dropout | 0 |
| LoRA target modules | all-linear |
| Finetune vision layers | true |
| Finetune language layers | true |
| Finetune attention modules | true |
| Finetune MLP modules | true |
| Epochs | 5 |
| Batch size | 32 |
| Learning rate | 2e-4 |
| Gradient accumulation steps | 1 |
| Max sequence length | 2048 |
| Optimizer | AdamW 8-bit |
| Gradient checkpointing | unsloth |
| Seed | 3407 |
| Chat template | qwen3.5_chatml |
| Train on responses only | true |
Trainer
- Trainer: `unsloth_vlm_sft` (Unsloth VLM SFT trainer)
- Instruction part: `<|im_start|>user\n`
- Response part: `<|im_start|>assistant\n`
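Conceptually, training on responses only means masking the labels for every token up to and including the response marker, so the loss applies only to the assistant turn. A toy token-level sketch (not Unsloth's implementation; `-100` is the label value the Hugging Face cross-entropy loss ignores):

```python
IGNORE_INDEX = -100  # label value ignored by the loss

def mask_instruction(token_ids: list, response_marker_ids: list) -> list:
    """Return labels with everything up to and including the first occurrence
    of the response marker set to IGNORE_INDEX. Conceptual sketch of
    train_on_responses_only, not Unsloth's actual code."""
    labels = list(token_ids)
    for start in range(len(token_ids) - len(response_marker_ids) + 1):
        if token_ids[start:start + len(response_marker_ids)] == response_marker_ids:
            end = start + len(response_marker_ids)
            labels[:end] = [IGNORE_INDEX] * end
            break
    return labels

# Toy ids: 9 stands in for the "<|im_start|>assistant\n" marker
print(mask_instruction([1, 2, 3, 9, 4, 5], [9]))
# → [-100, -100, -100, -100, 4, 5]
```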
Evaluation
- Evaluator: `vlm_gen`
- Max new tokens: 512
- Temperature: 0.0
- BERTScore model: `bert-base-multilingual-cased`
Framework versions
- PEFT 0.19.0