On Semantic Loss Fine-Tuning Approach for Preventing Model Collapse in Causal Reasoning
Paper: [arXiv:2605.05438](https://arxiv.org/abs/2605.05438)
Gemma 270M-IT fine-tuned on the d-separation task with standard fine-tuning only, without the semantic loss term. The model exhibits model collapse: it predicts a near-constant "No" regardless of input (7.6% F1). Provided for reproducibility and as a negative baseline.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the collapsed baseline checkpoint and its tokenizer.
model = AutoModelForCausalLM.from_pretrained("ludwigw/gemma-dseparation-baseline")
tokenizer = AutoTokenizer.from_pretrained("ludwigw/gemma-dseparation-baseline")
```
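To observe the collapse directly, the minimal sketch below queries the checkpoint with a single d-separation question. The prompt wording and generation settings are illustrative assumptions, not the paper's exact evaluation format.

```python
import torch

# Illustrative prompt — the exact d-separation question format used in the
# paper's evaluation is an assumption here.
prompt = "In the DAG A -> B -> C, are A and C d-separated given B? Answer Yes or No."
messages = [{"role": "user", "content": prompt}]

# Format the question with Gemma's chat template and generate greedily.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=8, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0, inputs.shape[-1]:], skip_special_tokens=True))
# A collapsed baseline is expected to answer "No" here and on nearly any
# other instance, consistent with the 7.6% F1 reported above.
```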
```bibtex
@article{deshmukh2026semantic,
  title={On Semantic Loss Fine-Tuning Approach for Preventing Model Collapse in Causal Reasoning},
  author={Deshmukh, Pratik and Gupta, Atirek},
  journal={arXiv preprint arXiv:2605.05438},
  year={2026}
}
```