Gemma D-Separation Semantic V2

Gemma 270M-IT fine-tuned for d-separation causal reasoning, trained with a semantic loss under dynamic lambda scheduling.
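
This card doesn't spell out the schedule, so the following is only a minimal sketch of what a semantic loss with dynamic lambda scheduling can look like: the linear warmup and the semantic-constraint term are illustrative assumptions, not the paper's exact formulation.

import torch

def lambda_schedule(step: int, total_steps: int, lambda_max: float = 1.0) -> float:
    # Illustrative linear warmup over the first half of training;
    # the paper's actual schedule may differ.
    return lambda_max * min(1.0, step / max(1, total_steps // 2))

def training_loss(ce_loss: torch.Tensor, semantic_loss: torch.Tensor,
                  step: int, total_steps: int) -> torch.Tensor:
    # Total loss = cross-entropy + lambda(step) * semantic constraint term.
    lam = lambda_schedule(step, total_steps)
    return ce_loss + lam * semantic_loss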

Performance

  • Standard accuracy: 68.6%
  • Adversarial accuracy: 67.8%
  • F1 score: 25.0% (vs. 7.6% for the collapsed baseline)

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("ludwigw/gemma-dseparation-semantic-v2")
tokenizer = AutoTokenizer.from_pretrained("ludwigw/gemma-dseparation-semantic-v2")
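
A minimal generation sketch follows. The chat-template call is the standard transformers API for Gemma instruction-tuned models, but the prompt wording is an illustrative assumption; the card doesn't specify an expected query format.

# Illustrative d-separation query
messages = [{"role": "user", "content": "In the DAG A -> B -> C, are A and C d-separated given B?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=32)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))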

Citation

@article{deshmukh2026semantic,
  title={On Semantic Loss Fine-Tuning Approach for Preventing Model Collapse in Causal Reasoning},
  author={Deshmukh, Pratik and Gupta, Atirek},
  journal={arXiv preprint arXiv:2605.05438},
  year={2026}
}