# CodeK LoRA v0 -- Qwen2.5-Coder-7B-Instruct

A LoRA adapter fine-tuned on CodeK v1, a reasoning-first, pedagogical coding dataset that teaches decomposition, bug diagnosis, contrast reasoning, and hypothesis-driven thinking about code.

## v0 Eval Results (Pass 2 ground-truth, 50 seeds)

| Model | Pass@1 |
|---|---|
| Base (Qwen2.5-Coder-7B-Instruct) | 64% |
| LoRA checkpoint-800 | 58% |

A 6-point Pass@1 regression on bug diagnosis. The LoRA wins on 2/50 seeds (more direct while still correct); the base wins on 5/50 (the LoRA misidentifies the target function or pattern-matches to its training data). See BASELINE_V0.md in the dataset repo for the full analysis.
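For clarity, Pass@1 here is simply the fraction of the 50 eval seeds whose single generated diagnosis is judged correct. A minimal sketch of that scoring, where `grade` is a hypothetical judging helper and not part of any released eval code:

```python
# Hypothetical scoring sketch. `grade` (judging one diagnosis against
# ground truth) is an assumed helper, not part of the released eval code.
def pass_at_1(generations, ground_truths, grade):
    correct = sum(grade(g, t) for g, t in zip(generations, ground_truths))
    return correct / len(ground_truths)

# Over 50 seeds: base 32/50 = 64%, LoRA checkpoint-800 29/50 = 58%.
```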

## Training

| Setting | Value |
|---|---|
| Base model | Qwen/Qwen2.5-Coder-7B-Instruct |
| Method | LoRA (rank-stabilized / RS-LoRA) |
| Rank / alpha | 16 / 32 |
| Dropout | 0.05 |
| Epochs | 3 |
| Effective batch size | 8 |
| Learning rate | 2e-4 |
| Training pairs | 2,351 |
| Best eval loss | 0.0583 (step 528) |
| Checkpoint used | checkpoint-800 (eval loss 0.061) |
| Hardware | RunPod A100 80 GB, 59 min |
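For reference, a minimal `peft` sketch of the configuration above. The `target_modules` list is an assumption (the usual Qwen2 attention and MLP projections), not taken from the actual training script:

```python
from peft import LoraConfig

# Sketch of the adapter config from the table above.
# NOTE: target_modules is an assumption (typical Qwen2 projection
# layers); the actual run may have targeted a different set.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    use_rslora=True,  # RS-LoRA: scales by lora_alpha / sqrt(r) rather than lora_alpha / r
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```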

## Dataset

`mechramc/codek-v1` -- 201 seeds, 4 augmentation passes, 2,613 ShareGPT-format pairs.
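Each pair follows the standard ShareGPT conversation schema, roughly as below. The contents here are invented for illustration; see the dataset card for real samples:

```python
# Illustrative ShareGPT-format pair (values invented for illustration;
# see the dataset card for real samples).
pair = {
    "conversations": [
        {"from": "human", "value": "Why does parse_config() drop the last key?"},
        {"from": "gpt", "value": "Start from what the loop invariant should be: ..."},
    ]
}
```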

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "mechramc/codek-qwen2.5-coder-7b-lora")
tokenizer = AutoTokenizer.from_pretrained("mechramc/codek-qwen2.5-coder-7b-lora")
```
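A minimal generation sketch using the loaded model; the prompt is illustrative:

```python
# Minimal generation sketch; the prompt is illustrative.
messages = [{"role": "user", "content": "Why might this loop never terminate?\n\nwhile i < n:\n    process(i)"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```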