# SkeptiSTEM-4B-v2 (stageR1) - LoRA Adapter
This is the LoRA adapter for **SkeptiSTEM-4B-v2**, fine-tuned from `unsloth/Qwen3-4B-Base`.

**Stage:** R1 STEM SFT
Trained on a mixture of:
- GSM8K (math word problems)
- Hendrycks MATH (advanced mathematics)
- DAPO Math
- SciBench (science)
- MBPP (Python coding)
- Verifiable Coding Problems
**Total examples:** ~40,922
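The exact data pipeline is not published in this card. As a minimal sketch, a mixture like this can be assembled with the `datasets` library; the dataset IDs and field names below are illustrative (only two of the six sources are shown), not necessarily the ones used in training.

```python
from datasets import load_dataset, concatenate_datasets

# Illustrative sketch: map each source to a shared prompt/answer schema,
# then concatenate. Dataset IDs and columns are assumptions, not the
# exact training setup.
gsm8k = load_dataset("openai/gsm8k", "main", split="train")
gsm8k = gsm8k.map(lambda ex: {"prompt": ex["question"], "answer": ex["answer"]},
                  remove_columns=gsm8k.column_names)

mbpp = load_dataset("google-research-datasets/mbpp", "full", split="train")
mbpp = mbpp.map(lambda ex: {"prompt": ex["text"], "answer": ex["code"]},
                remove_columns=mbpp.column_names)

mixture = concatenate_datasets([gsm8k, mbpp]).shuffle(seed=42)
print(len(mixture))
```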
## Training Details
- LoRA rank: 64
- Learning rate: 2e-05
- Epochs: 3
- Effective batch size: 32
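For reference, the hyperparameters above map onto a standard Unsloth + TRL fine-tuning setup roughly as follows. This is a reconstruction, not the actual training script: `lora_alpha`, `target_modules`, and the per-device/accumulation split of the effective batch size of 32 are assumptions.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig

# Load the base model, then wrap it with a rank-64 LoRA adapter.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-4B-Base",
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=64,                  # LoRA rank from the list above
    lora_alpha=64,         # assumption; not stated in this card
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=mixture,  # the ~40,922-example mixture described above
    args=SFTConfig(
        per_device_train_batch_size=4,  # 4 x 8 accumulation = effective 32
        gradient_accumulation_steps=8,  # assumed split
        learning_rate=2e-5,
        num_train_epochs=3,
        output_dir="skeptistem-r1",
    ),
)
trainer.train()
```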
## Usage
```python
from unsloth import FastLanguageModel

# Load the adapter (the base model is pulled in automatically).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HallD/SkeptiSTEM-4B-v2-stageR1-lora",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)

# Build a chat prompt and generate
messages = [
    {"role": "system", "content": "You are a helpful STEM assistant."},
    {"role": "user", "content": "What is 15 * 23?"},
]
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
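If you prefer not to depend on Unsloth, the adapter should also load with plain `transformers` + `peft`. This is an untested sketch; it assumes the base model ID above and that tokenizer files are resolvable from the base repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Untested sketch: load the base model, then attach this LoRA adapter via PEFT.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-4B-Base", device_map="auto")
model = PeftModel.from_pretrained(base, "HallD/SkeptiSTEM-4B-v2-stageR1-lora")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-4B-Base")
```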
Trained with Unsloth.