Supertron1
Supertron1-4B is an instruction-tuned language model built on top of Qwen3-4B. Designed to be a reliable, efficient daily driver, it delivers strong performance across math, coding, reasoning, and general conversation while remaining fast and lightweight enough to run on consumer hardware.
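As a rough check on the "consumer hardware" claim, here is a back-of-envelope VRAM estimate for the weights in bfloat16. The parameter count of ~4.0B is an assumption based on the model name; the exact figure is not stated here, and real usage adds KV cache and activation overhead on top.

```python
# Back-of-envelope memory estimate for Supertron1-4B weights in bfloat16.
# Assumes ~4.0e9 parameters (inferred from the "4B" name, not an exact count).
params = 4.0e9
bytes_per_param = 2  # bfloat16 = 16 bits = 2 bytes per parameter
weights_gib = params * bytes_per_param / 2**30
print(f"weights alone: ~{weights_gib:.1f} GiB")  # before KV cache and activations
```

By this estimate the weights fit comfortably in a 12 GB consumer GPU, with headroom left for the KV cache at moderate context lengths.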
Supertron1-4B holds its own against models in the 4–8B class and surpasses Mistral 7B on all four core benchmarks despite having nearly half the parameters.
Quickstart:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "surpem/supertron1-4b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain the difference between LoRA and full fine-tuning."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
Citation:

```bibtex
@misc{surpem2026supertron1,
  title={Supertron1-4B — Efficient Instruction-Tuned Language Model},
  author={Surpem},
  year={2026},
  url={https://huggingface.co/surpem/supertron1-4b},
}
```