# Qwen2.5-3B Linux Assistant

A fine-tuned version of Qwen2.5-3B-Instruct that acts as a Linux/shell command assistant: given a natural-language description of a task, the model outputs the corresponding shell command.

Supports both Russian and English input.


## Model Details

| Property | Value |
|---|---|
| Base model | Qwen2.5-3B-Instruct |
| Fine-tuning method | QLoRA (LoRA r=16, alpha=16) |
| Training steps | ~1700 |
| Epochs | 3 |
| Final loss | ~0.28 |
| Dataset size | ~4500 examples |
| Languages | Russian, English |
| Framework | Unsloth + TRL |

## Usage

### Ollama (recommended)

```bash
ollama run hf.co/NickIBrody/qwen-linux-gguf
```

### llama.cpp

```bash
llama-cli -hf NickIBrody/qwen-linux-gguf --jinja
```

### Python (transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "NickIBrody/qwen-linux"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a Linux assistant. Reply only with the shell command, no explanations."},
    {"role": "user", "content": "show top 5 processes by memory usage"},
]
inp = tok.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
# do_sample=True is required for temperature to take effect;
# otherwise generate() uses greedy decoding and ignores it.
out = model.generate(inp, max_new_tokens=128, do_sample=True, temperature=0.3)
print(tok.decode(out[0][inp.shape[1]:], skip_special_tokens=True))
```

## Examples

| Input | Output |
|---|---|
| покажи топ 5 процессов по памяти *(show top 5 processes by memory)* | `ps aux --sort=-%mem \| head -n 5` |
| где я нахожусь в терминале *(where am I in the terminal)* | `pwd` |
| compress file data.txt with bzip2 | `bzip2 data.txt` |
| show disk usage in human readable format | `df -h` |
| find all .log files modified in last 7 days | `find / -name "*.log" -mtime -7` |
| kill process by name nginx | `pkill nginx` |
| show open ports | `ss -tulnp` |
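The model is prompted to reply with the bare command, but chat models occasionally wrap answers in code fences or backticks. A small post-processing helper can normalize replies before you display or run them; this is a sketch, not part of the model, and `clean_command` is a hypothetical name:

```python
def clean_command(text: str) -> str:
    """Strip surrounding code fences, backticks, and whitespace from a
    model reply so only the bare shell command remains."""
    t = text.strip()
    if t.startswith("```"):
        # Drop the opening fence (with optional language tag) and the closing fence.
        lines = [line for line in t.splitlines() if not line.startswith("```")]
        t = "\n".join(lines).strip()
    return t.strip("`").strip()

print(clean_command("```bash\ndf -h\n```"))  # df -h
```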

## Dataset

Training data: `NickIBrody/linux-commands-ru-en`

~4500 shell command examples in Russian and English, covering:

- File system navigation and management
- Process management
- Networking
- Archiving and compression
- System monitoring
- Package management
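For reference, a single record in the assumed `messages` format (the field consumed by `apply_chat_template` in the training code; the actual dataset schema may differ) might look like:

```python
# Hypothetical example record; the real dataset fields may differ.
example = {
    "messages": [
        {"role": "user", "content": "show disk usage in human readable format"},
        {"role": "assistant", "content": "df -h"},
    ]
}

roles = [m["role"] for m in example["messages"]]
```

Each conversation is a single prompt/command pair, which keeps training sequences short and the replies terse.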

## Training Code

```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit quantized base model with Unsloth.
model, tok = FastLanguageModel.from_pretrained(
    "unsloth/Qwen2.5-3B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (r=16, alpha=16) to all attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

tok = get_chat_template(tok, chat_template="qwen-2.5")

# Render each conversation into one training string via the Qwen 2.5 chat template.
ds = load_dataset("NickIBrody/linux-commands-ru-en", split="train")
ds = ds.map(lambda x: {"text": tok.apply_chat_template(x["messages"], tokenize=False)})

trainer = SFTTrainer(
    model=model,
    tokenizer=tok,
    train_dataset=ds,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,  # effective batch size of 8
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
        output_dir="out",
        optim="adamw_8bit",
    ),
)
trainer.train()

# Export a q4_k_m-quantized GGUF for Ollama / llama.cpp.
model.save_pretrained_gguf("qwen-linux", tok, quantization_method="q4_k_m")
```
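The ~1700 training steps reported above follow directly from the dataset size and batch configuration; a quick sanity check of the arithmetic:

```python
# Effective batch size and total optimizer steps implied by the config above.
per_device_batch = 2
grad_accum = 4
effective_batch = per_device_batch * grad_accum  # 2 * 4 = 8

dataset_size = 4500  # approximate
epochs = 3
steps_per_epoch = dataset_size // effective_batch  # 562
total_steps = steps_per_epoch * epochs  # 1686, i.e. roughly the reported ~1700
```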

## Limitations

- Designed for shell commands only, not general conversation
- May struggle with highly complex multi-step scripts
- Works best with clear, specific prompts

## License

Apache 2.0
