# securereview-7b-mlx-4bit
A 4-bit MLX fine-tune of Qwen2.5-Coder-7B-Instruct for function-level security code review. Input: a code function. Output: structured JSON findings with severity, category, CWE, line number, and description. Runs on Apple Silicon, ~8 GB memory.
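As an illustrative sketch of consuming those findings (the exact schema and field casing are defined by the model's prompt contract, not reproduced here; the names below are assumptions):

```python
import json

# Hypothetical raw model output (illustrative only; real field names
# and casing come from the model's prompt contract)
raw = """
{
  "findings": [
    {
      "severity": "high",
      "category": "SQL Injection",
      "cwe": "CWE-89",
      "line": 12,
      "description": "User input is concatenated into a SQL query string."
    }
  ]
}
"""

report = json.loads(raw)
for f in report["findings"]:
    # → HIGH CWE-89 line 12: SQL Injection
    print(f'{f["severity"].upper()} {f["cwe"]} line {f["line"]}: {f["category"]}')
```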
Trained on 13,484 examples across 9 languages from CVEFixes, synthetic generation, real vulnerable applications, and community rule sets. All training data is permissively licensed.
## Benchmarks
Tested against 33 vulnerable functions from 8 deliberately vulnerable applications (DVNA, NodeGoat, pygoat, crAPI, DSVW, WebGoat, RailsGoat, Juice Shop):
| Metric | Base Qwen | securereview-7b |
|---|---|---|
| Recall (vulnerable apps) | -- | 94% (31/33) |
| False-positive rate (clean code) | 70% | <3% |
| F1 (test split) | 14% | 44% |
Detection by category:
| Category | Recall |
|---|---|
| SQL Injection | 100% |
| Command Injection | 100% |
| SSRF | 100% |
| Path Traversal | 100% |
| Broken Access Control | 100% |
| IDOR | 86% |
| Insecure Deserialization | 100% |
| Broken Authentication | 100% |
## Quick start
```python
from mlx_lm import load, generate

model, tok = load("vitorallo/securereview-7b-mlx-4bit")

# Make sure Qwen's <|im_end|> token (151645) terminates generation
if hasattr(tok, "eos_token_ids") and 151645 not in tok.eos_token_ids:
    tok.eos_token_ids.add(151645)
```
The model expects a structured prompt with `Function`, `File`, `Role`, `Auth`, and `Code` fields plus a JSON format reminder. See `docs/m3_inference_contract.md` for the full prompt specification.
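A minimal sketch of assembling such a prompt (field layout and labels here are assumptions; the contract document above is authoritative):

```python
def build_prompt(function: str, file: str, role: str, auth: str, code: str) -> str:
    """Assemble the structured review prompt (illustrative layout only)."""
    return (
        f"Function: {function}\n"
        f"File: {file}\n"
        f"Role: {role}\n"
        f"Auth: {auth}\n"
        f"Code:\n{code}\n\n"
        "Respond with JSON findings only."
    )

prompt = build_prompt(
    function="get_user",
    file="app/db.py",
    role="api handler",
    auth="unauthenticated",
    code='def get_user(uid):\n    return db.query("SELECT * FROM users WHERE id=" + uid)',
)

# With the model loaded as in the quick-start snippet:
# model, tok = load("vitorallo/securereview-7b-mlx-4bit")
# print(generate(model, tok, prompt=prompt, max_tokens=512))
```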
## Training
- Base: Qwen2.5-Coder-7B-Instruct-4bit
- Method: QLoRA, rank 8, 8 layers, 1 epoch, lr 1e-4
- Data: 13,484 records, 9 languages, multi-rule prompts (2-8 rules per function)
- Hardware: Apple Silicon, ~1 hour
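For reference, a run with those hyperparameters could be expressed as an mlx_lm LoRA config along these lines. This is a sketch only: key names follow mlx_lm's example `lora_config.yaml` and may differ across versions, and the data path and iteration count are hypothetical.

```yaml
# Illustrative QLoRA config (verify keys against your mlx_lm version)
model: "mlx-community/Qwen2.5-Coder-7B-Instruct-4bit"
train: true
data: "data/securereview"   # hypothetical path to train/valid JSONL files
num_layers: 8               # adapt only the last 8 layers
learning_rate: 1e-4
lora_parameters:
  rank: 8
  scale: 20.0
  dropout: 0.0
```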
## Links
- Code + pipeline
- License: Apache-2.0
## Citation
```bibtex
@misc{securereview-7b-2026,
  author = {Vito Rallo},
  title  = {securereview-7b: a 7B fine-tune for structured security code review},
  year   = {2026},
  url    = {https://huggingface.co/vitorallo/securereview-7b-mlx-4bit}
}
```