IMMUNIS-Sentinel

The detection brain of IMMUNIS ACIN, the Adversarial Coevolutionary Immune Network.

A Qwen2.5-7B model fine-tuned for multilingual cyber threat semantic fingerprinting across 40+ languages.

Training

| Property | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-7B-Instruct |
| Method | bf16 LoRA (rank 64, alpha 128, 2.08% trainable params) |
| Data | 45,000 examples (15 languages, 11 attack families) |
| Hardware | AMD Instinct MI300X (192 GB HBM3) + ROCm 7.0 |
| Training Time | ~3 hours |
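To see why a rank-64 LoRA trains only a small fraction of the weights, consider the per-layer arithmetic. This is a minimal sketch: the 3584 hidden size matches Qwen2.5-7B's config, but the card's overall 2.08% figure covers the whole model (embeddings and non-adapted layers included), so the per-layer ratio below is illustrative, not the card's number.

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    # LoRA freezes the (d_out x d_in) weight and trains two low-rank
    # factors instead: B (d_out x r) and A (r x d_in), i.e. r*(d_in + d_out)
    # parameters per adapted layer rather than d_in * d_out.
    return r * (d_in + d_out)

# Example: a square 3584x3584 projection at rank 64.
full_params = 3584 * 3584              # 12,845,056 frozen weights
lora_params = lora_param_count(3584, 3584, 64)  # 458,752 trainable
print(lora_params, f"{lora_params / full_params:.2%}")  # 458752 3.57%
```

For a square layer the trainable fraction is 2r/d, which is why high ranks remain cheap at 7B scale.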

Evaluation: Fine-tuned vs Base Model

| Metric | Fine-tuned | Base Qwen2.5-7B | Delta |
|---|---|---|---|
| Valid JSON | 100% | 0% | +100% |
| Correct Attack Family | 100% | 0% | +100% |
| Correct Severity | 66% | 0% | +66% |
| Correct Language | 58% | 0% | +58% |
| Has MITRE ATT&CK | 100% | 0% | +100% |

The base model cannot produce structured threat intelligence in our schema. Fine-tuning transforms it into a domain-specific analyst.
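The metrics above can be scored mechanically once an output schema is fixed. The sketch below shows one way to do it; the field names (`attack_family`, `severity`, `language`, `mitre_attack`) are assumptions for illustration, since the card does not publish its exact schema.

```python
import json

def score_prediction(raw_output: str, reference: dict) -> dict:
    """Score one model output against a reference label.

    Field names here ("attack_family", "severity", "language",
    "mitre_attack") are illustrative, not the card's actual schema.
    """
    scores = {"valid_json": False, "attack_family": False,
              "severity": False, "language": False, "has_mitre": False}
    try:
        pred = json.loads(raw_output)
    except (json.JSONDecodeError, TypeError):
        return scores  # all other checks fail if the JSON doesn't parse
    scores["valid_json"] = True
    scores["attack_family"] = pred.get("attack_family") == reference["attack_family"]
    scores["severity"] = pred.get("severity") == reference["severity"]
    scores["language"] = pred.get("language") == reference["language"]
    scores["has_mitre"] = bool(pred.get("mitre_attack"))
    return scores

# Example: correct family and language, wrong severity, MITRE IDs present.
ref = {"attack_family": "phishing", "severity": "high", "language": "de"}
out = ('{"attack_family": "phishing", "severity": "medium", '
       '"language": "de", "mitre_attack": ["T1566"]}')
result = score_prediction(out, ref)
```

Averaging these per-example scores over the test set yields percentages in the form shown in the table.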

Part of IMMUNIS ACIN

Agent 1 (Incident Analyst) in a 12-agent adversarial coevolutionary immune network. Built for the AMD Developer Hackathon, Track 1 (AI Agents) + Track 2 (Fine-Tuning on AMD GPUs).
