Model Card: DeepSeek-Cybersec-33B-Instruct
DeepSeek-Cybersec-33B-Instruct is a specialized forensic model designed for autonomous malware intent analysis and deep code audits. It was fine-tuned during the lablab.ai hackathon to bridge the gap between general-purpose LLMs and the specific needs of SOC (Security Operations Center) analysts.
🛡️ Model Details
The model is a domain-specific fine-tune of DeepSeek-Coder-33B-Instruct, optimized to identify malicious patterns, de-obfuscate scripts, and determine the underlying intent of suspicious code.
- Developed by: TeanShow
- Model type: Fine-tuned Large Language Model (Causal LM)
- Language(s) (NLP): English, plus multiple programming languages (Python, Java, C++, JavaScript, Shell)
- License: MIT
- Finetuned from model: deepseek-ai/deepseek-coder-33b-instruct
🏗️ Model Sources
- Repository: GitHub: Deepseek-cybersec-33b-fine_tuned
- Infrastructure: Powered by AMD Instinct™ MI300X
🚀 Uses
Direct Use
- Automated Forensics: Analyzing suspicious files for malicious intent.
- De-obfuscation: Unpacking and explaining complex or packed code payloads.
- Threat Assessment: Providing structured verdicts: CLEAN, SUSPICIOUS, or MALICIOUS.
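The three-way verdict lends itself to downstream automation in a SOC pipeline. As an illustrative sketch (the `parse_verdict` helper and the free-text output format are assumptions, not a documented interface of the model), the label could be extracted like this:

```python
import re

# The verdict vocabulary comes from this model card; the surrounding
# output format is an assumption for illustration only.
VERDICTS = ("MALICIOUS", "SUSPICIOUS", "CLEAN")

def parse_verdict(analysis: str) -> str:
    """Return the first verdict label found in a model response."""
    for label in VERDICTS:
        if re.search(rf"\b{label}\b", analysis.upper()):
            return label
    return "UNKNOWN"

print(parse_verdict("Verdict: MALICIOUS - the script exfiltrates credentials"))
# -> MALICIOUS
```

Checking labels from most to least severe means a response mentioning several candidates resolves to the strongest verdict.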
Out-of-Scope Use
- Any form of offensive operations, development of malicious software, or illegal activities. This model is strictly intended for defensive research and incident response.
📊 Training Details
Training Procedure
The model was trained using LoRA (Low-Rank Adaptation) to maintain the base model's reasoning capabilities while injecting deep cybersecurity domain knowledge.
- Hardware: AMD Instinct™ MI300X Accelerators (192GB HBM3 VRAM).
- Software: ROCm™ open software stack.
- Optimization: Fine-tuned via Axolotl for high-throughput efficiency.
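LoRA freezes the base weights W and learns a low-rank update ΔW = BA, so an adapted layer computes y = x(W + (α/r)·BA). A minimal numeric sketch of that idea (toy shapes and scaling chosen for illustration; not taken from the actual training config):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, alpha = 8, 8, 2, 4          # toy dims; rank r << d

W = rng.normal(size=(d, k))          # frozen base weight
A = rng.normal(size=(r, k)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                 # trainable, zero init => delta-W starts at 0

x = rng.normal(size=(1, d))
y_base = x @ W
y_lora = x @ (W + (alpha / r) * (B @ A))

# With B zero-initialized, the adapter is a no-op before training begins,
# which is what preserves the base model's behavior at step 0.
assert np.allclose(y_base, y_lora)
```

Only A and B (2·d·r values per layer instead of d·k) are updated, which is why a 33B model can be fine-tuned on a single MI300X-class accelerator.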
Results
Training logs show a consistent reduction in loss, indicating that the model fit the cybersecurity fine-tuning data. The model had already seen 13+ community downloads prior to the official project submission.
💻 How to Get Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model_path = "deepseek-ai/deepseek-coder-33b-instruct"
adapter_path = "TeanShow/deepseek-cybersec-33b-instruct"

# Load the base model in bfloat16, then attach the fine-tuned LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_model_path)
model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, adapter_path)
model.eval()
```
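From there, inference follows the standard `transformers` generate flow. The instruction wording below is a hypothetical analyst-style prompt (this card does not specify an exact prompt format), shown as a sketch:

```python
# Hypothetical prompt builder for malware-intent analysis; the exact
# instruction wording and verdict format are assumptions, not a
# documented interface of the model.
def build_analysis_prompt(code_snippet: str) -> str:
    return (
        "You are a malware analyst. Review the following code, explain "
        "its intent, and end with a single verdict line: "
        "CLEAN, SUSPICIOUS, or MALICIOUS.\n\n"
        f"```\n{code_snippet}\n```"
    )

# Usage with the model/tokenizer loaded above (runs only when the 33B
# checkpoint is available):
# inputs = tokenizer(build_analysis_prompt(sample), return_tensors="pt").to(model.device)
# output = model.generate(**inputs, max_new_tokens=512)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```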