Evaluation protocol reference: Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report (arXiv:2504.21039).
How to use ree2raz/CyberSecQwen-4B-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ree2raz/CyberSecQwen-4B-GGUF",
    filename="cybersecqwen-4b-Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        # Illustrative prompt; replace with your own CTI question.
        {"role": "user", "content": "What does MITRE ATT&CK technique T1059 cover?"}
    ]
)
```
How to use ree2raz/CyberSecQwen-4B-GGUF with llama.cpp:
macOS (Homebrew):

```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M
```
Windows (winget):

```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M
```
Pre-built binaries:

```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M
```
Build from source:

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M
```
How to use ree2raz/CyberSecQwen-4B-GGUF with Ollama:
```bash
ollama run hf.co/ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M
```
How to use ree2raz/CyberSecQwen-4B-GGUF with Unsloth Studio:
Linux/macOS:

```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for ree2raz/CyberSecQwen-4B-GGUF to start chatting
```
Windows (PowerShell):

```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for ree2raz/CyberSecQwen-4B-GGUF to start chatting
```
In the browser (no setup required): open https://huggingface.co/spaces/unsloth/studio and search for ree2raz/CyberSecQwen-4B-GGUF to start chatting.
How to use ree2raz/CyberSecQwen-4B-GGUF with Docker Model Runner:
```bash
docker model run hf.co/ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M
```
How to use ree2raz/CyberSecQwen-4B-GGUF with Lemonade:
```bash
# Download Lemonade from https://lemonade-server.ai/

# Pull the model:
lemonade pull ree2raz/CyberSecQwen-4B-GGUF:Q4_K_M

# Run it:
lemonade run user.CyberSecQwen-4B-GGUF-Q4_K_M

# List installed models:
lemonade list
```
GGUF Q4_K_M quantized version of CyberSecQwen-4B.
| Parameter | Value |
|---|---|
| Method | GGUF Q4_K_M (llama.cpp) |
| Weight precision | 4-bit (Q4_K_M = 4-bit block-scaled with k-quant importance) |
| Quantization tool | llama.cpp (built from master) |
| Conversion tool | convert_hf_to_gguf.py |
| Quantization hardware | Modal A10G |
| File | cybersecqwen-4b-Q4_K_M.gguf (2.5 GB) |
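For reference, a minimal sketch of the conversion and quantization steps named in the table, assuming a local llama.cpp checkout built with the llama-quantize target; the checkpoint directory and file names are illustrative, not the exact Modal script used for this release.

```python
# Sketch: FP16 Hugging Face checkpoint -> F16 GGUF -> Q4_K_M GGUF via llama.cpp tools.
# Assumes llama.cpp is cloned at ./llama.cpp and built (see "Build from source" above).
import subprocess

hf_dir = "CyberSecQwen-4B"              # illustrative: local FP16 checkpoint directory
f16_gguf = "cybersecqwen-4b-f16.gguf"   # intermediate full-precision GGUF
q4_gguf = "cybersecqwen-4b-Q4_K_M.gguf" # final 4-bit file published in this repo

# 1) Convert the HF checkpoint to GGUF at F16 precision.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", hf_dir,
     "--outfile", f16_gguf, "--outtype", "f16"],
    check=True,
)

# 2) Quantize the F16 GGUF down to Q4_K_M (4-bit k-quant).
subprocess.run(
    ["llama.cpp/build/bin/llama-quantize", f16_gguf, q4_gguf, "Q4_K_M"],
    check=True,
)
```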
Evaluated under the Foundation-Sec-8B protocol:
| Task | GGUF Q4_K_M | AWQ 4-bit | FP16 Reference |
|---|---|---|---|
| CTI-MCQ (2,500 items) | 0.5368 ± 0.0048 | 0.5921 ± 0.0083 | 0.5868 ± 0.0029 |
| CTI-RCM (1,000 items) | 0.6254 ± 0.0063 | 0.5814 ± 0.0025 | 0.6664 ± 0.0023 |
Key findings:

CTI-MCQ, per-seed accuracy (GGUF Q4_K_M):
| Trial | Seed | Accuracy |
|---|---|---|
| 1 | 42 | 0.5420 |
| 2 | 43 | 0.5280 |
| 3 | 44 | 0.5360 |
| 4 | 45 | 0.5392 |
| 5 | 46 | 0.5388 |
CTI-RCM, per-seed accuracy (GGUF Q4_K_M):

| Trial | Seed | Accuracy |
|---|---|---|
| 1 | 42 | 0.6270 |
| 2 | 43 | 0.6300 |
| 3 | 44 | 0.6270 |
| 4 | 45 | 0.6300 |
| 5 | 46 | 0.6130 |
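The summary figures above (0.5368 ± 0.0048 and 0.6254 ± 0.0063) are consistent with the mean and population standard deviation of the five per-seed runs; a quick check:

```python
# Reproduce the summary statistics from the per-seed accuracies listed above.
from statistics import mean, pstdev

cti_mcq = [0.5420, 0.5280, 0.5360, 0.5392, 0.5388]
cti_rcm = [0.6270, 0.6300, 0.6270, 0.6300, 0.6130]

for name, runs in [("CTI-MCQ", cti_mcq), ("CTI-RCM", cti_rcm)]:
    # pstdev = population standard deviation (divides by N, not N-1).
    print(f"{name}: {mean(runs):.4f} ± {pstdev(runs):.4f}")

# Output:
# CTI-MCQ: 0.5368 ± 0.0048
# CTI-RCM: 0.6254 ± 0.0063
```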
| Variant | CTI-MCQ | CTI-RCM | Size | Engine |
|---|---|---|---|---|
| AWQ 4-bit | 0.5921 | 0.5814 | 2.7 GB | vLLM |
| GGUF Q4_K_M | 0.5368 | 0.6254 | 2.5 GB | llama.cpp |
Choose GGUF Q4_K_M for vulnerability classification (CTI-RCM); choose AWQ 4-bit for multiple-choice CTI knowledge (CTI-MCQ) and general chat.
```bash
# Download
wget https://huggingface.co/ree2raz/CyberSecQwen-4B-GGUF/resolve/main/cybersecqwen-4b-Q4_K_M.gguf

# Serve
./llama-server -m cybersecqwen-4b-Q4_K_M.gguf --host 0.0.0.0 --port 8080 -ngl 99 -c 4096
```
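Once llama-server is running, its OpenAI-compatible chat endpoint can be queried over HTTP. A minimal sketch using the requests library; the prompt and sampling parameters are illustrative:

```python
# Query the OpenAI-compatible endpoint exposed by llama-server above.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            # Illustrative prompt; replace with your own CTI question.
            {"role": "user", "content": "Summarize the root cause of CVE-2021-44228."}
        ],
        "max_tokens": 256,
        "temperature": 0.2,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```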
| Format | Size |
|---|---|
| Original FP16 | ~8 GB |
| GGUF Q4_K_M | ~2.5 GB |
```bibtex
@misc{cybersecqwen2026,
  title  = {CyberSecQwen-4B: A Compact CTI Specialist},
  author = {Mulia, Samuel},
  year   = {2026},
  url    = {https://huggingface.co/athena129/CyberSecQwen-4B}
}
```
GitHub repository — Modal scripts for quantization + evaluation.
Base model: Qwen/Qwen3-4B-Instruct-2507 (4-bit quantization).