Tags: safetensors, gguf, English, qwen3_5_text, cisco, ios-xr, networking, service-provider, bgp, mpls, segment-routing, evpn, conversational
# IOS-XR Expert - Fine-tuned Qwen3.5-9B
A specialized language model for Cisco IOS-XR service provider networking, fine-tuned from Qwen3.5-9B.
## Model Details
- Base Model: Qwen/Qwen3.5-9B (8.95B parameters)
- Fine-tuning: LoRA r=64, bf16, all linear layers (116M trainable params)
- Training: 5 epochs, A100 80GB, 3 hours
- Dataset: 1,190 curated IOS-XR QA pairs across 8 task families
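As a sanity check on the trainable-parameter figure above: a rank-r LoRA adapter on a linear layer of shape (d_out, d_in) adds r × (d_in + d_out) trainable weights, and the 116M total comes from summing this over all adapted linear layers. A minimal sketch with illustrative dimensions (the actual Qwen3.5-9B layer shapes are not listed here):

```python
def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable weights a rank-r LoRA adapter adds to one d_in x d_out
    linear layer: an A matrix of shape (r, d_in) plus a B matrix of
    shape (d_out, r)."""
    return r * (d_in + d_out)

# Illustrative only: one hypothetical 4096x4096 projection at r=64.
print(lora_trainable_params(4096, 4096, 64))  # 524288 adapter weights
```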
## Performance (V1)
| Metric | Score |
|---|---|
| Syntax accuracy | 92.3% |
| Semantic correctness | 95.6% |
| Contamination resistance | 89.8% |
| Operational quality | 23.8% |
| Overall | 83.0% |
## Capabilities
- IOS-XR configuration generation (BGP, MPLS, SR, EVPN, IS-IS, OSPF, L3VPN, L2VPN)
- IOS/IOS-XE to IOS-XR migration
- Configuration error detection and correction
- Troubleshooting guidance
- Route-policy (RPL) authoring
- CLI to YANG mapping
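To illustrate the kind of output the first capability targets, here is a representative (hand-written, not model-generated) IOS-XR BGP VPNv4 peering configuration; the AS number and addresses are placeholders:

```
router bgp 65000
 bgp router-id 10.0.0.1
 address-family vpnv4 unicast
 !
 neighbor 10.0.0.2
  remote-as 65000
  update-source Loopback0
  address-family vpnv4 unicast
  !
 !
```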
## Usage with Ollama

```shell
ollama create iosxr-expert -f Modelfile
ollama run iosxr-expert "Configure BGP VPNv4 peering on IOS-XR"
```
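Once the model is served by Ollama, it can also be queried programmatically over Ollama's local REST API (default endpoint `http://localhost:11434/api/generate`). A minimal sketch using only the standard library; the network call is commented out so it can be adapted freely:

```python
import json
import urllib.request

# Request payload for Ollama's /api/generate endpoint; stream=False
# returns the full completion in a single JSON response.
payload = {
    "model": "iosxr-expert",
    "prompt": "Configure BGP VPNv4 peering on IOS-XR",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment with an Ollama server running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```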
## GGUF Quantizations
- `iosxr-qwen3.5-9b-q8_0.gguf` (8.9 GB) - Best quality
- `iosxr-qwen3.5-9b-q4_k_m.gguf` (5.3 GB) - Best size/quality ratio
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ramixpe/Iosxr-expert",
    # Pick one of the GGUF files listed above:
    filename="iosxr-qwen3.5-9b-q4_k_m.gguf",
)
```
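With the model loaded via `llama-cpp-python`, a chat turn can be run with `create_chat_completion`. A minimal sketch (the system prompt is an illustrative assumption; the call itself is commented out since it requires the downloaded GGUF):

```python
# Chat-style query against the loaded `llm` object from the snippet above.
messages = [
    {"role": "system", "content": "You are a Cisco IOS-XR networking expert."},
    {"role": "user", "content": "Configure BGP VPNv4 peering on IOS-XR"},
]
# response = llm.create_chat_completion(messages=messages, max_tokens=512)
# print(response["choices"][0]["message"]["content"])
```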