# Lumen 7B v2
Lumen is an agentic AI coding assistant built by Alexander Wondwossen (TheAlxLabs).
Fine-tuned from Qwen2.5-Coder-7B-Instruct with QLoRA for tool use, git and GitHub workflows, and Conductor integration.
## What is Lumen?
Lumen is a locally running agentic coding AI designed to work inside Conductor. It can:
- Write, read, and edit code and files
- Run shell commands and verify results
- Use git and GitHub (commits, branches, PRs, Actions, secrets)
- Debug TypeScript, Python, Node.js, and Bash
- Call Conductor plugins as tools
- Control your development environment autonomously
## Model Details
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-Coder-7B-Instruct |
| Fine-tuning Method | QLoRA (4-bit, NF4) |
| LoRA Rank | 32 |
| LoRA Alpha | 64 |
| Training Epochs | 3 |
| Max Sequence Length | 2048 |
| Parameters | ~7B |
| GGUF (Q4_K_M) | lumen-q4.gguf (~4.4GB) |
| Built by | Alexander Wondwossen (TheAlxLabs), Toronto, Canada |
## Quickstart with Ollama

```shell
ollama pull thealxlabs/lumen
ollama run thealxlabs/lumen "What are you?"
```
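Ollama also serves a local REST API (by default `POST http://localhost:11434/api/chat`), which is useful for scripting. A minimal sketch of building a request body in Python; the `build_chat_payload` helper is our own illustration, not part of Ollama:

```python
import json

def build_chat_payload(prompt, model="thealxlabs/lumen", stream=False):
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

body = json.dumps(build_chat_payload("What are you?"))
# POST `body` to http://localhost:11434/api/chat with any HTTP client, e.g.:
# requests.post("http://localhost:11434/api/chat", data=body)
```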
## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen2.5-Coder-7B-Instruct"
adapter = "alxstuff/Lumen-7b-v2"

# Load the base model, then apply the Lumen LoRA adapter on top.
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, adapter)
tokenizer = AutoTokenizer.from_pretrained(base)

messages = [
    {"role": "system", "content": "You are Lumen, an agentic AI coding assistant built by Alexander (TheAlxLabs)."},
    {"role": "user", "content": "Create a Python script that fetches weather data."},
]

# Render the chat template, then generate.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## System Prompt

```
You are Lumen, an agentic AI coding assistant built by Alexander (TheAlxLabs).
You run inside Conductor. You have tools: run_shell, read_file, write_file, conductor_plugin.
Think step-by-step. Use tools to verify.
```
## Tools Lumen Knows

| Tool | Description |
|---|---|
| run_shell | Execute terminal commands |
| read_file | Read file contents |
| write_file | Write or create files |
| conductor_plugin | Call any Conductor plugin |
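On the host side, these tool names must be routed to real handlers. A minimal dispatcher sketch, assuming a tool-call JSON shape of the form `{"tool": ..., "args": {...}}` (the exact format Lumen emits is not specified on this card, and `conductor_plugin` is omitted because its interface depends on the Conductor host):

```python
import json
import subprocess

def run_shell(command):
    """Execute a terminal command and return its stdout."""
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

def read_file(path):
    """Read file contents."""
    with open(path) as f:
        return f.read()

def write_file(path, content):
    """Write or create a file."""
    with open(path, "w") as f:
        f.write(content)
    return f"wrote {len(content)} bytes to {path}"

TOOLS = {"run_shell": run_shell, "read_file": read_file, "write_file": write_file}

def dispatch(tool_call):
    """Route a JSON tool call from the model to the matching handler."""
    call = json.loads(tool_call)
    return TOOLS[call["tool"]](**call["args"])

result = dispatch('{"tool": "run_shell", "args": {"command": "echo hello"}}')
```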
## Training Data
Lumen was trained on curated agentic multi-turn conversations covering:
- Git workflows (commit, branch, push, reset, rebase, cherry-pick)
- GitHub (PRs, issues, Actions CI, secrets)
- TypeScript / Node.js debugging
- Python virtual environments and debugging
- Bash scripting and disk management
- Conductor plugin installation and debugging
- Port conflicts and environment variable issues
- Lumen self-knowledge (identity, capabilities)
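Multi-turn agentic data of this kind is commonly stored as chat-format records. A hypothetical sample in the common "messages" schema (the actual training schema is not published on this card), plus a small validity check:

```python
# Hypothetical training record; the content shown is illustrative only.
sample = {
    "messages": [
        {"role": "system", "content": "You are Lumen, an agentic AI coding assistant built by Alexander (TheAlxLabs)."},
        {"role": "user", "content": "git push failed with 'non-fast-forward'. What now?"},
        {"role": "assistant", "content": "Run `git pull --rebase origin main`, resolve any conflicts, then push again."},
    ]
}

def is_valid_sample(record):
    """Check the record opens with a system turn, then alternates user/assistant."""
    roles = [m["role"] for m in record["messages"]]
    if not roles or roles[0] != "system":
        return False
    expected = ["user", "assistant"]
    return all(r == expected[i % 2] for i, r in enumerate(roles[1:]))
```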
## Hardware Requirements
| Setup | Min RAM | Recommended |
|---|---|---|
| Ollama Q4_K_M | 8GB | 16GB+ |
| Transformers (float16) | 16GB | 24GB+ |
| Training (QLoRA) | 16GB VRAM | 24GB VRAM |
## Links

- 🤗 HuggingFace: alxstuff/Lumen-7b-v2
- 🦙 Ollama: thealxlabs/lumen
- 🐙 GitHub: thealxlabs
## License

Apache 2.0, same as the base model.

Built with ❤️ by Alexander Wondwossen (TheAlxLabs), Toronto, Canada