# AXL — Architecture eXperimental Lab

**27 CPU-Optimized Code Generation Models by Koinic**

All models are trained from scratch on consumer hardware (AMD Ryzen 5 5600G, 16 GB RAM). No GPU required.
## Models

### Lion-Optimized (Recommended)
| Model | Params | PPL | GGUF (F16) | GGUF (Q4_K_M) |
|---|---|---|---|---|
| AXL-Code-1B-Lion | 318M | 1.90 | 606 MB | 188 MB |
| AXL-Reasoning-Lion | 70M | 1.03 | 134 MB | 44 MB |
| AXL-Refactor-Lion | 19.1M | 1.02 | 37 MB | 12 MB |
| AXL-TestGen-Lion | 15.2M | 1.02 | 30 MB | 18 MB |
| AXL-Chat-Lion | 9.9M | 1.03 | 19 MB | 7 MB |
| AXL-Micro-Lion | 12.8M | 1.04 | 25 MB | 15 MB |
| AXL-Secure-Lion | 11.7M | 1.03 | 23 MB | 8 MB |
| AXL-Docs-Lion | 9.9M | 1.01 | 19 MB | 7 MB |
| AXL-Comment-Lion | 7.2M | 1.02 | 14 MB | 5 MB |
### SGD Models
| Model | Params | PPL | Focus |
|---|---|---|---|
| AXL-Micro-600K | 600K | 63.08 | Demo |
| AXL-Micro-8M | 12.8M | 3.13 | Code gen |
| AXL-Coder-15M | 26.0M | 5.97 | Agentic |
| AXL-Debugger-8M | 14.1M | 6.60 | Bug fixing |
| AXL-Fixer-12M | 20.9M | 5.90 | Debug |
| AXL-Reasoning-70M | 70M | 1.93 | CoT |
| AXL-300M | 322M | 5.98 | Flagship |
| AXL-Chat-10M | 9.9M | 1.02 | Dialogue |
| AXL-TestGen-15M | 15.2M | 1.01 | Test gen |
| AXL-Refactor-20M | 19.1M | 1.01 | Refactoring |
| AXL-Docs-8M | 9.9M | 1.03 | Docstrings |
| AXL-Comment-5M | 7.2M | 1.01 | Comments |
| AXL-Secure-10M | 11.7M | 1.01 | Security |
### Specialized Models
| Model | Params | PPL | Focus |
|---|---|---|---|
| AXL-Code-1B | 318M | 31.22 | Code gen (SGD) |
| AXL-Chat-Pro | 12.8M | 3.42 | Advanced chat |
| AXL-Translate | 15.2M | 1.86 | Code translation |
| AXL-Vision-0.8M | 1M | — | Vision encoder |
| AXL-Vision-v2 | 4.1M | — | UI vision |
## Quick Start

### Python API Server (Full Quality, Recommended)
```bash
pip install -e .
python AXL/API/serve_model.py --model AXL-Micro-Lion/ --port 8880

# OpenAI-compatible endpoint:
curl http://localhost:8880/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def fibonacci(n):", "max_tokens": 100}'
```

Works with Continue.dev, LlamaIndex, LangChain, and Cursor.
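The endpoint can also be called from Python with the standard library alone. This is a sketch: the payload fields mirror the curl example, and the response shape is assumed to follow the usual OpenAI completions format (`choices[0].text`):

```python
import json
import urllib.request

def build_payload(prompt: str, max_tokens: int = 100) -> dict:
    """Request body for /v1/completions (mirrors the curl example above)."""
    return {"prompt": prompt, "max_tokens": max_tokens}

def complete(prompt: str, url: str = "http://localhost:8880/v1/completions") -> str:
    """POST a completion request; assumes an OpenAI-style response shape."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]

# complete("def fibonacci(n):")  # requires the server started above
```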
### With Ollama (Degraded Quality)

> **Warning:** GGUF files for Ollama use only the fine-scale encoder (1/3 of the AXL architecture). The reported PPL values apply to the full multi-scale model. Use the Python API above for full quality.
```bash
cd AXL-Micro-Lion
ollama create axl-micro-lion -f Modelfile
ollama run axl-micro-lion "def fibonacci(n):"
```
### With Python (Direct Inference)
```python
import torch

from multiscale_transformer.model.config import load_config, ModelConfig
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Build the model from its config and load the checkpoint weights on CPU.
config = load_config("AXL-Micro-Lion/config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("AXL-Micro-Lion/axl_micro_lion.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

# Byte-level tokenize the prompt, generate, and decode back to text.
tokenizer = ByteTokenizer()
ids = torch.tensor([tokenizer.encode("def hello():")], dtype=torch.long)
out = model.generate(ids, max_new_tokens=50, temperature=0.8, top_k=40)
print(tokenizer.decode(out[0].tolist()))
```
## Architecture
AXL processes token sequences at three parallel resolution scales:
- Fine (1x): All tokens. Attention cost: O(N^2 d)
- Medium (2x): Grouped in pairs. Cost: O(N^2 d/4)
- Coarse (4x): Grouped in quadruplets. Cost: O(N^2 d/16)
Cross-scale attention connects all scale pairs. Adaptive gating fusion combines representations.
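The per-scale costs above follow from quick arithmetic; this sketch uses illustrative values for the sequence length `n` and model width `d`:

```python
def attention_costs(n: int, d: int):
    """Rough attention FLOPs at each scale: cost ~ (sequence length)^2 * d."""
    fine = n ** 2 * d            # all N tokens
    medium = (n // 2) ** 2 * d   # pairs halve the length -> 1/4 the cost
    coarse = (n // 4) ** 2 * d   # quadruplets -> 1/16 the cost
    return fine, medium, coarse

fine, medium, coarse = attention_costs(1024, 256)
# Running all three scales costs 1 + 1/4 + 1/16 = 1.3125x one fine-scale pass.
```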
Lion optimizer: sign-based momentum updates, roughly 20x faster convergence than SGD in these runs, and about 50% less optimizer memory than AdamW (one momentum buffer instead of two).
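For reference, the Lion update applies the sign of an interpolated momentum, so every step has a fixed magnitude of `lr`. A scalar sketch, with illustrative default hyperparameters (not necessarily the values used to train these models):

```python
def sign(x: float) -> int:
    """Three-valued sign: -1, 0, or +1."""
    return (x > 0) - (x < 0)

def lion_step(p, m, g, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update for a scalar parameter p with momentum m and gradient g."""
    u = sign(beta1 * m + (1 - beta1) * g)  # sign of interpolated momentum
    p = p - lr * (u + wd * p)              # fixed-magnitude step plus decoupled decay
    m = beta2 * m + (1 - beta2) * g        # EMA momentum: the only extra state
    return p, m
```

Because the only per-parameter state is `m`, Lion stores half the optimizer state of AdamW, which keeps both first and second moments.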
Byte-level tokenizer: 258 vocab (256 bytes + BOS + EOS). No vocabulary training. Works with any programming language.
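A byte-level tokenizer of this shape is only a few lines. This sketch assumes BOS=256 and EOS=257; the repo's `ByteTokenizer` may assign different special-token ids:

```python
BOS, EOS = 256, 257  # assumed ids; vocab = 256 bytes + 2 specials = 258

def encode(text: str) -> list[int]:
    """UTF-8 bytes wrapped in BOS/EOS; no vocabulary training needed."""
    return [BOS] + list(text.encode("utf-8")) + [EOS]

def decode(ids: list[int]) -> str:
    """Drop special tokens and decode the remaining bytes."""
    return bytes(i for i in ids if i < 256).decode("utf-8", errors="replace")
```

Since every input is just its bytes, the same tokenizer covers any programming language (or any text) with no out-of-vocabulary tokens.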
## Training Cost
| Model | Time | Cost (USD) |
|---|---|---|
| AXL-Comment-Lion | 2 min | $0.0004 |
| AXL-Code-1B-Lion | 20 min | $0.004 |
| All 9 Lion models | 49 min | $0.010 |
Estimated for an AMD Ryzen 5 5600G (100 W system draw, $0.12/kWh).
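The quoted figures follow directly from energy arithmetic:

```python
watts, minutes, price_per_kwh = 100, 49, 0.12   # all 9 Lion models
kwh = watts / 1000 * minutes / 60               # ~0.082 kWh of energy
cost = kwh * price_per_kwh                      # ~$0.0098, rounds to $0.010
```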
## Papers

## Code

Full training code: GitHub