AXL-Reasoning-Lion
A chain-of-thought reasoning model: 70M parameters, 5 layers per scale, byte-level perplexity 1.03, 256-byte context window. Part of the AXL model family by KoinicLabs.
Model Details
| Property | Value |
|---|---|
| Developed by | KoinicLabs |
| Architecture | Multi-Scale Transformer |
| Parameters | 70M |
| Optimizer | Lion |
| Attention | SDPA |
| Vocab Size | 258 (byte-level) |
| Context Window | 256 bytes |
| d_model | 512 |
| Attention Heads | 4 |
| Layers per Scale | 5 |
| Downsample Factors | [1, 2, 4] |
| License | Apache 2.0 |
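The config.json referenced in the usage examples below presumably mirrors these hyperparameters. As a hypothetical illustration only (the field names are assumptions; the repository's config schema is authoritative), the key fields would look roughly like:

# Hypothetical contents of config.json, mirroring the table above;
# the actual field names used by load_config may differ.
config_fields = {
    "vocab_size": 258,
    "d_model": 512,
    "n_heads": 4,
    "layers_per_scale": 5,
    "downsample_factors": [1, 2, 4],
    "context_window": 256,
}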
Sources
- Repository: GitHub
- Organization: KoinicLabs
Uses
Direct Use
Intended for multi-step code generation that requires reasoning. Minimal loading and generation example:
import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Load the architecture config, then restore the checkpoint weights.
config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_reasoning_lion.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

# Byte-level tokenizer: no vocabulary files needed.
tokenizer = ByteTokenizer()
ids = torch.tensor([tokenizer.encode("def hello():")], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=50, temperature=0.8)
print(tokenizer.decode(out[0].tolist()))
Out-of-Scope Use
Not intended for production code generation or for non-code NLP tasks. For integration with tools such as Continue.dev, LlamaIndex, or LangChain, use the Python API server, which provides OpenAI-compatible endpoints.
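For illustration, assuming that server is running locally on port 8880 (the address mentioned in the limitations below), a request against its OpenAI-compatible completions endpoint could look like the following sketch; the model name and response shape follow the standard OpenAI completions schema and are assumptions here:

import requests

# Hypothetical call to the AXL API server's OpenAI-compatible endpoint;
# URL, model name, and response shape are assumptions based on this card.
resp = requests.post(
    "http://localhost:8880/v1/completions",
    json={
        "model": "axl-reasoning-lion",
        "prompt": "def fibonacci():",
        "max_tokens": 100,
        "temperature": 0.8,
    },
    timeout=30,
)
print(resp.json()["choices"][0]["text"])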
Bias, Risks, and Limitations
- Byte-level perplexity is not comparable to BPE-level perplexity.
- Not suitable for production code generation.
- Maximum context is 256 bytes (roughly 8 lines of Python).
- IMPORTANT: GGUF files exported for Ollama/LM Studio use only the fine-scale encoder (one third of the AXL architecture). The reported perplexity applies to the full multi-scale model. For full AXL quality, use the Python API server at http://localhost:8880/v1/completions.
Recommendations
- Use for prototyping and experimentation, not production code generation.
- Byte-level perplexity (258-token vocabulary) is not comparable to BPE-level perplexity (~32K-token vocabulary); see the conversion sketch after this list.
- This card describes the Lion-optimized variant; other optimizer variants in the AXL family may perform differently.
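To see why the two perplexities cannot be compared directly, both can be normalized to bits-per-byte. The sketch below is illustrative only; the bytes-per-token ratio of any BPE model is an assumption you would measure on your own corpus.

import math

def byte_ppl_to_bpb(ppl_byte: float) -> float:
    # bits-per-byte for a byte-level model is simply log2 of its perplexity
    return math.log2(ppl_byte)

def bpe_ppl_to_bpb(ppl_token: float, bytes_per_token: float) -> float:
    # a per-token perplexity must be spread over the bytes each token covers
    return math.log2(ppl_token) / bytes_per_token

print(byte_ppl_to_bpb(1.03))       # ~0.043 bits/byte for this model
print(bpe_ppl_to_bpb(10.0, 4.0))   # ~0.83 bits/byte for a hypothetical BPE model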
Training Details
Training Data
Trained on 50 MB of real Python code from Hugging Face datasets, for 205 steps in 20 minutes. The 5-layers-per-scale architecture captures longer dependency chains.
Preprocessing
Byte-level tokenization with vocabulary size 258 (256 bytes + BOS + EOS). No vocabulary training required.
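As an illustration of the scheme (not the repository's ByteTokenizer, whose special-token IDs and method signatures may differ), a minimal byte-level tokenizer looks like this:

class SimpleByteTokenizer:
    # Minimal sketch: 256 raw byte values plus assumed BOS/EOS IDs 256 and 257.
    BOS, EOS = 256, 257
    vocab_size = 258

    def encode(self, text: str) -> list[int]:
        return [self.BOS] + list(text.encode("utf-8")) + [self.EOS]

    def decode(self, ids: list[int]) -> str:
        raw = bytes(i for i in ids if i < 256)  # drop special tokens
        return raw.decode("utf-8", errors="replace")

tok = SimpleByteTokenizer()
assert tok.decode(tok.encode("def hello():")) == "def hello():"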
Speeds, Sizes, Times
| Metric | Value |
|---|---|
| Training Steps | 205 |
| Training Time | 20 min |
| Final Loss | 0.6279 |
Evaluation
Metrics
Perplexity on held-out Python code using byte-level tokenization.
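For reference, byte-level perplexity is usually computed as the exponential of the mean per-byte cross-entropy on held-out text. The sketch below assumes the model returns per-position logits when called directly; the repository's evaluation script may normalize differently.

import math
import torch

def byte_perplexity(model, tokenizer, texts, max_len=256):
    # exp of the mean negative log-likelihood per predicted byte
    total_nll, total_bytes = 0.0, 0
    with torch.no_grad():
        for text in texts:
            ids = torch.tensor([tokenizer.encode(text)[:max_len]])
            logits = model(ids)                          # assumed shape (1, seq, vocab)
            logp = torch.log_softmax(logits[:, :-1], dim=-1)
            targets = ids[:, 1:]
            total_nll += -logp.gather(-1, targets.unsqueeze(-1)).sum().item()
            total_bytes += targets.numel()
    return math.exp(total_nll / total_bytes)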
Results
| Metric | Value |
|---|---|
| Perplexity (byte-level) | 1.03 |
| Final Loss | 0.6279 |
| Training Steps | 205 |
| Training Time | 20 min |
Summary: Best for multi-step code generation. Extra layers help with complex logic.
Environmental Impact
| Property | Value |
|---|---|
| Hardware | AMD Ryzen 5 5600G |
| Hours Used | 0.334 |
| Carbon Emitted | 0.0140 kg CO2 |
| Cloud Provider | None (local CPU) |
Technical Specifications
Model Architecture
Multi-Scale Transformer with three parallel encoder stacks at resolution scales 1x, 2x, and 4x. Cross-scale attention connects all scale pairs. Adaptive gating fusion. SwiGLU feed-forward. RoPE positional encoding.
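A rough sketch of that data flow, assuming average-pool downsampling and nearest-neighbor upsampling, and omitting cross-scale attention and RoPE for brevity (this is illustrative, not the repository's implementation):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    # SwiGLU feed-forward: (SiLU(x W1) * x W2) W3. Shown separately; the
    # built-in encoder layers below use a standard FFN for brevity.
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_hidden)
        self.w2 = nn.Linear(d_model, d_hidden)
        self.w3 = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        return self.w3(F.silu(self.w1(x)) * self.w2(x))

class MultiScaleSketch(nn.Module):
    # Three parallel encoder stacks at 1x/2x/4x resolution, fused by adaptive gating.
    def __init__(self, d_model=512, n_heads=4, layers=5, factors=(1, 2, 4)):
        super().__init__()
        self.factors = factors
        self.stacks = nn.ModuleList(
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
                num_layers=layers,
            )
            for _ in factors
        )
        self.gate = nn.Linear(d_model * len(factors), len(factors))

    def forward(self, x):  # x: (batch, seq, d_model)
        outs = []
        for f, stack in zip(self.factors, self.stacks):
            # downsample by average pooling, encode, then upsample back
            h = x if f == 1 else F.avg_pool1d(x.transpose(1, 2), f, stride=f).transpose(1, 2)
            h = stack(h)
            if f > 1:
                h = F.interpolate(h.transpose(1, 2), size=x.size(1)).transpose(1, 2)
            outs.append(h)
        # adaptive gating: per-position softmax weights over the three scale outputs
        w = torch.softmax(self.gate(torch.cat(outs, dim=-1)), dim=-1)
        return sum(w[..., i:i + 1] * outs[i] for i in range(len(outs)))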
Compute Infrastructure
| Property | Value |
|---|---|
| Hardware | AMD Ryzen 5 5600G (6 cores, 12 threads) |
| RAM | 16 GB |
| GPU | None (CPU-only) |
Citation
@misc{axl_2026,
  title={AXL: AXL-Reasoning-Lion - Multi-Scale Transformer for CPU Code Generation},
  author={Koinic},
  year={2026},
  url={https://huggingface.co/KoinicLabs}
}
How to Get Started
With Ollama
ollama create axl-reasoning-lion -f Modelfile
ollama run axl-reasoning-lion "def fibonacci():"
With Python
import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_reasoning_lion.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

tokenizer = ByteTokenizer()
prompt = "def fibonacci():"
ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=100, temperature=0.8, top_k=40)
print(tokenizer.decode(out[0].tolist()))