A pruned and distilled variant of allenai/Flex-math-2x7B-1T with a variable-width expert MLP. Expert 1 has been pruned from the full intermediate size of 11,008 down to 5,504 (50% of the original width), then recovered via knowledge distillation.
| Property | Value |
|---|---|
| Total Parameters | 9.5B |
| Expert 1 Parameters | 2.2B |
| Expert 1 Width | 5504 (50%) |
| Base Model | allenai/Flex-math-2x7B-1T (11.6B params) |
For full details, see the blog post.
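Structurally, the pruning step amounts to slicing the expert's MLP weight matrices along the intermediate dimension. Here is a minimal sketch, assuming a Llama/OLMo-style SwiGLU MLP with `gate_proj`/`up_proj` of shape `[intermediate, hidden]` and `down_proj` of shape `[hidden, intermediate]`; the module layout, dimensions, and importance scores are illustrative, not the exact code used for this model:

```python
import torch

def prune_expert_mlp(gate_proj, up_proj, down_proj, importance, keep=5504):
    """Keep the `keep` most important intermediate neurons of one expert."""
    # Indices of the top-`keep` neurons, sorted so relative weight order is kept.
    idx = torch.topk(importance, keep).indices.sort().values
    # gate/up project hidden -> intermediate: slice their output rows.
    gate_proj = gate_proj[idx, :].clone()
    up_proj = up_proj[idx, :].clone()
    # down projects intermediate -> hidden: slice its input columns.
    down_proj = down_proj[:, idx].clone()
    return gate_proj, up_proj, down_proj

# Example with random weights: 11,008 -> 5,504 (50% width).
hidden, inter = 4096, 11008
g, u, d = torch.randn(inter, hidden), torch.randn(inter, hidden), torch.randn(hidden, inter)
scores = torch.randn(inter)  # stand-in for real importance scores
g2, u2, d2 = prune_expert_mlp(g, u, d, scores)
print(g2.shape, u2.shape, d2.shape)  # (5504, 4096) (5504, 4096) (4096, 5504)
```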
This repo includes a `modeling_pruned_flex_olmo.py` file that handles the variable-width expert architecture. Just load with `trust_remote_code=True` and it works like any other Hugging Face model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True picks up modeling_pruned_flex_olmo.py from the repo.
model = AutoModelForCausalLM.from_pretrained(
    "hbfreed/flex-math-5504", trust_remote_code=True
)
# The tokenizer is unchanged from the base model.
tokenizer = AutoTokenizer.from_pretrained("allenai/Flex-math-2x7B-1T")

input_text = "Solve: What is 15% of 200?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
The tokenizer is the same as the base model's.
Neuron importance for pruning was calibrated on math data rather than general text, and the choice matters: 58% of the top-2048 neurons differ between the math-calibrated and general-calibrated rankings.
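As a rough illustration of how such a calibration-dependent ranking arises, here is a sketch of activation-based importance scoring. The hook point and the mean-absolute-activation statistic are assumptions for illustration, not necessarily the method used here; see the blog post for the actual recipe.

```python
import torch

@torch.no_grad()
def neuron_importance(model, act_module, calib_batches):
    """Mean absolute activation per intermediate neuron over a calibration set."""
    scores = None

    def hook(_module, _inputs, output):
        nonlocal scores
        # output: [batch, seq, intermediate] post-activation values of the expert MLP.
        stat = output.abs().float().mean(dim=(0, 1))
        scores = stat if scores is None else scores + stat

    handle = act_module.register_forward_hook(hook)
    for batch in calib_batches:  # e.g. tokenized math problems vs. general text
        model(**batch)
    handle.remove()
    return scores / len(calib_batches)

# Comparing rankings from two calibration sets yields the kind of statistic
# quoted above:
# math_top = set(torch.topk(math_scores, 2048).indices.tolist())
# gen_top  = set(torch.topk(gen_scores, 2048).indices.tolist())
# print(1 - len(math_top & gen_top) / 2048)  # fraction of differing neurons
```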
Benchmark results against the no-expert baseline and the full-size teacher:

| Model | GSM8K | MATH | Math2 |
|---|---|---|---|
| No-expert baseline (7.3B) | — | — | 8.1 |
| flex-math-5504 | 66.6 | 26.8 | 46.7 |
| Full teacher (11.6B) | 69.7 | 35.4 | 52.5 |
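The gap to the full teacher is what the distillation step works to close. A minimal sketch of a logit-level distillation loss, assuming standard temperature-scaled KL divergence between teacher and student next-token distributions (the actual recovery recipe may include other terms):

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened logits, scaled by T^2."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t
```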
Comparison across the variable-width variants of the same base model:

| Model | Total Params | Expert Width | GSM8K | MATH | Math2 |
|---|---|---|---|---|---|
| flex-math-8192 | 10.5B | 8192 (74%) | 70.1 | 31.3 | 50.7 |
| flex-math-5504 | 9.5B | 5504 (50%) | 66.6 | 26.8 | 46.7 |
| flex-math-2048 | 8.1B | 2048 (19%) | 44.3 | 13.9 | 29.1 |
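The total parameter counts line up with the expert widths. A quick sanity check, assuming OLMo-7B-style dimensions (hidden size 4096, 32 layers) and a three-matrix SwiGLU expert MLP; these dimensions are assumptions, not taken from the config:

```python
hidden, layers = 4096, 32

def expert_params(width):
    # gate_proj + up_proj + down_proj, each hidden x width, in every layer.
    return 3 * hidden * width * layers

for width in (11008, 8192, 5504, 2048):
    print(f"width {width:>5}: {expert_params(width) / 1e9:.2f}B expert params")
# width 11008: 4.33B  (full expert)
# width  5504: 2.16B  (matches the ~2.2B reported for flex-math-5504;
#                      4.33B - 2.16B = 2.17B saved, i.e. 11.6B -> ~9.5B total)
```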
License: Apache 2.0 (same as the base model).