
The General Reasoning Agent (for) Project Exploration

The GRaPE Family

| Model | Size | Modalities | Domain |
| --- | --- | --- | --- |
| GRaPE Flash | 7B-A1B | Text in, Text out | High-Speed Applications |
| GRaPE Mini (Instruct) | 3B | Text + Image + Video in, Text out | On-Device Deployment |
| GRaPE Nano | 700M | Text in, Text out | Extreme Edge Deployment |

Capabilities

After pre-training, the GRaPE Family was trained on roughly 14 billion tokens of data. About half of this was code-related tasks, with the rest weighted heavily toward STEAM subjects, ensuring the models have a sound logical basis.


GRaPE Flash and GRaPE Nano are text-only models. GRaPE Mini, the most recently trained, also supports image and video inputs.
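The modality support above can be summarized as a small lookup table. This is an illustrative sketch only; the names and helper function are not part of any GRaPE tooling:

```python
# Input modalities per GRaPE model, as described above.
MODALITIES = {
    "GRaPE Flash": {"text"},
    "GRaPE Mini": {"text", "image", "video"},
    "GRaPE Nano": {"text"},
}

def accepts(model: str, modality: str) -> bool:
    """Return True if the given GRaPE model accepts this input modality."""
    return modality in MODALITIES.get(model, set())
```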

How to Run

I recommend using LM Studio for running GRaPE Models, and have generally found these sampling parameters to work best:

| Name | Value |
| --- | --- |
| Temperature | 0.6 |
| Top K Sampling | 40 |
| Repeat Penalty | 1 |
| Top P Sampling | 0.85 |
| Min P Sampling | 0.05 |
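LM Studio exposes an OpenAI-compatible local server (by default at http://localhost:1234/v1), so these parameters can be supplied in a standard chat-completion request. Below is a minimal sketch of building such a request body; the model identifier is an assumption, so use whatever name LM Studio shows for your download. Note that `top_k`, `repeat_penalty`, and `min_p` are llama.cpp-style extensions supported by local runtimes, not part of the core OpenAI schema:

```python
import json

# Recommended sampling parameters from the table above.
SAMPLING_PARAMS = {
    "temperature": 0.6,
    "top_k": 40,
    "repeat_penalty": 1.0,
    "top_p": 0.85,
    "min_p": 0.05,
}

def build_request(prompt: str, model: str = "grape-mini-instruct") -> str:
    """Build a JSON chat-completion request body for an OpenAI-compatible
    local server such as LM Studio's. The model name is hypothetical."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        **SAMPLING_PARAMS,
    }
    return json.dumps(payload)

body = build_request("Explain mixture-of-experts in one paragraph.")
```

POST the resulting body to `/v1/chat/completions` on the local server to run the model with these settings.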

Uses of GRaPE Mini Right Now

GRaPE Mini was foundational to the existence of Andy-4.1, a model trained to play Minecraft. This served as a demo of the efficiency and power this architecture can deliver.

GRaPE Mini as a Model

GRaPE Mini Instruct is a version of GRaPE Mini that was not trained on any reasoning data. It was the foundation that allowed the unique architecture shown in GRaPE Mini to be fully expressed.

GRaPE Mini Instruct also exists as a way for lower-compute devices to run GRaPE models.

Architecture

  • GRaPE Flash: Built on the OLMoE architecture, allowing for incredibly fast speeds where they matter. It retains factual information well, but falls short on logical tasks.

  • GRaPE Mini: Built on the Qwen3 VL architecture, allowing for edge deployments where logical capability cannot be sacrificed.

  • GRaPE Nano: Built on the LFM2 architecture, offering the fastest speed and the most knowledge in the tiniest package.
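Since GRaPE Mini follows the Qwen3 VL architecture, image inputs are typically passed using the OpenAI-style "content parts" message format. The sketch below is a minimal example assuming an OpenAI-compatible runtime; the field names follow the common convention and should be checked against your server's documentation:

```python
# Sketch of a multimodal chat message mixing an image and a text prompt,
# in the "content parts" format used by OpenAI-compatible servers.
def vision_message(text: str, image_url: str) -> dict:
    """Build a single user message containing an image and a question."""
    return {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": text},
        ],
    }

# The URL below is a placeholder, not a real asset.
msg = vision_message("What is in this picture?", "https://example.com/cat.png")
```

Text-only models such as GRaPE Flash and GRaPE Nano would use the plain string form of `content` instead.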


Notes

The GRaPE Family dates all the way back to August 2025, meaning these models are severely out of date in both architecture and training data.

GRaPE 2 will arrive sooner than the GRaPE 1 family did, and will bring multiple improvements.

There are no benchmarks for GRaPE 1 models due to the costly nature of running them, as well as the prioritization of newer models.

Updates for GRaPE 2 models will be posted here on Hugging Face, as well as on Skinnertopia.

Demos for select GRaPE Models can be found here: https://github.com/Sweaterdog/GRaPE-Demos

Model format: Safetensors · 3B params · BF16
