Instructions for using LetheanNetwork/lemer-mlx with libraries, inference providers, and local apps.
MLX
How to use LetheanNetwork/lemer-mlx with MLX:
```python
# Make sure mlx-vlm is installed:
# pip install --upgrade mlx-vlm
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("LetheanNetwork/lemer-mlx")
config = load_config("LetheanNetwork/lemer-mlx")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
```
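For one-off runs without writing Python, mlx-vlm also ships a CLI entry point. A minimal sketch; the flag names are taken from recent mlx-vlm releases and are worth double-checking with `--help`:

```sh
# One-shot generation via the mlx-vlm CLI (flags assumed from
# recent mlx-vlm releases; verify with `python -m mlx_vlm.generate --help`):
python -m mlx_vlm.generate \
    --model LetheanNetwork/lemer-mlx \
    --prompt "Describe this image." \
    --image http://images.cocodataset.org/val2017/000000039769.jpg \
    --max-tokens 100
```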
Pi
How to use LetheanNetwork/lemer-mlx with Pi:
Start the MLX server
```sh
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "LetheanNetwork/lemer-mlx"
```
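Before wiring a client to it, you can sanity-check the server with a direct request to the OpenAI-compatible chat endpoint (this assumes mlx_lm.server's default port, 8080):

```sh
# Smoke test the local server; 8080 is mlx_lm.server's default port.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "LetheanNetwork/lemer-mlx",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```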
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the model to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "LetheanNetwork/lemer-mlx" }
      ]
    }
  }
}
```

Run Pi
```sh
# Start Pi in your project directory:
pi
```
Hermes Agent
How to use LetheanNetwork/lemer-mlx with Hermes Agent:
Start the MLX server
```sh
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "LetheanNetwork/lemer-mlx"
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default LetheanNetwork/lemer-mlx
```
Run Hermes
```sh
hermes
```
LetheanNetwork/lemer-mlx
Gemma 4 E2B in MLX format, 4-bit quantized, converted from LetheanNetwork/lemer's bf16 safetensors via mlx_lm.convert. These are the unmodified Google Gemma 4 E2B-IT weights (no LEK shift, no fine-tuning), hosted in our namespace so downstream tools (benchmarks, apps) don't have to depend on external mlx-community mirrors.
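A conversion of this shape can be reproduced with mlx-lm's converter. A sketch, assuming default 4-bit settings; the exact flags used for this repo aren't recorded here:

```sh
pip install --upgrade mlx-lm

# Quantize the bf16 source weights to 4-bit MLX (flags are illustrative;
# this repo's exact invocation isn't recorded in the card):
mlx_lm.convert \
    --hf-path LetheanNetwork/lemer \
    --mlx-path lemer-mlx \
    -q --q-bits 4
```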
For the LEK-merged (consent-based ethical kernel) variant of the same
model, see lthn/lemer.
Variants in this family
| Repo | Format | Bits | Use case |
|---|---|---|---|
| LetheanNetwork/lemer | safetensors + gguf Q4_K_M | bf16 / 4 | Source weights + llama.cpp/Ollama |
| LetheanNetwork/lemer-mlx | mlx | 4 | This repo (Apple Silicon default) |
| LetheanNetwork/lemer-mlx-8bit | mlx | 8 | Apple Silicon, higher precision |
| LetheanNetwork/lemer-mlx-bf16 | mlx | bf16 | Apple Silicon, full-precision reference |
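The gguf row targets llama.cpp/Ollama rather than MLX. Ollama can pull GGUF repos directly from the Hub, so something like the following should work; the hf.co pull syntax is Ollama's, but whether the Q4_K_M tag is exposed on the source repo is an assumption:

```sh
# Pull and run the source repo's gguf via Ollama's Hugging Face
# integration; the :Q4_K_M quant tag is assumed to be available.
ollama run hf.co/LetheanNetwork/lemer:Q4_K_M
```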
Usage
```python
from mlx_lm import load, generate

model, tokenizer = load("LetheanNetwork/lemer-mlx")

# Build the prompt with the model's chat template, then generate.
response = generate(
    model, tokenizer,
    prompt=tokenizer.apply_chat_template(
        [{"role": "user", "content": "Hello"}],
        add_generation_prompt=True,
        enable_thinking=True,
    ),
    max_tokens=512,
)
print(response)
```
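For token-by-token output, recent mlx-lm releases also expose stream_generate; a minimal sketch under that assumption:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("LetheanNetwork/lemer-mlx")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)

# Each chunk carries the newly decoded text segment
# (stream_generate is assumed present; it ships with recent mlx-lm).
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
```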
Provenance
- Source: LetheanNetwork/lemer bf16 safetensors (= google/gemma-4-E2B-it)
- Converter: mlx_lm.convert (mlx-lm, LM Studio / Apple ML Research)
- Quant: 4-bit group quantization, ~4.5 bits/weight effective
- License: Apache 2.0 (Gemma Terms of Use)
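The ~4.5 bits/weight figure is consistent with MLX's default quantization layout, which stores an fp16 scale and bias for every group of 64 weights (the group size is an assumption, since the conversion flags aren't recorded):

```python
# Amortized cost of per-group fp16 scale + bias on top of 4-bit weights
# (group size 64 assumed, MLX's default):
bits, group_size = 4, 64
overhead = (16 + 16) / group_size  # 32 extra bits per 64 weights
print(bits + overhead)             # -> 4.5 bits/weight effective
```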
License
Apache 2.0, subject to the Gemma Terms of Use.