How to use with the MLX library
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("LetheanNetwork/lemrd-mlx-8bit")
config = load_config("LetheanNetwork/lemrd-mlx-8bit")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
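
Note that image is passed as a list, so several images or local file paths should work as well as URLs; num_images in apply_chat_template should match the number of images supplied.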

LetheanNetwork/lemrd-mlx-8bit

Gemma 4 in MLX format, 8-bit quantized, converted from LetheanNetwork/lemrd's bf16 safetensors via mlx_lm.convert. Higher-precision sibling of LetheanNetwork/lemrd-mlx (4-bit). For the LEK-merged variant see lthn/lemrd.
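
The quantization can presumably be reproduced with mlx-lm's Python API. The sketch below is an assumption about the exact call (the output directory name is hypothetical), not a record of the command actually used:

# Sketch: rebuild the 8-bit MLX weights from the bf16 source repo.
# Argument names are assumptions; check the mlx-lm documentation for your installed version.
from mlx_lm import convert

convert(
    "LetheanNetwork/lemrd",       # source bf16 safetensors, as stated above
    mlx_path="lemrd-mlx-8bit",    # local output directory (hypothetical name)
    quantize=True,
    q_bits=8,                     # 8-bit weights; the 4-bit sibling would use q_bits=4
)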

License

Apache 2.0, subject to the Gemma Terms of Use.
