LlammyBlend-Enhanced

Qwen Coder meets Blender - Optimized for Python scripting and automation

Fine-tuned on Blender workflows with emphasis on code generation and technical precision.


🎯 Model Overview

LlammyBlend-Enhanced is built on Qwen2.5-Coder-3B, optimized for Blender Python scripting and technical workflows. Best for:

  • Python script generation (bpy automation)
  • Technical problem-solving
  • Procedural workflows
  • Geometry nodes logic
  • Blender API usage
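As a concrete example of the first bullet, a typical bpy automation script of the kind the model targets (a hand-written sketch, not model output; bpy exists only inside Blender, so run it from the Scripting workspace or via `blender --background scene.blend --python export_all_fbx.py`):

```python
import os
import bpy

# Export every object in the scene to its own FBX file,
# written to a folder next to the .blend file.
export_dir = bpy.path.abspath("//fbx_exports")
os.makedirs(export_dir, exist_ok=True)

for obj in bpy.context.scene.objects:
    # Select only the current object so each export contains one object.
    bpy.ops.object.select_all(action='DESELECT')
    obj.select_set(True)
    bpy.context.view_layer.objects.active = obj
    bpy.ops.export_scene.fbx(
        filepath=os.path.join(export_dir, f"{obj.name}.fbx"),
        use_selection=True,
    )
```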

Created by: Eternal Path Media (永恒之路)
Base Model: Qwen2.5-Coder-3B-Instruct-4bit
Training Method: LoRA fine-tuning with MLX
Format: GGUF Q5_K_M (2.2 GB)


📊 Training Details

  • Dataset: 2,759 Blender-specific prompt/response pairs
  • Source: Production data from Llammy Blender addon
  • Training Iterations: 1,000
  • LoRA Rank: 16
  • Final Training Loss: 0.240
  • Final Validation Loss: 0.240
  • Framework: MLX (Apple Silicon optimized)

🚀 Quick Start

Using with Ollama

```
ollama create bartendr604/llammyblend-enhanced -f Modelfile
ollama run bartendr604/llammyblend-enhanced "Generate a script to export all objects as FBX"
```
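The `ollama create` step expects a Modelfile alongside the GGUF weights. A minimal sketch (the GGUF filename and system prompt here are illustrative, not from the release; use the actual downloaded filename):

```
FROM ./llammyblend-enhanced.Q5_K_M.gguf
PARAMETER num_ctx 8192
SYSTEM You are a Blender Python scripting assistant.
```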

💡 Example Usage

Geometry Nodes Setup

Prompt:

How do I set up geometry nodes for scattering instances on a surface?

Response:

To set up Geometry Nodes for scattering instances on a surface, follow these steps:

1. Open Blender and create or open your scene.
2. In the Object tab, select an object that you want to use as the instance...
3. Go to the Modifiers tab and enable the Object Info modifier for your object.
4. In the Object Data Properties panel, go to the Modifier Stack section and
   add the Geometry Nodes modifier.
5. In the Geometry Nodes editor, navigate to the Surface > Surface Inputs node group.
6. Connect the Object Info output to the Point Density socket of a Noise texture node.
7. Use this noise texture as the surface input for your shader (e.g., Principled BSDF)...
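For reference alongside the generated answer: node and panel names drift between Blender releases, and in recent versions (3.x/4.x) the canonical scatter setup is a Geometry Nodes modifier wiring Distribute Points on Faces into Instance on Points. A hand-written bpy sketch of that graph (not model output; uses the Blender 4.x `tree.interface.new_socket` API, which replaced `tree.inputs.new` from 3.x, and assumes an active mesh object):

```python
import bpy

# Build: Group Input -> Distribute Points on Faces
#        -> Instance on Points -> Group Output
surface = bpy.context.active_object
mod = surface.modifiers.new("Scatter", 'NODES')
tree = bpy.data.node_groups.new("ScatterNodes", 'GeometryNodeTree')
mod.node_group = tree

tree.interface.new_socket("Geometry", in_out='INPUT',
                          socket_type='NodeSocketGeometry')
tree.interface.new_socket("Geometry", in_out='OUTPUT',
                          socket_type='NodeSocketGeometry')

n = tree.nodes
group_in = n.new('NodeGroupInput')
dist = n.new('GeometryNodeDistributePointsOnFaces')
inst = n.new('GeometryNodeInstanceOnPoints')
group_out = n.new('NodeGroupOutput')

links = tree.links
links.new(group_in.outputs['Geometry'], dist.inputs['Mesh'])
links.new(dist.outputs['Points'], inst.inputs['Points'])
# inst.inputs['Instance'] would take an Object Info node's Geometry output.
links.new(inst.outputs['Instances'], group_out.inputs['Geometry'])
```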

🎨 Use Cases

  • Script generation - Python automation for Blender
  • Technical workflows - Precise, step-by-step instructions
  • API exploration - bpy module usage examples
  • Geometry nodes - Procedural setup guides
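For the API-exploration use case, many prompts reduce to introspecting bpy from Blender's Python console; a trivial hand-written example (runs only inside Blender, with an object selected):

```python
import bpy

# List an object's RNA properties to discover what a
# generated script could read, set, or animate.
obj = bpy.context.active_object
print(obj.name, obj.type)
print(sorted(obj.bl_rna.properties.keys())[:10])
```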

📈 Performance

  • Tokens per second: ~100-110 (on Apple M-series chips)
  • Context length: 8,192 tokens
  • Model size: 2.2 GB (Q5_K_M quantization)
  • Memory usage: ~4-5 GB during inference

📄 License

MIT License


永恒之路 (Eternal Path Media) - Code-focused Blender AI 🌟
