Model Card for CodeLlama-7B-Instruct-Luau
A fine-tuned version of codellama/CodeLlama-7b-Instruct-hf targeted at the Luau programming language, Roblox's Lua-derived scripting language.
This model is distributed as a LoRA adapter and is intended to improve the base model’s performance on Roblox-specific scripting tasks.
Model Details
Model Description
This model is a parameter-efficient fine-tuning (LoRA) of CodeLlama 7B Instruct, specialized for generating, explaining, and refactoring Luau code.
The fine-tuning focuses on Roblox development patterns, including common services, APIs, gameplay scripting idioms, and client/server logic. The model is designed to assist developers during prototyping, learning, and general scripting workflows.
- Developed by: darwinkernelpanic
- Funded by: Not applicable
- Shared by: darwinkernelpanic
- Model type: Causal Language Model (decoder-only, LoRA adapter)
- Language(s) (NLP): English (prompts and explanations); generated code is Luau
- License: Apache-2.0
- Finetuned from model: codellama/CodeLlama-7b-Instruct-hf
Model Sources
- Repository: https://huggingface.co/darwinkernelpanic/CodeLlama-7b-Instruct-hf-luau
- Paper: Code Llama: Large Language Models for Code (Meta AI)
- Demo: Not available
Uses
Direct Use
This model can be used directly for:
- Writing Luau scripts for Roblox
- Explaining Roblox APIs and services
- Refactoring or debugging Luau code
- Prototyping gameplay systems and utilities
- Learning Luau and Roblox scripting concepts
The model is intended as a developer assistant, not an autonomous system.
Downstream Use
Potential downstream uses include:
- Further fine-tuning on proprietary Roblox frameworks
- Integration into IDEs or editor tooling
- Chat-based assistants for Roblox development
- Educational or documentation tooling
Out-of-Scope Use
This model should not be used for:
- Safety-critical or production-critical systems
- Legal, medical, or financial advice
- Malware, exploit, or cheat development
- Fully automated code deployment without review
Bias, Risks, and Limitations
- Inherits biases and limitations from the base CodeLlama model
- May hallucinate Roblox APIs or outdated behaviors
- Does not validate code at runtime
- Output correctness depends on prompt quality
Recommendations
Users should:
- Review all generated code manually
- Test scripts in Roblox Studio
- Cross-check with official Roblox documentation
- Treat outputs as suggestions rather than authoritative solutions
How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "codellama/CodeLlama-7b-Instruct-hf"
adapter_model = "darwinkernelpanic/CodeLlama-7b-Instruct-hf-luau"

# Load the frozen base model, then attach the LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

prompt = "Write a Luau function that creates a Part and parents it to Workspace."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
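Like the base model, CodeLlama-Instruct variants are trained on prompts wrapped in `[INST] … [/INST]` tags, with an optional system prompt inside `<<SYS>>` markers. A minimal helper for building such prompts (the system-prompt wording here is only an illustrative assumption, not part of this card's training setup):

```python
def build_instruct_prompt(user_message: str, system: str = "") -> str:
    """Wrap a request in the [INST] ... [/INST] format used by
    CodeLlama-Instruct. An optional system prompt goes inside
    <<SYS>> markers at the start of the instruction block."""
    if system:
        user_message = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user_message}"
    return f"[INST] {user_message} [/INST]"

prompt = build_instruct_prompt(
    "Write a Luau function that creates a Part and parents it to Workspace.",
    system="You are a Roblox scripting assistant. Answer with Luau code.",
)
```

The returned string can then be passed to the tokenizer in place of the bare prompt above.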
Training Details
Training Data
The model was fine-tuned on a curated mixture of:
- Luau scripts
- Roblox API usage examples
- Open-source Roblox projects
- Synthetic instruction-style prompts
All data was filtered to avoid private, proprietary, or sensitive content.
Training Procedure
The model was trained using parameter-efficient fine-tuning with LoRA while keeping the base model weights frozen.
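The LoRA idea above can be sketched numerically: the frozen weight `W` is never updated; instead a low-rank product `B @ A`, scaled by `alpha / r`, is added to it. A toy NumPy illustration (the dimensions and hyperparameters are arbitrary, not the values used in this training run):

```python
import numpy as np

d_out, d_in, r, alpha = 64, 64, 8, 16

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

# Effective weight seen at inference: base plus scaled low-rank update
W_eff = W + (alpha / r) * (B @ A)

# With B initialized to zero, the adapter starts as an exact no-op,
# so training begins from the base model's behavior.
x = rng.normal(size=d_in)
assert np.allclose(W_eff @ x, W @ x)
```

Only `A` and `B` receive gradients during fine-tuning, which is why the trainable-parameter count stays far below the base model's size.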
Preprocessing
- Code formatting normalization
- Instruction-style prompt structuring
- Removal of low-quality or irrelevant samples
Training Hyperparameters
- Training regime: fp16 mixed precision
Speeds, Sizes, Times
- Base model size: ~7B parameters
- Trainable parameters: <1% (LoRA adapters only)
- Adapter checkpoint size: ~100–200 MB
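The "<1% trainable" figure can be sanity-checked with simple arithmetic: a rank-r adapter on a `d_out × d_in` projection adds `r * (d_in + d_out)` parameters. A sketch using CodeLlama-7B's hidden size of 4096 and otherwise illustrative assumptions (r=16, q/k/v/o projections, 32 layers; the actual training configuration is not documented in this card):

```python
# Rough LoRA parameter count; rank and target modules below are
# illustrative assumptions, not the card's documented settings.
hidden = 4096   # CodeLlama-7B hidden size
layers = 32     # transformer layers
r = 16          # assumed LoRA rank
targets = 4     # q, k, v, o attention projections per layer

# Each square projection gains r * (hidden + hidden) parameters
lora_params = layers * targets * r * (hidden + hidden)
base_params = 7_000_000_000

print(f"{lora_params:,} trainable params "
      f"-> {lora_params / base_params:.3%} of base")
```

Under these assumptions the adapter holds roughly 17M parameters, about 0.24% of the base model, consistent with the sub-1% figure above.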
Evaluation
Testing Data, Factors & Metrics
Testing Data
- Hand-written Luau prompts
- Roblox-specific scripting scenarios
Factors
- Luau syntax correctness
- Roblox API familiarity
- Instruction-following behavior
Metrics
- Qualitative human evaluation
- Manual code review and comparison with base model
Results
The LoRA adapter demonstrates improved performance over the base model in:
- Generating idiomatic Luau
- Correct Roblox service usage
- Following game-development-oriented instructions
Summary
The model performs best when used as a Roblox development assistant and is not intended for general-purpose natural language tasks.
Model Examination
No formal interpretability or probing analysis was conducted.
Environmental Impact
Carbon emissions were not formally measured.
- Hardware Type: Consumer-grade GPU
- Hours used: < 24 hours
- Cloud Provider: None (local training)
- Compute Region: Not applicable
- Carbon Emitted: Not estimated
Technical Specifications
Model Architecture and Objective
- Decoder-only Transformer
- Next-token prediction objective
- LoRA adapters applied to attention layers
Compute Infrastructure
Hardware
- Single consumer-grade GPU
Software
- PyTorch
- Transformers
- PEFT
Citation
BibTeX:
@misc{darwinkernelpanic2025luau,
  title={CodeLlama 7B Instruct Luau LoRA},
  author={darwinkernelpanic},
  year={2025},
  howpublished={Hugging Face},
  note={LoRA fine-tuned for Luau / Roblox scripting}
}
APA:
darwinkernelpanic. (2025). CodeLlama 7B Instruct Luau LoRA. Hugging Face.
Model Card Authors
darwinkernelpanic
Model Card Contact
Use the Hugging Face repository issues or the author’s profile.
Framework versions
- PEFT 0.18.0