# FiscalOxLLM: Base Model
The unmodified Llama-3.1-8B-Instruct model with no fine-tuning, used as the control/baseline for comparison.
## Details
| Property | Value |
|---|---|
| Base Model | meta-llama/Meta-Llama-3.1-8B-Instruct |
| Fine-Tuning Stage | None (Baseline) |
| Method | N/A (baseline; fine-tuned variants in the study use QLoRA, r=16, alpha=32) |
| Precision | bfloat16 |
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load in bfloat16 to match the precision listed above
model = AutoModelForCausalLM.from_pretrained(
    "LLMOX/FiscalOxLLM-base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("LLMOX/FiscalOxLLM-base")

inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Part of Fiscal Ox LLM Research
This model is part of a multi-stage fine-tuning comparison study.
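For reference, the fine-tuned stages of the study use QLoRA with r=16 and alpha=32 (per the details table). A minimal sketch of such a configuration with `peft` and `transformers` is shown below; the target modules and dropout value are illustrative assumptions, not confirmed by this card:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization (the "Q" in QLoRA); compute in bfloat16
# to match the precision listed in the table above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# LoRA adapter hyperparameters from the table (r=16, alpha=32);
# target_modules and lora_dropout here are illustrative assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

These two configs would be passed to `from_pretrained(..., quantization_config=bnb_config)` and `peft.get_peft_model(model, lora_config)` respectively when reproducing a QLoRA run.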