FiscalOxLLM: Base Model

Base Llama-3.1-8B-Instruct with no fine-tuning. Used as the control/baseline for comparison.

Details

Property           Value
Base Model         meta-llama/Meta-Llama-3.1-8B-Instruct
Fine-Tuning Stage  None (baseline)
Method             None for this checkpoint; the study's fine-tuned variants use QLoRA (r=16, alpha=32)
Precision          bfloat16
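For context, the QLoRA hyperparameters in the table (which apply to the study's fine-tuned variants, not to this untouched baseline) map onto a `peft` adapter config roughly as sketched below. Only `r=16` and `alpha=32` come from the card; the target modules, dropout, and other settings are illustrative assumptions.

```python
from peft import LoraConfig

# Sketch of the study's adapter configuration.
# r and lora_alpha are from the model card; target_modules and
# lora_dropout are assumptions for illustration only.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```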

Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load in bfloat16 to match the released weights, and let
# accelerate place the model on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    "LLMOX/FiscalOxLLM-base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("LLMOX/FiscalOxLLM-base")

# Move inputs to the same device as the model before generating.
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
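Because the underlying checkpoint is Llama-3.1-8B-Instruct, it responds best to prompts in the Llama 3.1 chat format; in practice `tokenizer.apply_chat_template` produces this for you. As a self-contained illustration, the sketch below assembles the prompt by hand. The special-token layout is an assumption based on the published Llama 3.1 format, not taken from this card; verify it against the shipped tokenizer config.

```python
from typing import Optional


def build_llama31_prompt(user_message: str, system_message: Optional[str] = None) -> str:
    """Assemble a Llama 3.1-style chat prompt by hand.

    Normally you would call tokenizer.apply_chat_template instead;
    this only illustrates the assumed special-token layout.
    """
    parts = ["<|begin_of_text|>"]
    if system_message is not None:
        parts.append(
            f"<|start_header_id|>system<|end_header_id|>\n\n{system_message}<|eot_id|>"
        )
    parts.append(
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
    )
    # Open the assistant turn so generation continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = build_llama31_prompt("Summarize fiscal policy in one sentence.")
```

The resulting string can be passed to the tokenizer in place of the raw prompt in the usage example above.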

Part of Fiscal Ox LLM Research

This model is part of a multi-stage fine-tuning comparison study.

