# Model Card for EventModel-1.2B

EventModel is a 1.2B-parameter fine-tune of LFM2-1.2B, trained on data extracted from r/Parenting. The goal is to generate problems that a child of a given age might face. Posts from r/Parenting are collected, each problem is analyzed, the child's age group is inferred using iterative few-shot prompting, and a generative model is then fine-tuned on the results.
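The extracted (character, problem) pairs presumably need to be rendered into a single training string. The helper below is a hypothetical sketch of that step; the field names and the `### Character / ### Problem` template are assumptions inferred from the prompt format shown in the quick start.

```python
# Hypothetical formatter for one training example. The template is an
# assumption based on the prompt format used at inference time.

def build_example(age: int, gender: str, problem: str) -> str:
    """Render one (character, problem) pair in the model's prompt format."""
    character = f"{age} year old, {gender}"
    return f"### Character: {character}\n\n### Problem: {problem}"

print(build_example(13, "boy", "My friends started a group chat without me."))
```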

It has been trained using TRL.

## Quick start

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="mzen/EventModel-1.2B",
    trust_remote_code=True,
    device_map="auto"
)

prompt = "### Character: 13 year old, boy\n\n### Problem:"

output = pipe(
    prompt,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    return_full_text=False
)

print(output[0]['generated_text'])
```
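Since the pipeline returns only the completion (`return_full_text=False`), the model may continue past the problem and emit further `###` sections. A small post-processing helper (hypothetical, not part of the model's API) can trim the output to just the problem text:

```python
# Hypothetical cleanup helper: keep only the text before the next
# "###" section header, if the model generated one.

def extract_problem(generated: str) -> str:
    """Return the completion up to the next '###' header, stripped."""
    return generated.split("###", 1)[0].strip()

print(extract_problem(" My friends excluded me from a trip.\n\n### Character: 9 year old"))
```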

## Training procedure

This model was trained with supervised fine-tuning (SFT).
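A minimal sketch of what SFT with TRL and PEFT could look like for this model. The dataset path, text formatting, base-model repo id, and every hyperparameter below are illustrative assumptions, not the actual training configuration.

```python
# Hypothetical SFT setup with TRL + a LoRA adapter via PEFT.
# All values shown are assumptions, not the real training recipe.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Assumed: a JSONL file with a "text" column holding the formatted examples.
dataset = load_dataset("json", data_files="parenting_examples.jsonl", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2-1.2B",  # base model repo id (assumed)
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="EventModel-1.2B",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-4,
    ),
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```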

### Framework versions

- PEFT: 0.18.1
- TRL: 0.27.2
- Transformers: 5.1.0
- PyTorch: 2.10.0
- Datasets: 4.5.0
- Tokenizers: 0.22.2

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```