🕵️ Agatha Christie Style LoRA – Gemma-2-2B-IT
This model is a LoRA fine-tuned version of Google's Gemma-2-2B-IT, trained to generate prose in the classic style of Agatha Christie.
It captures the tone, pacing, domestic atmosphere, and subtle psychological tension characteristic of Christie's detective fiction.
The model is intended for story continuation, creative writing, and mystery scene generation.
It was trained as a plain causal language model with no chat template, so it behaves as a text-continuation model rather than a chat assistant.
✨ Model Details
- Model type: Causal language model (LoRA adapter)
- Base model: google/gemma-2-2b-it
- Fine-tuning method: LoRA (PEFT)
- Training framework: Hugging Face AutoTrain
- Languages: English
- Primary domain: Fiction, mystery, detective prose
- Task: Text generation (story continuation)
🎯 Intended Use
This model is suitable for:
- Generating text in the Agatha Christie style
- Continuing mystery scenes
- Creative writing assistance
- Prototyping detective stories
- Experimenting with stylistic transfer
Not recommended for:
- Real-world Q&A
- Safety-critical applications
- Factual reasoning
- Chat-assistant behavior without custom prompting
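For story continuation, the adapter can be loaded on top of the gated base model with `transformers` and `peft`. This is a minimal sketch: the adapter repo id is a placeholder for this repository's id, and the sampling settings are illustrative defaults, not values recorded in the card.

```python
# Default sampling settings for this sketch (illustrative, not tuned values).
GEN_KWARGS = {
    "max_new_tokens": 200,
    "do_sample": True,
    "temperature": 0.8,
    "top_p": 0.95,
}

def generate_christie(prompt, adapter_id="your-username/agatha-christie-lora"):
    """Continue `prompt` in Christie style. `adapter_id` is a placeholder:
    substitute this repository's Hub id."""
    # Imports are local so the sketch can be read without the libraries installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "google/gemma-2-2b-it"  # gated: requires accepted Gemma license
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
    model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
    model.eval()

    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, **GEN_KWARGS)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

# Usage (downloads the gated base model):
# print(generate_christie("The vicarage was quiet that evening, save for"))
```

Since there is no chat template, prompt the model with the opening of a scene rather than an instruction.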
📚 Training Data
The model was trained on:
realdanielbyrne/AgathaChristieText
A plain-text dataset containing ~13,000 rows of Agatha Christie prose.
No instruction template was used; the model was trained purely on unstructured text to learn style rather than task formats.
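Because the text is unstructured, AutoTrain packs it into fixed-length token blocks (block size 1024, per the configuration below) for causal-LM training. A minimal illustrative sketch of that packing, operating directly on token ids rather than the real Gemma tokenizer:

```python
def pack_into_blocks(token_ids, block_size=1024):
    """Concatenate token ids and split into fixed-length training blocks.

    Trailing tokens that do not fill a complete block are dropped,
    mirroring typical causal-LM packing.
    """
    blocks = []
    for start in range(0, len(token_ids) - block_size + 1, block_size):
        blocks.append(token_ids[start:start + block_size])
    return blocks

# Tiny block size for illustration:
blocks = pack_into_blocks(list(range(10)), block_size=4)
# blocks == [[0, 1, 2, 3], [4, 5, 6, 7]]; tokens 8 and 9 are dropped
```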
🔧 Training Configuration
- Epochs: 3
- Block size: 1024
- Learning rate: 3e-5
- Batch size: AutoTrain default
- Optimizer: AdamW
- Chat template: None (causal LM)
- LoRA target modules: Linear projection layers of Gemma
Training was performed using AutoTrain, which handled preprocessing, batching, and evaluation.
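The LoRA setup above could be reproduced outside AutoTrain with PEFT. A hedged sketch: the rank, alpha, and dropout values are common defaults rather than values recorded in this card, and the target module names are the usual linear projections in Gemma-style architectures.

```python
from peft import LoraConfig

# Illustrative LoRA configuration targeting Gemma's linear projection layers.
# r, lora_alpha, and lora_dropout are common defaults, not values from this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```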
🧠 Model Behavior
The model tends to produce:
- restrained, elegant British prose
- domestic settings and quiet tension
- subtle descriptions of character behavior
- slow, methodical narrative pacing
- attention to objects, rooms, and small clues
Example characteristics:
- No supernatural or horror tone unless explicitly prompted
- Minimal modern slang
- High coherence for 1β3 paragraphs of continuation
- Strong stylistic adherence to Christie's sentence rhythm
⚠️ Limitations and Biases
- May repeat narrative structures due to small dataset size
- Not optimized for long-range story consistency
- Inherits biases present in Agatha Christieβs historical works
- Not suited for factual or logical reasoning
- Can generate outdated cultural norms reflective of 20th-century British literature
📊 Evaluation
This model was not evaluated using standard NLP benchmarks, as its purpose is stylistic generation rather than accuracy on downstream tasks.
Qualitative evaluation shows:
- Strong stylistic similarity to Christie's prose
- High coherence in short continuations
- Appropriate tone, atmosphere, and pacing
📄 License
This LoRA inherits licensing constraints from:
- the Google Gemma-2 gated model license
- the dataset used
- Hugging Face AutoTrain's output terms
Users must be approved to access the Gemma model family.
🙏 Acknowledgements
- Base model by Google DeepMind
- Fine-tuned using Hugging Face AutoTrain
- Dataset contributed by the open-source community
- PEFT LoRA framework by Hugging Face
📣 Citation
If you use this model, please cite the base model and the dataset authors.
You may also reference this repository.