# BreakFree DialoGPT LoRA Adapter
This repository contains LoRA adapter weights fine-tuned on rehabilitation and mental health support dialogues. It is intended to be used with the base model microsoft/DialoGPT-medium.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/DialoGPT-medium"
adapter_id = "cyril21/breakfree-dialo-lora"

# Load the tokenizer; DialoGPT defines no pad token, so reuse EOS.
tokenizer = AutoTokenizer.from_pretrained(base_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Load the base model, then apply the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "User: Hello\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=80,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
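The prompt above covers a single turn in a plain `User:`/`Assistant:` format. For multi-turn chat, the conversation history can be flattened into that same format before generation. The helper below is a minimal sketch assuming that turn-prefix convention; the function name `build_prompt` is illustrative and not part of this repository:

```python
def build_prompt(history, user_message):
    """Flatten (user, assistant) turn pairs plus the newest user
    message into a single "User:/Assistant:" prompt string."""
    lines = []
    for user_turn, assistant_turn in history:
        lines.append(f"User: {user_turn}")
        lines.append(f"Assistant: {assistant_turn}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")  # the model completes this final turn
    return "\n".join(lines)

history = [("Hello", "Hi, how are you feeling today?")]
prompt = build_prompt(history, "A bit stressed.")
# prompt ends with "User: A bit stressed.\nAssistant:"
```

The resulting string can be passed to the tokenizer exactly as in the usage example above.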
## Training Data
The adapter was fine-tuned on a local rehabilitation dialogue dataset; a public dataset repository is provided separately.
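The exact training hyperparameters are not documented in this repository. A typical PEFT LoRA configuration for a GPT-2-style model such as DialoGPT-medium might look like the sketch below; every value shown is an illustrative assumption, not the setting actually used for this adapter:

```python
from peft import LoraConfig

# Illustrative values only; the hyperparameters used to train this
# adapter are not published in the repository.
lora_config = LoraConfig(
    r=16,                       # assumed LoRA rank
    lora_alpha=32,              # assumed scaling factor
    target_modules=["c_attn"],  # GPT-2-style fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

Such a config would be passed to `peft.get_peft_model` together with the base model before training.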
## Limitations
This model is intended for supportive, non-clinical conversation. It is not a substitute for professional medical or mental health advice.
## Model Card Contact
[More Information Needed]
## Framework versions
- PEFT 0.17.1