SimPO: Simple Preference Optimization with a Reference-Free Reward
Paper: arXiv:2405.14734
This is a quantized version of princeton-nlp/Llama-3-Instruct-8B-RDPO, created using llama.cpp.
This model was released with the preprint SimPO: Simple Preference Optimization with a Reference-Free Reward. Please refer to our repository for more details.
Available quantizations:
2-bit
3-bit
4-bit
5-bit
6-bit
8-bit
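As a rough sketch of how quantizations like the ones above are typically produced with llama.cpp: the HF checkpoint is first converted to a full-precision GGUF file, then quantized to a chosen bit width. The file names and local paths below are assumptions for illustration; the `convert_hf_to_gguf.py`, `llama-quantize`, and `llama-cli` tools are part of the llama.cpp project.

```shell
# Sketch only: paths and output file names are illustrative, not the
# actual artifacts of this release.

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF file.
python convert_hf_to_gguf.py ./Llama-3-Instruct-8B-RDPO \
    --outfile llama-3-instruct-8b-rdpo-f16.gguf --outtype f16

# 2. Quantize to a chosen bit width, e.g. a 4-bit scheme (Q4_K_M).
./llama-quantize llama-3-instruct-8b-rdpo-f16.gguf \
    llama-3-instruct-8b-rdpo-Q4_K_M.gguf Q4_K_M

# 3. Run inference with the quantized model.
./llama-cli -m llama-3-instruct-8b-rdpo-Q4_K_M.gguf -p "Hello" -n 64
```

Lower bit widths (2-bit, 3-bit) trade answer quality for smaller files and lower memory use; 8-bit stays closest to the original weights.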
Base model: princeton-nlp/Llama-3-Instruct-8B-RDPO