recommendations across your fine-tuned models
Hi! I've been testing out your different readability models. I'm curious whether you have any personal recommendations on use cases for each of them, because so far I get roughly the same scores when I test the 5-6 models on the same text.
Hi!
Thanks for testing out those models. They were trained on the same data but on different base models, so they should give consistent outputs whichever model you use.
I recommend trying the latest models first. They're accurate enough for most purposes (see the validation plots):
- English: agentlans/deberta-v3-xsmall-zyda-2-readability
- Other languages: agentlans/multilingual-e5-small-aligned-readability
These larger models (the ones with (m)deberta-v3-base in their names) are more accurate but slower:
- English: agentlans/deberta-v3-base-zyda-2-readability
- Other languages: agentlans/mdeberta-v3-base-readability
The rest are kept for historical and reproducibility reasons, but they're accurate and popular as well.
Of course, if you have English text, use the English models, which were trained specifically for that language.
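If it helps, here's a minimal sketch of how you could compare scores across the models above with the `transformers` library. It assumes each model is a single-label regression head loadable via `AutoModelForSequenceClassification` (the exact score scale is an assumption; check each model card):

```python
# Sketch: score one text with a readability model and compare across models.
# Assumes each checkpoint is a single-label regression head; the meaning of
# the raw score depends on the individual model card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def score_text(model_name: str, text: str) -> float:
    """Return the raw regression output of a readability model for one text."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze().item()

if __name__ == "__main__":
    sample = "The cat sat on the mat."
    for name in [
        "agentlans/deberta-v3-xsmall-zyda-2-readability",
        "agentlans/deberta-v3-base-zyda-2-readability",
    ]:
        print(name, score_text(name, sample))
```

Running the `__main__` block downloads the checkpoints from the Hub, so expect the first run to be slow; after that you can batch texts through the tokenizer for speed.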
Hope this helps!