Instructions for using tum-nlp/roberta-target-demographic-classifier with the Transformers library.

How to use tum-nlp/roberta-target-demographic-classifier with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="tum-nlp/roberta-target-demographic-classifier")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("tum-nlp/roberta-target-demographic-classifier")
model = AutoModelForSequenceClassification.from_pretrained("tum-nlp/roberta-target-demographic-classifier")
```
Target-Demographic Classifier
The RoBERTa-based target-demographic classifier is fine-tuned on the CONAN dataset to classify whether a response's content concerns one or more of the 8 target demographics. It is based on the topic classifier cardiffnlp/tweet-topic-21-multi.
Currently trained for the following classes: ["MIGRANTS", "POC", "LGBT+", "MUSLIMS", "WOMEN", "JEWS", "other", "DISABLED"]
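Because a response can concern one or several demographics at once, the model's outputs are naturally read as independent per-class scores. The sketch below (plain Python, hypothetical logits) shows how raw per-class scores could be thresholded into a label set; the label order is taken from the list above and is an assumption — the model's actual `id2label` config should be consulted.

```python
import math

# Label set from the model card; the ordering here is an assumption and may
# differ from the model's id2label mapping.
LABELS = ["MIGRANTS", "POC", "LGBT+", "MUSLIMS", "WOMEN", "JEWS", "other", "DISABLED"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_demographics(logits, threshold=0.5):
    """Return every label whose sigmoid score clears the threshold."""
    return [label for label, z in zip(LABELS, logits) if sigmoid(z) >= threshold]

# Hypothetical raw logits for one response
logits = [2.1, -1.3, 0.4, -2.0, 1.7, -0.5, -3.0, -1.1]
print(predict_demographics(logits))  # → ['MIGRANTS', 'LGBT+', 'WOMEN']
```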
Uses
The model is intended for classifying LM-generated dialogue responses and evaluating their relevancy to the given input sequence.
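One simple way such predictions could feed into a relevancy check (a sketch of the idea, not the authors' documented method) is to compare the demographic labels predicted for the input sequence with those predicted for the generated response:

```python
def relevancy(input_labels, response_labels):
    """Jaccard overlap between the demographic label sets of input and response."""
    a, b = set(input_labels), set(response_labels)
    if not a and not b:
        return 1.0  # neither text targets a demographic: treat as fully relevant
    return len(a & b) / len(a | b)

# Response addresses the input's demographic but also drifts to another one
print(relevancy(["MIGRANTS"], ["MIGRANTS", "WOMEN"]))  # → 0.5
```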