# DistilBERT for Sarcasm Detection
This is a DistilBERT model fine-tuned on the News Headlines Dataset for Sarcasm Detection.
## Dataset
- Source: News Headlines Dataset for Sarcasm Detection
- Task: Binary classification (0 = Not Sarcastic, 1 = Sarcastic)
- Size: ~28,000 headlines
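A minimal sketch for loading the Kaggle release of this dataset. The file name and the `headline` / `is_sarcastic` field names reflect that release and are assumptions, not something stated in this card:

```python
import pandas as pd

# Load the Kaggle release (JSON-lines format; adjust the path
# to wherever you downloaded the file)
df = pd.read_json("Sarcasm_Headlines_Dataset.json", lines=True)

# Each record carries a headline and a binary is_sarcastic label
print(df[["headline", "is_sarcastic"]].head())
print(df["is_sarcastic"].value_counts())  # inspect class balance
```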
## Model Training
- Framework: Hugging Face Transformers
- Tokenizer: `distilbert-base-uncased`
- Training epochs: 3
- Optimizer: AdamW
- Batch size: 16
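For reference, here is a minimal fine-tuning sketch matching the settings above. Only the checkpoint, epoch count, optimizer (the `Trainer` default is AdamW), and batch size come from this card; the train/test split ratio, sequence length, and file path are assumptions:

```python
import pandas as pd
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Base checkpoint and tokenizer from the list above
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Dataset from the section above (file name assumed)
df = pd.read_json("Sarcasm_Headlines_Dataset.json", lines=True)
ds = Dataset.from_pandas(
    df[["headline", "is_sarcastic"]].rename(
        columns={"headline": "text", "is_sarcastic": "label"}
    )
)
ds = ds.train_test_split(test_size=0.1, seed=42)  # split ratio assumed

def tokenize(batch):
    # Sequence length of 64 is an assumption; headlines are short
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=64
    )

ds = ds.map(tokenize, batched=True)

# Hyperparameters from the list above; everything else is a Trainer default
args = TrainingArguments(
    output_dir="sarcasm_model",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds["train"],
    eval_dataset=ds["test"],
)
trainer.train()
```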
## Performance
| Model | Accuracy |
|---|---|
| DistilBERT (ours) | 93.1% |
| GRU | 85.3% |
| LSTM | 84.6% |
| Logistic Regression | 83.4% |
| SVM | 82.9% |
| Naive Bayes | 82.7% |
## Usage
```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
classifier = pipeline("text-classification", model="YamenRM/sarcasm_model")

# Example
text = "Oh great, another Monday morning meeting!"
print(classifier(text))
```
Output:

```
[{'label': 'SARCASTIC', 'score': 0.93}]
```
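The pipeline also accepts a list of inputs, so several headlines can be scored in one call (the second headline here is just an illustrative non-sarcastic example):

```python
# Batched inference: pass a list of headlines
headlines = [
    "Oh great, another Monday morning meeting!",
    "Local library extends weekend opening hours",
]
print(classifier(headlines))
```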
## Author

Trained and uploaded by YamenRM.