# DistilBERT for Sarcasm Detection 🎭

This is a DistilBERT model fine-tuned on the News Headlines Dataset for Sarcasm Detection.

## 📊 Dataset

The model is fine-tuned on the News Headlines Dataset for Sarcasm Detection, in which each headline carries a binary sarcastic / not-sarcastic label.

## 🧠 Model Training

- Framework: Hugging Face Transformers
- Tokenizer: `distilbert-base-uncased`
- Training epochs: 3
- Optimizer: AdamW
- Batch size: 16
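
The bullet points above map directly onto the Transformers `Trainer` API. Below is a minimal sketch of such a fine-tuning run, assuming the headlines and labels are already split into `datasets.Dataset` objects with `headline` and `label` columns; the output directory and column names are illustrative assumptions, not taken from the card.

```python
# Hyperparameters stated on this card, gathered in one place.
HPARAMS = {"epochs": 3, "batch_size": 16, "base_model": "distilbert-base-uncased"}


def build_trainer(train_ds, eval_ds):
    """Assemble a Trainer for binary sarcasm classification.

    train_ds / eval_ds are assumed to be datasets.Dataset objects with
    'headline' and 'label' columns. Imports are kept local so the sketch
    can be read without transformers installed.
    """
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )

    tokenizer = AutoTokenizer.from_pretrained(HPARAMS["base_model"])
    model = AutoModelForSequenceClassification.from_pretrained(
        HPARAMS["base_model"], num_labels=2
    )

    def tokenize(batch):
        return tokenizer(batch["headline"], truncation=True, padding="max_length")

    train_ds = train_ds.map(tokenize, batched=True)
    eval_ds = eval_ds.map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="sarcasm_model",  # illustrative path
        num_train_epochs=HPARAMS["epochs"],
        per_device_train_batch_size=HPARAMS["batch_size"],
    )
    # Trainer uses AdamW by default, matching the optimizer listed above.
    return Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
```

Calling `.train()` on the returned `Trainer` would then run the three epochs listed above.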

## 📈 Performance

| Model | Accuracy |
| --- | --- |
| DistilBERT (ours) | 93.1% |
| GRU | 85.3% |
| LSTM | 84.6% |
| Logistic Regression | 83.4% |
| SVM | 82.9% |
| Naive Bayes | 82.7% |

## 🚀 Usage

```python
from transformers import pipeline

# Load the model from the Hugging Face Hub
classifier = pipeline("text-classification", model="YamenRM/sarcasm_model")

# Example
text = "Oh great, another Monday morning meeting!"
print(classifier(text))
```

Output:

```python
[{'label': 'SARCASTIC', 'score': 0.93}]
```
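
The pipeline returns a list with one dict per input, so downstream code usually just needs the top label. A small helper for that, where the `SARCASTIC` label name is taken from the example output above and the confidence threshold is an illustrative default:

```python
def is_sarcastic(results, threshold=0.5):
    """True when the pipeline's top prediction is the sarcastic class."""
    top = results[0]  # one dict per input text
    return top["label"] == "SARCASTIC" and top["score"] >= threshold


# e.g. is_sarcastic([{'label': 'SARCASTIC', 'score': 0.93}]) -> True
```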

## ✨ Author

Trained and uploaded by YamenRM.

Model size: 67M parameters (F32, Safetensors).