Dataset: dynabench/dynasent
Three-way sentiment classifier (negative · neutral · positive) built on ModernBERT-base and fine-tuned with the TRABSA head (mean-pool ➜ BiLSTM ➜ token-attention ➜ MLP) using cross-entropy loss.
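The head described above can be sketched in PyTorch. This is one plausible reading of the pipeline, with the BiLSTM running over the encoder's token states, a learned per-token attention producing the pooled sentence vector, and an MLP classifier on top; the layer sizes are illustrative assumptions, not the released configuration:

```python
import torch
import torch.nn as nn

class TRABSAHead(nn.Module):
    """Sketch of a BiLSTM + token-attention classification head.

    Hidden sizes (256-unit LSTM, 256-unit MLP) are assumptions for
    illustration; the checkpoint's actual dimensions may differ.
    """
    def __init__(self, d_model=768, lstm_hidden=256, n_classes=3):
        super().__init__()
        self.bilstm = nn.LSTM(d_model, lstm_hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * lstm_hidden, 1)  # per-token score
        self.mlp = nn.Sequential(
            nn.Linear(2 * lstm_hidden, 256), nn.ReLU(),
            nn.Linear(256, n_classes))

    def forward(self, hidden_states, attention_mask):
        # hidden_states: (B, T, d_model) token states from the encoder
        out, _ = self.bilstm(hidden_states)           # (B, T, 2*lstm_hidden)
        scores = self.attn(out).squeeze(-1)           # (B, T)
        scores = scores.masked_fill(attention_mask == 0, -1e4)
        weights = scores.softmax(-1).unsqueeze(-1)    # (B, T, 1)
        pooled = (weights * out).sum(1)               # attention pooling
        return self.mlp(pooled)                       # (B, n_classes) logits
```

Because the attention scores of padding tokens are masked to a large negative value before the softmax, padded positions contribute (near) zero weight to the pooled vector.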
| | |
|---|---|
| Developer | I. Bachelis |
| Model type | Encoder with task head |
| Languages | English |
| License | Apache-2.0 |
| Finetuned from | answerdotai/ModernBERT-base |
| Params | 110 M (backbone) + ≈3 M (head) |
| Precision | fp16 (FlashAttention) |
| Token limit | 128 |
| Use-case | Users |
|---|---|
| Sentiment scoring of short English texts (tweets, reviews) | Practitioners, researchers |
| Feature extractor for downstream ABSA / stance tasks | NLP developers |
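For the feature-extractor use case, the standard building block is mask-aware mean pooling over the encoder's token embeddings. A minimal sketch (the function name and shapes are illustrative, not part of this model's API):

```python
import torch

def mean_pool(hidden_states, attention_mask):
    """Mask-aware mean pooling: (B, T, D) token states -> (B, D) vectors.

    Padding positions (mask == 0) are excluded from both the sum and
    the divisor, so short sequences are not diluted by padding.
    """
    mask = attention_mask.unsqueeze(-1).float()       # (B, T, 1)
    summed = (hidden_states * mask).sum(dim=1)        # (B, D)
    counts = mask.sum(dim=1).clamp(min=1e-9)          # avoid divide-by-zero
    return summed / counts
```

The resulting sentence vectors can then feed a downstream ABSA or stance classifier.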
Recommendation: For deployment on new domains, run a small domain-adaptive fine-tune and monitor neutral/negative confusion.
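Monitoring neutral/negative confusion can be as simple as counting (gold, predicted) label pairs on a held-out sample. A minimal sketch, with toy labels purely for illustration:

```python
from collections import Counter

def confusion_pairs(y_true, y_pred):
    """Count (gold, predicted) label pairs to spot systematic confusions."""
    return Counter(zip(y_true, y_pred))

# Toy labels for illustration only.
y_true = ["neutral", "negative", "neutral", "positive"]
y_pred = ["negative", "negative", "neutral", "positive"]
pairs = confusion_pairs(y_true, y_pred)
print(pairs[("neutral", "negative")])  # → 1 neutral text misread as negative
```

A rising count in the `("neutral", "negative")` cell relative to the diagonal is the signal that domain-adaptive fine-tuning is warranted.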
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

m = "iabachelis/ModernBERT-TRABSA-CE"
tok = AutoTokenizer.from_pretrained(m)
# The checkpoint ships a custom head, hence trust_remote_code.
model = AutoModelForSequenceClassification.from_pretrained(
    m, trust_remote_code=True).eval()

text = "The film is visually stunning, but painfully slow."
# Truncate to the model's 128-token limit.
inputs = tok(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1).squeeze()

id2cls = {0: "negative", 1: "neutral", 2: "positive"}
print({id2cls[i]: float(p) for i, p in enumerate(probs)})
```