NorBERT3-absa-coarse-sent

This model is a fine-tuned version of NorBERT3-large (≈0.4B parameters), trained for sentence-level aspect-based sentiment analysis on the NorPaC_absa dataset. It predicts over a set of 25 unique coarse aspect+sentiment labels.

Example Usage

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ltg/norbert3-coarse-absa", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained("ltg/norbert3-coarse-absa", trust_remote_code=True)

model.eval()

# "The GP listens to me, but I think the waiting time is too long."
text = "fastlegen lytter til meg, men jeg synes ventetiden er for lang."

# Tokenize input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)

# Run inference
with torch.no_grad():
    outputs = model(**inputs)

# Get predictions
threshold = 0.5
probs = torch.sigmoid(outputs.logits).squeeze()
predictions = [model.config.id2label[i] for i, prob in enumerate(probs) if prob > threshold]
print(predictions)
# -> ['staff_pos', 'avail_neg'] (Healthcare providers and staff:positive, Access and availability:negative)
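Since the model is multi-label, it can help to keep each label's sigmoid probability alongside the label name and sort by confidence. A minimal sketch of that post-processing step, using hypothetical probabilities and a truncated label map in place of the model's real `outputs.logits` and `model.config.id2label`:

```python
import torch

# Hypothetical probabilities and label map, standing in for the model's
# real outputs (torch.sigmoid(outputs.logits).squeeze() and
# model.config.id2label in the snippet above).
id2label = {0: "staff_pos", 1: "avail_neg", 2: "staff_neg"}
probs = torch.tensor([0.91, 0.73, 0.12])

threshold = 0.5
# Keep (label, probability) pairs that clear the threshold
scored = [
    (id2label[i], round(p, 2))
    for i, p in enumerate(probs.tolist())
    if p > threshold
]
# Sort by confidence so the strongest aspect+sentiment pair comes first
scored.sort(key=lambda pair: pair[1], reverse=True)
print(scored)
# -> [('staff_pos', 0.91), ('avail_neg', 0.73)]
```

Reporting the scores alongside the labels also makes it easy to tune `threshold` on a validation set rather than fixing it at 0.5.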

Class labels

List of labels and abbreviations coming.

Evaluation

Domain   Weighted F1
GP       60.76 ±1.84
SMH      66.79 ±1.66

The table shows weighted-average F1 scores, averaged across five seeds, for the General Practitioner (GP) and Special Mental Healthcare (SMH) domains.
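For reference, weighted-average F1 weights each label's F1 by that label's support in the gold data. A minimal pure-Python sketch of the metric on a toy single-label example (the real evaluation is multi-label over 25 classes, and a library implementation such as scikit-learn's `f1_score(average="weighted")` would normally be used instead):

```python
from collections import Counter


def weighted_f1(y_true, y_pred):
    """Weighted-average F1: per-label F1, weighted by the label's gold count."""
    labels = set(y_true) | set(y_pred)
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (support[label] / total) * f1
    return score


# Toy example with two hypothetical labels from this model's label set
y_true = ["staff_pos", "staff_pos", "avail_neg", "avail_neg"]
y_pred = ["staff_pos", "avail_neg", "avail_neg", "avail_neg"]
print(round(weighted_f1(y_true, y_pred), 3))
# -> 0.733
```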

Citation

Coming.
