How to use LazarusNLP/NusaBERT-base-NERP with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="LazarusNLP/NusaBERT-base-NERP")
```
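The pipeline handles tokenization and label decoding internally, so it can be called on raw Indonesian text directly. A minimal sketch, where the example sentence is an illustrative assumption rather than something from the model card:

```python
# Illustrative input sentence (not from the model card).
results = pipe(
    "Joko Widodo meresmikan bandara baru di Surakarta, Jawa Tengah.",
    aggregation_strategy="simple",  # merge subword pieces into whole entity spans
)
for entity in results:
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```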
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("LazarusNLP/NusaBERT-base-NERP")
model = AutoModelForTokenClassification.from_pretrained("LazarusNLP/NusaBERT-base-NERP")
```

This model is a fine-tuned version of LazarusNLP/NusaBERT-base on the indonlu dataset. It achieves the evaluation-set results shown in the training results table below.
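With the directly loaded model, you run tokenization and label decoding yourself; the predicted class index for each token maps to a tag string through `model.config.id2label`. A minimal sketch, again with an illustrative example sentence:

```python
import torch

# Illustrative input sentence (not from the model card).
text = "Presiden mengunjungi Surabaya pada hari Senin."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring class per token and map it to its label name.
predicted_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id.item()])
```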
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
Training hyperparameters: More information needed

Training results (per epoch, on the evaluation set):
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| No log | 1.0 | 420 | 0.1444 | 0.7415 | 0.8272 | 0.7820 | 0.9543 |
| 0.2385 | 2.0 | 840 | 0.1276 | 0.7879 | 0.8187 | 0.8030 | 0.9586 |
| 0.1143 | 3.0 | 1260 | 0.1260 | 0.7815 | 0.8510 | 0.8148 | 0.9597 |
| 0.0903 | 4.0 | 1680 | 0.1305 | 0.7836 | 0.8516 | 0.8162 | 0.9596 |
| 0.0700 | 5.0 | 2100 | 0.1342 | 0.8158 | 0.8255 | 0.8206 | 0.9605 |
| 0.0582 | 6.0 | 2520 | 0.1343 | 0.8172 | 0.8408 | 0.8288 | 0.9606 |
| 0.0582 | 7.0 | 2940 | 0.1440 | 0.7936 | 0.8476 | 0.8197 | 0.9594 |
| 0.0521 | 8.0 | 3360 | 0.1447 | 0.8069 | 0.8453 | 0.8257 | 0.9605 |
| 0.0446 | 9.0 | 3780 | 0.1512 | 0.7996 | 0.8453 | 0.8218 | 0.9599 |
| 0.0417 | 10.0 | 4200 | 0.1524 | 0.8078 | 0.8453 | 0.8261 | 0.9606 |