Model Card for TangkhulBERT
This repository contains TangkhulBERT, the first publicly available foundational language model for the Tangkhul language, a low-resource Tibeto-Burman language. The model was trained from scratch using a Masked Language Modeling (MLM) objective.
Model Details
Model Description
TangkhulBERT is a transformer-based model with a BERT-base architecture. It was developed to provide a crucial NLP resource for the Tangkhul language community and to serve as a starting point for various downstream tasks.
- Developed by: Vinos shimray
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Model type: BERT Base
- Language(s) (NLP): Tangkhul
- License: apache-2.0
- Finetuned from model [optional]: This model was trained from scratch and not fine-tuned from any other model.
Model Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Direct Use
The model is intended for direct use in Masked Language Modeling tasks.
from transformers import pipeline

# Load the fill-mask pipeline with the TangkhulBERT checkpoint
fill_mask = pipeline(
    "fill-mask",
    model="vinshim/TangkhulBERT",
    tokenizer="VinosShimray/TangkhulBERT",
)

# Test with a Tangkhul sentence
result = fill_mask(" [MASK].")

# Print the top predictions
for prediction in result:
    print(prediction)
Downstream Use [optional]
This model is designed to be a foundational, pre-trained model for fine-tuning on specific downstream tasks such as the following (a fine-tuning sketch follows the list):
- Text Classification (e.g., sentiment analysis, topic categorization)
- Named Entity Recognition (NER)
- Question Answering
- Machine Translation (as an encoder)
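As an illustration of the text-classification use case, the sketch below fine-tunes the checkpoint with the Hugging Face Trainer. The toy dataset, the two-label setup, and the training settings are assumptions for demonstration only, not part of this release.

from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical labelled Tangkhul sentences; replace with a real dataset.
data = Dataset.from_dict({
    "text": ["sentence one", "sentence two"],
    "label": [0, 1],
})

tokenizer = AutoTokenizer.from_pretrained("vinshim/TangkhulBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "vinshim/TangkhulBERT", num_labels=2
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tangkhulbert-classifier", num_train_epochs=3),
    train_dataset=data,
)
trainer.train()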
Out-of-Scope Use
This model is not intended for generating long-form, coherent text. Due to the limited size of the training corpus, it should not be used in safety-critical applications or for tasks requiring deep, nuanced world knowledge. The model only understands Tangkhul and will not perform well on other languages.
Bias, Risks, and Limitations
The primary limitation is the size of the pre-training corpus (4 MB). While significant for a low-resource language, this is small compared to models for high-resource languages. The model will reflect any biases present in the source text data. Its knowledge is confined to the domains covered in the training corpus and may not generalize well to other contexts.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
How to Get Started with the Model
Use the code below to get started with the model for Masked Language Modeling.

from transformers import pipeline, AutoTokenizer, AutoModelForMaskedLM

# Replace with your Hugging Face username and repo name
repo_id = "vinshim/TangkhulBERT"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMaskedLM.from_pretrained(repo_id)

# Create the pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Use the model
result = fill_mask("Kazing eina ngalei [MASK].")
print(result)
Training Details
Training Data
The model was pre-trained on a 4 MB plain-text corpus of the Tangkhul language, collected from various digital sources. This data is not available for download but can be described as general-purpose text.
Training Procedure
Preprocessing [optional]
The text was preprocessed by the following steps (a minimal sketch of this pipeline follows the list):
1. Converting all text to lowercase.
2. Ensuring a sentence-per-line format.
3. Programmatically adding a full stop (.) to every line that lacked sentence-ending punctuation.
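The three steps above can be reproduced with a few lines of standard Python. The sketch below is an illustration that assumes a single UTF-8 plain-text input file (the file names are hypothetical); the exact script used for this release has not been published.

# Minimal sketch of the described preprocessing, assuming one raw UTF-8 text file.
SENTENCE_ENDINGS = (".", "!", "?")

with open("tangkhul_raw.txt", encoding="utf-8") as src, \
        open("tangkhul_clean.txt", "w", encoding="utf-8") as dst:
    for raw_line in src:
        line = raw_line.strip().lower()       # step 1: lowercase
        if not line:
            continue                          # keep one sentence per line, skip blanks
        if not line.endswith(SENTENCE_ENDINGS):
            line += "."                       # step 3: add a full stop if missing
        dst.write(line + "\n")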
Training Hyperparameters
- Training regime: fp16 mixed precision
- Epochs: 500
- Batch size: 128
- Optimizer: AdamW with default settings
- Learning rate: 5e-5 (see the configuration sketch below)
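These settings correspond roughly to the Hugging Face TrainingArguments shown below. This is a reconstruction for reference, assuming the standard Trainer API; the output directory and any logging or saving behaviour are assumptions, not details of the original run.

from transformers import TrainingArguments

# Approximate reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="tangkhulbert-pretraining",  # assumed; not reported
    num_train_epochs=500,
    per_device_train_batch_size=128,
    learning_rate=5e-5,
    fp16=True,  # mixed-precision training
)
# AdamW with default settings is the Trainer's default optimizer.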
Speeds, Sizes, Times [optional]
The pre-training was conducted over approximately 3 hours on a single NVIDIA A100 GPU.
Evaluation
Testing Data, Factors & Metrics
Testing Data
[More Information Needed]
Factors
[More Information Needed]
Metrics
The primary evaluation metric during pre-training was the training loss, i.e. the cross-entropy on the Masked Language Modeling objective, from which a (pseudo-)perplexity can be derived as exp(loss).
Results
The model achieved a final pre-training loss of 2.9969 after 22,000 training steps.
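For reference, the reported loss can be converted to a pseudo-perplexity via the standard identity perplexity = exp(loss); the figure below is derived from the reported loss, not separately measured.

import math

final_loss = 2.9969                # reported final MLM training loss
perplexity = math.exp(final_loss)  # ≈ 20.0
print(f"Approximate pseudo-perplexity: {perplexity:.1f}")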
Summary
Model Examination [optional]
[More Information Needed]
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: 1x NVIDIA A100 GPU
- Hours used: ~3
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
Technical Specifications [optional]
Model Architecture and Objective
BERT-base transformer encoder, pre-trained from scratch with a Masked Language Modeling (MLM) objective.
Compute Infrastructure
[More Information Needed]
Hardware
Single NVIDIA A100 GPU.
Software
[More Information Needed]
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Model Card Authors [optional]
[More Information Needed]
Model Card Contact
[More Information Needed]