bert-sr-small

A small-size BERT language model pre-trained with a shuffle + random objective. For more details about the pre-training objective and the pre-training hyperparameters, please refer to the paper "How does the pre-training objective affect what large language models learn about linguistic properties?" (Alajrami and Aletras, ACL 2022).

How to use aajrami/bert-sr-small with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="aajrami/bert-sr-small")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("aajrami/bert-sr-small")
model = AutoModel.from_pretrained("aajrami/bert-sr-small")
```
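For reference, the feature-extraction pipeline returns one contextual embedding per token as nested Python lists. A minimal sketch of inspecting its output (the example sentence is illustrative, not from the model card):

```python
from transformers import pipeline

pipe = pipeline("feature-extraction", model="aajrami/bert-sr-small")

# Output is nested lists shaped [batch][tokens][hidden_size]
features = pipe("The pre-training objective affects what models learn.")
print(len(features[0]))     # number of tokens, including special tokens
print(len(features[0][0]))  # hidden size of the model
```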
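When loaded directly, the model returns per-token hidden states that can be pooled into a fixed-size sentence representation. A minimal sketch, assuming masked mean pooling over the last hidden state (an illustrative choice, not something the model card prescribes):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("aajrami/bert-sr-small")
model = AutoModel.from_pretrained("aajrami/bert-sr-small")

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Per-token contextual embeddings: (batch, seq_len, hidden_size)
token_embeddings = outputs.last_hidden_state

# Masked mean pooling to get one vector per sentence
mask = inputs["attention_mask"].unsqueeze(-1)
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # (1, hidden_size)
```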
License
CC BY 4.0
Citation
If you use this model, please cite the following paper:
```bibtex
@inproceedings{alajrami2022does,
  title = {How does the pre-training objective affect what large language models learn about linguistic properties?},
  author = {Alajrami, Ahmed and Aletras, Nikolaos},
  booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
  pages = {131--147},
  year = {2022}
}
```