Sentence Similarity

Tags: sentence-transformers, Safetensors, bert, feature-extraction, Generated from Trainer, dataset_size:101540, loss:MultipleNegativesRankingLoss, text-embeddings-inference
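The tags indicate the model was produced with the sentence-transformers Trainer on 101,540 pairs using MultipleNegativesRankingLoss. A minimal sketch of that training setup, assuming a LaBSE base checkpoint and Udmurt-Russian parallel pairs (both are assumptions; the card states only the loss, dataset size, and Trainer provenance):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Assumed base model; the card does not name the starting checkpoint
model = SentenceTransformer("sentence-transformers/LaBSE")

# Hypothetical parallel data: (Udmurt anchor, Russian positive) pairs
train_dataset = Dataset.from_dict({
    "anchor": ["Пилэн пытьыез ышиз."],
    "positive": ["Следы мальчика потеряны."],
})

# In-batch negatives: every other positive in the batch serves as a
# negative for each anchor, so only aligned pairs are required
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```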
Instructions for using lingtrain/labse-udmurt with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - sentence-transformers
How to use lingtrain/labse-udmurt with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lingtrain/labse-udmurt")

# Udmurt and Russian example sentences
sentences = [
    "Пилэн пытьыез ышиз.",
    "— А знаете, ребята?",
    "Следы мальчика потеряны.",
    "— Ты прости меня, — иначе нельзя!",
]
embeddings = model.encode(sentences)

# Pairwise cosine similarities between all four sentences
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```

- Notebooks
  - Google Colab
  - Kaggle
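Beyond the snippet above, a common use is cross-lingual retrieval: embed an Udmurt query and a set of Russian candidates, then rank the candidates by similarity. A hedged sketch, reusing the sentences from the example above (the retrieval framing is illustrative, not part of the card):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lingtrain/labse-udmurt")

# Udmurt query and Russian candidate sentences from the example above
query = "Пилэн пытьыез ышиз."
candidates = [
    "Следы мальчика потеряны.",
    "— А знаете, ребята?",
    "— Ты прости меня, — иначе нельзя!",
]

query_emb = model.encode([query])
cand_embs = model.encode(candidates)

# similarity() returns a [1, len(candidates)] tensor of cosine scores
scores = model.similarity(query_emb, cand_embs)
best = int(scores.argmax())
print(candidates[best], float(scores[0, best]))
```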
The tokenizer configuration (tokenizer_config.json):

```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": false,
  "full_tokenizer_file": null,
  "mask_token": "[MASK]",
  "model_max_length": 256,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
```
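Since the config declares a cased BertTokenizer with a 256-token limit and the standard BERT special tokens, loading the tokenizer directly shows those settings in effect. A small check (the exact token output depends on the vocabulary, so the comments are illustrative):

```python
from transformers import AutoTokenizer

# Loads the tokenizer described by the config above
# (BertTokenizer, cased, model_max_length = 256)
tokenizer = AutoTokenizer.from_pretrained("lingtrain/labse-udmurt")

encoded = tokenizer("Пилэн пытьыез ышиз.", truncation=True)
tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"])

print(tokens[0], tokens[-1])       # [CLS] [SEP], the special tokens above
print(tokenizer.model_max_length)  # 256
```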