This is a sentence-transformers model finetuned from intfloat/multilingual-e5-small. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Full model architecture:
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
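
Because the mean-pooled, 384-dimensional output passes through the final `Normalize()` module, every embedding has unit length, so the dot product of two embeddings equals their cosine similarity. A small check of this, as a sketch assuming sentence-transformers and numpy are installed (installation is covered below); the two sentences are arbitrary examples:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("srikarvar/e-small-triplet-balanced")

# Any two short texts work here; these are arbitrary examples
emb = model.encode(["How do I reset my password?", "Steps to reset password"])

print(emb.shape)                     # (2, 384): one 384-dimensional vector per sentence
print(np.linalg.norm(emb, axis=1))   # ~[1. 1.]: Normalize() makes the vectors unit-length
print(float(emb[0] @ emb[1]))        # dot product == cosine similarity for unit vectors
```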
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/e-small-triplet-balanced")

# Run inference
sentences = [
    "After marriage, why do women have to change their surnames to their husband’s? Why can't they keep their maiden ones?",
    'After marriage, why do women have to change their surname?',
    'Is it possible for an Indian woman not to change her surname after marriage?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
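
The same embeddings support semantic search: encode a query and a corpus, then rank corpus entries by similarity. A minimal sketch using only `model.encode` and `model.similarity`; the corpus and query strings are illustrative:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("srikarvar/e-small-triplet-balanced")

# Illustrative corpus and query
corpus = [
    "How do I reset my password?",
    "What are the ingredients of a pizza?",
    "How does photosynthesis work?",
]
query = "Steps to reset password"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity of the query against every corpus entry, shape [1, len(corpus)]
scores = model.similarity(query_embedding, corpus_embeddings)

# Print corpus entries from most to least similar
ranking = scores[0].argsort(descending=True).tolist()
for idx in ranking:
    print(f"{scores[0][idx].item():.4f}  {corpus[idx]}")
```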
Evaluation on the `triplet-validation` dataset with the `TripletEvaluator`:

| Metric | Value |
|---|---|
| cosine_accuracy | 0.9917 |
| dot_accuracy | 0.0083 |
| manhattan_accuracy | 0.9917 |
| euclidean_accuracy | 0.9917 |
| max_accuracy | 0.9917 |
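
Metrics like these come from the `TripletEvaluator`, which counts how often the anchor embedding is closer to the positive than to the negative under each distance function. A sketch of running it on a handful of made-up triples (not the actual triplet-validation split):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("srikarvar/e-small-triplet-balanced")

# Tiny illustrative triples; the real triplet-validation split is larger
anchors   = ["How do I reset my password?", "How does photosynthesis work?"]
positives = ["Steps to reset password", "Explain the process of photosynthesis"]
negatives = ["How do I change my username?", "How does respiration work?"]

evaluator = TripletEvaluator(
    anchors=anchors,
    positives=positives,
    negatives=negatives,
    name="triplet-validation",
)

# Returns accuracy metrics (a dict in recent sentence-transformers versions,
# a single accuracy value in older ones)
results = evaluator(model)
print(results)
```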
Training dataset: triples of anchor, positive, and negative questions (all strings). Sample rows:

| anchor | positive | negative |
|---|---|---|
| What are the ingredients of a pizza? | ingredients of pizza? | What are the ingredients of a burger? |
| How does photosynthesis work? | Explain the process of photosynthesis | How does respiration work? |
| How do I reset my password? | Steps to reset password | How do I change my username? |
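
Rows like these are typically held in a Hugging Face `datasets.Dataset` whose columns match what the loss expects (anchor, positive, negative). A hypothetical sketch, reusing the sample rows above:

```python
from datasets import Dataset

# Hypothetical in-memory triplet dataset with the same column layout as above
train_dataset = Dataset.from_dict({
    "anchor":   ["What are the ingredients of a pizza?", "How do I reset my password?"],
    "positive": ["ingredients of pizza?", "Steps to reset password"],
    "negative": ["What are the ingredients of a burger?", "How do I change my username?"],
})
print(train_dataset)  # Dataset({features: ['anchor', 'positive', 'negative'], num_rows: 2})
```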
Loss: TripletLoss with these parameters:

```json
{
    "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
    "triplet_margin": 5
}
```
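
With Euclidean distance and a margin of 5, TripletLoss penalizes a triple unless the anchor is at least 5 units closer to the positive than to the negative, i.e. loss = max(0, d(anchor, positive) - d(anchor, negative) + 5). A sketch of constructing the same loss in sentence-transformers (the base model name is used here only as a stand-in for the model being fine-tuned):

```python
from sentence_transformers import SentenceTransformer, losses

# Stand-in for the model being fine-tuned
model = SentenceTransformer("intfloat/multilingual-e5-small")

# Matches the parameters above: Euclidean distance, margin of 5
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)
```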
Evaluation dataset: triples with the same anchor, positive, and negative string columns. Sample rows:

| anchor | positive | negative |
|---|---|---|
| What is the best way to learn a new language? | How can I effectively learn a new language? | What is the fastest way to travel? |
| Can people actively control their emotions? | Does our mind control our emotions? | How can I control my positive emotions for the people whom I love but they don't care about me? |
| Which can be the best laptop under 30000? | which laptop will be best under Rs 30,000? | What is the best phone to buy under 30000 in India? |
Loss: TripletLoss with these parameters:

```json
{
    "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
    "triplet_margin": 5
}
```
The following non-default hyperparameters were used:

- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- gradient_accumulation_steps: 2
- learning_rate: 3e-05
- weight_decay: 0.01
- num_train_epochs: 8
- lr_scheduler_type: reduce_lr_on_plateau
- warmup_ratio: 0.1
- load_best_model_at_end: True
- optim: adamw_torch_fused

All hyperparameters, including defaults:

```
overwrite_output_dir: False
do_predict: False
eval_strategy: epoch
prediction_loss_only: True
per_device_train_batch_size: 32
per_device_eval_batch_size: 32
per_gpu_train_batch_size: None
per_gpu_eval_batch_size: None
gradient_accumulation_steps: 2
eval_accumulation_steps: None
learning_rate: 3e-05
weight_decay: 0.01
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1.0
num_train_epochs: 8
max_steps: -1
lr_scheduler_type: reduce_lr_on_plateau
lr_scheduler_kwargs: {}
warmup_ratio: 0.1
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
save_safetensors: True
save_on_each_node: False
save_only_model: False
restore_callback_states_from_checkpoint: False
no_cuda: False
use_cpu: False
use_mps_device: False
seed: 42
data_seed: None
jit_mode_eval: False
use_ipex: False
bf16: False
fp16: False
fp16_opt_level: O1
half_precision_backend: auto
bf16_full_eval: False
fp16_full_eval: False
tf32: None
local_rank: 0
ddp_backend: None
tpu_num_cores: None
tpu_metrics_debug: False
debug: []
dataloader_drop_last: False
dataloader_num_workers: 0
dataloader_prefetch_factor: None
past_index: -1
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: True
ignore_data_skip: False
fsdp: []
fsdp_min_num_params: 0
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
fsdp_transformer_layer_cls_to_wrap: None
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch_fused
optim_args: None
adafactor: False
group_by_length: False
length_column_name: length
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: False
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
use_legacy_prediction_loop: False
push_to_hub: False
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: False
hub_always_push: False
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_inputs_for_metrics: False
eval_do_concat_batches: True
fp16_backend: auto
push_to_hub_model_id: None
push_to_hub_organization: None
mp_parameters: 
auto_find_batch_size: False
full_determinism: False
torchdynamo: None
ray_scope: last
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
dispatch_batches: None
split_batches: None
include_tokens_per_second: False
include_num_input_tokens_seen: False
neftune_noise_alpha: None
optim_target_modules: None
batch_eval_metrics: False
batch_sampler: batch_sampler
multi_dataset_batch_sampler: proportional
```

Training logs:

| Epoch | Step | Training Loss | Validation Loss | triplet-validation_max_accuracy |
|---|---|---|---|---|
| 0.5263 | 10 | 4.8459 | - | - |
| 1.0 | 19 | - | 4.4155 | - |
| 1.0526 | 20 | 4.7205 | - | - |
| 1.5789 | 30 | 4.5948 | - | - |
| 2.0 | 38 | - | 4.2163 | - |
| 2.1053 | 40 | 4.5125 | - | - |
| 2.6316 | 50 | 4.4761 | - | - |
| 3.0 | 57 | - | 4.1338 | - |
| 3.1579 | 60 | 4.452 | - | - |
| 3.6842 | 70 | 4.4082 | - | - |
| 4.0 | 76 | - | 4.0659 | - |
| 4.2105 | 80 | 4.3978 | - | - |
| 4.7368 | 90 | 4.3495 | - | - |
| 5.0 | 95 | - | 4.0202 | - |
| 5.2632 | 100 | 4.287 | - | - |
| 5.7895 | 110 | 4.2805 | - | - |
| 6.0 | 114 | - | 3.9441 | - |
| 6.3158 | 120 | 4.2631 | - | - |
| 6.8421 | 130 | 4.213 | - | - |
| 7.0 | 133 | - | 3.8866 | - |
| 7.3684 | 140 | 4.1921 | - | - |
| 7.8947 | 150 | 4.1854 | - | - |
| 8.0 | 152 | - | 3.8757 | 0.9917 |
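
For reference, a comparable fine-tuning run could be wired up roughly as follows with the sentence-transformers v3 trainer API. This is a sketch, not the script used to train this model: the tiny inline datasets, the output_dir, and the save_strategy setting are assumptions, while the remaining arguments mirror the non-default hyperparameters listed above.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Tiny stand-in datasets with the anchor/positive/negative columns shown earlier
train_dataset = Dataset.from_dict({
    "anchor":   ["What are the ingredients of a pizza?", "How do I reset my password?"],
    "positive": ["ingredients of pizza?", "Steps to reset password"],
    "negative": ["What are the ingredients of a burger?", "How do I change my username?"],
})
eval_dataset = Dataset.from_dict({
    "anchor":   ["What is the best way to learn a new language?"],
    "positive": ["How can I effectively learn a new language?"],
    "negative": ["What is the fastest way to travel?"],
})

model = SentenceTransformer("intfloat/multilingual-e5-small")

loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

args = SentenceTransformerTrainingArguments(
    output_dir="e-small-triplet-balanced",  # placeholder output directory
    eval_strategy="epoch",
    save_strategy="epoch",                  # assumed; load_best_model_at_end needs matching strategies
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    learning_rate=3e-5,
    weight_decay=0.01,
    num_train_epochs=8,
    lr_scheduler_type="reduce_lr_on_plateau",
    warmup_ratio=0.1,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",              # fused AdamW requires a CUDA device
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```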
Citation for Sentence Transformers:

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
Citation for TripletLoss:

```bibtex
@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
Base model: intfloat/multilingual-e5-small