Tags:
- Sentence Similarity
- sentence-transformers
- Safetensors
- English
- feature-extraction
- dense
- Generated from Trainer
- dataset_size:1375067
- loss:MultipleNegativesRankingLoss
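The tags note that the model was trained with MultipleNegativesRankingLoss on roughly 1.37M pairs. As a rough illustration (not the sentence-transformers implementation), that loss treats each anchor's matching positive as the correct "class" in a softmax over the batch, so every other positive acts as an in-batch negative. A minimal NumPy sketch of that idea, using made-up stand-in embeddings:

```python
import numpy as np

def in_batch_negatives_loss(anchor_emb, positive_emb, scale=20.0):
    """Illustrative in-batch-negatives loss: row i's positive is the
    correct match for anchor i; all other rows serve as negatives."""
    # cosine similarity matrix (rows: anchors, cols: positives)
    a = anchor_emb / np.linalg.norm(anchor_emb, axis=1, keepdims=True)
    p = positive_emb / np.linalg.norm(positive_emb, axis=1, keepdims=True)
    scores = scale * (a @ p.T)
    # cross-entropy with the diagonal as the target class
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-log_probs.diagonal().mean())

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
near_duplicates = anchors + 0.01 * rng.normal(size=(4, 8))  # good positives
loss_matched = in_batch_negatives_loss(anchors, near_duplicates)
loss_random = in_batch_negatives_loss(anchors, rng.normal(size=(4, 8)))
print(loss_matched, loss_random)  # matched pairs yield the lower loss
```

Minimizing this pushes each anchor toward its positive and away from the rest of the batch, which is why larger batches effectively supply harder training signal.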
Instructions for using kamp0010/test with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- sentence-transformers
How to use kamp0010/test with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("kamp0010/test")
sentences = [
    "Modify the inner parameters of the Kepler propagator in order to place\n the spacecraft in the right Sphere of Influence",
    "func (c *Conn) SetDeadline(t time.Time) error {\n\treturn c.p.SetDeadline(t)\n}",
    "def _change_soi(self, body):\n \n\n if body == self.central:\n self.bodies = [self.central]\n self.step = self.central_step\n self.active = self.central.name\n self.frame = self.central.name\n else:\n soi = self.SOI[body.name]\n self.bodies = [body]\n self.step = self.alt_step\n self.active = body.name\n self.frame = soi.frame",
    "def main(args=None):\n \"\"\"\"\"\"\n parser = _parser()\n\n # Python 2 will error 'too few arguments' if no subcommand is supplied.\n # No such error occurs in Python 3, which makes it feasible to check\n # whether a subcommand was provided (displaying a help message if not).\n # argparse internals vary significantly over the major versions, so it's\n # much easier to just override the args passed to it. In this case, print\n # the usage message if there are no args.\n if args is None and len(sys.argv) <= 1:\n sys.argv.append('--help')\n\n options = parser.parse_args(args)\n\n # pass options to subcommand\n options.func(options)\n\n return 0",
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```
- Notebooks
- Google Colab
- Kaggle
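Beyond the pairwise similarity matrix above, the same embeddings support retrieval: rank a corpus by cosine similarity to a query embedding. A minimal sketch with stand-in vectors (in practice, substitute the output of `model.encode(...)`):

```python
import numpy as np

def top_matches(query_vec, corpus_vecs, k=2):
    """Return the indices and scores of the k corpus rows most
    cosine-similar to the query."""
    # after normalization, cosine similarity reduces to a dot product
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = c @ q
    order = np.argsort(-sims)[:k]
    return order, sims[order]

# stand-in embeddings; real ones come from model.encode(sentences)
corpus = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
query = np.array([1.0, 0.05])
idx, scores = top_matches(query, corpus)
print(idx)  # nearest rows first
```

For large corpora you would typically precompute and cache the normalized corpus embeddings rather than renormalizing per query.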
Special tokens map (695 Bytes, commit d7d31ca):

```json
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
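These entries pin the BERT-style marker tokens the tokenizer wraps inputs with. A small illustrative sketch (not part of the model's own code) parsing a subset of the map and framing a sentence the way a BERT-style tokenizer would:

```python
import json

# subset of the special tokens map shown above
raw = '''{
  "cls_token": {"content": "[CLS]", "lstrip": false, "normalized": false,
                "rstrip": false, "single_word": false},
  "sep_token": {"content": "[SEP]", "lstrip": false, "normalized": false,
                "rstrip": false, "single_word": false}
}'''
tokens = {name: spec["content"] for name, spec in json.loads(raw).items()}

# BERT-style input framing: [CLS] sentence tokens [SEP]
wrapped = f'{tokens["cls_token"]} hello world {tokens["sep_token"]}'
print(wrapped)  # [CLS] hello world [SEP]
```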