pipeline_tag: sentence-similarity
---
DistilBERT encoder model trained on the Amazon product-to-product recommendation dataset (LF-AmazonTitles-1.3M) using the [DEXML](https://github.com/nilesh2797/DEXML) ([Dual Encoder for eXtreme Multi-Label classification, ICLR'24](https://arxiv.org/pdf/2310.10636v2.pdf)) method.
## Inference Usage (Sentence-Transformers)
With `sentence-transformers` installed (`pip install -U sentence-transformers`), you can use this model as follows:
```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('quicktensor/dexml_lf-amazontitles-1.3m')
embeddings = model.encode(sentences)
print(embeddings)
```
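Since DEXML is a dual encoder trained for product-to-product recommendation, a natural downstream use is ranking candidate product titles against a query title by embedding similarity. A minimal sketch (the titles below are made-up placeholders, not items from LF-AmazonTitles-1.3M):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('quicktensor/dexml_lf-amazontitles-1.3m')

# Hypothetical product titles, for illustration only
query = "Wireless Bluetooth Over-Ear Headphones"
candidates = [
    "Noise Cancelling Bluetooth Headset",
    "USB-C Fast Charging Cable",
    "Non-Slip Yoga Mat",
]

query_emb = model.encode(query, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity between the query and each candidate; higher = more related
scores = util.cos_sim(query_emb, cand_embs)
print(scores)
```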
## Usage (HuggingFace Transformers)
With Hugging Face `transformers` you only need to be a bit careful with how you pool the transformer output to get the embedding; you can use this model as follows:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Take the [CLS] token embedding and L2-normalize it
pooler = lambda output: F.normalize(output.last_hidden_state[:, 0, :], dim=-1)

sentences = ["This is an example sentence", "Each sentence is converted"]

tokenizer = AutoTokenizer.from_pretrained('quicktensor/dexml_lf-amazontitles-1.3m')
model = AutoModel.from_pretrained('quicktensor/dexml_lf-amazontitles-1.3m')

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    embeddings = pooler(model(**encoded_input))

print(embeddings)
```
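Because the pooled embeddings are unit-normalized, a plain matrix product already gives cosine similarities; continuing the snippet above:

```python
# Dot products of unit-normalized embeddings are cosine similarities
scores = embeddings @ embeddings.T
print(scores)  # pairwise similarity between the two example sentences
```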
## Cite
If you found this model helpful, please cite our work as:
```bibtex
@InProceedings{DEXML,
  author    = "Gupta, N. and Khatri, D. and Rawat, A-S. and Bhojanapalli, S. and Jain, P. and Dhillon, I.",
  title     = "Dual-encoders for Extreme Multi-label Classification",
  booktitle = "International Conference on Learning Representations",
  month     = "May",
  year      = "2024"
}
```