Commit 481ce5e · 1 parent: d6ba0ed
Update README.md

README.md CHANGED
@@ -23,14 +23,14 @@ For quick start please have a look this [demo](https://github.com/ahmedssabir/Te
 (3) semantic relatedness score as soft-label: to guarantee the visual context and caption have a strong
 relation. In particular, we use Sentence-RoBERTa-sts via cosine similarity to give a soft score, and then
 we use a threshold to annotate the final label (if th ≥ 0.2, 0.3, 0.4 then 1,0). Finally, to take advantage
-of the visual overlap between caption and visual context, and to extract global information, we use BERT followed by a shallow CNN (
+of the visual overlap between caption and visual context, and to extract global information, we use BERT followed by a shallow 1D-CNN (Kim, 2014)
 to estimate the visual relatedness score.
 
-
+
 
 <!--
 ## Dataset
-
+(<a href="https://arxiv.org/abs/1408.5882">Kim, 2014</a>)
 ### Sample
 
 ```
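The updated paragraph describes how the soft label is produced: Sentence-RoBERTa-sts scores each caption/visual-context pair via cosine similarity, and a threshold (0.2, 0.3, or 0.4) turns the soft score into a binary label. Below is a minimal sketch of that step, assuming the sentence-transformers library with an STS-tuned RoBERTa checkpoint; the model name and example strings are illustrative assumptions, not taken from this repository.

```python
# Hedged sketch: soft-label annotation via cosine similarity over
# Sentence-RoBERTa-sts embeddings. Model name and example texts are
# assumptions for illustration, not values from this repository.
from sentence_transformers import SentenceTransformer, util

# Any STS-tuned RoBERTa checkpoint works here; "stsb-roberta-large" is assumed.
model = SentenceTransformer("stsb-roberta-large")

caption = "a man riding a wave on top of a surfboard"
visual_context = "surfboard"

# Cosine similarity between caption and visual context gives the soft score.
emb = model.encode([caption, visual_context], convert_to_tensor=True)
soft_score = util.cos_sim(emb[0], emb[1]).item()

# Threshold the soft score into the final hard label (th = 0.2, 0.3, or 0.4).
th = 0.3
label = 1 if soft_score >= th else 0
print(f"soft score: {soft_score:.3f}, label: {label}")
```

This only covers the labeling step; the visual relatedness score itself is estimated by the BERT + shallow 1D-CNN (Kim, 2014) model referenced in the diff.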