Schema (one record per row, fields in this order):

modelId: string, length 4-112
sha: string, length 40
lastModified: string, length 24
tags: list
pipeline_tag: string, 29 classes
private: bool, 1 class
author: string, length 2-38
config: null
id: string, length 4-112
downloads: float64, range 0-36.8M
likes: float64, range 0-712
library_name: string, 17 classes
__index_level_0__: int64, range 0-38.5k
readme: string, length 0-186k
hfl/chinese-macbert-base
a986e004d2a7f2a1c2f5a3edef4e20604a974ed1
2021-05-19T19:09:45.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:2004.13922", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
hfl
null
hfl/chinese-macbert-base
36,823,840
43
transformers
0
--- language: - zh tags: - bert license: "apache-2.0" --- <p align="center"> <br> <img src="https://github.com/ymcui/MacBERT/raw/master/pics/banner.png" width="500"/> <br> </p> <p align="center"> <a href="https://github.com/ymcui/MacBERT/blob/master/LICENSE"> <img alt="GitHub" src="https://img....
microsoft/deberta-base
7d4c0126b06bd59dccd3e48e467ed11e37b77f3f
2022-01-13T13:56:18.000Z
[ "pytorch", "tf", "rust", "deberta", "en", "arxiv:2006.03654", "transformers", "deberta-v1", "license:mit" ]
null
false
microsoft
null
microsoft/deberta-base
23,662,412
15
transformers
1
--- language: en tags: deberta-v1 thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. It...
bert-base-uncased
418430c3b5df7ace92f2aede75700d22c78a0f95
2022-06-06T11:41:24.000Z
[ "pytorch", "tf", "jax", "rust", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
bert-base-uncased
22,268,934
204
transformers
2
--- language: en tags: - exbert license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](http...
gpt2
6c0e6080953db56375760c0471a8c5f2929baf11
2021-05-19T16:25:59.000Z
[ "pytorch", "tf", "jax", "tflite", "rust", "gpt2", "text-generation", "en", "transformers", "exbert", "license:mit" ]
text-generation
false
null
null
gpt2
11,350,803
164
transformers
3
--- language: en tags: - exbert license: mit --- # GPT-2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better...
distilbert-base-uncased
043235d6088ecd3dd5fb5ca3592b6913fd516027
2022-05-31T19:08:36.000Z
[ "pytorch", "tf", "jax", "rust", "distilbert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
distilbert-base-uncased
11,250,037
70
transformers
4
--- language: en tags: - exbert license: apache-2.0 datasets: - bookcorpus - wikipedia --- # DistilBERT base model (uncased) This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the disti...
Jean-Baptiste/camembert-ner
dbec8489a1c44ecad9da8a9185115bccabd799fe
2022-04-04T01:13:33.000Z
[ "pytorch", "camembert", "token-classification", "fr", "dataset:Jean-Baptiste/wikiner_fr", "transformers", "autotrain_compatible" ]
token-classification
false
Jean-Baptiste
null
Jean-Baptiste/camembert-ner
9,833,060
11
transformers
5
--- language: fr datasets: - Jean-Baptiste/wikiner_fr widget: - text: "Je m'appelle jean-baptiste et je vis à montréal" - text: "george washington est allé à washington" --- # camembert-ner: model fine-tuned from camemBERT for NER task. ## Introduction [camembert-ner] is a NER model that was fine-tuned from camemBER...
bert-base-cased
a8d257ba9925ef39f3036bfc338acf5283c512d9
2021-09-06T08:07:18.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
bert-base-cased
7,598,326
30
transformers
6
--- language: en tags: - exbert license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT base model (cased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https:...
roberta-base
251c3c36356d3ad6845eb0554fdb9703d632c6cc
2021-07-06T10:34:50.000Z
[ "pytorch", "tf", "jax", "rust", "roberta", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1907.11692", "arxiv:1806.02847", "transformers", "exbert", "license:mit", "autotrain_compatible" ]
fill-mask
false
null
null
roberta-base
7,254,067
45
transformers
7
--- language: en tags: - exbert license: mit datasets: - bookcorpus - wikipedia --- # RoBERTa base model Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](https://github.com...
SpanBERT/spanbert-large-cased
a49cba45de9565a5d3e7b089a94dbae679e64e79
2021-05-19T11:31:33.000Z
[ "pytorch", "jax", "bert", "transformers" ]
null
false
SpanBERT
null
SpanBERT/spanbert-large-cased
7,120,559
3
transformers
8
Entry not found
xlm-roberta-base
f6d161e8f5f6f2ed433fb4023d6cb34146506b3f
2022-06-06T11:40:43.000Z
[ "pytorch", "tf", "jax", "xlm-roberta", "fill-mask", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha"...
fill-mask
false
null
null
xlm-roberta-base
6,960,013
42
transformers
9
--- tags: - exbert language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn ...
distilbert-base-uncased-finetuned-sst-2-english
00c3f1ef306e837efb641eaca05d24d161d9513c
2022-07-22T08:00:55.000Z
[ "pytorch", "tf", "rust", "distilbert", "text-classification", "en", "dataset:sst2", "dataset:glue", "transformers", "license:apache-2.0", "model-index" ]
text-classification
false
null
null
distilbert-base-uncased-finetuned-sst-2-english
5,401,984
77
transformers
10
--- language: en license: apache-2.0 datasets: - sst2 - glue model-index: - name: distilbert-base-uncased-finetuned-sst-2-english results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: sst2 split: validation metrics: ...
distilroberta-base
c1149320821601524a8d373726ed95bbd2bc0dc2
2022-07-22T08:13:21.000Z
[ "pytorch", "tf", "jax", "rust", "roberta", "fill-mask", "en", "dataset:openwebtext", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
distilroberta-base
5,192,102
21
transformers
11
--- language: en tags: - exbert license: apache-2.0 datasets: - openwebtext --- # Model Card for DistilRoBERTa base # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluat...
distilgpt2
ca98be8f8f0994e707b944a9ef55e66fbcf9e586
2022-07-22T08:12:56.000Z
[ "pytorch", "tf", "jax", "tflite", "rust", "gpt2", "text-generation", "en", "dataset:openwebtext", "arxiv:1910.01108", "arxiv:2201.08542", "arxiv:2203.12574", "arxiv:1910.09700", "arxiv:1503.02531", "transformers", "exbert", "license:apache-2.0", "model-index", "co2_eq_emissions" ...
text-generation
false
null
null
distilgpt2
4,525,173
77
transformers
12
--- language: en tags: - exbert license: apache-2.0 datasets: - openwebtext model-index: - name: distilgpt2 results: - task: type: text-generation name: Text Generation dataset: type: wikitext name: WikiText-103 metrics: - type: perplexity name: Perplexity ...
cross-encoder/ms-marco-MiniLM-L-12-v2
97f7dcbdd6ab58fe7f44368c795fc5200b48fcbe
2021-08-05T08:39:01.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
cross-encoder
null
cross-encoder/ms-marco-MiniLM-L-12-v2
3,951,063
10
transformers
13
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query will all possible passages (e.g. retrieved with ElasticSearch)....
albert-base-v2
51dbd9db43a0c6eba97f74b91ce26fface509e0b
2021-08-30T12:04:48.000Z
[ "pytorch", "tf", "jax", "rust", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
albert-base-v2
3,862,051
15
transformers
14
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # ALBERT Base v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-rese...
bert-base-chinese
38fda776740d17609554e879e3ac7b9837bdb5ee
2022-07-22T08:09:06.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "transformers", "autotrain_compatible" ]
fill-mask
false
null
null
bert-base-chinese
3,660,463
107
transformers
15
--- language: zh --- # Bert-base-chinese ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [How to Get Started With the Model](#how-to-get-started-with-the-model) # Model Detai...
bert-base-multilingual-cased
aff660c4522e466f4d0de19eaf94f91e4e2e7375
2021-05-18T16:18:16.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "multilingual", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
bert-base-multilingual-cased
3,089,919
40
transformers
16
--- language: multilingual license: apache-2.0 datasets: - wikipedia --- # BERT multilingual base model (cased) Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released ...
xlm-roberta-large-finetuned-conll03-english
33a83d9855a119c0453ce450858c07835a0bdbed
2022-07-22T08:04:08.000Z
[ "pytorch", "rust", "xlm-roberta", "token-classification", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", ...
token-classification
false
null
null
xlm-roberta-large-finetuned-conll03-english
2,851,282
23
transformers
17
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my ...
tals/albert-xlarge-vitaminc-mnli
4c79eb5353f6104eb148d9221560c913f45677c7
2022-06-24T01:33:47.000Z
[ "pytorch", "tf", "albert", "text-classification", "python", "dataset:fever", "dataset:glue", "dataset:multi_nli", "dataset:tals/vitaminc", "transformers" ]
text-classification
false
tals
null
tals/albert-xlarge-vitaminc-mnli
2,529,752
null
transformers
18
--- language: python datasets: - fever - glue - multi_nli - tals/vitaminc --- # Details Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 21`). For more details see: https://github.com/TalSchuster/VitaminC When ...
bert-large-uncased
3835a195d41f7ddc47d5ecab84b64f71d6f144e9
2021-05-18T16:40:29.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
bert-large-uncased
2,362,221
9
transformers
19
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT large model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com...
valhalla/t5-small-qa-qg-hl
a9d81e686f2169360fd59d8329235d3c4ba74f4f
2021-06-23T14:42:41.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "dataset:squad", "arxiv:1910.10683", "transformers", "question-generation", "license:mit", "autotrain_compatible" ]
text2text-generation
false
valhalla
null
valhalla/t5-small-qa-qg-hl
2,171,047
5
transformers
20
--- datasets: - squad tags: - question-generation widget: - text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>" - text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>" license: mit --- ## T5 for multi-task QA and QG This is multi-...
google/t5-v1_1-xl
a9e51c46bd6f3893213c51edf9498be6f0426797
2020-11-19T19:55:34.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "en", "dataset:c4", "arxiv:2002.05202", "arxiv:1910.10683", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/t5-v1_1-xl
1,980,571
3
transformers
21
--- language: en datasets: - c4 license: apache-2.0 --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 ## Version 1.1 [T5 Version 1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511) includes the f...
sentence-transformers/all-MiniLM-L6-v2
717413c64de70e37b55cf53c9cdff0e2d331fac3
2022-07-11T21:08:45.000Z
[ "pytorch", "tf", "bert", "feature-extraction", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:MS Marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_...
sentence-similarity
false
sentence-transformers
null
sentence-transformers/all-MiniLM-L6-v2
1,933,749
60
sentence-transformers
22
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - MS Marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - nat...
sentence-transformers/paraphrase-MiniLM-L6-v2
68b97aaedb0c72be3c88c1af64296b3bbb8001fa
2022-06-15T18:39:43.000Z
[ "pytorch", "tf", "bert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/paraphrase-MiniLM-L6-v2
1,710,481
16
sentence-transformers
23
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/paraphrase-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dens...
t5-small
d78aea13fa7ecd06c29e3e46195d6341255065d5
2022-07-22T08:11:14.000Z
[ "pytorch", "tf", "jax", "rust", "t5", "text2text-generation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "transformers", ...
translation
false
null
null
t5-small
1,707,833
20
transformers
24
--- language: - en - fr - ro - de datasets: - c4 tags: - summarization - translation license: apache-2.0 --- # Model Card for T5 Small ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876...
facebook/bart-large-mnli
c626438eeca63a93bd6024b0a0fbf8b3c0c30d7b
2021-08-09T08:25:07.000Z
[ "pytorch", "jax", "rust", "bart", "text-classification", "dataset:multi_nli", "arxiv:1910.13461", "arxiv:1909.00161", "transformers", "license:mit", "zero-shot-classification" ]
zero-shot-classification
false
facebook
null
facebook/bart-large-mnli
1,668,146
147
transformers
25
--- license: mit thumbnail: https://huggingface.co/front/thumbnails/facebook.png pipeline_tag: zero-shot-classification datasets: - multi_nli --- # bart-large-mnli This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/da...
cardiffnlp/twitter-xlm-roberta-base-sentiment
f3e34b6c30bf27b6649f72eca85d0bbe79df1e55
2022-06-22T19:15:32.000Z
[ "pytorch", "tf", "xlm-roberta", "text-classification", "multilingual", "arxiv:2104.12250", "transformers" ]
text-classification
false
cardiffnlp
null
cardiffnlp/twitter-xlm-roberta-base-sentiment
1,479,744
25
transformers
26
--- language: multilingual widget: - text: "🤗" - text: "T'estimo! ❤️" - text: "I love you!" - text: "I hate you 🤮" - text: "Mahal kita!" - text: "사랑해!" - text: "난 너가 싫어" - text: "😍😍😍" --- # twitter-XLM-roBERTa-base for Sentiment Analysis This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and ...
roberta-large
619fd8c2ca2bc7ac3959b7f71b6c426c897ba407
2021-05-21T08:57:02.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1907.11692", "arxiv:1806.02847", "transformers", "exbert", "license:mit", "autotrain_compatible" ]
fill-mask
false
null
null
roberta-large
1,479,252
39
transformers
27
--- language: en tags: - exbert license: mit datasets: - bookcorpus - wikipedia --- # RoBERTa large model Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1907.11692) and first released in [this repository](htt...
DeepPavlov/rubert-base-cased-conversational
645946ce91842a52eaacb2705c77e59194145ffa
2021-11-08T13:06:54.000Z
[ "pytorch", "jax", "bert", "feature-extraction", "ru", "transformers" ]
feature-extraction
false
DeepPavlov
null
DeepPavlov/rubert-base-cased-conversational
1,418,924
5
transformers
28
--- language: - ru --- # rubert-base-cased-conversational Conversational RuBERT \(Russian, cased, 12‑layer, 768‑hidden, 12‑heads, 180M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\]. We assembled a new vocabulary f...
microsoft/codebert-base
3b0952feddeffad0063f274080e3c23d75e7eb39
2022-02-11T19:59:44.000Z
[ "pytorch", "tf", "jax", "rust", "roberta", "feature-extraction", "arxiv:2002.08155", "transformers" ]
feature-extraction
false
microsoft
null
microsoft/codebert-base
1,347,269
30
transformers
29
## CodeBERT-base Pretrained weights for [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https://arxiv.org/abs/2002.08155). ### Training Data The model is trained on bi-modal data (documents & code) of [CodeSearchNet](https://github.com/github/CodeSearchNet) ### Training Objective This model is i...
ProsusAI/finbert
5ea63b3d0c737ad6f06e061d9af36b1f7bbd1a4b
2022-06-03T06:34:37.000Z
[ "pytorch", "tf", "jax", "bert", "text-classification", "en", "arxiv:1908.10063", "transformers", "financial-sentiment-analysis", "sentiment-analysis" ]
text-classification
false
ProsusAI
null
ProsusAI/finbert
1,254,493
81
transformers
30
--- language: "en" tags: - financial-sentiment-analysis - sentiment-analysis widget: - text: "Stocks rallied and the British pound gained." --- FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financi...
t5-base
23aa4f41cb7c08d4b05c8f327b22bfa0eb8c7ad9
2022-07-22T08:10:56.000Z
[ "pytorch", "tf", "jax", "rust", "t5", "text2text-generation", "en", "fr", "ro", "de", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "transformers", ...
translation
false
null
null
t5-base
1,234,008
53
transformers
31
--- language: - en - fr - ro - de datasets: - c4 tags: - summarization - translation license: apache-2.0 --- # Model Card for T5 Base ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e516638767...
deepset/roberta-base-squad2
d3c3bb6f2aaec6bf057fbf3796af9c5b9b939758
2022-07-22T11:42:08.000Z
[ "pytorch", "tf", "jax", "rust", "roberta", "question-answering", "en", "dataset:squad_v2", "transformers", "license:cc-by-4.0", "model-index", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/roberta-base-squad2
1,111,876
92
transformers
32
--- language: en datasets: - squad_v2 license: cc-by-4.0 model-index: - name: deepset/roberta-base-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - name: Exac...
distilbert-base-cased-distilled-squad
1b9d42b637aed70c9f3cd27e13b66ee9f847ed03
2022-07-22T07:57:01.000Z
[ "pytorch", "tf", "rust", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
null
null
distilbert-base-cased-distilled-squad
1,064,466
15
transformers
33
--- language: en datasets: - squad metrics: - squad license: apache-2.0 --- # DistilBERT base cased distilled SQuAD ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-...
xlm-roberta-large
b2a6150f8be56457baf80c74342cc424080260f0
2022-06-27T11:25:40.000Z
[ "pytorch", "tf", "jax", "xlm-roberta", "fill-mask", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha"...
fill-mask
false
null
null
xlm-roberta-large
1,017,218
24
transformers
34
--- tags: - exbert language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn -...
facebook/wav2vec2-base-960h
706111756296bc76512407a11e69526cf4e22aae
2022-06-30T00:05:41.000Z
[ "pytorch", "tf", "wav2vec2", "automatic-speech-recognition", "en", "dataset:librispeech_asr", "arxiv:2006.11477", "transformers", "audio", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
facebook
null
facebook/wav2vec2-base-960h
986,202
57
transformers
35
--- language: en datasets: - librispeech_asr tags: - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface....
daigo/bert-base-japanese-sentiment
51ac2d2c0a5645d77ca26078fc5f02c349fbb93d
2021-05-19T14:36:34.000Z
[ "pytorch", "jax", "bert", "text-classification", "ja", "transformers" ]
text-classification
false
daigo
null
daigo/bert-base-japanese-sentiment
972,842
7
transformers
36
--- language: - ja --- binary classification # Usage ``` print(pipeline("sentiment-analysis",model="daigo/bert-base-japanese-sentiment",tokenizer="daigo/bert-base-japanese-sentiment")("私は幸福である。")) [{'label': 'ポジティブ', 'score': 0.98430425}] ```
bert-base-multilingual-uncased
99406b9f2cfa046409626308a01da45a2a078f62
2021-05-18T16:19:22.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
bert-base-multilingual-uncased
970,081
13
transformers
37
--- language: en license: apache-2.0 datasets: - wikipedia --- # BERT multilingual base model (uncased) Pretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this...
sentence-transformers/all-mpnet-base-v2
bd44305fd6a1b43c16baf96765e2ecb20bca8e1d
2022-07-11T21:01:04.000Z
[ "pytorch", "mpnet", "fill-mask", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:MS Marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset...
sentence-similarity
false
sentence-transformers
null
sentence-transformers/all-mpnet-base-v2
966,231
43
sentence-transformers
38
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - MS Marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - nat...
sentence-transformers/all-MiniLM-L12-v2
9e16800aed25dbd1a96dfa6949c68c4d81d5dded
2022-07-11T21:05:39.000Z
[ "pytorch", "rust", "bert", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:MS Marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikih...
sentence-similarity
false
sentence-transformers
null
sentence-transformers/all-MiniLM-L12-v2
954,345
5
sentence-transformers
39
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity language: en license: apache-2.0 datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - MS Marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - nat...
EleutherAI/gpt-j-6B
918ad376364058dee23512629bc385380c98e57d
2022-03-15T13:34:01.000Z
[ "pytorch", "tf", "jax", "gptj", "text-generation", "en", "dataset:The Pile", "arxiv:2104.09864", "arxiv:2101.00027", "transformers", "causal-lm", "license:apache-2.0" ]
text-generation
false
EleutherAI
null
EleutherAI/gpt-j-6B
945,885
243
transformers
40
--- language: - en tags: - pytorch - causal-lm license: apache-2.0 datasets: - The Pile --- # GPT-J 6B ## Model Description GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represent...
prithivida/parrot_paraphraser_on_T5
9f32aa1e456e8e8a90d97e8673365f3090fa49fa
2021-05-18T07:53:27.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
prithivida
null
prithivida/parrot_paraphraser_on_T5
870,393
20
transformers
41
# Parrot ## 1. What is Parrot? Parrot is a paraphrase based utterance augmentation framework purpose built to accelerate training NLU models. A paraphrase framework is more than just a paraphrasing model. For more details on the library and usage please refer to the [github page](https://github.com/PrithivirajDamodar...
openai/clip-vit-base-patch32
f4881ba48ee4d21b7ed5602603b9e3e92eb1b346
2022-03-14T17:58:13.000Z
[ "pytorch", "tf", "jax", "clip", "feature-extraction", "arxiv:2103.00020", "arxiv:1908.04913", "transformers", "vision" ]
feature-extraction
false
openai
null
openai/clip-vit-base-patch32
854,364
49
transformers
42
--- tags: - vision --- # Model Card: CLIP Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md). ## Model Details The CLIP model was developed by researchers at OpenAI to learn about what contributes to robu...
prajjwal1/bert-tiny
6f75de8b60a9f8a2fdf7b69cbd86d9e64bcb3837
2021-10-27T18:29:01.000Z
[ "pytorch", "en", "arxiv:1908.08962", "arxiv:2110.01518", "transformers", "BERT", "MNLI", "NLI", "transformer", "pre-training", "license:mit" ]
null
false
prajjwal1
null
prajjwal1/bert-tiny
799,875
9
transformers
43
--- language: - en license: - mit tags: - BERT - MNLI - NLI - transformer - pre-training --- The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). This is one of the smaller ...
Jean-Baptiste/camembert-ner-with-dates
8c2d77a331733d26e0ca95a8f525e0ca3aa8e909
2021-08-30T12:55:48.000Z
[ "pytorch", "camembert", "token-classification", "fr", "dataset:Jean-Baptiste/wikiner_fr", "transformers", "autotrain_compatible" ]
token-classification
false
Jean-Baptiste
null
Jean-Baptiste/camembert-ner-with-dates
782,295
8
transformers
44
--- language: fr datasets: - Jean-Baptiste/wikiner_fr widget: - text: "Je m'appelle jean-baptiste et j'habite à montréal depuis fevr 2012" --- # camembert-ner: model fine-tuned from camemBERT for NER task (including DATE tag). ## Introduction [camembert-ner-with-dates] is an extension of french camembert-ner model w...
bert-large-cased
d9238236d8326ce4bc117132bb3b7e62e95f3a9a
2021-05-18T16:33:16.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
bert-large-cased
778,414
3
transformers
45
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT large model (cased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/g...
facebook/bart-large-cnn
9137060abd52495839d8c5c67ab4e6d0c49254b2
2022-07-28T15:16:55.000Z
[ "pytorch", "tf", "jax", "rust", "bart", "text2text-generation", "arxiv:1910.13461", "transformers", "summarization", "license:mit", "model-index", "autotrain_compatible" ]
summarization
false
facebook
null
facebook/bart-large-cnn
766,202
72
transformers
46
--- tags: - summarization license: mit thumbnail: https://huggingface.co/front/thumbnails/facebook.png model-index: - name: facebook/bart-large-cnn results: - task: type: summarization name: Summarization dataset: name: cnn_dailymail type: cnn_dailymail config: 3.0.0 split: t...
unitary/toxic-bert
5cc53435803a6e6f1ac8e4b243910d3bf26803ff
2021-06-07T15:20:33.000Z
[ "pytorch", "jax", "bert", "text-classification", "arxiv:1703.04009", "arxiv:1905.12516", "transformers" ]
text-classification
false
unitary
null
unitary/toxic-bert
749,909
15
transformers
47
<div align="center"> **⚠️ Disclaimer:** The huggingface models currently give different results to the detoxify library (see issue [here](https://github.com/unitaryai/detoxify/issues/15)). For the most up to date models we recommend using the models from https://github.com/unitaryai/detoxify # 🙊 Detoxify...
cardiffnlp/twitter-roberta-base-sentiment
b636d90b2ed53d7ba6006cefd76f29cd354dd9da
2022-04-06T08:10:31.000Z
[ "pytorch", "tf", "jax", "roberta", "text-classification", "arxiv:2010.12421", "transformers" ]
text-classification
false
cardiffnlp
null
cardiffnlp/twitter-roberta-base-sentiment
734,700
57
transformers
48
# Twitter-roBERTa-base for Sentiment Analysis This is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English (for a similar multilingual model, see [XLM-T](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment)). ...
mrm8488/t5-base-finetuned-question-generation-ap
7281097a2e51b1b57684b7de9999e32a0250dd83
2022-06-06T21:28:57.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "en", "dataset:squad", "arxiv:1910.10683", "transformers", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/t5-base-finetuned-question-generation-ap
717,961
26
transformers
49
--- language: en datasets: - squad widget: - text: "answer: Manuel context: Manuel has created RuPERTa-base with the support of HF-Transformers and Google" --- # T5-base fine-tuned on SQuAD for **Question Generation** [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tune...
google/bert_uncased_L-2_H-128_A-2
1ae49ff827beda5996998802695c4cac8e9932c6
2021-05-19T17:28:12.000Z
[ "pytorch", "jax", "bert", "arxiv:1908.08962", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bert_uncased_L-2_H-128_A-2
687,625
11
transformers
50
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with Word...
dslim/bert-base-NER
f7c2808a659015eeb8828f3f809a2f1be67a2446
2021-09-05T12:00:26.000Z
[ "pytorch", "tf", "jax", "bert", "token-classification", "en", "dataset:conll2003", "arxiv:1810.04805", "transformers", "license:mit", "autotrain_compatible" ]
token-classification
false
dslim
null
dslim/bert-base-NER
669,498
62
transformers
51
--- language: en datasets: - conll2003 license: mit --- # bert-base-NER ## Model description **bert-base-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: locat...
uer/chinese_roberta_L-12_H-768
b082602ba4eba86f785a6b4e3310eccc394816ee
2022-07-15T08:16:22.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "dataset:CLUECorpusSmall", "arxiv:1909.05658", "arxiv:1908.08962", "transformers", "autotrain_compatible" ]
fill-mask
false
uer
null
uer/chinese_roberta_L-12_H-768
649,235
2
transformers
52
--- language: zh datasets: CLUECorpusSmall widget: - text: "北京是[MASK]国的首都。" --- # Chinese RoBERTa Miniatures ## Model description This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). [Turc e...
cl-tohoku/bert-base-japanese-whole-word-masking
ab68bf4a4d55e7772b1fbea6441bdab72aaf949c
2021-09-23T13:45:34.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ja", "dataset:wikipedia", "transformers", "license:cc-by-sa-4.0", "autotrain_compatible" ]
fill-mask
false
cl-tohoku
null
cl-tohoku/bert-base-japanese-whole-word-masking
632,322
15
transformers
53
--- language: ja license: cc-by-sa-4.0 datasets: - wikipedia widget: - text: 東北大学で[MASK]の研究をしています。 --- # BERT base Japanese (IPA dictionary, whole word masking enabled) This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language. This version of the model processes in...
facebook/bart-base
84358834e73de6a82c22cec1d90eb45ef4f6eba5
2022-06-03T09:43:53.000Z
[ "pytorch", "tf", "jax", "bart", "feature-extraction", "en", "arxiv:1910.13461", "transformers", "license:apache-2.0" ]
feature-extraction
false
facebook
null
facebook/bart-base
624,921
18
transformers
54
--- license: apache-2.0 language: en --- # BART (base-sized model) BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first...
digitalepidemiologylab/covid-twitter-bert
945b4ea68241df3ccb8554cd1927ba81d2c9ecaa
2021-05-19T15:52:48.000Z
[ "pytorch", "tf", "jax", "bert", "en", "transformers", "Twitter", "COVID-19", "license:mit" ]
null
false
digitalepidemiologylab
null
digitalepidemiologylab/covid-twitter-bert
608,689
null
transformers
55
--- language: "en" thumbnail: "https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png" tags: - Twitter - COVID-19 license: mit --- # COVID-Twitter-BERT (CT-BERT) v1 :warning: _You may want to use the [v2 model](https://huggingface.co/digitalepidemiologyl...
microsoft/layoutlm-base-uncased
ca841ce8d2f46b13b0ac3f635b8eb7d2e1d758d5
2021-08-11T05:27:42.000Z
[ "pytorch", "tf", "layoutlm", "arxiv:1912.13318", "transformers" ]
null
false
microsoft
null
microsoft/layoutlm-base-uncased
604,081
8
transformers
56
# LayoutLM **Multimodal (text + layout/format + image) pre-training for document AI** [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://aka.ms/layoutlm) ## Model description LayoutLM is a simple but effective pre-training method of text and layout for document ...
xlnet-base-cased
593a21e8b79948a7f952811aa44f37d76e23d586
2021-09-16T09:43:58.000Z
[ "pytorch", "tf", "rust", "xlnet", "text-generation", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1906.08237", "transformers", "license:mit" ]
text-generation
false
null
null
xlnet-base-cased
599,543
5
transformers
57
--- language: en license: mit datasets: - bookcorpus - wikipedia --- # XLNet (base-sized model) XLNet model pre-trained on English language. It was introduced in the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al. and first released in...
distilbert-base-multilingual-cased
6045845d9e2b056487062a98a902d8304d76441f
2022-07-22T08:13:03.000Z
[ "pytorch", "tf", "distilbert", "fill-mask", "multilingual", "dataset:wikipedia", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
null
null
distilbert-base-multilingual-cased
585,365
16
transformers
58
--- language: multilingual license: apache-2.0 datasets: - wikipedia --- # Model Card for DistilBERT base multilingual (cased) # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation...
sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
b8ef00830037f9868450f778081ea683e900fe39
2022-06-15T18:43:00.000Z
[ "pytorch", "tf", "bert", "feature-extraction", "multilingual", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
584,527
43
sentence-transformers
59
--- pipeline_tag: sentence-similarity language: multilingual license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences &...
bhadresh-savani/distilbert-base-uncased-emotion
322caf2a56793969b8221b87bed988f8e7798b8e
2022-07-06T10:43:55.000Z
[ "pytorch", "tf", "jax", "distilbert", "text-classification", "en", "dataset:emotion", "arxiv:1910.01108", "transformers", "emotion", "license:apache-2.0", "model-index" ]
text-classification
false
bhadresh-savani
null
bhadresh-savani/distilbert-base-uncased-emotion
564,284
37
transformers
60
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - emotion - pytorch license: apache-2.0 datasets: - emotion metrics: - Accuracy, F1 Score model-index: - name: bhadresh-savani/distilbert-base-uncased-emotion ...
sentence-transformers/bert-base-nli-mean-tokens
18fc720063106176044380e71bad038d01e821d1
2022-06-09T12:34:28.000Z
[ "pytorch", "tf", "jax", "rust", "bert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/bert-base-nli-mean-tokens
528,903
9
sentence-transformers
61
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: apache-2.0 --- **⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net ...
deepset/minilm-uncased-squad2
2f66fe86fb8a3df5b7b07c214a3d33b31d5a133c
2022-07-25T14:34:52.000Z
[ "pytorch", "jax", "bert", "question-answering", "en", "dataset:squad_v2", "transformers", "license:cc-by-4.0", "model-index", "autotrain_compatible" ]
question-answering
false
deepset
null
deepset/minilm-uncased-squad2
515,791
8
transformers
62
--- language: en datasets: - squad_v2 license: cc-by-4.0 model-index: - name: deepset/minilm-uncased-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - name: Ex...
gpt2-medium
8c7ca69f9d24f64c9f3540f9c416d99e16275828
2022-07-22T08:01:16.000Z
[ "pytorch", "tf", "jax", "rust", "gpt2", "text-generation", "en", "arxiv:1910.09700", "transformers", "license:mit" ]
text-generation
false
null
null
gpt2-medium
515,318
4
transformers
63
--- language: en license: mit --- # GPT-2 Medium ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Envi...
pysentimiento/robertuito-sentiment-analysis
e3be95c8efad7f480ce8aab2221188ecb78e40f3
2022-06-23T13:01:10.000Z
[ "pytorch", "tf", "roberta", "text-classification", "es", "arxiv:2106.09462", "arxiv:2111.09453", "transformers", "twitter", "sentiment-analysis" ]
text-classification
false
pysentimiento
null
pysentimiento/robertuito-sentiment-analysis
506,297
9
transformers
64
--- language: - es tags: - twitter - sentiment-analysis --- # Sentiment Analysis in Spanish ## robertuito-sentiment-analysis Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/pysentimiento/pysentimiento/) Model trained with TASS 2020 corpus (around 5k tweets) of several dial...
Helsinki-NLP/opus-mt-fr-en
967b0840416a86ccf02573c8fedf9dd0e0b42fd6
2021-09-09T21:53:38.000Z
[ "pytorch", "jax", "marian", "text2text-generation", "fr", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-fr-en
490,737
5
transformers
65
--- tags: - translation license: apache-2.0 --- ### opus-mt-fr-en * source languages: fr * target languages: en * OPUS readme: [fr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * downl...
hfl/chinese-roberta-wwm-ext
5c58d0b8ec1d9014354d691c538661bf00bfdb44
2022-03-01T09:13:56.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
hfl
null
hfl/chinese-roberta-wwm-ext
485,950
51
transformers
66
--- language: - zh tags: - bert license: "apache-2.0" --- # Please use 'Bert' related functions to load this model! ## Chinese BERT with Whole Word Masking For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word ...
google/electra-small-discriminator
153f486d928bcfc213932f8fc91fc2e3c41af769
2021-04-29T15:24:16.000Z
[ "pytorch", "tf", "jax", "electra", "pretraining", "en", "transformers", "license:apache-2.0" ]
null
false
google
null
google/electra-small-discriminator
482,240
5
transformers
67
--- language: en thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- ## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks usi...
microsoft/layoutlmv2-base-uncased
5c1ca07c23780c6dc123807def206ae9c4d59aca
2021-12-23T12:52:53.000Z
[ "pytorch", "layoutlmv2", "en", "arxiv:2012.14740", "transformers", "license:cc-by-nc-sa-4.0" ]
null
false
microsoft
null
microsoft/layoutlmv2-base-uncased
477,930
18
transformers
68
--- language: en license: cc-by-nc-sa-4.0 --- # LayoutLMv2 **Multimodal (text + layout/format + image) pre-training for document AI** The documentation of this model in the Transformers library can be found [here](https://huggingface.co/docs/transformers/model_doc/layoutlmv2). [Microsoft Document AI](https://www.mi...
klue/bert-base
812449f1a6bc736e693db7aa0e513e5e90795a62
2021-10-20T15:23:59.000Z
[ "pytorch", "bert", "fill-mask", "ko", "arxiv:2105.09680", "transformers", "korean", "klue", "autotrain_compatible" ]
fill-mask
false
klue
null
klue/bert-base
461,579
7
transformers
69
--- language: ko tags: - korean - klue mask_token: "[MASK]" widget: - text: 대한민국의 수도는 [MASK] 입니다. --- # KLUE BERT base Pretrained BERT Model on Korean Language. See [Github](https://github.com/KLUE-benchmark/KLUE) and [Paper](https://arxiv.org/abs/2105.09680) for more details. ## How to use ```python from tra...
sentence-transformers/distilbert-base-nli-mean-tokens
683b927b0b0f77e70b9a7d15f7f7601a515925a9
2022-06-15T19:35:42.000Z
[ "pytorch", "tf", "distilbert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
feature-extraction
false
sentence-transformers
null
sentence-transformers/distilbert-base-nli-mean-tokens
454,847
null
sentence-transformers
70
--- pipeline_tag: feature-extraction license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- **⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net ...
sshleifer/distilbart-cnn-12-6
a4f8f3ea906ed274767e9906dbaede7531d660ff
2021-06-14T07:51:12.000Z
[ "pytorch", "jax", "rust", "bart", "text2text-generation", "en", "dataset:cnn_dailymail", "dataset:xsum", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible" ]
summarization
false
sshleifer
null
sshleifer/distilbart-cnn-12-6
452,231
57
transformers
71
--- language: en tags: - summarization license: apache-2.0 datasets: - cnn_dailymail - xsum thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png --- ### Usage This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transforme...
SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune
cf3d414acf70f8f8e68108a2efde164b129e6bfa
2022-06-27T20:56:39.000Z
[ "pytorch", "tf", "jax", "t5", "feature-extraction", "arxiv:2104.02443", "arxiv:1910.09700", "arxiv:2105.09680", "transformers", "summarization" ]
summarization
false
SEBIS
null
SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune
443,061
5
transformers
72
--- tags: - summarization widget: - text: "you are given an array of numbers a and a number b , compute the difference of elements in a and b" --- # CodeTrans model for program synthesis ## Table of Contents - [Model Details](#model-details) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ...
sentence-transformers/distiluse-base-multilingual-cased-v2
896fbacdabde59de4cb8d75dea7b9bff6066015c
2022-06-15T19:24:30.000Z
[ "pytorch", "tf", "distilbert", "feature-extraction", "multilingual", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/distiluse-base-multilingual-cased-v2
437,878
18
sentence-transformers
73
--- pipeline_tag: sentence-similarity language: multilingual license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/distiluse-base-multilingual-cased-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & ...
sentence-transformers/paraphrase-xlm-r-multilingual-v1
50f7fa9e273db3db51beceacc1b111e4a1a31d34
2022-06-15T19:25:39.000Z
[ "pytorch", "tf", "xlm-roberta", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/paraphrase-xlm-r-multilingual-v1
434,789
31
sentence-transformers
74
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/paraphrase-xlm-r-multilingual-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensi...
camembert-base
3f452b6e5a89b0e6c828c9bba2642bc577086eae
2022-07-22T08:12:31.000Z
[ "pytorch", "tf", "camembert", "fill-mask", "fr", "dataset:oscar", "arxiv:1911.03894", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
false
null
null
camembert-base
431,334
16
transformers
75
--- language: fr license: mit datasets: - oscar --- # CamemBERT: a Tasty French Language Model ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-...
nlptown/bert-base-multilingual-uncased-sentiment
e06857fdb0325a7798a8fc361b417dfeec3a3b98
2022-04-18T16:46:13.000Z
[ "pytorch", "tf", "jax", "bert", "text-classification", "en", "nl", "de", "fr", "it", "es", "transformers", "license:mit" ]
text-classification
false
nlptown
null
nlptown/bert-base-multilingual-uncased-sentiment
429,449
57
transformers
76
--- language: - en - nl - de - fr - it - es license: mit --- # bert-base-multilingual-uncased-sentiment This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a n...
Hate-speech-CNERG/indic-abusive-allInOne-MuRIL
159b3636af636844106d203e3d8a07f522aaa6e0
2022-05-03T08:49:47.000Z
[ "pytorch", "bert", "text-classification", "bn", "hi", "hi-en", "ka-en", "ma-en", "mr", "ta-en", "ur", "ur-en", "en", "arxiv:2204.12543", "transformers", "license:afl-3.0" ]
text-classification
false
Hate-speech-CNERG
null
Hate-speech-CNERG/indic-abusive-allInOne-MuRIL
425,203
null
transformers
77
--- language: [bn, hi, hi-en, ka-en, ma-en, mr, ta-en, ur, ur-en, en] license: afl-3.0 --- This model is used for detecting **abusive speech** in **Bengali, Devanagari Hindi, Code-mixed Hindi, Code-mixed Kannada, Code-mixed Malayalam, Marathi, Code-mixed Tamil, Urdu, Code-mixed Urdu, and English languages**. The allInOne ...
yiyanghkust/finbert-tone
69507fb7dad65fd5ee96679690e6336211edc7a5
2022-06-09T12:05:27.000Z
[ "pytorch", "tf", "text-classification", "en", "transformers", "financial-sentiment-analysis", "sentiment-analysis" ]
text-classification
false
yiyanghkust
null
yiyanghkust/finbert-tone
415,031
22
transformers
78
--- language: "en" tags: - financial-sentiment-analysis - sentiment-analysis widget: - text: "growth is strong and we have plenty of liquidity" --- `FinBERT` is a BERT model pre-trained on financial communication text. The purpose is to enhance financial NLP research and practice. It is trained on the following three ...
bert-large-uncased-whole-word-masking-finetuned-squad
242d9dbb66bb5033025196d5678907307f8fb098
2021-05-18T16:35:27.000Z
[ "pytorch", "tf", "jax", "bert", "question-answering", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
null
null
bert-large-uncased-whole-word-masking-finetuned-squad
413,010
23
transformers
79
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # BERT large model (uncased) whole word masking finetuned on SQuAD Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released i...
j-hartmann/emotion-english-distilroberta-base
d23807173703d44b48d60ca252664f60d0d46563
2022-06-09T12:43:53.000Z
[ "pytorch", "tf", "roberta", "text-classification", "en", "transformers", "distilroberta", "sentiment", "emotion", "twitter", "reddit" ]
text-classification
false
j-hartmann
null
j-hartmann/emotion-english-distilroberta-base
406,862
31
transformers
80
--- language: "en" tags: - distilroberta - sentiment - emotion - twitter - reddit widget: - text: "Oh wow. I didn't know that." - text: "This movie always makes me cry.." - text: "Oh Happy Day" --- # Emotion English DistilRoBERTa-base # Description ℹ With this model, you can classify emotions in English text data....
sentence-transformers/multi-qa-mpnet-base-dot-v1
69cf9082c6abd4f70bdf8fca0ca826b6b5d16ebc
2022-07-11T21:02:59.000Z
[ "pytorch", "mpnet", "fill-mask", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:search_qa", "dataset:eli5", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/QQP", "dataset:embedding-...
sentence-similarity
false
sentence-transformers
null
sentence-transformers/multi-qa-mpnet-base-dot-v1
398,918
9
sentence-transformers
81
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity datasets: - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - search_qa - eli5 - natural_questions - trivia_qa - embedding-data/QQP - embedding-data/PAQ_pairs - embedding-d...
openai/clip-vit-large-patch14
0993c71e8ad62658387de2714a69f723ddfffacb
2022-03-14T18:01:04.000Z
[ "pytorch", "tf", "jax", "clip", "feature-extraction", "arxiv:2103.00020", "arxiv:1908.04913", "transformers", "vision" ]
feature-extraction
false
openai
null
openai/clip-vit-large-patch14
393,559
3
transformers
82
--- tags: - vision --- # Model Card: CLIP Disclaimer: The model card is taken and modified from the official CLIP repository, it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md). ## Model Details The CLIP model was developed by researchers at OpenAI to learn about what contributes to robu...
valhalla/distilbart-mnli-12-1
506336d4214470e3b3b36021358daae28e25ceac
2021-06-14T10:27:55.000Z
[ "pytorch", "jax", "bart", "text-classification", "dataset:mnli", "transformers", "distilbart", "distilbart-mnli", "zero-shot-classification" ]
zero-shot-classification
false
valhalla
null
valhalla/distilbart-mnli-12-1
389,752
10
transformers
83
--- datasets: - mnli tags: - distilbart - distilbart-mnli pipeline_tag: zero-shot-classification --- # DistilBart-MNLI distilbart-mnli is the distilled version of bart-large-mnli created using the **No Teacher Distillation** technique proposed for BART summarisation by Huggingface, [here](https://github.com/huggingfa...
dmis-lab/biosyn-sapbert-bc5cdr-disease
53d4525fccf15663f19f0d0846c50286a0a01f1e
2021-10-25T14:46:40.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
dmis-lab
null
dmis-lab/biosyn-sapbert-bc5cdr-disease
378,648
1
transformers
84
Entry not found
dmis-lab/biosyn-sapbert-bc5cdr-chemical
f9b9daf740698ac427bb6532fd456fc18bccdd80
2021-10-25T14:47:09.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
dmis-lab
null
dmis-lab/biosyn-sapbert-bc5cdr-chemical
373,119
null
transformers
85
Entry not found
allenai/scibert_scivocab_uncased
2ab156b969f2dbbd7ecc0080b78bc2cd272c4092
2021-05-19T11:41:40.000Z
[ "pytorch", "jax", "bert", "transformers" ]
null
false
allenai
null
allenai/scibert_scivocab_uncased
369,675
21
transformers
86
# SciBERT This is the pretrained model presented in [SciBERT: A Pretrained Language Model for Scientific Text](https://www.aclweb.org/anthology/D19-1371/), which is a BERT model trained on scientific text. The training corpus was papers taken from [Semantic Scholar](https://www.semanticscholar.org). Corpus size is 1....
hfl/chinese-bert-wwm-ext
2a995a880017c60e4683869e817130d8af548486
2021-05-19T19:06:39.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
hfl
null
hfl/chinese-bert-wwm-ext
368,889
26
transformers
87
--- language: - zh license: "apache-2.0" --- ## Chinese BERT with Whole Word Masking For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cu...
mrm8488/t5-base-finetuned-common_gen
5c3010b4532b7834039c65580e688e9656626835
2021-09-24T08:52:57.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:common_gen", "arxiv:1910.10683", "arxiv:1911.03705", "transformers", "common sense", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/t5-base-finetuned-common_gen
362,815
6
transformers
88
--- language: en tags: - common sense datasets: - common_gen widget: - text: "tree plant ground hole dig" --- # T5-base fine-tuned on CommonGen [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) fine-tuned on [CommonGen](https://inklab.usc.edu/CommonGen/index.html) for *Generati...
emilyalsentzer/Bio_ClinicalBERT
41943bf7f983007123c758373c5246305cc536ec
2022-02-27T13:59:10.000Z
[ "pytorch", "jax", "bert", "en", "arxiv:1904.03323", "arxiv:1901.08746", "transformers", "fill-mask", "license:mit" ]
fill-mask
false
emilyalsentzer
null
emilyalsentzer/Bio_ClinicalBERT
360,523
31
transformers
89
--- language: "en" tags: - fill-mask license: mit --- # ClinicalBERT - Bio + Clinical BERT Model The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base v1.0 + Pu...
dmis-lab/biosyn-sapbert-bc2gn
28ef41eace90e9aa6a9db372413c145883c72902
2022-02-25T13:32:53.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
dmis-lab
null
dmis-lab/biosyn-sapbert-bc2gn
358,818
null
transformers
90
hello
facebook/detr-resnet-50
272941311143979e4ade5424ede52fb5e84c9969
2022-06-27T08:29:51.000Z
[ "pytorch", "detr", "object-detection", "dataset:coco", "arxiv:2005.12872", "transformers", "vision", "license:apache-2.0" ]
object-detection
false
facebook
null
facebook/detr-resnet-50
355,674
48
transformers
91
--- license: apache-2.0 tags: - object-detection - vision datasets: - coco widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg example_title: Savanna - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg example_title: Football Match - s...
google/vit-base-patch16-224
5dca96d358b3fcb9d53b3d3881eb1ae20b6752d1
2022-06-23T07:42:10.000Z
[ "pytorch", "tf", "jax", "vit", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "transformers", "vision", "license:apache-2.0" ]
image-classification
false
google
null
google/vit-base-patch16-224
352,185
52
transformers
92
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k - imagenet-21k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teap...
sentence-transformers/paraphrase-mpnet-base-v2
18df4b22cd35517843308534d066190182ff39ef
2022-06-15T19:23:23.000Z
[ "pytorch", "tf", "mpnet", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/paraphrase-mpnet-base-v2
348,258
6
sentence-transformers
93
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/paraphrase-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional den...
cross-encoder/nli-distilroberta-base
99f096e70ef1fb038b8f0aecabc5a0f491684084
2021-08-05T08:40:59.000Z
[ "pytorch", "jax", "roberta", "text-classification", "en", "dataset:multi_nli", "dataset:snli", "transformers", "distilroberta-base", "license:apache-2.0", "zero-shot-classification" ]
zero-shot-classification
false
cross-encoder
null
cross-encoder/nli-distilroberta-base
345,008
9
transformers
94
--- language: en pipeline_tag: zero-shot-classification tags: - distilroberta-base datasets: - multi_nli - snli metrics: - accuracy license: apache-2.0 --- # Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/example...
finiteautomata/bertweet-base-sentiment-analysis
cf6b0f60e84096e077c171fe3176093674370291
2022-06-23T13:01:55.000Z
[ "pytorch", "tf", "roberta", "text-classification", "en", "arxiv:2106.09462", "transformers", "sentiment-analysis" ]
text-classification
false
finiteautomata
null
finiteautomata/bertweet-base-sentiment-analysis
338,964
18
transformers
95
--- language: - en tags: - sentiment-analysis --- # Sentiment Analysis in English ## bertweet-sentiment-analysis Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with SemEval 2017 corpus (around 40k tweets). Base model is [BERTweet...
distilbert-base-cased
8d708decd7afb7bec0af233e5338fe1fca3db705
2022-07-22T08:12:05.000Z
[ "pytorch", "tf", "distilbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "transformers", "license:apache-2.0" ]
null
false
null
null
distilbert-base-cased
334,535
7
transformers
96
--- language: en license: apache-2.0 datasets: - bookcorpus - wikipedia --- # Model Card for DistilBERT base model (cased) This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-cased). It was introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillat...
yjernite/retribert-base-uncased
aeab2b097862fa41e084db47e0e02229649bbe53
2021-03-10T02:54:37.000Z
[ "pytorch", "retribert", "feature-extraction", "transformers" ]
feature-extraction
false
yjernite
null
yjernite/retribert-base-uncased
332,598
null
transformers
97
Entry not found
emilyalsentzer/Bio_Discharge_Summary_BERT
affde836a50e4d333f15dae9270f5a856d59540b
2022-02-27T13:59:50.000Z
[ "pytorch", "jax", "bert", "en", "arxiv:1904.03323", "arxiv:1901.08746", "transformers", "fill-mask", "license:mit" ]
fill-mask
false
emilyalsentzer
null
emilyalsentzer/Bio_Discharge_Summary_BERT
328,763
8
transformers
98
--- language: "en" tags: - fill-mask license: mit --- # ClinicalBERT - Bio + Discharge Summary BERT Model The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base ...
dslim/bert-large-NER
95c62bc0d4109bd97d0578e5ff482e6b84c2b8b9
2022-06-27T20:58:09.000Z
[ "pytorch", "tf", "jax", "bert", "token-classification", "en", "dataset:conll2003", "arxiv:1810.04805", "transformers", "license:mit", "autotrain_compatible" ]
token-classification
false
dslim
null
dslim/bert-large-NER
327,366
10
transformers
99
--- language: en datasets: - conll2003 license: mit --- # bert-large-NER ## Model description **bert-large-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: loca...