Columns (from the dataset viewer header):
- modelId: string, length 6–107
- label: list of class-label strings (null when the model exposes none)
- readme: string, length 0–56.2k
- readme_len: int64, 0–56.2k
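The four column descriptors above define a simple table. As a minimal sketch of working with it (assuming pandas and two hypothetical sample rows mirroring the schema; the `rows` list is illustrative, not part of the dataset):

```python
import pandas as pd

# Hypothetical sample rows mirroring the four columns described above:
# modelId (str), label (list or None), readme (str), readme_len (int).
rows = [
    {"modelId": "distilbert-base-uncased-finetuned-sst-2-english",
     "label": ["NEGATIVE", "POSITIVE"], "readme": "...", "readme_len": 4744},
    {"modelId": "textattack/roberta-base-SST-2",
     "label": None, "readme": "Entry not found", "readme_len": 15},
]
df = pd.DataFrame(rows)

# Keep models whose cards carry real content; "Entry not found"
# placeholders are only 15 characters long.
documented = df[df["readme_len"] > 100]
print(documented["modelId"].tolist())
```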
distilbert-base-uncased-finetuned-sst-2-english
[ "NEGATIVE", "POSITIVE" ]
--- language: en license: apache-2.0 datasets: - sst2 - glue model-index: - name: distilbert-base-uncased-finetuned-sst-2-english results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: sst2 split: validation metrics: ...
4,744
cross-encoder/ms-marco-MiniLM-L-12-v2
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch)....
3,233
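The MS Marco cross-encoder rows in this table are rerankers: candidates are retrieved first (e.g. with ElasticSearch), then each (query, passage) pair is scored jointly and the candidates are sorted by score. A sketch of that rerank step, with a toy word-overlap scorer standing in for the real model forward pass (the scorer is an assumption for illustration only):

```python
def score(query: str, passage: str) -> float:
    """Toy stand-in for a cross-encoder forward pass: word overlap."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def rerank(query: str, passages: list[str]) -> list[str]:
    """Sort retrieved passages by pairwise score, best first."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)

passages = [
    "The capital of France is Paris.",
    "Bananas are rich in potassium.",
]
top = rerank("what is the capital of france", passages)[0]
print(top)  # -> The capital of France is Paris.
```

In practice the scoring function is the cross-encoder itself; only the retrieve-then-rerank shape is the point here.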
cardiffnlp/twitter-xlm-roberta-base-sentiment
[ "Negative", "Neutral", "Positive" ]
--- language: multilingual widget: - text: "🤗" - text: "T'estimo! ❤️" - text: "I love you!" - text: "I hate you 🤮" - text: "Mahal kita!" - text: "사랑해!" - text: "난 너가 싫어" - text: "😍😍😍" --- # twitter-XLM-roBERTa-base for Sentiment Analysis This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and ...
2,580
facebook/bart-large-mnli
[ "contradiction", "entailment", "neutral" ]
--- license: mit thumbnail: https://huggingface.co/front/thumbnails/facebook.png pipeline_tag: zero-shot-classification datasets: - multi_nli --- # bart-large-mnli This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/da...
3,793
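Rows like facebook/bart-large-mnli are NLI models repurposed for zero-shot classification: each candidate label is turned into a hypothesis such as "This example is about {label}." and labels are ranked by the entailment probability. A sketch of that ranking step, with made-up per-label NLI logits standing in for real model output (the logit values are assumptions for illustration):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical (contradiction, neutral, entailment) logits per candidate
# label, standing in for an NLI forward pass on the hypothesis
# "This example is about {label}."
nli_logits = {
    "sports":   [2.1, 0.3, -1.5],
    "politics": [-1.0, 0.2, 3.4],
}

# Rank candidate labels by entailment probability (index 2),
# as zero-shot pipelines do.
scores = {lbl: softmax(logits)[2] for lbl, logits in nli_logits.items()}
best = max(scores, key=scores.get)
print(best)  # -> politics
```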
ProsusAI/finbert
[ "positive", "negative", "neutral" ]
--- language: "en" tags: - financial-sentiment-analysis - sentiment-analysis widget: - text: "Stocks rallied and the British pound gained." --- FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financi...
1,475
tals/albert-xlarge-vitaminc-mnli
[ "NOT ENOUGH INFO", "REFUTES", "SUPPORTS" ]
--- language: en datasets: - fever - glue - multi_nli - tals/vitaminc --- # Details Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021). For more details see: https://github.com/TalSchuster/VitaminC When ...
2,369
daigo/bert-base-japanese-sentiment
[ "LABEL_0", "LABEL_1" ]
--- language: - ja --- binary classification # Usage ``` from transformers import pipeline print(pipeline("sentiment-analysis",model="daigo/bert-base-japanese-sentiment",tokenizer="daigo/bert-base-japanese-sentiment")("私は幸福である。")) [{'label': 'ポジティブ', 'score': 0.98430425}] ```
246
cardiffnlp/twitter-roberta-base-sentiment
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
# Twitter-roBERTa-base for Sentiment Analysis This is a roBERTa-base model trained on ~58M tweets and finetuned for sentiment analysis with the TweetEval benchmark. This model is suitable for English (for a similar multilingual model, see [XLM-T](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment)). ...
2,853
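Several rows above expose only generic label names (`LABEL_0`, `LABEL_1`, `LABEL_2`); for cardiffnlp/twitter-roberta-base-sentiment the TweetEval convention maps these to negative/neutral/positive. A minimal sketch of that mapping, assuming the negative/neutral/positive ordering (treat the ordering as an assumption and check each model's card before relying on it):

```python
# Assumed mapping for TweetEval-style sentiment heads: index order is
# negative, neutral, positive. Verify per model card before use.
ID2LABEL = {0: "negative", 1: "neutral", 2: "positive"}

def humanize(raw_label: str) -> str:
    """Convert a generic 'LABEL_k' name to the assumed class name."""
    idx = int(raw_label.rsplit("_", 1)[1])
    return ID2LABEL[idx]

print(humanize("LABEL_0"))  # -> negative
print(humanize("LABEL_2"))  # -> positive
```

The same pattern works for any model that ships an `id2label` mapping in its config instead of readable label strings.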
bhadresh-savani/distilbert-base-uncased-emotion
[ "anger", "fear", "joy", "love", "sadness", "surprise" ]
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - emotion - pytorch license: apache-2.0 datasets: - emotion metrics: - Accuracy, F1 Score model-index: - name: bhadresh-savani/distilbert-base-uncased-emotion ...
4,150
pysentimiento/robertuito-sentiment-analysis
[ "NEG", "NEU", "POS" ]
--- language: - es tags: - twitter - sentiment-analysis --- # Sentiment Analysis in Spanish ## robertuito-sentiment-analysis Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with TASS 2020 corpus (around ~5k tweets) of several dial...
2,152
yiyanghkust/finbert-tone
[ "Positive", "Negative", "Neutral" ]
--- language: "en" tags: - financial-sentiment-analysis - sentiment-analysis widget: - text: "growth is strong and we have plenty of liquidity" --- `FinBERT` is a BERT model pre-trained on financial communication text. The purpose is to enhance financial NLP research and practice. It is trained on the following three ...
1,867
unitary/toxic-bert
[ "toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate" ]
<div align="center"> **⚠️ Disclaimer:** The huggingface models currently give different results to the detoxify library (see issue [here](https://github.com/unitaryai/detoxify/issues/15)). For the most up to date models we recommend using the models from https://github.com/unitaryai/detoxify # 🙊 Detoxify...
11,071
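The unitary/toxic-bert row is a multi-label classifier: its six classes are not mutually exclusive, so each class gets an independent sigmoid and a threshold rather than a softmax over classes. A sketch of that decision step, with hypothetical per-class logits standing in for model output (the logit values are assumptions for illustration):

```python
import math

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

# Hypothetical per-class logits from a multi-label toxicity head;
# the class names mirror the toxic-bert label row above.
logits = {"toxic": 2.0, "severe_toxic": -3.1, "obscene": 0.4,
          "threat": -4.0, "insult": 0.9, "identity_hate": -2.2}

# Multi-label: independent sigmoid per class, then threshold,
# instead of a softmax that forces one winning class.
flagged = sorted(k for k, v in logits.items() if sigmoid(v) >= 0.5)
print(flagged)  # -> ['insult', 'obscene', 'toxic']
```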
nlptown/bert-base-multilingual-uncased-sentiment
[ "1 star", "2 stars", "3 stars", "4 stars", "5 stars" ]
--- language: - en - nl - de - fr - it - es license: mit --- # bert-base-multilingual-uncased-sentiment This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a n...
1,950
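Star-rating models such as nlptown/bert-base-multilingual-uncased-sentiment and cmarkea/distilcamembert-base-sentiment emit labels like "4 stars" rather than class indices; downstream code usually wants the integer. A small sketch of that conversion:

```python
def stars_to_int(label: str) -> int:
    """Parse a rating label like '4 stars' (or '1 star') into an int."""
    return int(label.split()[0])

labels = ["1 star", "2 stars", "3 stars", "4 stars", "5 stars"]
print([stars_to_int(l) for l in labels])  # -> [1, 2, 3, 4, 5]
```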
finiteautomata/bertweet-base-sentiment-analysis
[ "NEG", "NEU", "POS" ]
--- language: - en tags: - sentiment-analysis --- # Sentiment Analysis in English ## bertweet-sentiment-analysis Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with SemEval 2017 corpus (around ~40k tweets). Base model is [BERTweet...
1,213
j-hartmann/emotion-english-distilroberta-base
[ "anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise" ]
--- language: "en" tags: - distilroberta - sentiment - emotion - twitter - reddit widget: - text: "Oh wow. I didn't know that." - text: "This movie always makes me cry.." - text: "Oh Happy Day" --- # Emotion English DistilRoBERTa-base # Description ℹ With this model, you can classify emotions in English text data....
4,027
cross-encoder/ms-marco-TinyBERT-L-2
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch)....
3,233
cross-encoder/nli-distilroberta-base
[ "contradiction", "entailment", "neutral" ]
--- language: en pipeline_tag: zero-shot-classification tags: - distilroberta-base datasets: - multi_nli - snli metrics: - accuracy license: apache-2.0 --- # Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/example...
2,589
Hate-speech-CNERG/indic-abusive-allInOne-MuRIL
[ "Normal", "Abusive" ]
--- language: [bn, hi, hi-en, ka-en, ma-en, mr, ta-en, ur, ur-en, en] license: afl-3.0 --- This model is used for detecting **abusive speech** in **Bengali, Devanagari Hindi, Code-mixed Hindi, Code-mixed Kannada, Code-mixed Malayalam, Marathi, Code-mixed Tamil, Urdu, Code-mixed Urdu, and English languages**. The allInOne ...
1,263
valhalla/distilbart-mnli-12-1
[ "contradiction", "entailment", "neutral" ]
--- datasets: - mnli tags: - distilbart - distilbart-mnli pipeline_tag: zero-shot-classification --- # DistilBart-MNLI distilbart-mnli is the distilled version of bart-large-mnli created using the **No Teacher Distillation** technique proposed for BART summarisation by Huggingface, [here](https://github.com/huggingfa...
2,406
echarlaix/bert-base-uncased-sst2-acc91.1-d37-hybrid
null
--- language: en license: apache-2.0 tags: - text-classification datasets: - sst2 metrics: - accuracy --- ## bert-base-uncased model fine-tuned on SST-2 This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the linear layers contain **37%** of the original weights....
2,945
cardiffnlp/twitter-roberta-base-sentiment-latest
[ "Negative", "Neutral", "Positive" ]
--- language: english widget: - text: "Covid cases are increasing fast!" --- # Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2021) This is a roBERTa-base model trained on ~124M tweets from January 2018 to December 2021 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m)), and finet...
2,723
oliverguhr/german-sentiment-bert
[ "positive", "negative", "neutral" ]
--- language: - de tags: - sentiment - bert license: mit widget: - text: "Das ist gar nicht mal so schlecht" metrics: - f1 --- # German Sentiment Classification with Bert This model was trained for sentiment classification of German language texts. To achieve the best results all model inputs need to be preprocesse...
3,698
finiteautomata/beto-sentiment-analysis
[ "NEG", "NEU", "POS" ]
--- language: - es tags: - sentiment-analysis --- # Sentiment Analysis in Spanish ## beto-sentiment-analysis Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/pysentimiento/pysentimiento/) Model trained with TASS 2020 corpus (around ~5k tweets) of several dialects of Spanish. Ba...
1,213
cross-encoder/ms-marco-MiniLM-L-6-v2
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch)....
3,233
bvanaken/clinical-assertion-negation-bert
[ "PRESENT", "ABSENT", "POSSIBLE" ]
--- language: "en" tags: - bert - medical - clinical - assertion - negation - text-classification widget: - text: "Patient denies [entity] SOB [entity]." --- # Clinical Assertion / Negation Classification BERT ## Model description The Clinical Assertion and Negation Classification BERT is introduced in the paper [A...
2,503
BaptisteDoyen/camembert-base-xnli
[ "entailment", "neutral", "contradiction" ]
--- language: - fr thumbnail: tags: - zero-shot-classification - xnli - nli - fr license: mit pipeline_tag: zero-shot-classification datasets: - xnli metrics: - accuracy --- # camembert-base-xnli ## Model description Camembert-base model fine-tuned on the French part of the XNLI dataset. <br> One of the few Zero-Shot c...
2,988
cross-encoder/ms-marco-MiniLM-L-2-v2
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch)....
3,233
joeddav/xlm-roberta-large-xnli
[ "contradiction", "entailment", "neutral" ]
--- language: multilingual tags: - text-classification - pytorch - tensorflow datasets: - multi_nli - xnli license: mit pipeline_tag: zero-shot-classification widget: - text: "За кого вы голосуете в 2020 году?" candidate_labels: "politique étrangère, Europe, élections, affaires, politique" multi_class: true - text:...
4,951
hf-internal-testing/tiny-random-distilbert
null
--- pipeline_tag: text-classification ---
42
Sahajtomar/German_Zeroshot
[ "entailment", "neutral", "contradiction" ]
--- language: multilingual tags: - text-classification - pytorch - nli - xnli - de datasets: - xnli pipeline_tag: zero-shot-classification widget: - text: "Letzte Woche gab es einen Selbstmord in einer nahe gelegenen kolonie" candidate_labels: "Verbrechen,Tragödie,Stehlen" hypothesis_template: "In deisem geht es u...
1,711
cardiffnlp/twitter-roberta-base-emotion
[ "joy", "optimism", "anger", "sadness" ]
# Twitter-roBERTa-base for Emotion Recognition This is a roBERTa-base model trained on ~58M tweets and finetuned for emotion recognition with the TweetEval benchmark. - Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). - Git Repo: [Tweeteval official repository](https://g...
2,412
cross-encoder/stsb-roberta-base
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for Quora Duplicate Questions Detection This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [STS benchmark dataset]...
941
cross-encoder/stsb-roberta-large
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for Quora Duplicate Questions Detection This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [STS benchmark dataset]...
941
cmarkea/distilcamembert-base-sentiment
[ "1 star", "2 stars", "3 stars", "4 stars", "5 stars" ]
--- language: fr license: mit datasets: - amazon_reviews_multi - allocine widget: - text: "Je pensais lire un livre nul, mais finalement je l'ai trouvé super !" - text: "Cette banque est très bien, mais elle n'offre pas les services de paiements sans contact." - text: "Cette banque est très bien et elle offre en plus l...
6,683
microsoft/MiniLM-L12-H384-uncased
null
--- thumbnail: https://huggingface.co/front/thumbnails/microsoft.png tags: - text-classification license: mit --- ## MiniLM: Small and Fast Pre-trained Models for Language Understanding and Generation MiniLM is a distilled model from the paper "[MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression o...
2,015
typeform/distilbert-base-uncased-mnli
[ "ENTAILMENT", "NEUTRAL", "CONTRADICTION" ]
--- language: en pipeline_tag: zero-shot-classification tags: - distilbert datasets: - multi_nli metrics: - accuracy --- # DistilBERT base model (uncased) ## Table of Contents - [Model Details](#model-details) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitat...
3,882
cardiffnlp/twitter-roberta-base-stance-climate
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
0
cardiffnlp/twitter-roberta-base-irony
null
# Twitter-roBERTa-base for Irony Detection This is a roBERTa-base model trained on ~58M tweets and finetuned for irony detection with the TweetEval benchmark. - Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). - Git Repo: [Tweeteval official repository](https://github.co...
2,396
Narsil/deberta-large-mnli-zero-cls
[ "CONTRADICTION", "NEUTRAL", "ENTAILMENT" ]
--- language: en tags: - deberta-v1 - deberta-mnli tasks: mnli thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit pipeline_tag: zero-shot-classification --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBE...
3,888
typeform/mobilebert-uncased-mnli
[ "ENTAILMENT", "NEUTRAL", "CONTRADICTION" ]
--- language: en pipeline_tag: zero-shot-classification tags: - mobilebert datasets: - multi_nli metrics: - accuracy --- # MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices This model is the Multi-Genre Natural Language Inference (MNLI) fine-tuned version of the [uncased MobileBERT model](https:/...
363
roberta-large-mnli
[ "CONTRADICTION", "NEUTRAL", "ENTAILMENT" ]
--- language: - en license: mit tags: - autogenerated-modelcard datasets: - multi_nli - wikipedia - bookcorpus --- # roberta-large-mnli ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#ri...
10,712
bhadresh-savani/bert-base-go-emotion
[ "admiration", "amusement", "anger", "annoyance", "approval", "caring", "confusion", "curiosity", "desire", "disappointment", "disapproval", "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief", "joy", "love", "nervousness", "neutral", "optimism", "pride"...
--- language: - en thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4 tags: - text-classification - go-emotion - pytorch license: apache-2.0 datasets: - go_emotions metrics: - Accuracy --- # Bert-Base-Uncased-Go-Emotion ## Model description: ## Training ...
884
cross-encoder/quora-distilroberta-base
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for Quora Duplicate Questions Detection This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [Quora Duplicate Questi...
1,070
unitary/multilingual-toxic-xlm-roberta
[ "toxic" ]
--- pipeline_tag: "text-classification" --- <div align="center"> **⚠️ Disclaimer:** The huggingface models currently give different results to the detoxify library (see issue [here](https://github.com/unitaryai/detoxify/issues/15)). For the most up to date models we recommend using the models from https://github.c...
11,107
valhalla/distilbart-mnli-12-3
[ "contradiction", "entailment", "neutral" ]
--- datasets: - mnli tags: - distilbart - distilbart-mnli pipeline_tag: zero-shot-classification --- # DistilBart-MNLI distilbart-mnli is the distilled version of bart-large-mnli created using the **No Teacher Distillation** technique proposed for BART summarisation by Huggingface, [here](https://github.com/huggingfa...
2,406
MoritzLaurer/mDeBERTa-v3-base-mnli-xnli
[ "contradiction", "entailment", "neutral" ]
--- language: - multilingual - en - ar - bg - de - el - es - fr - hi - ru - sw - th - tr - ur - vu - zh tags: - zero-shot-classification - text-classification - nli - pytorch metrics: - accuracy datasets: - multi_nli - xnli pipeline_tag: zero-shot-classification widget: - text: "Angela Merkel ist eine P...
5,597
Tatyana/rubert-base-cased-sentiment-new
[ "NEGATIVE", "NEUTRAL", "POSITIVE" ]
--- language: - ru tags: - sentiment - text-classification datasets: - Tatyana/ru_sentiment_dataset --- # RuBERT for Sentiment Analysis Sentiment classification for Russian texts. Model trained on [Tatyana/ru_sentiment_dataset](https://huggingface.co/datasets/Tatyana/ru_sentiment_dataset) ## Labels meaning 0: NEUTRA...
1,000
siebert/sentiment-roberta-large-english
[ "NEGATIVE", "POSITIVE" ]
--- language: "en" tags: - sentiment - twitter - reviews - siebert --- ## SiEBERT - English-Language Sentiment Classification # Overview This model ("SiEBERT", prefix for "Sentiment in English") is a fine-tuned checkpoint of [RoBERTa-large](https://huggingface.co/roberta-large) ([Liu et al. 2019](https://arxiv.org/pd...
5,016
joeddav/bart-large-mnli-yahoo-answers
[ "contradiction", "entailment", "neutral" ]
--- language: en tags: - text-classification - pytorch datasets: - yahoo-answers pipeline_tag: zero-shot-classification --- # bart-large-mnli-yahoo-answers ## Model Description This model takes [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) and fine-tunes it on Yahoo Answers topic classif...
4,276
ynie/albert-xxlarge-v2-snli_mnli_fever_anli_R1_R2_R3-nli
[ "entailment", "neutral", "contradiction" ]
Entry not found
15
cardiffnlp/twitter-roberta-base-offensive
null
# Twitter-roBERTa-base for Offensive Language Identification This is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark. - Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf). - Git Repo: [Tweeteval of...
2,401
cross-encoder/ms-marco-electra-base
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch)....
3,233
EColi/SB_Classifier
[ "INTERACTION", "NONE", "SELFPROMO", "SPONSOR" ]
--- tags: - text-classification - generic library_name: generic widget: - text: 'This video is sponsored by squarespace' example_title: Sponsor - text: 'Check out the merch at linustechtips.com' example_title: Unpaid/self promotion - text: "Don't forget to like, comment and subscribe" example_title: Interaction r...
443
MilaNLProc/feel-it-italian-sentiment
[ "negative", "positive" ]
--- language: it license: mit tags: - sentiment - Italian --- # FEEL-IT: Emotion and Sentiment Classification for the Italian Language ## FEEL-IT Python Package You can find the package that uses this model for emotion and sentiment classification **[here](https://github.com/MilaNLProc/feel-it)** it is meant to be a...
3,551
cross-encoder/ms-marco-TinyBERT-L-2-v2
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for MS Marco This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch)....
3,233
textattack/bert-base-uncased-imdb
null
## TextAttack Model Card This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the imdb dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a cla...
612
MilaNLProc/feel-it-italian-emotion
[ "anger", "fear", "joy", "sadness" ]
--- language: it license: mit tags: - sentiment - emotion - Italian --- # FEEL-IT: Emotion and Sentiment Classification for the Italian Language ## FEEL-IT Python Package You can find the package that uses this model for emotion and sentiment classification **[here](https://github.com/MilaNLProc/feel-it)** it...
3,493
microsoft/xtremedistil-l6-h256-uncased
null
--- language: en thumbnail: https://huggingface.co/front/thumbnails/microsoft.png tags: - text-classification license: mit --- # XtremeDistilTransformers for Distilling Massive Neural Networks XtremeDistilTransformers is a distilled task-agnostic transformer model that leverages task transfer for learning a small uni...
2,944
valhalla/distilbart-mnli-12-6
[ "contradiction", "entailment", "neutral" ]
--- datasets: - mnli tags: - distilbart - distilbart-mnli pipeline_tag: zero-shot-classification --- # DistilBart-MNLI distilbart-mnli is the distilled version of bart-large-mnli created using the **No Teacher Distillation** technique proposed for BART summarisation by Huggingface, [here](https://github.com/huggingfa...
2,406
lewtun/roberta-base-bne-finetuned-amazon_reviews_multi
null
--- license: cc-by-4.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy model_index: - name: roberta-base-bne-finetuned-amazon_reviews_multi results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: a...
1,751
aatmasidha/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, t...
1,502
IDEA-CCNL/Erlangshen-Roberta-330M-Similarity
null
--- language: - zh license: apache-2.0 tags: - bert - NLU - NLI inference: true widget: - text: "今天心情不好[SEP]今天很开心" --- # Erlangshen-Roberta-330M-Similarity, model (Chinese), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). We collect 20 paraphrase datasets in the Chinese domain for f...
1,624
vicgalle/xlm-roberta-large-xnli-anli
[ "contradiction", "entailment", "neutral" ]
--- language: multilingual tags: - zero-shot-classification - nli - pytorch datasets: - mnli - xnli - anli license: mit pipeline_tag: zero-shot-classification widget: - text: "De pugna erat fantastic. Nam Crixo decem quam dilexit et praeciderunt caput aemulus." candidate_labels: "violent, peaceful" - text: "La pelícu...
1,751
prithivida/parrot_adequacy_model
[ "contradiction", "entailment", "neutral" ]
--- license: apache-2.0 --- Parrot THIS IS AN ANCILLARY MODEL FOR PARROT PARAPHRASER 1. What is Parrot? Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate training NLU models. A paraphrase framework is more than just a paraphrasing model. Please refer to the GitHub page or The mo...
364
textattack/bert-base-uncased-ag-news
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
## TextAttack Model Card This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the ag_news dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 128. Since this was a c...
625
prithivida/parrot_fluency_model
null
--- license: apache-2.0 --- Parrot THIS IS AN ANCILLARY MODEL FOR PARROT PARAPHRASER 1. What is Parrot? Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate training NLU models. A paraphrase framework is more than just a paraphrasing model. Please refer to the GitHub page or The mo...
364
cross-encoder/qnli-electra-base
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for Quora Duplicate Questions Detection This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data Given a question and paragraph, can the question be a...
1,996
Recognai/bert-base-spanish-wwm-cased-xnli
[ "contradiction", "neutral", "entailment" ]
--- language: es tags: - zero-shot-classification - nli - pytorch datasets: - xnli license: mit pipeline_tag: zero-shot-classification widget: - text: "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo" candidate_labels: "cultura, sociedad, economia, salud, deportes" --- # bert-bas...
2,081
chkla/roberta-argument
[ "NON-ARGUMENT", "ARGUMENT" ]
--- language: english widget: - text: "It has been determined that the amount of greenhouse gases have decreased by almost half because of the prevalence in the utilization of nuclear power." --- ### Welcome to RoBERTArg! 🤖 **Model description** This model was trained on ~25k heterogeneous manually annotated senten...
2,523
textattack/bert-base-uncased-yelp-polarity
null
## TextAttack Model Card This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 5e-05, and a maximum sequence length of 256. Since this ...
632
textattack/bert-base-uncased-CoLA
null
Entry not found
15
textattack/roberta-base-SST-2
null
Entry not found
15
pysentimiento/robertuito-emotion-analysis
[ "anger", "disgust", "fear", "joy", "others", "sadness", "surprise" ]
--- language: - es tags: - emotion-analysis - twitter --- # Emotion Analysis in Spanish ## robertuito-emotion-analysis Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with TASS 2020 Task 2 corpus for Emotion detection in Spanish...
2,600
pin/senda
[ "negativ", "neutral", "positiv" ]
--- language: da tags: - danish - bert - sentiment - polarity license: cc-by-4.0 widget: - text: "Sikke en dejlig dag det er i dag" --- # Danish BERT fine-tuned for Sentiment Analysis with `senda` This model detects polarity ('positive', 'neutral', 'negative') of Danish texts. It is trained and tested on Tweets anno...
1,782
savasy/bert-base-turkish-sentiment-cased
[ "negative", "positive" ]
--- language: tr --- # Bert-base Turkish Sentiment Model https://huggingface.co/savasy/bert-base-turkish-sentiment-cased This model is used for Sentiment Analysis and is based on BERTurk for the Turkish language https://huggingface.co/dbmdz/bert-base-turkish-cased ## Dataset The dataset is taken from the studies [[...
5,000
textattack/albert-base-v2-yelp-polarity
null
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the yelp_polarity dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 3e-05, and a maximum sequence length of 512. Since this was...
628
iarfmoose/bert-base-cased-qa-evaluator
null
# BERT-base-cased-qa-evaluator This model takes a question answer pair as an input and outputs a value representing its prediction about whether the input was a valid question and answer pair or not. The model is a pretrained [BERT-base-cased](https://huggingface.co/bert-base-cased) with a sequence classification head...
1,644
pysentimiento/robertuito-hate-speech
[ "aggressive", "hateful", "targeted" ]
--- language: - es tags: - twitter - hate-speech --- # Hate Speech detection in Spanish ## robertuito-hate-speech Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with SemEval 2019 Task 5: HatEval (SubTask B) corpus for Hate Speec...
2,868
j-hartmann/sentiment-roberta-large-english-3-classes
[ "negative", "neutral", "positive" ]
--- language: "en" tags: - roberta - sentiment - twitter widget: - text: "Oh no. This is bad.." - text: "To be or not to be." - text: "Oh Happy Day" --- This RoBERTa-based model can classify the sentiment of English language text in 3 classes: - positive 😀 - neutral 😐 - negative 🙁 The model was fine-tuned on 5,...
1,355
cross-encoder/nli-deberta-v3-base
[ "contradiction", "entailment", "neutral" ]
--- language: en pipeline_tag: zero-shot-classification tags: - microsoft/deberta-v3-base datasets: - multi_nli - snli metrics: - accuracy license: apache-2.0 --- # Cross-Encoder for Natural Language Inference This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/...
2,777
shahrukhx01/bert-mini-finetune-question-detection
null
--- language: "en" tags: - neural-search-query-classification - neural-search widget: - text: "keyword query." --- # KEYWORD QUERY VS STATEMENT/QUESTION CLASSIFIER FOR NEURAL SEARCH | Train Loss | Validation Acc.| Test Acc.| | ------------- |:-------------: | -----: | | 0.000806 | 0.99 | 0.997 | ```pyth...
1,349
Narasimha/hinglish-distilbert
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- license: mit ---
21
MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
[ "contradiction", "entailment", "neutral" ]
--- language: - en tags: - text-classification - zero-shot-classification metrics: - accuracy datasets: - multi_nli - anli - fever pipeline_tag: zero-shot-classification --- # DeBERTa-v3-base-mnli-fever-anli ## Model description This model was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, wh...
3,545
mrm8488/bert-tiny-finetuned-sms-spam-detection
null
--- language: en tags: - sms - spam - detection datasets: - sms_spam widget: - text: "Camera - You are awarded a SiPix Digital Camera! call 09061221066 fromm landline. Delivery within 28 days." --- # BERT-Tiny fine-tuned on sms_spam dataset for spam detection Validation accuracy: **0.98**
293
microsoft/deberta-large-mnli
[ "CONTRADICTION", "NEUTRAL", "ENTAILMENT" ]
--- language: en tags: - deberta-v1 - deberta-mnli tasks: mnli thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit widget: - text: "[CLS] I love you. [SEP] I like you. [SEP]" --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) impro...
3,907
aliosm/sha3bor-footer-51-arabertv02-base
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28", "LABEL_29",...
--- language: ar license: mit widget: - text: "إن العيون التي في طرفها حور" - text: "إذا ما فعلت الخير ضوعف شرهم" - text: "واحر قلباه ممن قلبه شبم" ---
152
aliosm/sha3bor-rhyme-detector-arabertv02-base
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_27", "LABEL_28", "LABEL_29",...
--- language: ar license: mit widget: - text: "إن العيون التي في طرفها حور [شطر] قتلننا ثم لم يحيين قتلانا" - text: "إذا ما فعلت الخير ضوعف شرهم [شطر] وكل إناء بالذي فيه ينضح" - text: "واحر قلباه ممن قلبه شبم [شطر] ومن بجسمي وحالي عنده سقم" ---
245
aliosm/sha3bor-metre-detector-arabertv02-base
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
--- language: ar license: mit widget: - text: "إن العيون التي في طرفها حور [شطر] قتلننا ثم لم يحيين قتلانا" - text: "إذا ما فعلت الخير ضوعف شرهم [شطر] وكل إناء بالذي فيه ينضح" - text: "واحر قلباه ممن قلبه شبم [شطر] ومن بجسمي وحالي عنده سقم" ---
245
shahrukhx01/question-vs-statement-classifier
null
--- language: "en" tags: - neural-search-query-classification - neural-search widget: - text: "what did you eat in lunch?" --- # KEYWORD STATEMENT VS QUESTION CLASSIFIER FOR NEURAL SEARCH ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained(...
664
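The code snippet in the card above is cut off at the tokenizer call; a self-contained completion (the label semantics, 1 = question, are an assumption):

```python
# Sketch: classifying a query as question vs. statement with the card's model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "shahrukhx01/question-vs-statement-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("what did you eat in lunch?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```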
LiYuan/amazon-review-sentiment-analysis
[ "1 star", "2 stars", "3 stars", "4 stars", "5 stars" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-mnli-amazon-query-shopping results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and ...
3,515
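Given the 1-5 star label set above, a minimal rating-prediction sketch:

```python
# Sketch: predicting a 1-5 star rating with LiYuan/amazon-review-sentiment-analysis.
from transformers import pipeline

rater = pipeline("text-classification", model="LiYuan/amazon-review-sentiment-analysis")
pred = rater("Great quality and it arrived on time.")[0]
print(pred["label"])
```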
ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli
[ "entailment", "neutral", "contradiction" ]
--- datasets: - snli - anli - multi_nli - multi_nli_mismatch - fever license: mit --- This is a strong pre-trained RoBERTa-Large NLI model. The training data is a combination of well-known NLI datasets: [`SNLI`](https://nlp.stanford.edu/projects/snli/), [`MNLI`](https://cims.nyu.edu/~sbowman/multinli/), [`FEVER-NLI`...
3,406
howey/bert-base-uncased-sst2
null
Entry not found
15
textattack/bert-base-uncased-SST-2
null
Entry not found
15
textattack/roberta-base-CoLA
null
## TextAttack Model Card This `roberta-base` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score t...
528
finiteautomata/bertweet-base-emotion-analysis
[ "anger", "disgust", "fear", "joy", "others", "sadness", "surprise" ]
--- language: - en tags: - emotion-analysis --- # Emotion Analysis in English ## bertweet-base-emotion-analysis Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with EmoEvent corpus for Emotion detection in English. Base model is [B...
1,547
sbcBI/sentiment_analysis_model
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- language: en tags: - exbert license: apache-2.0 datasets: - Confidential --- # BERT base model (uncased) Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github....
2,215
microsoft/Multilingual-MiniLM-L12-H384
null
--- thumbnail: https://huggingface.co/front/thumbnails/microsoft.png tags: - text-classification license: mit --- ## MiniLM: Small and Fast Pre-trained Models for Language Understanding and Generation MiniLM is a distilled model from the paper "[MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression o...
6,174
cross-encoder/stsb-distilroberta-base
[ "LABEL_0" ]
--- license: apache-2.0 --- # Cross-Encoder for Semantic Textual Similarity This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. ## Training Data This model was trained on the [STS benchmark dataset]...
941
sahri/indonesiasentiment
[ "negative", "neutral", "positive" ]
--- language: id tags: - indonesian-roberta-base-sentiment-classifier license: mit datasets: - indonlu widget: - text: "tidak jelek tapi keren" --- ## Indonesian RoBERTa Base Sentiment Classifier Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [RoB...
2,811
microsoft/deberta-xlarge-mnli
[ "CONTRADICTION", "NEUTRAL", "ENTAILMENT" ]
--- language: en tags: - deberta-v1 - deberta-mnli tasks: mnli thumbnail: https://huggingface.co/front/thumbnails/microsoft.png license: mit widget: - text: "[CLS] I love you. [SEP] I like you. [SEP]" --- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention [DeBERTa](https://arxiv.org/abs/2006.03654) impro...
3,909