modelId (string, 4-111 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (string, 5-30 chars, nullable) | author (string, 2-34 chars, nullable) | config (null) | securityStatus (null) | id (string, 4-111 chars) | likes (int64, 0-9.53k) | downloads (int64, 2-73.6M) | library_name (string, 2-84 chars, nullable) | created (timestamp[us]) | card (string, 101-901k chars) | card_len (int64, 101-901k) | embeddings (sequence) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jonatasgrosman/wav2vec2-large-xlsr-53-english | 2023-03-25T10:56:55.000Z | ["transformers","pytorch","jax","safetensors","wav2vec2","automatic-speech-recognition","audio","en","hf-asr-leaderboard","mozilla-foundation/common_voice_6_0","robust-speech-event","speech","xlsr-fine-tuning-week","dataset:common_voice","dataset:mozilla-foundation/common_vo... | automatic-speech-recognition | jonatasgrosman | null | null | jonatasgrosman/wav2vec2-large-xlsr-53-english | 299 | 73,582,776 | transformers | 2022-03-02T23:29:05 | "---\nlanguage: en\ndatasets:\n- common_voice\n- mozilla-foundation/common_voice_6_0\nmetrics:\n- wer\n- cer\ntags:\n- audio\n- automatic-speech-recognition\n- en\n- hf-asr-leaderboard\n- mozilla-foundation/common_voice_6_0\n- robust-speech-event\n- speech\n- xlsr-fine-tuning-week\nlicense: apache-2.0\nmodel-index:\n- name: XLSR Wav2Vec2 ..." | 5,327 | [[-0.0232391357421875,-0.048248291015625,0.011932373046875,0.0168914794921875,-0.0070037841796875,-0.018402099609375,-0.0272369384765625,-0.052276611328125,0.0106964111328125,0.0249176025390625,-0.05072021484375,-0.0323486328125,-0.031036376953125,... |
timm/mobilenetv3_large_100.ra_in1k | 2023-04-27T22:49:21.000Z | ["timm","pytorch","safetensors","image-classification","dataset:imagenet-1k","arxiv:2110.00476","arxiv:1905.02244","license:apache-2.0","region:us","has_space"] | image-classification | timm | null | null | timm/mobilenetv3_large_100.ra_in1k | 9 | 61,880,982 | timm | 2022-12-16T05:38:07 | "---\ntags:\n- image-classification\n- timm\nlibrary_name: timm\nlicense: apache-2.0\ndatasets:\n- imagenet-1k\n---\n# Model card for mobilenetv3_large_100.ra_in1k\nA MobileNet-v3 image classification model. Trained on ImageNet-1k in `timm` using recipe template described below.\nRecipe details:\n* RandAugment `RA` recipe. Inspi..." | 4,793 | [[-0.0309295654296875,-0.0209808349609375,-0.004390716552734375,0.006359100341796875,-0.0230865478515625,-0.0299835205078125,-0.00531005859375,-0.0260467529296875,0.0279998779296875,0.035003662109375,-0.02734375,-0.054229736328125,-0.0435791015625,... |
bert-base-uncased | 2023-06-30T01:42:19.000Z | ["transformers","pytorch","tf","jax","rust","coreml","onnx","safetensors","bert","fill-mask","exbert","en","dataset:bookcorpus","dataset:wikipedia","arxiv:1810.04805","license:apache-2.0","autotrain_compatible","endpoints_compatible","has_space","region:us"] | fill-mask | null | null | null | bert-base-uncased | 1,182 | 52,250,055 | transformers | 2022-03-02T23:29:04 | "---\nlanguage: en\ntags:\n- exbert\nlicense: apache-2.0\ndatasets:\n- bookcorpus\n- wikipedia\n---\n# BERT base model (uncased)\nPretrained model on English language using a masked language modeling (MLM) objective. It was introduced in\n[this paper](https://arxiv.org/abs/1810.04805) and first released in\n[this repository](http..." | 10,517 | [[-0.010284423828125,-0.046142578125,0.0119476318359375,0.023162841796875,-0.0394287109375,0.0003082752227783203,-0.00923919677734375,-0.0169677734375,0.033599853515625,0.041656494140625,-0.04144287109375,-0.03338623046875,-0.0570068359375,0.01056... |
distilbert-base-uncased-finetuned-sst-2-english | 2023-10-26T16:14:11.000Z | ["transformers","pytorch","tf","rust","onnx","safetensors","distilbert","text-classification","en","dataset:sst2","dataset:glue","arxiv:1910.01108","doi:10.57967/hf/0181","license:apache-2.0","model-index","endpoints_compatible","has_space","region:us"] | text-classification | null | null | null | distilbert-base-uncased-finetuned-sst-2-english | 331 | 41,670,892 | transformers | 2022-03-02T23:29:04 | "---\nlanguage: en\nlicense: apache-2.0\ndatasets:\n- sst2\n- glue\nmodel-index:\n- name: distilbert-base-uncased-finetuned-sst-2-english\n  results:\n  - task:\n      type: text-classification\n      name: Text Classification\n    dataset:\n      name: glue\n      type: glue\n      config: sst2\n      split: validation\n    metrics:\n..." | 10,458 | [[-0.0304412841796875,-0.05908203125,0.0137481689453125,0.012725830078125,-0.032501220703125,-0.0002455711364746094,-0.01410675048828125,-0.0252838134765625,0.007808685302734375,0.032745361328125,-0.04638671875,-0.04730224609375,-0.0693359375,-0.0... |
openai/clip-vit-large-patch14 | 2023-09-15T15:49:35.000Z | ["transformers","pytorch","tf","jax","safetensors","clip","zero-shot-image-classification","vision","arxiv:2103.00020","arxiv:1908.04913","endpoints_compatible","has_space","region:us"] | zero-shot-image-classification | openai | null | null | openai/clip-vit-large-patch14 | 676 | 26,212,915 | transformers | 2022-03-02T23:29:05 | "---\ntags:\n- vision\nwidget:\n- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png\n  candidate_labels: playing music, playing sports\n  example_title: Cat & Dog\n---\n# Model Card: CLIP\nDisclaimer: The model card is taken and modified from the official CLIP repository, it can be found ..." | 7,935 | [[-0.039031982421875,-0.0443115234375,0.0128173828125,-0.0023288726806640625,-0.01251983642578125,-0.019561767578125,0.001708984375,-0.054962158203125,0.0099334716796875,0.0298919677734375,-0.0217132568359375,-0.03155517578125,-0.048919677734375,0... |
gpt2 | 2023-06-30T02:19:43.000Z | ["transformers","pytorch","tf","jax","tflite","rust","onnx","safetensors","gpt2","text-generation","exbert","en","doi:10.57967/hf/0039","license:mit","endpoints_compatible","has_space","text-generation-inference","region:us"] | text-generation | null | null | null | gpt2 | 1,471 | 23,269,709 | transformers | 2022-03-02T23:29:04 | "---\nlanguage: en\ntags:\n- exbert\nlicense: mit\n---\n# GPT-2\nTest the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large\nPretrained model on English language using a causal language modeling (CLM) objective. It was introduced in\n[this paper](https://d4mucfpksywv.cloudfront.net/better..." | 8,090 | [[-0.0205841064453125,-0.055419921875,0.0232086181640625,-0.0022525787353515625,-0.019683837890625,-0.0235137939453125,-0.030242919921875,-0.03985595703125,-0.00772857666015625,0.023651123046875,-0.0361328125,-0.0206756591796875,-0.055755615234375,... |
tiiuae/falcon-7b-instruct | 2023-09-29T14:32:23.000Z | ["transformers","pytorch","coreml","falcon","text-generation","custom_code","en","dataset:tiiuae/falcon-refinedweb","arxiv:2205.14135","arxiv:1911.02150","arxiv:2005.14165","arxiv:2104.09864","arxiv:2306.01116","license:apache-2.0","endpoints_compatible","has_space","t... | text-generation | tiiuae | null | null | tiiuae/falcon-7b-instruct | 710 | 15,487,847 | transformers | 2023-04-25T06:21:01 | "---\ndatasets:\n- tiiuae/falcon-refinedweb\nlanguage:\n- en\ninference: true\nwidget:\n- text: \"Hey Falcon! Any recommendations for my holidays in Abu Dhabi?\"\n  example_title: \"Abu Dhabi Trip\"\n- text: \"What's the Everett interpretation of quantum mechanics?\"\n  example_title: \"Q/A: Quantum & Answers\"\n- text: \"Giv..." | 9,798 | [[-0.035675048828125,-0.07257080078125,0.005641937255859375,0.02783203125,-0.00731658935546875,-0.007244110107421875,-0.00921630859375,-0.034698486328125,0.01654052734375,0.0285797119140625,-0.0340576171875,-0.036224365234375,-0.056793212890625,0.... |
xlm-roberta-base | 2023-04-07T12:46:17.000Z | ["transformers","pytorch","tf","jax","onnx","safetensors","xlm-roberta","fill-mask","exbert","multilingual","af","am","ar","as","az","be","bg","bn","br","bs","ca","cs","cy","da","de","el","en","eo","es","et","eu","fa","fi... | fill-mask | null | null | null | xlm-roberta-base | 406 | 12,048,443 | transformers | 2022-03-02T23:29:04 | "---\ntags:\n- exbert\nlanguage:\n- multilingual\n- af\n- am\n- ar\n- as\n- az\n- be\n- bg\n- bn\n- br\n- bs\n- ca\n- cs\n- cy\n- da\n- de\n- el\n- en\n- eo\n- es\n- et\n- eu\n- fa\n- fi\n- fr\n- fy\n- ga\n- gd\n- gl\n- gu\n- ha\n- he\n- hi\n- hr\n- hu\n- hy\n- id\n- is\n- it\n- ja\n- jv\n- ka\n- kk\n- km\n- kn\n- ko\n- ku\n- ky\n- la\n- lo\n- lt\n- lv\n- mg\n- mk\n- ml\n- mn\n..." | 5,238 | [[-0.03326416015625,-0.056610107421875,0.01509857177734375,0.005535125732421875,-0.015625,-0.0003008842468261719,-0.0286407470703125,-0.029022216796875,0.01404571533203125,0.044036865234375,-0.033782958984375,-0.04351806640625,-0.05340576171875,0.... |
distilbert-base-uncased | 2023-08-18T14:59:41.000Z | ["transformers","pytorch","tf","jax","rust","safetensors","distilbert","fill-mask","exbert","en","da(...TRUNCATED) | fill-mask | null | null | null | distilbert-base-uncased | 292 | 11,014,465 | transformers | 2022-03-02T23:29:04 | "---\nlanguage: en\ntags:\n- exbert\nlicense: apache-2.0\ndatasets:\n- bookcorpus\n- wikipedia\n---\(...TRUNCATED) | 8,577 | [[-0.004299163818359375,-0.049346923828125,0.018951416015625,0.0210113525390625,-0.041534423828125,0(...TRUNCATED) |
sentence-transformers/all-mpnet-base-v2 | 2023-11-02T09:35:52.000Z | ["sentence-transformers","pytorch","mpnet","feature-extraction","sentence-similarity","en","dataset:(...TRUNCATED) | sentence-similarity | sentence-transformers | null | null | sentence-transformers/all-mpnet-base-v2 | 452 | 10,816,338 | sentence-transformers | 2022-03-02T23:29:05 | "---\npipeline_tag: sentence-similarity\ntags:\n- sentence-transformers\n- feature-extraction\n- sen(...TRUNCATED) | 10,571 | [[-0.0270233154296875,-0.0555419921875,0.0252685546875,0.01505279541015625,-0.00969696044921875,-0.0(...TRUNCATED) |
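Each row carries a vector in the `embeddings` column, which makes rows comparable by cosine similarity. Below is a minimal sketch in pure Python (no external dependencies). The two example vectors are only the first four components shown in the truncated preview above, taken from the `bert-base-uncased` and `distilbert-base-uncased-finetuned-sst-2-english` rows; a real comparison would use the full vectors.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of L2 norms.
    # Assumes equal-length, nonzero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# First four embedding components from the preview (truncated, illustrative only).
bert = [-0.010284423828125, -0.046142578125, 0.0119476318359375, 0.023162841796875]
distilbert = [-0.0304412841796875, -0.05908203125, 0.0137481689453125, 0.012725830078125]

print(cosine_similarity(bert, distilbert))
```

Because the similarity is norm-invariant, the half-precision-looking quantization of the stored floats does not change the ranking of nearest neighbors appreciably.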
Downloads last month: 2