Instructions to use LanguageBind/LanguageBind_Video with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LanguageBind/LanguageBind_Video with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="LanguageBind/LanguageBind_Video")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)

# Load model directly
from transformers import AutoModelForZeroShotImageClassification

model = AutoModelForZeroShotImageClassification.from_pretrained("LanguageBind/LanguageBind_Video", dtype="auto")
```
A short sketch of handling the pipeline output follows the links below.
- Notebooks
- Google Colab
- Kaggle
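The following is a minimal sketch of consuming the pipeline result, assuming the standard zero-shot classification output format (a list of dicts with "label" and "score" keys); the image URL and candidate labels are the illustrative values from the snippet above.

```python
# Minimal sketch: rank the candidate labels by score.
# Assumes the standard zero-shot-image-classification pipeline output
# (a list of {"label", "score"} dicts); the URL and labels are illustrative.
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="LanguageBind/LanguageBind_Video")
results = pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
for r in sorted(results, key=lambda r: r["score"], reverse=True):
    print(f"{r['label']}: {r['score']:.3f}")
```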
linbin committed
Commit 499d15b · 1 Parent(s): 8c989d2
Upload 2 files

Changed files:
- config.json +1 -1
- pytorch_model.bin +2 -2
config.json
CHANGED
```diff
@@ -88,7 +88,7 @@
       "transformers_version": null,
       "vision_config": {
         "_name_or_path": "",
-        "lora_r":
+        "lora_r": 16,
         "lora_alpha": 16,
         "lora_dropout": 0.1,
         "add_time_attn": true,
```
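The lora_r, lora_alpha, and lora_dropout fields configure LoRA adapters in the vision tower, and this commit sets lora_r to 16. Below is a minimal sketch of a LoRA-wrapped linear layer under the standard formulation y = Wx + (alpha/r) * B A dropout(x); it illustrates what these hyperparameters control and is not the actual LanguageBind implementation.

```python
# Minimal LoRA linear sketch (assumption: standard LoRA formulation, not the
# exact LanguageBind code). With lora_r=16 and lora_alpha=16 the scale alpha/r is 1.0.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, lora_r=16, lora_alpha=16, lora_dropout=0.1):
        super().__init__()
        self.base = base  # frozen pretrained projection
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.lora_a = nn.Linear(base.in_features, lora_r, bias=False)   # down-projection A
        self.lora_b = nn.Linear(lora_r, base.out_features, bias=False)  # up-projection B
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op update
        self.dropout = nn.Dropout(lora_dropout)
        self.scale = lora_alpha / lora_r

    def forward(self, x):
        # y = W x + (alpha / r) * B A dropout(x)
        return self.base(x) + self.scale * self.lora_b(self.lora_a(self.dropout(x)))

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(1, 768))
```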
pytorch_model.bin
CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4fa6663eafe03922ba4b94eda8a18cd3e25276b9af4540e7c995fceb221a029b
+size 2127449437
```
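The updated LFS pointer records the new weight file's SHA-256 digest and size in bytes. Below is a minimal sketch of checking a locally downloaded pytorch_model.bin against that pointer; the local file path is an assumption, while the digest and size come from the pointer above.

```python
# Verify a downloaded pytorch_model.bin against the LFS pointer above.
# The local path is illustrative; the digest and size come from the pointer file.
import hashlib
import os

path = "pytorch_model.bin"  # assumed local download location
expected_oid = "4fa6663eafe03922ba4b94eda8a18cd3e25276b9af4540e7c995fceb221a029b"
expected_size = 2127449437

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("pytorch_model.bin matches the LFS pointer")
```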