Instructions for using Chars/DeepDanbooruClip with the Transformers library, either through the high-level pipeline helper or by loading the processor and model directly.
How to use Chars/DeepDanbooruClip with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="Chars/DeepDanbooruClip")
pipe(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png",
    candidate_labels=["animals", "humans", "landscape"],
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("Chars/DeepDanbooruClip")
model = AutoModelForZeroShotImageClassification.from_pretrained("Chars/DeepDanbooruClip")
```
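When loading the model directly, you compute the image/label similarity yourself instead of relying on the pipeline. Below is a minimal sketch of that, assuming the checkpoint exposes the standard CLIP-style interface (a `logits_per_image` output) through `AutoModelForZeroShotImageClassification`; the image URL and labels reuse the pipeline example above.

```python
# Minimal sketch: manual zero-shot classification with the directly loaded model.
# Assumes a CLIP-style checkpoint that returns logits_per_image.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("Chars/DeepDanbooruClip")
model = AutoModelForZeroShotImageClassification.from_pretrained("Chars/DeepDanbooruClip")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/parrots.png"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["animals", "humans", "landscape"]

# Tokenize the candidate labels and preprocess the image in one call.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (num_images, num_labels); softmax gives per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, prob in zip(labels, probs):
    print(f"{label}: {prob:.3f}")
```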
Model card metadata:

```yaml
tags:
  - vision
widget:
  - src: https://huggingface.co/Chars/DeepDanbooruClip/resolve/main/example.jpg
    candidate_labels: Azur Lane, 3 girl with sword, cat ear, a dog
    example_title: Azur Lane
  - src: https://huggingface.co/Chars/DeepDanbooruClip/resolve/main/example2.jpg
    candidate_labels: >-
      1 girl with black hair, rabbit ear, big breasts, minato aqua,
      fate/extra, k-on!, daiyousei, cirno
    example_title: cirno & daiyousei
```
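The widget entries above can also be reproduced locally. Here is a minimal sketch using the pipeline API, assuming the example image is reachable at the URL listed in the metadata; the candidate labels are the ones from the first widget entry.

```python
# Reproduce the first widget example locally with the pipeline.
from transformers import pipeline

pipe = pipeline("zero-shot-image-classification", model="Chars/DeepDanbooruClip")
results = pipe(
    "https://huggingface.co/Chars/DeepDanbooruClip/resolve/main/example.jpg",
    candidate_labels=["Azur Lane", "3 girl with sword", "cat ear", "a dog"],
)
# Each entry is a dict with "label" and "score", sorted by score in descending order.
print(results)
```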