Instructions to use patrickvonplaten/data2vec-base-960h with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use patrickvonplaten/data2vec-base-960h with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="patrickvonplaten/data2vec-base-960h")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCTC

tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/data2vec-base-960h")
model = AutoModelForCTC.from_pretrained("patrickvonplaten/data2vec-base-960h")
```

- Notebooks
- Google Colab
- Kaggle
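A model loaded with `AutoModelForCTC` emits per-frame logits that are turned into text by CTC greedy decoding: collapse consecutive repeats, then drop blank tokens. A minimal sketch of that decoding step, using a toy five-token vocabulary (the real model's vocabulary comes from its tokenizer):

```python
import numpy as np

# Hypothetical toy vocabulary; index 0 plays the role of the CTC blank.
vocab = ["<pad>", "H", "E", "L", "O"]

def ctc_greedy_decode(logits: np.ndarray) -> str:
    """Standard CTC greedy decoding: argmax per frame,
    collapse consecutive repeats, then drop blanks."""
    ids = logits.argmax(axis=-1)
    collapsed = [i for i, prev in zip(ids, [None, *ids[:-1]]) if i != prev]
    return "".join(vocab[i] for i in collapsed if i != 0)

# Toy logits for 6 time steps: H E L <pad> L O.
# The blank between the two L frames keeps them from collapsing.
logits = np.zeros((6, 5))
for t, tok in enumerate([1, 2, 3, 0, 3, 4]):
    logits[t, tok] = 1.0

print(ctc_greedy_decode(logits))  # HELLO
```

With the real checkpoint, `pipe(...)` performs this decoding internally via the tokenizer's `batch_decode`.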