Instructions for using ThomasFG/20-10 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ThomasFG/20-10 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="ThomasFG/20-10")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("ThomasFG/20-10")
model = AutoModelForSpeechSeq2Seq.from_pretrained("ThomasFG/20-10")
```
- Notebooks
- Google Colab
- Kaggle
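The pipeline above can be called directly on an audio file to get a transcription. A minimal sketch, assuming the checkpoint is a Whisper-style speech-to-text model (as the `AutoModelForSpeechSeq2Seq` loading code suggests); `audio.wav` is a placeholder path, not a file shipped with the model:

```python
from transformers import pipeline

# Build the ASR pipeline for this checkpoint (downloads weights on first use)
pipe = pipeline("automatic-speech-recognition", model="ThomasFG/20-10")

# "audio.wav" is a hypothetical local file; the pipeline also accepts a
# raw numpy array of samples or a dict like {"array": ..., "sampling_rate": ...}
result = pipe("audio.wav")
print(result["text"])
```

The pipeline handles resampling, feature extraction, and decoding internally, so no manual processor calls are needed for simple transcription.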
- Xet hash: 7bfe880f344bb3c8454017ca67011ea6d12e029c01c5d24f35ea399b5aefe9fe
- Size of remote file: 4.73 kB
- SHA256: a25f45a417732ba04bc773db0463d3478be1178865de72b5850b83daf1b7e406
Xet efficiently stores large files inside Git, intelligently splitting files into unique chunks to accelerate uploads and downloads.
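The chunk-splitting idea can be sketched with content-defined chunking: boundaries are chosen from the bytes themselves, so identical regions of two files always yield identical chunks that need to be stored only once. This is an illustrative toy, not Xet's actual chunker; the window size, mask, and hash function here are arbitrary assumptions:

```python
import hashlib

def chunk_bytes(data: bytes, window: int = 8, mask: int = 0x0F, min_size: int = 16) -> list[bytes]:
    """Split data at content-defined boundaries using a simple rolling window hash.

    Toy example only: Xet's real chunking algorithm and parameters differ.
    A boundary is declared wherever the low bits of a hash over the last
    `window` bytes are zero, so boundary positions depend on content, not
    on absolute file offsets."""
    chunks = []
    start = 0
    for i in range(window, len(data)):
        if i - start < min_size:
            continue  # enforce a minimum chunk size
        h = hashlib.sha256(data[i - window:i]).digest()[0]
        if h & mask == 0:  # content-defined boundary
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])  # final chunk
    return chunks
```

Because boundaries move with the content, inserting bytes near the start of a file shifts only the early chunks; later chunks re-align and deduplicate against the previous version.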