Tags
- Automatic Speech Recognition
- Transformers
- PyTorch
- Swahili
- whisper
- hf-asr-leaderboard
- Generated from Trainer
- Eval Results
How to use hedronstone/whisper-medium-sw with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="hedronstone/whisper-medium-sw")
```

```python
# Load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("hedronstone/whisper-medium-sw")
model = AutoModelForSpeechSeq2Seq.from_pretrained("hedronstone/whisper-medium-sw")
```
Model
- Name: Whisper Medium Swahili
- Description: Whisper medium weights fine-tuned for Swahili speech-to-text and evaluated on normalized data.
- Performance: 30.51 WER
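The WER figure above is the word error rate: the word-level edit distance between the model's transcript and the reference, divided by the number of reference words. A minimal stdlib-only sketch of the metric (the two sample sentences are hypothetical, not from the evaluation set):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # One-row dynamic-programming edit distance over words.
    row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev_diag = row[0]
        row[0] = i
        for j, h in enumerate(hyp, 1):
            cur = row[j]
            row[j] = min(
                row[j] + 1,                  # deletion
                row[j - 1] + 1,              # insertion
                prev_diag + (r != h),        # substitution (0 if words match)
            )
            prev_diag = cur
    return row[-1] / len(ref)

# One substitution out of three reference words -> WER of 1/3.
print(wer("habari ya asubuhi", "habari za asubuhi"))
```

Note that a WER of 30.51 on the card corresponds to a ratio of 0.3051, reported as a percentage; production metrics (e.g. the `evaluate` library's `wer`) also apply text normalization before scoring.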
Weights
- Date of release: 12.09.2022
- License: MIT
Usage
To use these weights with Hugging Face's transformers library:

```python
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("hedronstone/whisper-medium-sw")
```
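Before generation, audio must be converted into the log-mel spectrogram features Whisper expects. A minimal sketch, assuming the `transformers` library is installed; the silent 30-second waveform is a hypothetical stand-in for a real 16 kHz Swahili recording:

```python
import numpy as np
from transformers import WhisperFeatureExtractor

# Default Whisper front end: 80 mel bins, 16 kHz input, 30 s windows.
feature_extractor = WhisperFeatureExtractor()

# Placeholder waveform: 30 s of silence at 16 kHz (use a real recording here).
audio = np.zeros(16000 * 30, dtype=np.float32)

features = feature_extractor(audio, sampling_rate=16000, return_tensors="np")
print(features.input_features.shape)  # (1, 80, 3000)
```

The resulting `input_features` tensor is what `model.generate(...)` consumes; the processor loaded via `AutoProcessor` bundles this same feature extractor with the tokenizer used to decode the generated token IDs.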
Evaluation results
- WER on the Common Voice 11.0 test set (self-reported): 30.510