Dataset: mozilla-foundation/common_voice_13_0
This model is a fine-tuned version of EYEDOL/FROM_C3_NEW1 on the Common Voice 13.0 dataset. On the evaluation set, its last logged checkpoint (step 2000) reached a validation loss of 0.2171 and a WER of 16.76% (see the training results below).

How to use EYEDOL/FROM_C3_NEW2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="EYEDOL/FROM_C3_NEW2")
```

```python
# Load the model and processor directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("EYEDOL/FROM_C3_NEW2")
model = AutoModelForSpeechSeq2Seq.from_pretrained("EYEDOL/FROM_C3_NEW2")
```
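The pipeline usage above takes a path to an audio file or a raw waveform. As a minimal sketch of preparing raw audio (the `{"raw": ..., "sampling_rate": ...}` input format and the 16 kHz rate are assumptions based on typical Whisper-style ASR checkpoints, not stated by this card), here is a conversion from int16 PCM to the float32 mono array the feature extractor consumes:

```python
import numpy as np

def to_model_input(pcm: np.ndarray) -> np.ndarray:
    """Convert int16 PCM (mono shape (n,) or stereo shape (n, channels))
    to a float32 mono waveform scaled into [-1.0, 1.0]."""
    audio = pcm.astype(np.float32) / 32768.0  # int16 full scale -> [-1, 1]
    if audio.ndim == 2:                       # average channels to mono
        audio = audio.mean(axis=1)
    return audio

# One second of silence at 16 kHz as a stand-in for real audio
waveform = to_model_input(np.zeros(16000, dtype=np.int16))
# pipe({"raw": waveform, "sampling_rate": 16000}) would then return the
# transcription as a dict with a "text" key
```

If your source audio is not 16 kHz, resample it first; Whisper-family feature extractors expect 16 kHz input.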
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer (%) |
|:---:|:---:|:---:|:---:|:---:|
| 0.0665 | 0.6918 | 2000 | 0.2171 | 16.7644 |
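The Wer column above is the word error rate in percent (16.7644 ≈ 16.8%). For illustration of the metric only (trainer-generated cards typically compute it with the `evaluate`/`jiwer` libraries; this is a standalone restatement, not the script used here), WER is the word-level edit distance divided by the reference word count:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance between the
    reference and hypothesis, divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))  # distances against an empty reference
    for i, r in enumerate(ref, 1):
        cur = [i]                     # cost of deleting all i words so far
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # delete a reference word
                           cur[j - 1] + 1,           # insert a hypothesis word
                           prev[j - 1] + (r != h)))  # substitute, or free match
        prev = cur
    return prev[-1] / len(ref)

# Example: round(100 * wer("the cat sat on the mat", "the cat sat mat"), 2) -> 33.33
```

A WER above 100% is possible when the hypothesis contains more errors than the reference has words.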