How to use hr16/PhoWhisper-medium-flax with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="hr16/PhoWhisper-medium-flax")

# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("hr16/PhoWhisper-medium-flax")
model = AutoModelForSpeechSeq2Seq.from_pretrained("hr16/PhoWhisper-medium-flax")
```
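Whisper models operate on fixed 30-second windows of 16 kHz audio, so long-form input must be split into chunks before transcription (the Transformers pipeline can do this automatically via its `chunk_length_s` argument). The helper below is an illustrative sketch of that splitting using only NumPy; it is not part of the library:

```python
import numpy as np

SAMPLE_RATE = 16_000   # Whisper models expect 16 kHz mono audio
CHUNK_SECONDS = 30     # Whisper's fixed context window

def split_into_chunks(audio: np.ndarray) -> list[np.ndarray]:
    """Split a 1-D waveform into 30-second chunks, zero-padding the last one."""
    chunk_len = SAMPLE_RATE * CHUNK_SECONDS
    chunks = []
    for start in range(0, len(audio), chunk_len):
        chunk = audio[start:start + chunk_len]
        if len(chunk) < chunk_len:
            chunk = np.pad(chunk, (0, chunk_len - len(chunk)))
        chunks.append(chunk)
    return chunks

# 70 seconds of audio -> three 30-second chunks (the last one padded)
audio = np.zeros(70 * SAMPLE_RATE, dtype=np.float32)
chunks = split_into_chunks(audio)
print(len(chunks))  # 3
```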
Converted to Flax from vinai/PhoWhisper-medium for high-speed inference on TPUs.
```python
import jax.numpy as jnp
from whisper_jax import FlaxWhisperPipline  # note: "Pipline" is the actual class name in whisper-jax

pipeline = FlaxWhisperPipline("hr16/PhoWhisper-medium-flax", dtype=jnp.bfloat16, batch_size=16)
```
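The pipeline expects input sampled at 16 kHz, so audio recorded at another rate (e.g. 44.1 kHz) must be resampled first. In practice a library such as librosa or torchaudio would handle this; the function below is a hedged, NumPy-only sketch using linear interpolation:

```python
import numpy as np

def resample(audio: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Resample a 1-D waveform by linear interpolation (illustrative only)."""
    n_out = int(round(len(audio) * target_sr / orig_sr))
    old_t = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
    return np.interp(new_t, old_t, audio).astype(np.float32)

# one second of a 440 Hz tone at 44.1 kHz becomes one second at 16 kHz
audio_44k = np.sin(2 * np.pi * 440 * np.arange(44_100) / 44_100).astype(np.float32)
audio_16k = resample(audio_44k, orig_sr=44_100)
print(len(audio_16k))  # 16000
```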
PhoWhisper: Automatic Speech Recognition for Vietnamese
We introduce PhoWhisper in five versions for Vietnamese automatic speech recognition. PhoWhisper's robustness is achieved through fine-tuning the multilingual Whisper on an 844-hour dataset that encompasses diverse Vietnamese accents. Our experimental study demonstrates the state-of-the-art performance of PhoWhisper on benchmark Vietnamese ASR datasets. Please cite our PhoWhisper paper when it is used to help produce published results or is incorporated into other software:
@inproceedings{PhoWhisper,
title = {{PhoWhisper: Automatic Speech Recognition for Vietnamese}},
author = {Thanh-Thien Le and Linh The Nguyen and Dat Quoc Nguyen},
booktitle = {Proceedings of the ICLR 2024 Tiny Papers track},
year = {2024}
}
For further information or requests, please go to PhoWhisper's homepage!
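ASR benchmark results such as those mentioned above are conventionally reported as word error rate (WER): the number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length. Below is a minimal reference implementation for illustration; it is not the evaluation code used in the paper:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("xin chao viet nam", "xin chao viet nam"))  # 0.0
print(wer("xin chao viet nam", "xin chao nam"))       # 0.25
```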