# whisper-tiny-it-multi-ct2-int8
CTranslate2 INT8 quantized version of LocalAI-io/whisper-tiny-it-multi for fast CPU inference.
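For context, INT8 quantization stores each weight as an 8-bit integer plus a floating-point scale, which shrinks the model and speeds up CPU inference. The sketch below shows a minimal per-tensor symmetric scheme; it is a simplification for illustration, not the actual CTranslate2 implementation.

```python
def quantize_int8(weights):
    """Map float weights to int8 values with a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
```

Storage drops from 32 bits to 8 bits per weight, at the cost of a small rounding error bounded by half the scale.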
Author: Ettore Di Giacinto
Brought to you by the LocalAI team. This model can be used directly with LocalAI.
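To serve the model through LocalAI, a model definition along these lines should work (a minimal sketch — the exact field names and backend name are assumptions; check the LocalAI documentation for your version):

```yaml
# models/whisper-tiny-it.yaml (hypothetical config; verify keys against LocalAI docs)
name: whisper-tiny-it
backend: faster-whisper
parameters:
  model: LocalAI-io/whisper-tiny-it-multi-ct2-int8
```

Once loaded, the model is reachable through LocalAI's OpenAI-compatible audio transcription endpoint.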
## Usage
### faster-whisper
```python
from faster_whisper import WhisperModel

model = WhisperModel("LocalAI-io/whisper-tiny-it-multi-ct2-int8", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.mp3", language="it")

for segment in segments:
    print(f"[{segment.start:.1f}s - {segment.end:.1f}s] {segment.text}")
```
### WhisperX
```python
import whisperx

model = whisperx.load_model("LocalAI-io/whisper-tiny-it-multi-ct2-int8", device="cpu", compute_type="int8")
result = model.transcribe("audio.mp3", language="it")
```
## Links
- HF Safetensors: LocalAI-io/whisper-tiny-it-multi
- Code: github.com/localai-org/whisper-it
- LocalAI: github.com/mudler/LocalAI
## Model tree

Base model: openai/whisper-tiny