whisper-tiny-it-multi-ct2-int8

This is a CTranslate2 INT8-quantized version of LocalAI-io/whisper-tiny-it-multi, intended for fast CPU inference.
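A minimal sketch of how such an INT8 conversion is typically produced with the CTranslate2 Transformers converter; the exact options used for this repository are an assumption:

import ctranslate2

# Convert the original Transformers checkpoint to CTranslate2 format and
# quantize the weights to INT8. The output directory name is illustrative.
converter = ctranslate2.converters.TransformersConverter(
    "LocalAI-io/whisper-tiny-it-multi",
    copy_files=["tokenizer.json", "preprocessor_config.json"],
)
converter.convert("whisper-tiny-it-multi-ct2-int8", quantization="int8")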

Author: Ettore Di Giacinto

Brought to you by the LocalAI team. This model can be used directly with LocalAI; a sketch of that setup is included under Usage below.

Usage

faster-whisper

from faster_whisper import WhisperModel

model = WhisperModel("LocalAI-io/whisper-tiny-it-multi-ct2-int8", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.mp3", language="it")
for segment in segments:
    print(f"[{segment.start:.1f}s - {segment.end:.1f}s] {segment.text}")

WhisperX

import whisperx

model = whisperx.load_model("LocalAI-io/whisper-tiny-it-multi-ct2-int8", device="cpu", compute_type="int8")
result = model.transcribe("audio.mp3", language="it")
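# result is a dict; "segments" holds the transcribed segments with timestamps
for segment in result["segments"]:
    print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] {segment['text']}")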

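LocalAI

LocalAI exposes an OpenAI-compatible API, so the model can be called through the standard OpenAI Python client once it is configured in a running LocalAI instance. A minimal sketch, assuming LocalAI is reachable at http://localhost:8080 and the model is registered under the (hypothetical) name whisper-tiny-it-multi-ct2-int8:

from openai import OpenAI

# LocalAI does not require a real API key; any placeholder value works.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

with open("audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-tiny-it-multi-ct2-int8",  # assumed model name in the LocalAI config
        file=audio_file,
        language="it",
    )

print(transcript.text)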