# KAME

## Links
- Paper: [arXiv:2510.02327](https://arxiv.org/abs/2510.02327) (ICASSP 2026)
- Inference code: SakanaAI/kame
- Finetuning code: SakanaAI/kame_finetune
## Abstract
Real-time speech-to-speech (S2S) models excel at generating natural, low-latency conversational responses but often lack deep knowledge and semantic understanding. Conversely, cascaded systems combining automatic speech recognition, a text-based Large Language Model (LLM), and text-to-speech synthesis offer superior knowledge representation at the cost of high latency, which disrupts the flow of natural interaction. This paper introduces a novel hybrid architecture that bridges the gap between these two paradigms. Our framework processes user speech through an S2S transformer for immediate responsiveness while concurrently relaying the query to a powerful back-end LLM. The LLM’s text-based response is then injected in real time to guide the S2S model’s speech generation, effectively infusing its output with rich knowledge without the full latency penalty of a cascaded system. We evaluated our method using a speech-synthesized variant of the MT-Bench benchmark that consists of multi-turn question-answering sessions. The results demonstrate that our system substantially outperforms a baseline S2S model in response correctness, approaching that of a cascaded system, while maintaining a latency on par with the baseline.
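The abstract describes a dual-path design: the front-end S2S model begins speaking immediately, while the back-end LLM's text answer is injected mid-stream once it arrives. The sketch below illustrates that timing relationship with plain `asyncio`; `backend_llm` and `s2s_stream` are hypothetical placeholders for illustration only, not APIs from the KAME repositories.

```python
import asyncio

# Hypothetical stand-ins for the two components described in the abstract.
# Neither name comes from the KAME code; they only illustrate the data flow:
# the S2S model streams audio frames right away, and the back-end LLM's
# response is injected into its context as soon as it becomes available.

async def backend_llm(query_text: str) -> str:
    """Slow but knowledgeable text LLM (placeholder)."""
    await asyncio.sleep(1.0)  # stands in for model/network latency
    return "knowledge-rich answer to: " + query_text

async def s2s_stream(user_audio: bytes, llm_task: asyncio.Future):
    """Front-end S2S model (placeholder): emits audio frames immediately,
    and conditions on the LLM text once the task completes."""
    guidance = None
    for step in range(20):  # one iteration per output audio frame
        if guidance is None and llm_task.done():
            guidance = llm_task.result()  # inject LLM text mid-stream
        yield f"frame {step} (guided={guidance is not None})"
        await asyncio.sleep(0.1)  # placeholder for the real-time frame rate

async def main():
    query = "transcribed user question"  # relayed to the back-end LLM
    llm_task = asyncio.ensure_future(backend_llm(query))
    async for frame in s2s_stream(b"...", llm_task):
        print(frame)  # speech starts well before the LLM answer is ready

asyncio.run(main())
```

In the actual system the injected text conditions the S2S transformer's generation rather than merely flipping a flag, but the latency structure is the same: speech output begins at frame 0, and the knowledgeable answer arrives and takes effect partway through the response.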
## Base Model

The front-end S2S model is based on [Moshi](https://arxiv.org/abs/2410.00037), a full-duplex speech-to-speech (speech-text) foundation model for real-time dialogue.