Dynamic Model Routing and Cascading for Efficient LLM Inference: A Survey
Abstract
Dynamic routing systems for large language models adaptively select among multiple independently trained models based on query characteristics, requiring careful balance of competing objectives and operational constraints.
The rapid growth of large language models (LLMs) with diverse capabilities, costs, and domains has created a critical need for intelligent model selection at inference time. While smaller models suffice for routine queries, complex tasks demand more capable models; static deployments that ignore the complexity and domain of incoming queries therefore yield suboptimal performance at unnecessary cost. Dynamic routing systems that adaptively select models based on query characteristics have emerged as a solution to this challenge. We provide a systematic analysis of state-of-the-art multi-LLM routing and cascading approaches. In contrast to mixture-of-experts architectures, which route among experts within a single model, we study routing across multiple independently trained LLMs. We cover routing paradigms based on query difficulty, human preferences, clustering, uncertainty quantification, reinforcement learning, multimodality, and cascading; for each paradigm, we analyze representative methods and examine key trade-offs. Beyond this taxonomy, we introduce a conceptual framework that characterizes routing systems along three dimensions: when decisions are made, what information they use, and how they are computed. This perspective highlights that practical systems are often compositional, integrating multiple paradigms under operational constraints. Our analysis shows that effective multi-LLM routing requires balancing competing objectives, and that the optimal routing strategy depends on deployment and computational constraints. Well-designed routing systems can outperform even the most powerful individual models by strategically leveraging specialized capabilities across models while maximizing efficiency gains. Open challenges remain in developing routing mechanisms that generalize across diverse architectures, modalities, and applications.
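The three-dimension framework (when decisions are made, what information they use, how they are computed) can be illustrated as a small design-space sketch. The dimension values and the two example entries below are illustrative assumptions for exposition, not classifications taken from the survey itself.

```python
# Hedged sketch of a three-dimension characterization of routing systems.
# The field values and example systems are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class RoutingDesign:
    when: str  # timing: "pre-hoc" (before any inference) vs "post-hoc" (after a first attempt)
    what: str  # information used: query embedding, difficulty score, model confidence, ...
    how: str   # decision mechanism: learned classifier, fixed threshold, RL policy, ...


# Two illustrative points in the design space:
difficulty_router = RoutingDesign(
    when="pre-hoc",
    what="predicted query difficulty",
    how="learned classifier",
)
confidence_cascade = RoutingDesign(
    when="post-hoc",
    what="small-model confidence",
    how="fixed threshold",
)

for design in (difficulty_router, confidence_cascade):
    print(design)
```

Framing routers this way makes the compositionality claim concrete: a deployed system can combine a pre-hoc difficulty router with a post-hoc confidence cascade, occupying multiple points in this space at once.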
Community
Smaller models can handle most queries, but complex ones may require more capable (and more expensive) models. Multi-LLM routing and cascading systems address this challenge, and can outperform even the most powerful individual models in terms of both cost and quality.
Our new survey maps the state of the art in multi-LLM routing and cascading across six paradigms: difficulty-aware routing, human preference alignment, clustering, reinforcement learning, uncertainty quantification, and cascading. We also introduce a design framework for understanding routing decisions along three dimensions: when they are made, what signals they use, and how they are computed.
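The two core mechanisms described above, difficulty-aware routing and confidence-based cascading, can be sketched in a few lines. The stub "models" below are plain functions standing in for LLM endpoints, and all names and thresholds (`score`-by-length, `conf_threshold`, etc.) are illustrative assumptions rather than any method from the survey.

```python
# Minimal sketch of two routing paradigms, using stub functions in place of
# real LLM calls. All heuristics here are toy placeholders.

def small_model(query: str):
    # Stub: returns (answer, confidence). A real system would call a cheap
    # LLM and estimate confidence, e.g. from mean token log-probability.
    conf = 0.9 if len(query.split()) < 8 else 0.4
    return f"small-answer({query})", conf


def large_model(query: str):
    # Stub for an expensive, more capable model.
    return f"large-answer({query})", 0.95


def route_by_difficulty(query: str, threshold: float = 0.5) -> str:
    """Pre-hoc routing: choose a model BEFORE inference from a difficulty score."""
    difficulty = min(len(query.split()) / 16, 1.0)  # toy proxy for difficulty
    model = large_model if difficulty > threshold else small_model
    answer, _ = model(query)
    return answer


def cascade(query: str, conf_threshold: float = 0.7) -> str:
    """Cascading: try the cheap model first, escalate only on low confidence."""
    answer, conf = small_model(query)
    if conf >= conf_threshold:
        return answer  # cheap answer accepted; large model never invoked
    answer, _ = large_model(query)
    return answer


print(route_by_difficulty("What is 2+2?"))
print(cascade("Prove that the halting problem is undecidable in detail please"))
```

The design trade-off is visible even in this toy: the pre-hoc router pays one routing decision but risks misjudging difficulty, while the cascade guarantees the cheap model is tried first at the cost of occasional double inference.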
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Models Under SCOPE: Scalable and Controllable Routing via Pre-hoc Reasoning (2026)
- Towards Fair and Comprehensive Evaluation of Routers in Collaborative LLM Systems (2026)
- GreenServ: Energy-Efficient Context-Aware Dynamic Routing for Multi-Model LLM Inference (2026)
- Sustainable LLM Inference using Context-Aware Model Switching (2026)
- RouteMoA: Dynamic Routing without Pre-Inference Boosts Efficient Mixture-of-Agents (2026)
- MMR-Bench: A Comprehensive Benchmark for Multimodal LLM Routing (2026)
- LLMRouterBench: A Massive Benchmark and Unified Framework for LLM Routing (2026)