Code-Space Response Oracles: Generating Interpretable Multi-Agent Policies with Large Language Models
Abstract
Code-Space Response Oracles replace traditional neural network policies with human-readable code generated by large language models, enabling interpretable and explainable multi-agent reinforcement learning.
Recent advances in multi-agent reinforcement learning, particularly Policy-Space Response Oracles (PSRO), have enabled the computation of approximate game-theoretic equilibria in increasingly complex domains. However, these methods rely on deep reinforcement learning oracles that produce "black-box" neural network policies, making them difficult to interpret, trust, or debug. We introduce Code-Space Response Oracles (CSRO), a novel framework that addresses this challenge by replacing RL oracles with Large Language Models (LLMs). CSRO reframes best-response computation as a code generation task, prompting an LLM to generate policies directly as human-readable code. This approach not only yields inherently interpretable policies but also leverages the LLM's pretrained knowledge to discover complex, human-like strategies. We explore multiple ways to construct and enhance an LLM-based oracle: zero-shot prompting, iterative refinement, and AlphaEvolve, a distributed LLM-based evolutionary system. We demonstrate that CSRO achieves performance competitive with baselines while producing a diverse set of explainable policies. Our work presents a new perspective on multi-agent learning, shifting the focus from optimizing opaque policy parameters to synthesizing interpretable algorithmic behavior.
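The core idea described above can be sketched as a PSRO-style loop in which the best-response oracle returns source code rather than network weights. The following is a minimal illustrative sketch, not the paper's implementation: the game is a toy rock-paper-scissors instance, `llm_best_response` is a hypothetical stand-in for an actual LLM call, and the full meta-game solve of PSRO is omitted.

```python
import random

# Toy zero-sum game (rock-paper-scissors): payoff to player 1.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a1, a2):
    if a1 == a2:
        return 0.0
    return 1.0 if BEATS[a1] == a2 else -1.0

def evaluate(pol1, pol2, episodes=2000, seed=0):
    """Average payoff of pol1 against pol2 over sampled episodes."""
    rng = random.Random(seed)
    return sum(payoff(pol1(rng), pol2(rng)) for _ in range(episodes)) / episodes

def llm_best_response(opponent_source):
    """Hypothetical stand-in for the LLM oracle: in CSRO this step would
    prompt an LLM with the opponent's source code and ask for a best
    response, itself written as code. Here we return a fixed policy."""
    return (
        "def policy(rng):\n"
        "    # Exploit a rock-heavy opponent by favoring paper.\n"
        "    return rng.choices(['rock', 'paper', 'scissors'],\n"
        "                       weights=[1, 2, 1])[0]\n"
    )

def compile_policy(source):
    """Turn generated source into a callable policy; keeping the source
    string alongside is what makes the policy human-readable."""
    namespace = {}
    exec(source, namespace)
    return namespace["policy"], source

def csro(iterations=1):
    """Simplified CSRO loop: grow a population of code policies, each new
    one generated as a best response to the latest member. (Full PSRO
    would also solve the empirical meta-game for a mixed meta-strategy.)"""
    population = [compile_policy("def policy(rng):\n    return 'rock'\n")]
    for _ in range(iterations):
        _, opponent_src = population[-1]
        population.append(compile_policy(llm_best_response(opponent_src)))
    return population
```

Because each population member is a source string, the resulting strategies can be inspected and debugged directly, which is the interpretability argument the abstract makes against neural-network oracles.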
Community
The following similar papers were recommended by the Semantic Scholar API:
- ReflexiCoder: Teaching Large Language Models to Self-Reflect on Generated Code and Self-Correct It via Reinforcement Learning (2026)
- MAGE: Meta-Reinforcement Learning for Language Agents toward Strategic Exploration and Exploitation (2026)
- Game-Theoretic Co-Evolution for LLM-Based Heuristic Discovery (2026)
- Adaptive Confidence Gating in Multi-Agent Collaboration for Efficient and Optimized Code Generation (2026)
- ContextEvolve: Multi-Agent Context Compression for Systems Code Optimization (2026)
- World Models for Policy Refinement in StarCraft II (2026)
- Policy of Thoughts: Scaling LLM Reasoning via Test-time Policy Evolution (2026)