Expected Harm: Rethinking Safety Evaluation of (Mis)Aligned LLMs Paper • 2602.01600
TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding Paper • 2502.19400 • Published Feb 26, 2025