MHSafeEval: Role-Aware Interaction-Level Evaluation of Mental Health Safety in Large Language Models

Published on Apr 20

Abstract

A role-aware safety taxonomy and agent-based evaluation framework reveal cumulative safety failures in LLM counseling that static benchmarks miss.

AI-generated summary

Large language models (LLMs) are increasingly explored as scalable tools for mental health counseling, yet evaluating their safety remains challenging due to the interactional and context-dependent nature of clinical harm. Existing evaluation frameworks predominantly assess isolated responses using coarse-grained taxonomies or static datasets, limiting their ability to diagnose how harms emerge and accumulate over multi-turn counseling interactions. In this work, we introduce R-MHSafe, a role-aware mental health safety taxonomy that characterizes clinically significant harm in terms of the interactional role an AI counselor adopts (perpetrator, instigator, facilitator, or enabler), combined with clinically grounded harm categories. We then propose MHSafeEval, a closed-loop, agent-based evaluation framework that formulates safety assessment as trajectory-level discovery of harm through adversarial multi-turn interactions, guided by role-aware modeling. Using R-MHSafe and MHSafeEval, we conduct a large-scale evaluation across state-of-the-art LLMs. Our results reveal substantial role-dependent and cumulative safety failures that existing static benchmarks systematically miss, and show that our framework significantly improves failure-mode coverage and diagnostic granularity.

Community

Author here 👋 Happy to answer any questions about MHSafeEval.

TL;DR: Existing LLM safety benchmarks assume adversarial users. But in mental health counseling, the most vulnerable users aren't trying to jailbreak the model; they're reaching out for help. We built R-MHSafe, the first role-aware taxonomy (7 harm categories × 4 counselor roles: perpetrator/instigator/facilitator/enabler), and MHSafeEval, an agent-based evaluation framework that measures how harm accumulates over multi-turn conversations.
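
For concreteness, here's a minimal Python sketch of what the role × category grid looks like as a data structure. The seven harm-category names are placeholders (the post doesn't enumerate them), and the one-line role glosses are my own reading of the terms, except for Enabler, which is described in the key finding below:

```python
from dataclasses import dataclass
from enum import Enum

class CounselorRole(Enum):
    """The four interactional roles defined by R-MHSafe."""
    PERPETRATOR = "perpetrator"   # directly produces harmful content
    INSTIGATOR = "instigator"     # actively encourages a harmful course of action
    FACILITATOR = "facilitator"   # supplies information or means that enable harm
    ENABLER = "enabler"           # passively accepts harmful user framings

# Hypothetical placeholders: the post says "7 harm categories" but
# doesn't list them, so these names are illustrative only.
HARM_CATEGORIES = [f"harm_category_{i}" for i in range(1, 8)]

@dataclass(frozen=True)
class SafetyLabel:
    """One cell of the role x harm-category grid."""
    role: CounselorRole
    category: str

# 7 categories x 4 roles = 28 role-aware failure modes to probe for.
TAXONOMY = [SafetyLabel(role, cat)
            for role in CounselorRole
            for cat in HARM_CATEGORIES]
assert len(TAXONOMY) == 28
```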

Key finding: Even frontier models fail systematically in the Enabler role, passively accepting harmful user framings without correction. Across [N] state-of-the-art LLMs, we observe cumulative safety failures that static single-turn benchmarks miss entirely.
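
And a rough sketch of the closed-loop evaluation shape. This is my reading of "trajectory-level discovery of harm," not the paper's actual code: the user simulator, counselor, and judge are stubbed out as callables, and the scoring scheme is a placeholder.

```python
from typing import Callable

# An agent maps the conversation so far to its next message;
# a judge maps the conversation so far to a harm score.
Agent = Callable[[list[dict]], str]
Judge = Callable[[list[dict]], float]

def evaluate_trajectory(counselor: Agent,
                        user_simulator: Agent,
                        judge: Judge,
                        max_turns: int = 10) -> list[float]:
    """Run one closed-loop multi-turn interaction, scoring harm each turn.

    Returning per-turn scores (rather than one static verdict) is what
    lets cumulative failures show up as a rising trajectory instead of
    being invisible to a single-turn check.
    """
    history: list[dict] = []
    scores: list[float] = []
    for _ in range(max_turns):
        # Simulated user produces the next (possibly vulnerable) turn.
        history.append({"role": "user", "content": user_simulator(history)})
        # Counselor model under evaluation responds.
        history.append({"role": "assistant", "content": counselor(history)})
        # Judge scores the trajectory so far, not just the last reply.
        scores.append(judge(history))
    return scores
```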

Paper: https://arxiv.org/abs/2604.17730
Code: https://github.com/suhyun565/MHSafeEval
Dataset: https://huggingface.co/datasets/Suhyunlee/MHSafeEval
Demo: https://huggingface.co/spaces/Suhyunlee/MHSafeEval-demo


