arxiv:2604.16593

Revisiting a Pain in the Neck: A Semantic Reasoning Benchmark for Language Models

Published on Apr 17 · Submitted by Yang on Apr 21
Abstract

SemanticQA evaluates language models on semantic phrase processing tasks, revealing significant performance variations in reasoning and comprehension across different phrase types and model architectures.

AI-generated summary

We present SemanticQA, an evaluation suite designed to assess language models (LMs) on semantic phrase processing tasks. The benchmark consolidates existing multiword expression (MWE) resources and reorganizes them into a unified testbed. It covers both general lexical phenomena, such as lexical collocations, and three fine-grained categories: idiomatic expressions, noun compounds, and verbal constructions. Using SemanticQA, we assess LMs of diverse architectures and scales on extraction, classification, and interpretation tasks, as well as on sequential compositions of these tasks. We reveal substantial performance variation, particularly on tasks requiring semantic reasoning, highlighting differences in LMs' reasoning efficacy and semantic understanding and offering insights toward building LMs with stronger comprehension of non-trivial semantic phrases. The evaluation harness and data of SemanticQA are available at https://github.com/jacklanda/SemanticQA.
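To make the three fine-grained MWE categories concrete, the sketch below pairs each category with a common textbook example. The phrases are illustrative only and are not taken from the SemanticQA data.

```python
# Illustrative examples of the three fine-grained MWE categories named in the
# abstract. The phrases are standard textbook instances, not benchmark data.
mwe_examples = {
    "idiomatic_expression": "kick the bucket",   # meaning is non-compositional
    "noun_compound": "coffee table",             # noun modifying a noun
    "verbal_construction": "take into account",  # verb plus fixed complement
}

for category, phrase in mwe_examples.items():
    print(f"{category}: {phrase}")
```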

