Mind's Eye: A Benchmark of Visual Abstraction, Transformation and Composition for Multimodal LLMs
Abstract
Multimodal large language models (MLLMs) have achieved impressive progress on vision-language benchmarks, yet their capacity for visuo-cognitive and visuospatial reasoning remains less well understood. We introduce "Mind's Eye", a multiple-choice benchmark of eight visuo-cognitive tasks inspired by classic human intelligence tests and organized under a novel "A-R-T" taxonomy: Abstraction, Relation, and Transformation. The tasks probe core processes of fluid intelligence such as pattern induction, analogical relation mapping, and mental transformation. We evaluate a diverse suite of closed-source and open-source MLLMs and compare their performance with that of human participants. Humans achieve 80% accuracy, while the top-performing MLLMs remain below 50%. Error analysis reveals failures in (i) visual attention allocation, (ii) internal perceptual manipulation, and (iii) abstraction of underlying visual concepts. Our findings suggest that current MLLMs exhibit limited visuospatial reasoning capabilities compared with human participants, highlighting the need for more cognitively grounded evaluation frameworks.
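As a rough illustration of the evaluation protocol the abstract describes (per-category multiple-choice accuracy, compared between models and humans), here is a minimal scoring sketch in Python. The item schema, category names, and the `query_model` stub are all hypothetical placeholders, not the authors' released harness.

```python
from collections import defaultdict

# Hypothetical item format using the A-R-T taxonomy categories;
# the actual Mind's Eye data schema may differ.
ITEMS = [
    {"category": "Abstraction", "image": "item_001.png",
     "choices": ["A", "B", "C", "D"], "answer": "C"},
    {"category": "Transformation", "image": "item_002.png",
     "choices": ["A", "B", "C", "D"], "answer": "A"},
]

def query_model(image_path, choices):
    """Placeholder for an actual MLLM call; here it naively picks
    the first option so the harness runs end to end."""
    return choices[0]

def evaluate(items):
    """Score multiple-choice items and aggregate accuracy overall
    and per taxonomy category."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        pred = query_model(item["image"], item["choices"])
        total[item["category"]] += 1
        if pred == item["answer"]:
            correct[item["category"]] += 1
    overall = sum(correct.values()) / max(sum(total.values()), 1)
    per_category = {c: correct[c] / total[c] for c in total}
    return overall, per_category

if __name__ == "__main__":
    overall, per_category = evaluate(ITEMS)
    print(f"overall accuracy: {overall:.2%}")
    print(per_category)
```

The same loop would be run once per model and once over human responses, making the headline comparison (80% human vs. below 50% MLLM) a straightforward difference in the `overall` figure.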
Community
Mind's Eye introduces a benchmark of eight visuo-cognitive tasks organized under an Abstraction-Relation-Transformation taxonomy, drawing from classic human intelligence tests to probe fluid intelligence in multimodal LLMs. With humans reaching 80% accuracy while top MLLMs stay below 50%, the work highlights significant gaps in visual attention, perceptual manipulation, and concept abstraction, suggesting current models still fall well short of human-level visuospatial reasoning.
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Do Vision-Language Models Truly Perform Vision Reasoning? A Rigorous Study of the Modality Gap (2026)
- VisDoT: Enhancing Visual Reasoning through Human-Like Interpretation Grounding and Decomposition of Thought (2026)
- Vision Language Models Cannot Reason About Physical Transformation (2026)
- From Human Cognition to Neural Activations: Probing the Computational Primitives of Spatial Reasoning in LLMs (2026)
- Deeper Thought, Weaker Aim: Understanding and Mitigating Perceptual Impairment during Reasoning in Multimodal Large Language Models (2026)
- Hidden Meanings in Plain Sight: RebusBench for Evaluating Cognitive Visual Reasoning (2026)
- Chain-of-Thought Degrades Visual Spatial Reasoning Capabilities of Multimodal LLMs (2026)
Get this paper in your agent:
hf papers read 2604.16054
Don't have the latest CLI? Install it with:
curl -LsSf https://hf.co/cli/install.sh | bash