All-in-one: Understanding and Generation in Multimodal Reasoning with the MAIA Benchmark


By Davide Testa and 7 other authors

Abstract: We introduce MAIA (Multimodal AI Assessment), a native-Italian benchmark designed for fine-grained investigation of the reasoning abilities of visual language models on videos. MAIA differs from other available video benchmarks in its design, its reasoning categories, the metric it uses, and the language and culture of the videos. MAIA evaluates Vision Language Models (VLMs) on two aligned tasks: a visual statement verification task and an open-ended visual question-answering task, both on the same set of video-related questions. It considers twelve reasoning categories that aim to disentangle language and vision relations by highlighting the role of the visual input. Thanks to its carefully thought-out design, it evaluates VLMs' consistency and visually grounded natural language comprehension and generation simultaneously through an aggregated metric, revealing low results that highlight models' fragility. Last but not least, the video collection has been carefully selected to reflect Italian culture, and the language data are produced by native speakers.

Submission history

From: Alessio Miaschi
[v1]
Mon, 24 Feb 2025 09:25:51 UTC (9,652 KB)
[v2]
Fri, 30 May 2025 13:57:45 UTC (7,401 KB)
[v3]
Mon, 22 Sep 2025 08:22:14 UTC (5,318 KB)
