[Submitted on 11 Dec 2024]
View a PDF of the paper titled Illusory VQA: Benchmarking and Enhancing Multimodal Models on Visual Illusions, by Mohammadmostafa Rostamkhani and 4 other authors
Abstract: In recent years, Visual Question Answering (VQA) has made significant strides, particularly with the advent of multimodal models that integrate vision and language understanding. However, existing VQA datasets often overlook the complexities introduced by image illusions, which pose unique challenges for both human perception and model interpretation. In this study, we introduce a novel task called Illusory VQA, along with four specialized datasets: IllusionMNIST, IllusionFashionMNIST, IllusionAnimals, and IllusionChar. These datasets are designed to evaluate the performance of state-of-the-art multimodal models in recognizing and interpreting visual illusions. We assess the zero-shot performance of various models, fine-tune selected models on our datasets, and propose a simple yet effective solution for illusion detection using Gaussian and blur low-pass filters. We show that this method significantly improves model performance; in the case of BLIP-2 on IllusionAnimals, it even surpasses human performance without any fine-tuning. Our findings highlight the disparity between human and model perception of illusions and demonstrate that fine-tuning and specific preprocessing techniques can significantly enhance model robustness. This work contributes to the development of more human-like visual understanding in multimodal models and suggests future directions for adapting filters using learnable parameters.
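The preprocessing idea described in the abstract, low-pass filtering an image before passing it to a multimodal model, can be sketched as follows. The paper's exact filter type and parameters are not given here, so the kernel size and sigma below are illustrative assumptions, and the implementation is a minimal NumPy Gaussian blur rather than the authors' pipeline.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    """Build a normalized 2D Gaussian kernel (size and sigma are assumed values)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def low_pass(image, size=5, sigma=1.5):
    """Apply a Gaussian low-pass filter to a 2D grayscale image.

    Naive sliding-window convolution with reflect padding; a real
    pipeline would use an optimized library routine instead.
    """
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = (padded[i:i + size, j:j + size] * k).sum()
    return out
```

Suppressing high-frequency detail in this way removes the fine texture that carries the illusory pattern, leaving the underlying low-frequency content (e.g., the hidden digit or animal) for the model to recognize.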
Submission history
From: Mohammadmostafa Rostamkhani [view email]
[v1]
Wed, 11 Dec 2024 07:51:18 UTC (19,249 KB)