Evaluating VQA Models' Consistency in the Scientific Domain
Abstract
Visual Question Answering (VQA) in the scientific domain is a challenging task that requires a high-level understanding of the given image to answer a given question. Although they achieve impressive results on the ScienceQA dataset, both the LLaVA and MM-CoT models exhibit inconsistent answers when a simple modification is applied to the textual input of the question (e.g., re-ordering the answer choices). In this paper, we propose two approaches that slightly modify the image-question pair without changing the question's meaning, in order to gain a deeper understanding of how VQA models interpret questions: choice permutation and question rephrasing. Alongside these two approaches, we introduce two metrics, Consistency across Choice Variations (CaCV) and Consistency across Question Variations (CaQV), to measure the consistency of VQA models. The experimental results show that both LLaVA and MM-CoT give inconsistent answers regardless of their accuracy. We further compare the proposed metrics with the Accuracy metric, demonstrating that relying solely on Accuracy is inadequate. By revealing the limitations of existing VQA models and of the Accuracy metric through evaluation in the scientific domain, we aim to provide insights that motivate future research.
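The abstract describes consistency metrics computed over perturbed versions of each question, but does not give their exact formulas. Below is a minimal, hypothetical sketch of a CaCV-style score, assuming it is the fraction of choice permutations for which the model selects the same answer option as it does for the original ordering; the `predict` callable and the permutation cap are illustrative assumptions, not the paper's definition. A CaQV-style score could be computed analogously over rephrased questions instead of permuted choices.

```python
import itertools

def cacv_score(predict, image, question, choices, max_perms=24):
    """Hypothetical CaCV-style consistency score (assumption, not the paper's formula).

    predict: callable (image, question, choices) -> text of the chosen option.
    Returns the fraction of choice permutations on which the model's selected
    option matches its selection for the original choice ordering.
    """
    baseline = predict(image, question, choices)
    perms = list(itertools.permutations(choices))[:max_perms]
    consistent = sum(
        1 for perm in perms if predict(image, question, list(perm)) == baseline
    )
    return consistent / len(perms)
```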
Domains
Computer Science [cs]
Main file
Khanh_An_Evaluating_VQA_Models__Consistency_in_the_Scientific_Domain.pdf (1.13 MB)
Origin: Files produced by the author(s)