Generating Question Relevant Captions to Aid Visual Question Answering

Jialin Wu

Abstract

Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision. We present a novel approach that exploits this connection to improve VQA performance by jointly generating captions targeted at helping answer a specific visual question. The model is trained on an existing caption dataset by automatically determining question-relevant captions with an online gradient-based method. Experimental results on the VQA v2 challenge demonstrate that our approach obtains state-of-the-art VQA performance (e.g., 68.4% on the Test-standard set using a single model) while simultaneously generating question-relevant captions.
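To make the online gradient-based selection concrete, the sketch below shows one plausible instantiation; it is a minimal illustration, not the paper's released implementation. It assumes a differentiable VQA model mapping (image features, question) to answer logits and a precomputed feature vector per candidate caption, and it scores each candidate by how well its features align with the gradient of the answering loss with respect to the image features. The function name, argument names, and the toy model in the demo are all hypothetical.

```python
# Hedged sketch: gradient-based selection of a question-relevant caption.
# Assumptions (not from the paper's code): a differentiable vqa_model and
# precomputed caption feature vectors; select_relevant_caption is a
# hypothetical helper name.
import torch
import torch.nn.functional as F


def select_relevant_caption(vqa_model, image_feats, question,
                            answer_label, caption_feats):
    """Return the index of the candidate caption whose features best
    align with the gradient of the answering loss w.r.t. the image
    features, i.e. the caption most relevant to this question."""
    image_feats = image_feats.clone().requires_grad_(True)
    logits = vqa_model(image_feats, question)
    loss = F.cross_entropy(logits.unsqueeze(0), answer_label.unsqueeze(0))
    # The gradient indicates which visual-feature directions matter for
    # answering this particular question.
    (grad,) = torch.autograd.grad(loss, image_feats)
    relevance = grad.mean(dim=0, keepdim=True)               # (1, d)
    scores = F.cosine_similarity(caption_feats, relevance)   # (K,)
    return scores.argmax().item()


if __name__ == "__main__":
    # Toy usage with a stand-in linear "VQA model" that ignores the
    # question, purely to show the call pattern.
    torch.manual_seed(0)
    d, num_answers = 16, 10
    head = torch.nn.Linear(d, num_answers)
    model = lambda feats, q: head(feats.mean(dim=0))
    idx = select_relevant_caption(
        model,
        image_feats=torch.randn(36, d),   # e.g. 36 region features
        question=None,
        answer_label=torch.tensor(3),
        caption_feats=torch.randn(5, d),  # 5 candidate captions
    )
    print("most question-relevant caption index:", idx)
```

The cosine-similarity scoring above is only one plausible way to compare caption features against the loss gradient; the key idea it illustrates is that the gradient of the answer-prediction loss provides an online, question-specific relevance signal for picking training captions.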
