Double-Layer Affective Visual Question Answering Network
- College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China (guozihan11@163.com)
- Dept. of Computer Science and Information Engineering, Providence University, Taichung 43301, Taiwan
Abstract
Visual Question Answering (VQA) has recently attracted much attention in both the natural language processing and computer vision communities, as it offers insight into the relationships between two relevant sources of information. Tremendous advances have been made in VQA owing to the success of deep learning. Building on these advances, the Affective Visual Question Answering Network (AVQAN) enriches the understanding and analysis of VQA models by using the emotional information contained in images to produce sensitive answers, while maintaining the same level of accuracy as ordinary VQA baseline models. Integrating the emotional information contained in images into VQA is a fairly new task. However, because AVQAN concatenates the question words with the mood labels, it is difficult to separate question-guided attention from mood-guided attention, and this type of concatenation is believed to harm model performance. To mitigate this effect, we propose the Double-Layer Affective Visual Question Answering Network (DAVQAN), which divides the task of generating emotional answers in VQA into two simpler subtasks, the generation of non-emotional responses and the production of mood labels, and uses two independent layers to tackle them. Comparative experiments on a preprocessed dataset show that the overall performance of DAVQAN is 7.6% higher than that of AVQAN, demonstrating the effectiveness of the proposed model. We also introduce a more advanced word embedding method and a more fine-grained image feature extractor into AVQAN and DAVQAN to further improve their performance; both obtain better results than the original models, which shows that, as in general VQA, VQA integrated with affective computing can improve overall model performance by improving these two modules.
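The two-subtask decomposition described above can be pictured as two independent output heads operating over a fused image-question representation: one producing the non-emotional answer, the other producing the mood label. The following is a minimal sketch assuming PyTorch; the module names, feature dimensions, output sizes, and simple concatenation fusion are illustrative placeholders, not the architecture or hyperparameters reported in the paper.

```python
import torch
import torch.nn as nn


class DoubleLayerSketch(nn.Module):
    """Illustrative two-head design: one head for the non-emotional answer,
    a separate, independent head for the mood label. All dimensions and the
    concatenation-based fusion are assumptions for demonstration only."""

    def __init__(self, img_dim=2048, q_dim=1024, hidden=1024,
                 num_answers=3000, num_moods=7):
        super().__init__()
        # Layer 1: non-emotional answer generation
        self.answer_head = nn.Sequential(
            nn.Linear(img_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_answers))
        # Layer 2: mood-label production, kept separate from the answer head
        self.mood_head = nn.Sequential(
            nn.Linear(img_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_moods))

    def forward(self, img_feat, q_feat):
        # Fuse image and question features, then score both outputs independently
        fused = torch.cat([img_feat, q_feat], dim=-1)
        return self.answer_head(fused), self.mood_head(fused)


# Usage: given batched image and question features,
#   answer_logits, mood_logits = DoubleLayerSketch()(img_feat, q_feat)
# each head is trained on its own subtask, avoiding the question/mood
# concatenation that the paper identifies as harmful in AVQAN.
```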
Key words
deep learning, natural language processing, computer vision, visual question answering, affective computing
Digital Object Identifier (DOI)
https://doi.org/10.2298/CSIS200515038G
Publication information
Volume 18, Issue 1 (January 2021)
Year of Publication: 2021
ISSN: 2406-1018 (Online)
Publisher: ComSIS Consortium
Full text
Available in PDF
How to cite
Guo, Z., Han, D., Li, K.: Double-Layer Affective Visual Question Answering Network. Computer Science and Information Systems, Vol. 18, No. 1, 155–168. (2021), https://doi.org/10.2298/CSIS200515038G