CTA-Net: A Gaze Estimation network based on Dual Feature Aggregation and Attention Cross Fusion

Chenxing Xia 1,2,3, Zhanpeng Tao 1, Wei Wang 4, Wenjun Zhao 1, Bin Ge 1, Xiuju Gao 5, Kuan-Ching Li 6 and Yan Zhang 7

  1. College of Computer Science and Engineering, Anhui University of Science and Technology
    232001 Huainan, CHINA
    847990008@qq.com
  2. Institute of Energy, Hefei Comprehensive National Science Center
    230031 Hefei, CHINA
    cxxia@aust.edu.cn
  3. Anhui Purvar Bigdata Technology Co. Ltd
    232001 Huainan, CHINA
  4. Anyang Cigarette Factory, China Tobacco Henan Industrial Co.
    Anyang, CHINA
  5. College of Electrical and Information Engineering, Anhui University of Science and Technology
    232001 Huainan, CHINA
  6. Department of Computer Science and Information Engineering, Providence University
    43301 Taichung City, Taiwan
  7. School of Electronics and Information Engineering, Anhui University
    Hefei, CHINA

Abstract

Recent work has demonstrated that the Transformer model is effective for computer vision tasks. However, the global self-attention mechanism used in Transformer models does not adequately capture the local structure and details of images, which can lose fine-grained local information and thus reduce estimation accuracy in gaze estimation tasks compared with convolutional or sequentially stacked architectures. To address this issue, we propose a parallel CNNs-Transformer aggregation network (CTA-Net) for gaze estimation, which exploits the strength of the Transformer in modeling global context while retaining the strength of convolutional neural networks (CNNs) in preserving local details. Specifically, a Transformer and a ResNet are deployed to extract facial and eye information, respectively. In addition, an attention cross fusion (ACFusion) block is embedded in the CNN branch; it decomposes features along the spatial and channel dimensions to supplement lost features, suppress noise, and extract eye features more effectively. Finally, a dual-feature aggregation (DFA) module is proposed to fuse the output features of the two branches with the help of a feature-selection mechanism and a residual structure. Experimental results on the MPIIGaze and Gaze360 datasets demonstrate that our CTA-Net achieves state-of-the-art results.
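The abstract only names the components, so below is a minimal, hypothetical PyTorch sketch of the parallel two-branch layout it describes: a Transformer branch for global facial context, a ResNet branch for local eye detail, and a gated, residual fusion standing in for the DFA module. All class names, dimensions, and the gating formulation are illustrative assumptions, not the authors' implementation; the ACFusion block is omitted.

import torch
import torch.nn as nn
import torchvision.models as models

class FaceTransformerBranch(nn.Module):
    # Transformer branch: models the global context of the full face image.
    def __init__(self, dim=256, patch=16, img_size=224, depth=4, heads=8):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos_embed = nn.Parameter(torch.zeros(1, (img_size // patch) ** 2, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, face):                      # face: (B, 3, 224, 224)
        x = self.patch_embed(face)                # (B, dim, 14, 14)
        x = x.flatten(2).transpose(1, 2)          # (B, 196, dim) patch tokens
        return self.encoder(x + self.pos_embed).mean(dim=1)   # (B, dim)

class EyeCNNBranch(nn.Module):
    # CNN branch: a ResNet backbone retains the local detail of the eye crop.
    def __init__(self, dim=256):
        super().__init__()
        resnet = models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.proj = nn.Linear(512, dim)

    def forward(self, eye):                       # eye: (B, 3, 224, 224)
        return self.proj(self.backbone(eye).flatten(1))       # (B, dim)

class DFA(nn.Module):
    # Stand-in for the dual-feature aggregation module: a learned gate
    # selects between the two branch features, plus a residual shortcut.
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, f_face, f_eye):
        g = self.gate(torch.cat([f_face, f_eye], dim=1))      # feature selection
        return g * f_face + (1 - g) * f_eye + f_face + f_eye  # gated mix + residual

class CTANetSketch(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.face_branch = FaceTransformerBranch(dim)
        self.eye_branch = EyeCNNBranch(dim)
        self.dfa = DFA(dim)
        self.head = nn.Linear(dim, 2)             # 2D gaze: (yaw, pitch)

    def forward(self, face, eye):
        return self.head(self.dfa(self.face_branch(face), self.eye_branch(eye)))

# Usage: random tensors stand in for a face crop and an eye crop.
model = CTANetSketch()
gaze = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(gaze.shape)  # torch.Size([2, 2])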

Key words

Appearance-based gaze estimation, Deep neural networks, Dilated convolution, Fusion, Transformer

Digital Object Identifier (DOI)

https://doi.org/10.2298/CSIS231116020X

Publication information

Volume 21, Issue 3 (June 2024)
Year of Publication: 2024
ISSN: 2406-1018 (Online)
Publisher: ComSIS Consortium

Full text

Available in PDF (download).

How to cite

Xia, C., Tao, Z., Wang, W., Zhao, W., Ge, B., Gao, X., Li, K.-C., Zhang, Y.: CTA-Net: A Gaze Estimation network based on Dual Feature Aggregation and Attention Cross Fusion. Computer Science and Information Systems, Vol. 21, No. 3, 831-850. (2024), https://doi.org/10.2298/CSIS231116020X