Author: TAN Xueqing, SONG Jun, ZHANG Manman, ZANG Chuanli | Time: 2025-01-02
TAN X Q, SONG J, ZHANG M M, et al. Generation and recognition of eye movement samples based on generative artificial intelligence[J]. Journal of Henan Polytechnic University (Natural Science), 2025, 44(1): 145-153.
doi: 10.16186/j.cnki.1673-9787.2024040012
Received: 2024/04/07
Revised: 2024/06/10
Published: 2025/01/02
Generation and recognition of eye movement samples based on generative artificial intelligence
TAN Xueqing1,2, SONG Jun2, ZHANG Manman1, ZANG Chuanli1
1. Faculty of Psychology, Tianjin Normal University, Tianjin 300387, China; 2. School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo 454000, Henan, China
Abstract: Objectives Generative and traditional artificial intelligence models are pivotal tools in the information age. Building on these technologies, the generation and recognition of eye movement samples have become critical components that enable deeper exploration of cognitive mechanisms. This study therefore aims to advance generative artificial intelligence in the field of eye-tracking technology, to address the problem of eye movement sample generation together with the opacity and lack of interpretability caused by increasing network depth, and to mine in depth eye-tracking data related to children's language development. Methods Eye movement data were collected while 4- to 6-year-old children comprehended different focus structures. A generative artificial intelligence model, the variational autoencoder (VAE), and a traditional model, the multilayer perceptron (MLP), were used to identify developmental differences in the children's eye movement patterns and to generate new samples. The generated datasets were interpreted using grey relational analysis and confusion matrices. Results (1) The eye movement datasets generated by the VAE for 4-, 5-, and 6-year-old children achieved higher accuracy than the MNIST (Modified National Institute of Standards and Technology) dataset and were consistent with the MLP analysis results, showing accuracy, diversity, and a degree of interpretability. (2) The generated eye movement data and the confusion matrices indicated that, for the unfocused structure, children's comprehension improved from ages 4 to 5 and from ages 5 to 6, whereas the eye movement characteristics of the object-focus and subject-focus structures changed little from ages 4 to 5 and markedly from ages 5 to 6, indicating that age 5 is a critical period in children's understanding of focus structures, consistent with the known developmental pattern. Conclusions The coupled artificial intelligence analysis proposed in this paper can identify the developmental patterns of eye movement features and generate reliable new samples, offering new ideas for combining generative artificial intelligence with eye-tracking technology.
Key words: generative artificial intelligence; variational autoencoder; multi-layer perceptron; eye movement
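The abstract provides no code, but the pipeline it describes combines two standard components: a VAE that learns a latent representation of eye movement samples and decodes new ones, and grey relational analysis with confusion matrices to check how well the generated data track the real data. The sketch below is a minimal, generic VAE for fixed-length eye movement feature vectors (for example, fixation and gaze measures per interest area); the feature dimension, layer widths, latent size, and loss weighting are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal VAE sketch (PyTorch) for generating eye movement feature vectors.
# All dimensions and hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEATURE_DIM = 16   # assumed length of one eye movement feature vector
LATENT_DIM = 4     # assumed latent-space size

class EyeMovementVAE(nn.Module):
    def __init__(self, feature_dim=FEATURE_DIM, latent_dim=LATENT_DIM):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feature_dim, 32), nn.ReLU())
        self.fc_mu = nn.Linear(32, latent_dim)       # mean of q(z|x)
        self.fc_logvar = nn.Linear(32, latent_dim)   # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, feature_dim), nn.Sigmoid()  # features scaled to [0, 1]
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Generating new samples: decode random points drawn from the prior.
model = EyeMovementVAE()
with torch.no_grad():
    new_samples = model.decoder(torch.randn(10, LATENT_DIM))  # 10 synthetic vectors
```

Grey relational analysis, used in the paper to interpret the generated datasets, reduces to a short computation once all series are normalized to a common scale. The helper below computes the standard grey relational grade of each generated series against a reference series; it is a generic sketch, not the authors' exact procedure.

```python
import numpy as np

def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate series w.r.t. a reference series.

    Assumes `reference` (shape [k]) and `candidates` (shape [n, k]) are already
    normalized to a common scale; `rho` is the distinguishing coefficient.
    """
    diffs = np.abs(candidates - reference)                      # absolute differences
    coeffs = (diffs.min() + rho * diffs.max()) / (diffs + rho * diffs.max())
    return coeffs.mean(axis=1)                                  # one grade per candidate
```

A higher grade means a generated series follows the reference more closely, which is one way the accuracy of generated eye movement datasets could be assessed under these assumptions.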