TY - JOUR
T1 - Audio’s Impact on Deep Learning Models
T2 - A Comparative Study of EEG-Based Concentration Detection in VR Games
AU - GomezRomero-Borquez, Jesus
AU - Del-Valle-Soto, Carolina
AU - Del-Puerto-Flores, José A.
AU - López-Pimentel, Juan Carlos
AU - Castillo-Soria, Francisco R.
AU - Ibarra-Hernández, Roilhi F.
AU - Betancur Agudelo, Leonardo
N1 - Publisher Copyright:
© 2025 by the authors.
PY - 2025/12
Y1 - 2025/12
N2 - This study investigates the impact of audio feedback on cognitive performance during VR puzzle games using EEG analysis. Thirty participants played three different VR puzzle games under two conditions (with and without audio) while their brain activity was recorded. To analyze concentration levels and neural engagement patterns, we employed spectral analysis combined with a preprocessing algorithm and an optimized Deep Neural Network (DNN) model. The proposed processing stage integrates feature normalization, automatic labeling based on Principal Component Analysis (PCA), and Gamma-band feature extraction, transforming concentration detection into a supervised classification problem. Experimental validation was conducted under the two gaming conditions to evaluate the impact of multisensory stimulation on model performance. The results show that the proposed approach significantly outperforms traditional machine learning classifiers (SVM, LR) and baseline deep learning models (DNN, DGCNN), achieving 97% accuracy in the audio scenario and 83% without audio. These findings confirm that auditory stimulation reinforces neural coherence and improves the discriminability of EEG patterns, while the proposed method maintains robust performance under less stimulating conditions.
AB - This study investigates the impact of audio feedback on cognitive performance during VR puzzle games using EEG analysis. Thirty participants played three different VR puzzle games under two conditions (with and without audio) while their brain activity was recorded. To analyze concentration levels and neural engagement patterns, we employed spectral analysis combined with a preprocessing algorithm and an optimized Deep Neural Network (DNN) model. The proposed processing stage integrates feature normalization, automatic labeling based on Principal Component Analysis (PCA), and Gamma-band feature extraction, transforming concentration detection into a supervised classification problem. Experimental validation was conducted under the two gaming conditions to evaluate the impact of multisensory stimulation on model performance. The results show that the proposed approach significantly outperforms traditional machine learning classifiers (SVM, LR) and baseline deep learning models (DNN, DGCNN), achieving 97% accuracy in the audio scenario and 83% without audio. These findings confirm that auditory stimulation reinforces neural coherence and improves the discriminability of EEG patterns, while the proposed method maintains robust performance under less stimulating conditions.
KW - DNN
KW - EEG signal processing
KW - Magnitude-Square Coherence
KW - PCA
KW - spectral entropy
KW - supervised classification
KW - virtual reality
UR - https://www.scopus.com/pages/publications/105025951984
U2 - 10.3390/inventions10060097
DO - 10.3390/inventions10060097
M3 - Article in an indexed scientific journal
AN - SCOPUS:105025951984
SN - 2411-5134
VL - 10
JO - Inventions
JF - Inventions
IS - 6
M1 - 97
ER -