Music is a universal language with a profound ability to evoke emotions and trigger memories. This work investigates the potential of predicting the electrocorticogram (ECoG), a special case of electroencephalography (EEG) acquired intracranially, from the acoustic stimulus represented as a dynamic spectrum. To this end, the Multivariate Temporal Response Function (mTRF) is first employed as a standard classical method based on linear systems. Second, a modern deep learning model, specifically an encoder-decoder (ED) architecture, is proposed. The predictions were validated on ECoG data from participants exposed to a musical piece, meticulously preprocessed to remove noise and artifacts. Initial results indicate that, in terms of mean squared error (MSE), the encoder-decoder model significantly outperformed the traditional mTRF method in predicting the ECoG responses associated with music perception.
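The mTRF baseline mentioned above maps a lagged stimulus representation linearly onto the neural response. The following is a minimal sketch of that idea, not the paper's implementation: it assumes a ridge-regularized least-squares fit from time-lagged spectrogram features to multichannel responses, with all variable names, lag ranges, and the synthetic data purely illustrative.

```python
import numpy as np

def lagged_design(stim, lags):
    """Stack time-lagged copies of the stimulus spectrogram (T x F)
    into a design matrix (T x F*len(lags)), zero-padding at the edges."""
    T, F = stim.shape
    X = np.zeros((T, F * len(lags)))
    for i, lag in enumerate(lags):
        rolled = np.roll(stim, lag, axis=0)
        if lag > 0:
            rolled[:lag] = 0
        elif lag < 0:
            rolled[lag:] = 0
        X[:, i * F:(i + 1) * F] = rolled
    return X

def fit_mtrf(stim, resp, lags, alpha=1.0):
    """Ridge-regularized linear mapping from lagged stimulus to response."""
    X = lagged_design(stim, lags)
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ resp)
    return W

# Toy example: 500 samples, 8 spectrogram bands, 4 response channels,
# generated from a known linear filter plus a little noise.
rng = np.random.default_rng(0)
lags = list(range(0, 5))
stim = rng.standard_normal((500, 8))
true_W = 0.1 * rng.standard_normal((8 * len(lags), 4))
resp = lagged_design(stim, lags) @ true_W + 0.01 * rng.standard_normal((500, 4))

W = fit_mtrf(stim, resp, lags, alpha=0.1)
pred = lagged_design(stim, lags) @ W
mse = np.mean((pred - resp) ** 2)
```

On this synthetic data the fitted filter recovers the generating one closely, so the prediction MSE is small; the deep encoder-decoder model in the paper replaces this linear mapping with a learned nonlinear one evaluated by the same MSE criterion.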