Linda Drijvers

Face-to-face communication often involves an audio-visual binding between auditory and visual input, such as visible speech and co-speech gestures. These visual signals can help a listener understand speech in adverse listening conditions, such as in noise, or when the listener is a non-native speaker of the language.

I am interested in the cognitive and neural mechanisms that underlie such multimodal comprehension and production processes. For example, does multimodal language facilitate a listener’s predictions of upcoming speech, and therefore facilitate language production? Is our brain ‘hard-wired’ for processing multimodal language in a face-to-face context?

I use behavioral methods and eye-tracking to study the cognitive underpinnings of these phenomena, and use magnetoencephalography (MEG) and electroencephalography (EEG) to investigate the neural oscillatory dynamics that support these processes.

Specifically, I am now using a dual-EEG approach to study how oscillatory dynamics support in situ multimodal interaction, and whether natural, face-to-face communication induces a ‘special mode’ for processing communicative messages.

I’m also passionate about science communication. Please see KNAW’s Faces of Science for blogs/videos on my research.

Please also see my ResearchGate profile or my Google Scholar profile for my publications.

Contact details MPI
Email linda.drijvers@mpi.nl | MPI room 315 | Telephone +31 24 3521591

Contact details Donders Centre for Cognition / Donders Institute / RU Nijmegen 
Email l.drijvers@donders.ru.nl | Spinoza room B.02.34