Prior Scene Context Shapes the Neural Dynamics of Face Detection

Tasliyurt-Celebi, S., Kaiser, D., Dobs, K.
bioRxiv (2026).

Abstract

Detecting faces in our surroundings typically takes only a fraction of a second. How can such a rapid perceptual process still be influenced by prior expectations? Here, we used electroencephalography (EEG) to investigate how prior scene context modulates the temporal dynamics of neural face representations. Participants viewed natural scenes containing a single face (left or right), each preceded by either a faceless version of the same scene (preview condition) or a gray screen (no-preview condition), while performing a face detection task. Using multivariate decoding, we found that face location could be decoded shortly after target onset. Critically, decoding accuracy was higher in the preview condition at early stages, whereas the no-preview condition showed higher decoding at later time points, suggesting rapid facilitation by prior context followed by compensatory processing when contextual information was absent. In contrast, time-frequency decoding revealed a sustained preview advantage across alpha, beta, and gamma bands, even during time periods when evoked responses favored the no-preview condition. This dissociation between evoked and induced neural signals indicates that prior scene context engages distinct neural processes: early evoked activity reflects rapid contextual facilitation of sensory representations, while induced oscillatory activity may support prolonged context-dependent modulation. Together, these results show how prior scene context and sensory-driven processing jointly shape rapid face perception through temporally and spectrally distinct neural dynamics.