24.03.2026 Liina Pylkkänen
Split-Second Sentence Processing
Human brains create complex meanings incrementally, but whether we perceive them incrementally largely depends on the temporal structure of the stimulus. Speech, music, and dance unfold over time and therefore enforce serial perception. Written language, by contrast, can be presented all at once, like a visual scene. Recent work shows that when all the words of a short sentence are flashed simultaneously, the brain is sensitive to its syntactic structure as early as 130 ms, faster than the duration of a single syllable in speech (Fallon & Pylkkänen, 2024). This raises the possibility that visual language reveals a more general capacity of the brain to grasp structure instantaneously, as long as all the evidence for the structure is available at once.
In this talk, I discuss a new research program on the brain’s response to multi-word expressions flashed visually under conditions that allow only a single glance. A two-stage processing profile emerges, remarkably similar in waveform structure to the brain’s response to a single word. Our results so far conform to a model, dubbed Global-to-Serial Assembly (GLOSA), in which the brain first detects the global form of the stimulus in a snapshot-like manner and then probes its combinatory properties, with at least some serial left-to-right dynamics. The initial stage includes detection of syntactic form when it is present, with enhanced responses when the form maps onto basic phrase-structure
knowledge. In the second stage, structured expressions continue to be analyzed internally, whereas responses to unstructured stimuli flatten out and are no longer modulated by semantic content. Overall, this work shows that at-a-glance language processing offers a powerful window onto
the brain’s inherent temporal dynamics, allowing us to identify the order of computations when the stimulus itself imposes none.