David Poeppel gives the distinguished career award talk at the Society for the Neurobiology of Language 17th Annual Meeting in Washington, DC, on Saturday, September 13, 2025. [link] Rhythms and algorithms: The temporal structure of linguistic experience
Also check out poster D65 by Jiajie Zou, “Chunking Constrains Prediction during Language Comprehension,” on Saturday. [link]
Moore* C, Donhauser* PW, Klein D, Byers-Heinlein K (2025). Efficient neural encoding as revealed by bilingualism. Proceedings of the National Academy of Sciences of the United States of America.
The brain’s capacity to learn multiple languages represents a profound puzzle of neural organization and flexibility. We investigated how neural systems might systematize multiple sound systems by training computational models on natural speech. Our networks, which approximate infant language learning, maintained distinct phonological systems for two and three languages while preserving shared articulatory features. The timing of second language introduction influenced the learning process. Our findings suggest that phonological acquisition may leverage domain-general learning principles, offering a computational framework for understanding how neural systems potentially scale language learning while maintaining critical language-specific distinctions. This research provides crucial insights into the computational principles underlying the brain’s remarkable ability to manage linguistic complexity.
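To make the idea concrete, here is a minimal sketch, assuming synthetic two-dimensional “formant” data and a generic scikit-learn classifier rather than the paper’s actual model or speech corpus: a single network learns language-tagged vowel categories from a shared feature space, keeping the two phonological systems distinct while reusing the same input dimensions.

```python
# Minimal sketch: one network, two languages' vowel categories.
# Hypothetical synthetic data; NOT the paper's model or dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def vowels(centers, n=200):
    """Sample 2-D 'formant' features around category centers."""
    X = np.vstack([rng.normal(c, 0.3, size=(n, 2)) for c in centers])
    y = np.repeat(np.arange(len(centers)), n)
    return X, y

# Two languages share the feature space but partition it differently.
X1, y1 = vowels([(1, 1), (3, 1)])          # language A: 2 categories
X2, y2 = vowels([(1, 3), (2, 3), (3, 3)])  # language B: 3 categories

# One shared hidden layer, language-tagged labels (A0, A1, B0, B1, B2).
X = np.vstack([X1, X2])
y = np.concatenate([y1, y2 + 10])  # offset keeps B's labels distinct
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
print("train accuracy:", net.score(X, y))  # distinct systems, shared features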
David Poeppel gives a keynote at the Cognitive Computational Neuroscience conference! [link] Rhythms and Algorithms: From Vibrations in the Ear to Abstractions in the Brain
Also check out poster C141 by Berfin Bastug, “Auditory Object Formation in Temporally Complex Scenes,” on Friday. [link]
A conference paper from Federico and Yue was accepted at CogSci 2025! [link]
Adolfi F, Sun Y, Poeppel D (2025). Content-agnostic online segmentation as a core operation. Proceedings of the Annual Meeting of the Cognitive Science Society.
We approach the problem of explaining segmentation — the human capacity to partition input streams into representations of appropriate form and content for efficient downstream processing — by exploring a theoretically minimalistic and computationally plausible account of phoneme-to-word chunking. Through computational models, mathematical proofs, algorithm design, and observer model simulations in two languages, we suggest that online segmentation can be guided by content-agnostic properties of internal memory structures (i.e., lexicality and length type frequency). Our theoretical and empirical findings point to a formal link between such properties and practical performance benefits. Together, these contributions make progress on a fully explicit computational- and algorithmic-level account with plausible implementational-level primitives.
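As a toy illustration of the idea (a hypothetical lexicon and commitment policy, not the paper’s exact algorithm): a chunker can decide where to commit boundaries online using only content-agnostic properties of its memory buffer, namely whether the buffer is a lexical item and how many lexical types share its length.

```python
# Sketch of content-agnostic online chunking: commit a chunk when the
# memory buffer holds a lexical item of a frequent length. Hypothetical
# toy lexicon and policy; not the paper's exact algorithm.
from collections import Counter

LEXICON = {"the", "cat", "cats", "at", "sat", "on", "mat"}
# Length "type frequency": how many lexicon types have each length.
LENGTH_FREQ = Counter(len(w) for w in LEXICON)

def segment_online(phonemes):
    chunks, buffer = [], ""
    for p in phonemes:
        buffer += p
        # Content-agnostic cues: lexicality + length type frequency.
        if buffer in LEXICON and LENGTH_FREQ[len(buffer)] >= 2:
            chunks.append(buffer)
            buffer = ""
    if buffer:
        chunks.append(buffer)  # flush residue
    return chunks

print(segment_online("thecatsatonthemat"))
# -> ['the', 'cat', 'sat', 'on', 'the', 'mat']
```

Note the deliberate simplification: a greedy policy commits “cat” before it can ever consider “cats,” one of the trade-offs an explicit algorithmic-level account has to confront.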
Spoken language is a rapid sequence of auditory information that can incorporate complicated internal structures. Prediction and chunking are two possible mechanisms for the brain to process speech efficiently, and they are often viewed as separate or even opposing mechanisms. Here we investigate whether the two mechanisms interact and hypothesize that the chunk (constituent) structure in spoken language modulates how the brain predicts basic linguistic items, e.g., morphemes, the basic units of meaning in language. In three magnetoencephalography (MEG) experiments, we characterize the neurally manifested prediction of morphemes using the neural response to morpheme surprisal, and we analyze how this response is modulated by chunks, i.e., the linguistic constituent structure. We demonstrate that the MEG surprisal response is significantly stronger for morphemes within an ongoing chunk than for morphemes across a chunk boundary. This chunk-boundary effect on morpheme prediction is further modulated by the certainty of a chunk boundary. The findings are then confirmed by analyzing a dataset of ECoG responses to English narratives. These results establish that the brain employs a chunk-based prediction mechanism and more precisely predicts sequential items within a chunk.
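A minimal sketch of the key quantity, assuming a toy add-alpha bigram model in place of the language models used in the study: morpheme surprisal is -log2 P(morpheme | context), and the claim is that the neural response it evokes is weaker when the next morpheme crosses a chunk boundary.

```python
# Sketch: morpheme surprisal, -log2 P(morpheme | previous morpheme),
# from a toy bigram model. Illustrative counts only, not the MEG stimuli.
import math
from collections import Counter

corpus = "the dog s chase d the cat the cat s sleep".split()
bigrams = Counter(zip(corpus, corpus[1:]))
context = Counter(corpus[:-1])
V = len(set(corpus))

def surprisal(prev, morpheme, alpha=0.5):
    """Add-alpha smoothed bigram surprisal, in bits."""
    p = (bigrams[(prev, morpheme)] + alpha) / (context[prev] + alpha * V)
    return -math.log2(p)

# Within an ongoing chunk ("dog" + "s") vs. across a chunk boundary
# ("s" followed by "chase", which starts a new constituent):
print("within chunk :", surprisal("dog", "s"))
print("across bound.:", surprisal("s", "chase"))
```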
Ample evidence demonstrates synchronized neurophysiological activity to the temporal regularities of sounds. This phenomenon has been proposed to reflect neural responses to onset edges in acoustic signals. Here, we examine listeners’ ability to behaviorally synchronize to stimuli with sharp or gradual onsets. In two experiments, we show that while sharp amplitude onsets facilitate temporal phase alignment between participants’ behavioral output and the stimulus envelope, sharp onsets are not essential for tracking the rate of auditory rhythms in the acoustic input. The dissociation between phase and rate tracking suggests distinct underlying neural mechanisms that are separately modulated.
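To illustrate how the two measures can dissociate (synthetic tapping data, not the experiments’ stimuli or analysis pipeline): a listener can reproduce the stimulus rate almost perfectly while their taps remain poorly phase-aligned to the stimulus cycle, the pattern gradual onsets would produce.

```python
# Sketch: dissociating phase tracking from rate tracking in tapping data.
# Synthetic taps; real analyses would use the stimulus envelope.
import numpy as np

rng = np.random.default_rng(1)
stim_rate = 2.0                      # Hz; stimulus events every 0.5 s
events = np.arange(0, 10, 1 / stim_rate)

# Taps at the right rate, but with a random offset and timing jitter:
taps = events + 0.25 * rng.random() + rng.normal(0, 0.15, events.size)

# Rate tracking: mean inter-tap interval recovers the stimulus period.
print("tap rate (Hz):", 1 / np.diff(taps).mean())

# Phase tracking: mean resultant vector length of tap phases relative
# to the stimulus cycle (1 = perfect alignment, 0 = none).
phases = 2 * np.pi * ((taps - events) % (1 / stim_rate)) * stim_rate
plv = np.abs(np.exp(1j * phases).mean())
print("phase alignment:", plv)
```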
Many proposed applications of neural networks in machine learning, cognitive/brain science, and society hinge on the feasibility of inner interpretability via circuit discovery. This calls for empirical and theoretical explorations of viable algorithmic options. Despite advances in the design and testing of heuristics, there are concerns about their scalability and faithfulness at a time when we lack understanding of the complexity properties of the problems they are deployed to solve. To address this, we study circuit discovery with classical and parameterized computational complexity theory: (1) we describe a conceptual scaffolding to reason about circuit finding queries in terms of affordances for description, explanation, prediction and control; (2) we formalize a comprehensive set of queries for mechanistic explanation, and propose a formal framework for their analysis; (3) we use it to settle the complexity of many query variants and relaxations of practical interest on multi-layer perceptrons. Our findings reveal a challenging complexity landscape. Many queries are intractable, remain fixed-parameter intractable relative to model/circuit features, and are inapproximable under additive, multiplicative, and probabilistic approximation schemes. To navigate this landscape, we prove there exist transformations to tackle some of these hard problems with better-understood heuristics, and prove the tractability or fixed-parameter tractability of more modest queries which retain useful affordances. This framework allows us to understand the scope and limits of interpretability queries, explore viable options, and compare their resource demands on existing and future architectures.
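A toy demonstration of why such queries get hard, assuming a minimal numpy MLP and a deliberately naive search (not any method proposed in the paper): finding a smallest edge subset that preserves the model’s decisions by brute force means checking up to 2^|edges| candidate masks, which is hopeless beyond toy scale.

```python
# Sketch: why naive circuit discovery explodes. We search all edge
# subsets of a tiny MLP for a smallest "circuit" (edge mask) that
# preserves the model's decisions. Toy model; 2^|edges| candidates.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(3, 1))  # 9 edges
X = rng.normal(size=(20, 2))

def forward(mask1, mask2):
    h = np.maximum(X @ (W1 * mask1), 0)       # ReLU hidden layer
    return (h @ (W2 * mask2) > 0).ravel()     # binary decisions

full = forward(np.ones_like(W1), np.ones_like(W2))
edges = [("W1", i, j) for i in range(2) for j in range(3)] + \
        [("W2", i, 0) for i in range(3)]

def minimal_circuit():
    for k in range(len(edges) + 1):            # smallest circuits first
        for keep in combinations(edges, k):    # C(9, k) masks of size k
            m1, m2 = np.zeros_like(W1), np.zeros_like(W2)
            for layer, i, j in keep:
                (m1 if layer == "W1" else m2)[i, j] = 1
            if np.array_equal(forward(m1, m2), full):
                return k

print("minimal faithful circuit:", minimal_circuit(), "of", len(edges), "edges")
```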
Grabenhorst M, Poeppel D, Michalareas G (2025). Neural signatures of temporal anticipation in human cortex represent event probability density. Nature Communications.
Temporal prediction is a fundamental function of neural systems. Recent results show that humans anticipate future events by calculating probability density functions, rather than hazard rates. However, direct neural evidence for this hypothesized mechanism is lacking. We recorded neural activity using magnetoencephalography as participants anticipated auditory and visual events distributed in time. We show that temporal anticipation, measured as reaction times, approximates the event probability density function, but not hazard rate. Temporal anticipation manifests as spatiotemporally patterned activity in three anatomically and functionally distinct parieto-temporal and sensorimotor cortical areas. Each of these areas revealed a marked neural signature of anticipation: Prior to sensory cues, activity in a specific frequency range of neural oscillations, spanning alpha and beta ranges, encodes the event probability density function. These neural signals predicted reaction times to imminent sensory cues. These results demonstrate that supra-modal representations of probability density across cortex underlie the anticipation of future events.
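The distinction at the heart of the result can be stated in a few lines. For event times uniform on an interval, the probability density f(t) is flat, while the hazard rate h(t) = f(t) / (1 - F(t)) climbs steeply toward the end of the interval, so the two hypotheses predict different anticipation profiles. A sketch with illustrative numbers only:

```python
# Sketch: probability density vs. hazard rate for event timing.
# Uniform events on [0.5, 1.5] s: the density f(t) is flat, but the
# hazard rate h(t) = f(t) / (1 - F(t)) rises sharply near the end.
import numpy as np

t = np.linspace(0.5, 1.499, 200)
f = np.full_like(t, 1.0)          # uniform density on a 1 s interval
F = t - 0.5                       # its CDF
hazard = f / (1 - F)

for ti in (0.6, 1.0, 1.4):
    i = np.searchsorted(t, ti)
    print(f"t={ti:.1f}s  density={f[i]:.2f}  hazard={hazard[i]:.2f}")
```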
Auditory and speech signals are undisputedly processed in both left and right hemispheres, but this bilateral allocation is likely unequal. The Asymmetric Sampling in Time (AST) hypothesis proposed a division of labor that has its neuroanatomical basis in the distribution of neuronal ensembles with differing temporal integration constants: left auditory areas house a larger proportion of ensembles with shorter temporal integration windows (tens of milliseconds), suited to process rapidly changing signals; right auditory areas host a larger proportion with longer time constants (∼150–300 ms), ideal for slowly changing signals. Here we evaluate the large body of findings that clarifies this relationship between auditory temporal structure and functional lateralization. In this reappraisal, we unpack whether this relationship is influenced by stimulus type (speech/nonspeech), stimulus temporal extent (long/short), task engagement (high/low), or (imaging) modality (hemodynamic/electrophysiology/behavior). We find that the right hemisphere displays a clear preference for slowly changing signals whereas the left-hemispheric preference for rapidly changing signals is highly dependent on the experimental design. We consider neuroanatomical properties potentially linked to functional lateralization, contextualize the results in an evolutionary perspective, and highlight future directions.
Mantegna F, Orpella J, Poeppel D (2024). Time-resolved hemispheric lateralization of audiomotor functional connectivity during covert speech production. Cell Reports.
Covert speech involves the internal generation of articulatory movements and their sensory consequences. While overt speech involves a combination of feedforward and feedback signals, feedback signals may be substantially different, or even absent, during covert speech. Despite the differences, we conjectured that sensorimotor interareal communication during covert speech is implemented through the same channels recruited during overt speech. An influential overt speech model proposed that feedforward and feedback signals are segregated to the left and right hemispheres, respectively. Here, we used magnetoencephalography to investigate the lateralization of functional connectivity before and after covert speech production. The data reveal leftward lateralization preceding and rightward lateralization following predicted covert speech onset. This alternating lateralization pattern is observed only in the connection between premotor and auditory regions and in the alpha frequency band. The electrophysiological data, derived entirely from covert speech, add a provocative perspective to adjudicate between overt speech motor control models.
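As a sketch of the kind of measure involved (synthetic oscillations in place of source-localized MEG, and a generic coherence estimate rather than the paper’s connectivity pipeline): a lateralization index contrasts alpha-band premotor–auditory coupling across hemispheres, and its sign summarizes which hemisphere dominates in a given time window.

```python
# Sketch: a lateralization index for premotor-auditory connectivity.
# Synthetic signals stand in for source-localized MEG; not the paper's
# pipeline. LI > 0 means left-lateralized, LI < 0 right-lateralized.
import numpy as np
from scipy.signal import coherence

fs = 250
rng = np.random.default_rng(3)

def alpha_coherence(x, y):
    """Mean coherence in the alpha band (8-12 Hz)."""
    freqs, coh = coherence(x, y, fs=fs, nperseg=fs)
    band = (freqs >= 8) & (freqs <= 12)
    return coh[band].mean()

# A shared alpha drive makes the left pair cohere more strongly here.
alpha = np.sin(2 * np.pi * 10 * np.arange(fs * 4) / fs)
noise = lambda: rng.normal(0, 1, fs * 4)
premotor_L, auditory_L = alpha + noise(), alpha + noise()
premotor_R, auditory_R = 0.3 * alpha + noise(), 0.3 * alpha + noise()

L = alpha_coherence(premotor_L, auditory_L)
R = alpha_coherence(premotor_R, auditory_R)
print("LI:", (L - R) / (L + R))   # positive: left-lateralized window
```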
Pitch is fundamental to various facets of human hearing, including music appreciation, speech comprehension, vocal learning, and sound source differentiation. How does the brain encode the perceptual features of pitch? By applying machine learning techniques to time-resolved neuroimaging data of individuals listening to different pitches, our findings reveal that pitch height and chroma, two distinct features of pitch, are associated with different neural dynamics within the auditory-frontal cortical network, with height playing a more prominent role. This offers a unified theoretical framework for understanding the perceptual and neural characteristics of pitch perception and opens new avenues for noninvasively decoding human auditory perception to develop brain-computer interfaces.
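For intuition about the approach, here is a minimal time-resolved decoding sketch, assuming random stand-in data and a per-timepoint logistic regression rather than the study’s recordings or classifiers: fitting one decoder per timepoint reveals when a stimulus feature such as pitch height becomes readable from the neural signal.

```python
# Sketch: time-resolved decoding of a pitch feature from neural data.
# One classifier per timepoint shows when the feature is decodable.
# Random stand-in data; not the study's recordings or pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_sensors, n_times = 120, 30, 50
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)        # e.g., high vs. low pitch height

# Inject a decodable signal in a mid-trial window only.
X[y == 1, :5, 20:30] += 0.8

scores = [
    cross_val_score(LogisticRegression(max_iter=1000),
                    X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
]
print(f"peak decoding accuracy: {max(scores):.2f} "
      f"at timepoint {int(np.argmax(scores))}")
```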