Multilayer networks: An untapped tool for understanding bilingual neurocognition


Cross-linguistic similarity is a term so broad and multifaceted that it is not easily defined. The degree of overlap between languages is known to affect lexical competition during online processing and production, and its relevance for second language acquisition has also been established. Nevertheless, determining what makes two languages similar (or not) increases in complexity when multiple levels of the linguistic hierarchy (e.g., phonology, syntax) are considered at once. How can we feasibly account for the patterns of convergence and divergence at each level of representation, as well as the interactions between them? The growing field of network science brings new methodologies to bear on this longstanding question. Below, we summarize current network science approaches to modeling language structure and discuss implications for understanding various linguistic processes. Critically, we stress the particular value of multilayer techniques, unique and powerful in their ability to simultaneously accommodate an array of node-to-node relationships.

Zaharchuk, H. A., & Karuza, E. A. (2021). Multilayer networks: An untapped tool for understanding bilingual neurocognition. Brain and Language, 220, 104977.

Are our brains more prescriptive than our mouths? Experience with dialectal variation in syntax differentially impacts ERPs and behavior


We investigated online auditory comprehension of dialectal variation in English syntax with event-related potential (ERP) analysis of electroencephalographic data. The syntactic variant under investigation was the double modal, comprising two consecutive auxiliary verbs (e.g., might could). This construction appears across subregional dialects of Southern United States English and expresses indirectness or uncertainty. We compared processing of sentences containing attested double modals versus single modals in two groups of young adult participants: listeners who were either familiar (Southern) or unfamiliar (Unmarked) with double modal constructions. Both Southern and Unmarked listeners engaged rapid error detection (early anterior negativity) and sentence-level reanalysis (P600) in response to attested double modals relative to single modals. In contrast to the ERP data, offline acceptability and intelligibility judgments reflected dialect familiarity. We interpret these findings in relation to usage-based and socially weighted theories of language processing, which together capture the effects of frequency and standard language ideology.

Zaharchuk, H. A., Shevlin, A., & Van Hell, J. G. (2021). Are our brains more prescriptive than our mouths? Experience with dialectal variation in syntax differentially impacts ERPs and behavior. Brain and Language, 218, 104949.

Beat gestures facilitate speech production


Does gesturing help speakers find the right words? According to several theories of speech-gesture relationships, iconic gestures should facilitate speech production, but beat gestures should not. Here we tested the effects of gesturing on word production in two experiments. Participants produced low-frequency words from their definitions while instructed to perform beat gestures, while instructed to perform iconic gestures, or while given no instructions about gesturing (baseline condition). Compared to baseline, participants were faster to produce the target words while performing beat gestures, bimanually or with the left hand alone, but they were slower to produce the target words when instructed to perform iconic gestures. These results provide the first evidence that beat gestures can help speakers produce words. This benefit may arise from the fact that gestures are motor actions, rather than from any special properties of gestures per se.

Lucero, C., Zaharchuk, H., & Casasanto, D. (2014). Beat gestures facilitate speech production. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society. Red Hook, NY: Curran Associates.