Multilayer networks: An untapped tool for understanding bilingual neurocognition
Abstract
Cross-linguistic similarity is a term so broad and multifaceted that it is not easily defined. The degree of overlap between languages is known to affect lexical competition during online processing and production, and its relevance for second language acquisition has also been established. Nevertheless, determining what makes two languages similar (or not) increases in complexity when multiple levels of the linguistic hierarchy (e.g., phonology, syntax) are considered at once. How can we feasibly account for the patterns of convergence and divergence at each level of representation, as well as the interactions between them? The growing field of network science brings new methodologies to bear on this longstanding question. Here, we summarize current network science approaches to modeling language structure and discuss implications for understanding various linguistic processes. Critically, we stress the particular value of multilayer techniques, which are uniquely powerful in their ability to simultaneously accommodate an array of node-to-node relationships.
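To make the multilayer idea concrete, the sketch below shows one minimal way such a structure can be represented: word nodes shared across layers, with each layer (e.g., phonological similarity, semantic/translation equivalence) holding its own edge set, so a single bilingual word pair can be related in several ways at once. The words, layer names, and the `MultilayerNetwork` class are all illustrative assumptions, not constructs from the article.

```python
from collections import defaultdict

class MultilayerNetwork:
    """Toy multilayer lexical network: nodes are words, and each
    layer keeps its own set of undirected edges, so one word pair
    can be linked in multiple layers simultaneously."""

    def __init__(self):
        # layer name -> set of undirected edges (stored as frozensets)
        self.layers = defaultdict(set)

    def add_edge(self, layer, u, v):
        self.layers[layer].add(frozenset((u, v)))

    def degree(self, node, layer=None):
        # Degree within one layer, or aggregated across all layers
        names = [layer] if layer is not None else list(self.layers)
        return sum(1 for name in names
                   for edge in self.layers[name] if node in edge)

net = MultilayerNetwork()
# Phonological layer: form overlap within and across languages (hypothetical pairs)
net.add_edge("phonology", "gato", "pato")
net.add_edge("phonology", "cat", "cap")
net.add_edge("phonology", "vaca", "baker")
# Semantic layer: translation equivalence across languages (hypothetical pairs)
net.add_edge("semantics", "cat", "gato")
net.add_edge("semantics", "cow", "vaca")

print(net.degree("vaca"))               # aggregated across both layers: 2
print(net.degree("vaca", "semantics"))  # within the semantic layer only: 1
```

Separating edges by layer while sharing a common node set is what lets layer-specific measures (per-layer degree, as here) coexist with aggregate ones, mirroring how multilayer analyses can examine one level of linguistic representation in isolation or all levels jointly.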