<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title>Frontiers in Human Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Hum. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnhum.2024.1370572</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Human Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Predicting language outcome at birth</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Ortiz-Barajas</surname> <given-names>Maria Clemencia</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2358407/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/formal-analysis/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff><institution>CNRS, IKER (URM 5478)</institution>, <addr-line>Bayonne</addr-line>, <country>France</country></aff>
<author-notes>
<fn fn-type="edited-by" id="fn0001">
<p>Edited by: Marcela Pena, Pontificia Universidad Cat&#x00F3;lica de Chile, Chile</p>
</fn>
<fn fn-type="edited-by" id="fn0002">
<p>Reviewed by: Alessandro Tavano, Max Planck Society, Germany</p>
<p>Sari Ylinen, Tampere University, Finland</p>
</fn>
<corresp id="c001">&#x002A;Correspondence: Maria Clemencia Ortiz-Barajas, <email>mariac.ortizb@gmail.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>05</day>
<month>07</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>18</volume>
<elocation-id>1370572</elocation-id>
<history>
<date date-type="received">
<day>14</day>
<month>01</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>11</day>
<month>06</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2024 Ortiz-Barajas.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Ortiz-Barajas</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Even though most children acquire language effortlessly, not all do. Nowadays, language disorders are difficult to diagnose before 3&#x2013;4&#x2009;years of age, because diagnosis relies on behavioral criteria difficult to obtain early in life. Using electroencephalography, I investigated whether differences in newborns&#x2019; neural activity when listening to sentences in their native language (French) and a rhythmically different unfamiliar language (English) relate to measures of later language development at 12 and 18&#x2009;months. Here I show that activation differences in the theta band at birth predict language comprehension abilities at 12 and 18&#x2009;months. These findings suggest that a neural measure of language discrimination at birth could be used in the early identification of infants at risk of developmental language disorders.</p>
</abstract>
<kwd-group>
<kwd>EEG</kwd>
<kwd>theta activity</kwd>
<kwd>newborns</kwd>
<kwd>language development</kwd>
<kwd>predictability</kwd>
</kwd-group>
<counts>
<fig-count count="6"/>
<table-count count="2"/>
<equation-count count="0"/>
<ref-count count="62"/>
<page-count count="13"/>
<word-count count="9962"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Cognitive Neuroscience</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<label>1</label>
<title>Introduction</title>
<p>Most children acquire their native language(s) rapidly and effortlessly during the first years of life regardless of culture (<xref ref-type="bibr" rid="ref16">Kuhl, 2004</xref>). However, this is not always the case. Around 7% of kindergarten children (5&#x2013;6&#x2009;years) (<xref ref-type="bibr" rid="ref55">Tomblin et al., 1997</xref>) are identified as having specific language impairment (SLI, also known as developmental language disorder, DLD), a disorder characterized by difficulty understanding and producing spoken language in the absence of other cognitive deficits. Another 5 to 17% of school children suffer from dyslexia (<xref ref-type="bibr" rid="ref50">Shaywitz, 1998</xref>), a specific deficit in reading acquisition not attributable to low IQ, poor education or neurological damage (<xref ref-type="bibr" rid="ref42">Ramus and Ahissar, 2012</xref>). If untreated, these disorders can have an impact on many aspects of the child&#x2019;s life (social, behavioral, academic), which can persist into adulthood. Nowadays, language disorders are difficult to diagnose before 3&#x2013;4&#x2009;years of age (<xref ref-type="bibr" rid="ref3">Cristia et al., 2014</xref>), because diagnosis relies on behavioral criteria that are difficult to obtain early in life. However, children with learning or reading disabilities typically show deficits in speech perception well before their disorder is diagnosed (<xref ref-type="bibr" rid="ref19">Kuhl et al., 2005</xref>). Identifying measures that could allow their earlier detection is fundamental for the design of earlier interventions.</p>
<p>Previous research has shown that phonological deficits are often found in individuals with dyslexia and/or SLI (<xref ref-type="bibr" rid="ref41">Ramus, 2003</xref>; <xref ref-type="bibr" rid="ref49">Schulte-K&#x00F6;rne and Bruder, 2010</xref>; <xref ref-type="bibr" rid="ref20">Leonard, 2014</xref>). However, whether these deficits are speech-specific or related to basic auditory perception is still under debate (<xref ref-type="bibr" rid="ref23">Lorusso et al., 2014</xref>; <xref ref-type="bibr" rid="ref2">Cantiani et al., 2016</xref>). Furthermore, deficits in processing auditory information in early infancy/childhood have been shown to relate to poorer later language and literacy skills in school (<xref ref-type="bibr" rid="ref30">Molfese, 2000</xref>; <xref ref-type="bibr" rid="ref21">Lepp&#x00E4;nen et al., 2010</xref>; <xref ref-type="bibr" rid="ref58">Van Zuijen et al., 2013</xref>; <xref ref-type="bibr" rid="ref48">Schaadt et al., 2015</xref>; <xref ref-type="bibr" rid="ref2">Cantiani et al., 2016</xref>; <xref ref-type="bibr" rid="ref22">Lohvansuu et al., 2018</xref>). <xref ref-type="bibr" rid="ref30">Molfese (2000)</xref> found that the amplitude and latency of ERPs recorded at birth while infants listened to speech and non-speech sounds could predict with 81% accuracy whether at 8&#x2009;years of age children would be identified as normal, poor or dyslexic readers. In another newborn study, <xref ref-type="bibr" rid="ref21">Lepp&#x00E4;nen et al. (2010)</xref> showed that children at familial risk for dyslexia exhibited atypical processing of sound frequency at birth, as evidenced by their ERP response to tones varying in pitch. Additionally, these early differences in auditory processing were related to phonological skills and letter knowledge before school age, as well as to phoneme duration perception, reading speed and spelling accuracy in the second grade of school (<xref ref-type="bibr" rid="ref21">Lepp&#x00E4;nen et al., 2010</xref>).
Similarly, <xref ref-type="bibr" rid="ref2">Cantiani et al. (2016)</xref> investigated Rapid Auditory Processing (RAP) abilities in 6-month-olds at risk for Language Learning Impairment (LLI), by assessing their discrimination of pairs of tones varying in frequency and duration. They found these infants&#x2019; ERPs to be atypical and predictive of their expressive vocabulary at 20&#x2009;months (<xref ref-type="bibr" rid="ref2">Cantiani et al., 2016</xref>). More recently, <xref ref-type="bibr" rid="ref29">Mittag et al. (2021)</xref> used magnetoencephalography (MEG) to investigate auditory processing of white noise in 6- and 12-month-olds. They found atypical auditory responses in infants at risk for dyslexia, which predicted syntactic processing between 18 and 30&#x2009;months, as well as word production at 18 and 21&#x2009;months. However, this predictive relation was not found for the control infants.</p>
<p>Other studies have also investigated whether early speech perception abilities relate to later language acquisition. This line of research is motivated by the native language neural commitment (NLNC) hypothesis (<xref ref-type="bibr" rid="ref15">Kuhl, 2000</xref>, <xref ref-type="bibr" rid="ref16">2004</xref>), which proposes that early linguistic experience with the native language produces dedicated neural networks that influence the brain&#x2019;s ability to learn language. This hypothesis suggests that infants&#x2019; early skills in native-language phonetic perception should predict infants&#x2019; later language abilities (<xref ref-type="bibr" rid="ref16">Kuhl, 2004</xref>). <xref ref-type="bibr" rid="ref57">Tsao et al. (2004)</xref> tested this hypothesis by performing one of the first studies exploring the link between speech perception and language acquisition before the age of 2&#x2009;years. They used the conditioned head-turn task to test 6-month-old infants on a speech discrimination task (a vowel contrast perceived by adults as native), and found significant correlations between their speech perception skills at 6&#x2009;months and vocabulary measures (words understood, words produced and phrases understood) at 13, 16 and 24&#x2009;months. In a follow-up study, <xref ref-type="bibr" rid="ref19">Kuhl et al. (2005)</xref> tested a similar paradigm on 7-month-olds, this time with two conditions: one contrast from their native language, and one from a non-native language. They found that both native and non-native phonetic perception abilities were related to later measures of language outcome but in opposite directions: better native-language discrimination at 7&#x2009;months was positively correlated with expressive vocabulary at 18 and 24&#x2009;months, whereas better non-native-language discrimination was negatively correlated with expressive vocabulary at 18 and 24&#x2009;months. 
These findings were supported by an electrophysiological study comparing ERP responses in 11-month-olds to native and foreign speech contrasts (<xref ref-type="bibr" rid="ref45">Rivera-Gaxiola et al., 2005</xref>). They showed that infants who exhibited larger (more positive) P150-250 amplitudes to the foreign deviant with respect to the standard produced more words at 18, 22, 25, 27, and 30&#x2009;months than those who displayed larger (more negative) N250-550 amplitudes to the foreign deviant with respect to the standard, at the same ages. A later ERP study from the same team showed that ERP responses to native and non-native contrasts at 7&#x2009;months also related to later language outcomes, again in opposing directions: greater negativity of the MMN (mismatch negativity) to native language phonetic contrasts at 7&#x2009;months was associated with a larger number of words produced at 18 and 24&#x2009;months, whereas more negative MMNs to non-native language phonetic contrasts at 7&#x2009;months predicted fewer words produced at 24&#x2009;months (<xref ref-type="bibr" rid="ref18">Kuhl et al., 2008</xref>). <xref ref-type="bibr" rid="ref18">Kuhl et al. (2008)</xref> suggest that increased sensitivity in the perception of native phonetic contrasts is indicative of neural commitment to the native language, whereas sensitivity to non-native contrasts reveals uncommitted neural circuitry. The ERP responses shown in these studies appear to reflect this level of neural commitment, which in turn predicts language scores at later ages (<xref ref-type="bibr" rid="ref45">Rivera-Gaxiola et al., 2005</xref>; <xref ref-type="bibr" rid="ref18">Kuhl et al., 2008</xref>).</p>
<p>Previous linguistic studies focused on the discrimination of phonetic contrasts as the early measure of speech perception that could predict later language skills. A recent electroencephalography (EEG) study explored whether neural tracking of sung nursery rhymes during infancy could predict language development in infants with high likelihood of autism (<xref ref-type="bibr" rid="ref27">Menn et al., 2022</xref>). Autistic children often show delay in language acquisition (<xref ref-type="bibr" rid="ref11">Howlin, 2003</xref>), which is why identifying measures that could predict later language skills is relevant for this population. <xref ref-type="bibr" rid="ref27">Menn et al. (2022)</xref> found that infants with higher speech-brain coherence in the stressed syllable rate (1&#x2013;3&#x2009;Hz) at 10&#x2009;months showed higher receptive and productive vocabulary (words understood and words produced) at 24&#x2009;months, but no relationship with later autism symptoms. They suggest that these results could reflect a relationship between infants&#x2019; tracking of stressed syllables and word-segmentation skills (<xref ref-type="bibr" rid="ref27">Menn et al., 2022</xref>), which in turn predict later vocabulary development (<xref ref-type="bibr" rid="ref12">Junge et al., 2012</xref>; <xref ref-type="bibr" rid="ref14">Kooijman et al., 2013</xref>). Similarly, a recent study investigating word learning at birth revealed that neonates can memorize disyllabic words so that having learnt the first syllable they can predict the word ending, and the quality of word-form learning predicts expressive language skills at 2&#x2009;years (<xref ref-type="bibr" rid="ref54">Suppanen et al., 2022</xref>).</p>
<p>To my knowledge, most studies investigating infant speech perception abilities as possible predictors of later language development have tested infants using phonetic contrasts (<xref ref-type="bibr" rid="ref57">Tsao et al., 2004</xref>; <xref ref-type="bibr" rid="ref19">Kuhl et al., 2005</xref>; <xref ref-type="bibr" rid="ref45">Rivera-Gaxiola et al., 2005</xref>; <xref ref-type="bibr" rid="ref18">Kuhl et al., 2008</xref>), bi-syllabic pseudo-words (<xref ref-type="bibr" rid="ref54">Suppanen et al., 2022</xref>), and nursery rhymes (<xref ref-type="bibr" rid="ref27">Menn et al., 2022</xref>). However, perception abilities of natural speech have rarely been used as predictors. Here, I explore the potential of using EEG measures at birth in response to naturally spoken sentences in the native language (prenatally heard) and a rhythmically different unfamiliar language as predictors of later language skills in typically-developing infants.</p>
<p>At birth, infants are equipped with a rich set of speech perception abilities that help them acquire language from the get-go. Some of these are universal, broad-based abilities, in place independently of what language they heard <italic>in utero</italic> (<xref ref-type="bibr" rid="ref35">Ortiz Barajas and Gervain, 2021</xref>). For instance, newborns can recognize speech, and show preference for it over equally complex speech analogs (<xref ref-type="bibr" rid="ref61">Vouloumanos and Werker, 2007</xref>). They are also able to discriminate two languages, even if they are unfamiliar to them, on the basis of their different rhythms (<xref ref-type="bibr" rid="ref26">Mehler et al., 1988</xref>; <xref ref-type="bibr" rid="ref34">Nazzi et al., 1998</xref>; <xref ref-type="bibr" rid="ref43">Ramus et al., 2000</xref>), but they are unable to discriminate them if their rhythms are similar (<xref ref-type="bibr" rid="ref34">Nazzi et al., 1998</xref>; <xref ref-type="bibr" rid="ref43">Ramus et al., 2000</xref>). Interestingly, newborns also exhibit speech perception abilities shaped by prenatal experience with the language(s) spoken by their mother during the last trimester of pregnancy. Newborns&#x2019; prenatal experience with speech mainly consists of language prosody, i.e., rhythm and melody, because maternal tissues filter out the higher frequencies, necessary for the identification of individual phonemes, but preserve the low-frequency components that carry prosody (<xref ref-type="bibr" rid="ref40">Pujol et al., 1991</xref>). On the basis of this experience, newborns are able to recognize their native language, and prefer it over other languages (<xref ref-type="bibr" rid="ref26">Mehler et al., 1988</xref>; <xref ref-type="bibr" rid="ref32">Moon et al., 1993</xref>). 
Furthermore, it has been shown that recognizing the language heard <italic>in utero</italic> goes beyond simply discriminating it from an unfamiliar one, as monolingual and bilingual newborns exhibit different patterns when presented with the same pair of rhythmically different languages: monolinguals, who are familiar with one of the languages being contrasted, discriminate them and prefer the familiar language; while bilinguals, who are familiar with both languages being contrasted, discriminate them and show equal preference for both languages (<xref ref-type="bibr" rid="ref1">Byers-Heinlein et al., 2010</xref>).</p>
<p>Building on previous research showing that the discrimination of native/non-native phonetic contrasts predicts later language skills (<xref ref-type="bibr" rid="ref19">Kuhl et al., 2005</xref>; <xref ref-type="bibr" rid="ref45">Rivera-Gaxiola et al., 2005</xref>; <xref ref-type="bibr" rid="ref18">Kuhl et al., 2008</xref>), here I explore whether newborns&#x2019; ability to discriminate languages on the basis of their different rhythms could relate to language development. It has been suggested that individuals with dyslexia have difficulty extracting stimulus regularities from auditory inputs (<xref ref-type="bibr" rid="ref5">Daikhin et al., 2017</xref>); therefore, a rhythmic discrimination task, which requires detecting regularities in speech rhythm, represents a good predictor candidate for this population.</p>
<p>The neural mechanisms that support rhythmic discrimination in infants are not fully understood (<xref ref-type="bibr" rid="ref35">Ortiz Barajas and Gervain, 2021</xref>). Previous infant studies have shown that low-frequency neural activity (delta and/or theta band) reflects language discrimination at birth (<xref ref-type="bibr" rid="ref37">Ortiz-Barajas et al., 2023</xref>) and at 4.5&#x2009;months (<xref ref-type="bibr" rid="ref33">Nacar Garcia et al., 2018</xref>). Since rhythm is carried by the low-frequency components of the speech signal (<xref ref-type="bibr" rid="ref46">Rosen, 1992</xref>), specifically the syllabic rate, it is reasonable for rhythm to be encoded by the low-frequency delta and theta oscillations. In adults, theta activity has been claimed to support the processing of syllables. This claim has mainly been based on two facts: (1) the syllabic rate of speech, roughly 4&#x2013;5&#x2009;Hz (<xref ref-type="bibr" rid="ref6">Ding et al., 2017</xref>; <xref ref-type="bibr" rid="ref60">Varnet et al., 2017</xref>), corresponds to the frequencies of the theta band (<xref ref-type="bibr" rid="ref9">Giraud and Poeppel, 2012</xref>), and (2) brain responses in the theta band have been shown to synchronize to the speech envelope, corresponding to the slow overall amplitude fluctuations of the speech signal over time, with peaks occurring roughly at the syllabic rate (<xref ref-type="bibr" rid="ref10">Gross et al., 2013</xref>; <xref ref-type="bibr" rid="ref31">Molinaro et al., 2016</xref>; <xref ref-type="bibr" rid="ref59">Vander Ghinst et al., 2016</xref>; <xref ref-type="bibr" rid="ref63">Zoefel and VanRullen, 2016</xref>; <xref ref-type="bibr" rid="ref39">Pefkou et al., 2017</xref>; <xref ref-type="bibr" rid="ref52">Song and Iverson, 2018</xref>). 
Furthermore, newborns&#x2019; neural activity has been found to track (synchronize to) the speech envelope of familiar and unfamiliar languages equally well, suggesting that envelope tracking at birth represents a basic auditory ability that helps newborns encode the speech rhythm of familiar and unfamiliar languages, supporting language discrimination (<xref ref-type="bibr" rid="ref36">Ortiz Barajas et al., 2021</xref>; <xref ref-type="bibr" rid="ref35">Ortiz Barajas and Gervain, 2021</xref>).</p>
<p>To explore the use of a neural measure of language discrimination at birth as a predictor of language outcome, I recorded EEG data from 51 full-term, healthy newborns (mean age: 2.39&#x2009;days; range: 1&#x2013;5&#x2009;days; 20 females), born to French monolingual mothers, while they listened to naturally spoken sentences in three languages: their native language, i.e., the language heard prenatally, French, a rhythmically similar unfamiliar language, Spanish, and a rhythmically different unfamiliar language, English (<xref ref-type="fig" rid="fig1">Figure 1A</xref> illustrates the study design). As infants were tested within their first 5&#x2009;days of life, their experience with speech was mostly prenatal. Based on the above-mentioned speech perception abilities, it is reasonable to assume that participants should be able to discriminate the prenatally heard language, French (syllable-timed), from English (stress-timed) based on their different rhythms, and to prefer it, but not to discriminate it from Spanish (syllable-timed), as these two languages are rhythmically similar. Given that stimuli are presented in 7&#x2009;min blocks, and languages are not contrasted closely, I hypothesize that for language recognition to take place, the newborn brain compares each language to the long-term representation it has formed from prenatal experience, in order to recognize familiar features. This hypothesis is supported by a recent study from my team investigating the role of prenatal experience on long-range temporal correlations (LRTC) using a superset of the EEG dataset used here (<xref ref-type="bibr" rid="ref24">Mariani et al., 2023</xref>), which revealed that the newborn brain exhibits stronger correlations in the theta band after being exposed to the native language (French) than to the rhythmically similar (Spanish) and the rhythmically different (English) unfamiliar languages, indicating the early emergence of brain specialization for the native language. 
These findings support the hypothesis that participants from this study did recognize the prenatally heard language, and that such recognition is reflected by theta activity.</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>EEG experimental setup and design. <bold>(A)</bold> Experiment block design. ISI: Interstimulus interval, IBI: Interblock interval. <bold>(B)</bold> Location of recorded channels according to the international 10&#x2013;20 system. Figure adapted from <xref ref-type="bibr" rid="ref36">Ortiz Barajas et al. (2021)</xref>.</p>
</caption>
<graphic xlink:href="fnhum-18-1370572-g001.tif"/>
</fig>
<p>I assessed language rhythmic discrimination at birth as the neural activation difference between the native language (French) and the rhythmically different unfamiliar language (English). I expect this discrimination measure to reflect neural commitment to the native language and in turn to predict language scores at later ages as follows: higher discrimination measures should predict higher language scores, reflecting commitment to the native language, whereas lower discrimination measures should predict lower language scores, reflecting uncommitted neural circuitry. Spanish sentences were presented in this experiment as part of a larger project investigating speech perception at birth. However, here I do not present results for Spanish, as I focus on the rhythmic discrimination of the native language (French) and the rhythmically different unfamiliar language (English).</p>
<p>To explore the potential use of this neural discrimination measure as a predictor of language outcome, participants were followed longitudinally in order to describe their developmental trajectory and to look at their individual variability. <xref ref-type="fig" rid="fig2">Figure 2</xref> displays the timeline of the longitudinal study: EEG data were recorded at birth, followed by the collection of information about the participants&#x2019; vocabulary size at 12 and 18&#x2009;months using the MacArthur-Bates Communicative Developmental Inventory (CDI) questionnaires. The participants&#x2019; receptive and expressive vocabulary sizes were estimated from the CDI questionnaires, in order to track their language development and relate it to their neural measures at birth. To assess the predictive role of language discrimination at birth on later language abilities, I conducted a path analysis including newborns&#x2019; performance at discriminating the native language (French) from a rhythmically different unfamiliar one (English), and their measures of vocabulary size at 12 and 18&#x2009;months (number of words understood and number of words produced). A total of 51 infants contributed neural data at birth, and 35 of them contributed at least one CDI questionnaire at the subsequent ages. Vocabulary data were collected from 27 participants at 12&#x2009;months, and 30 participants at 18&#x2009;months (<xref ref-type="supplementary-material" rid="SM1">Supplementary Table S1</xref>).</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Study timeline indicating the time points when longitudinal data were collected, and displaying some of the developing speech perception (solid boxes) and production (dashed boxes) abilities children exhibit during the first 18&#x2009;months of life.</p>
</caption>
<graphic xlink:href="fnhum-18-1370572-g002.tif"/>
</fig>
</sec>
<sec sec-type="materials|methods" id="sec2">
<label>2</label>
<title>Materials and methods</title>
<p>The EEG data from this study were acquired as part of a larger project that aimed to investigate speech perception during the first two years of life. One previous publication presented a superset of the current dataset (47 participants) evaluating speech envelope tracking in newborns and 6-month-olds (<xref ref-type="bibr" rid="ref36">Ortiz Barajas et al., 2021</xref>). A second publication, evaluating the role of neural oscillations during speech processing at birth, presented a subset (40 participants) of the initial publication (<xref ref-type="bibr" rid="ref37">Ortiz-Barajas et al., 2023</xref>). A third publication, exploring changes in neural dynamics at birth, presented a subset (33 participants) of the initial publication (<xref ref-type="bibr" rid="ref24">Mariani et al., 2023</xref>). These three publications evaluated different hypotheses and therefore analyzed different aspects of the data, which explains the differences in sample size. The EEG dataset used in this manuscript (29 participants) represents a subset of that used in the previous publications, as not all participants contributed vocabulary measures at 12 and 18&#x2009;months. The processed EEG data that support the findings of this study have been deposited in the OSF repository <ext-link xlink:href="https://osf.io/4w69p" ext-link-type="uri">https://osf.io/4w69p</ext-link>.</p>
<sec id="sec3">
<label>2.1</label>
<title>Participants</title>
<p>The protocol for this study was approved by the CER Paris Descartes ethics committee of the Paris Descartes University (currently, Universit&#x00E9; Paris Cit&#x00E9;). All parents gave written informed consent prior to participation, and were present during the testing session.</p>
<p>For the first measure of the study, newborns were recruited at the maternity ward of the Robert-Debr&#x00E9; Hospital in Paris, where they were tested during their hospital stay. The inclusion criteria were: (i) being full-term and healthy, (ii) having a birth weight&#x2009;&#x003E;&#x2009;2,800&#x2009;g, (iii) having an Apgar score&#x2009;&#x003E;&#x2009;8, (iv) being at most 5&#x2009;days old, and (v) being born to native French-speaking mothers who spoke this language at least 80% of the time during the last trimester of the pregnancy according to self-report. A total of 54 newborns took part in the EEG experiment. However, 3 participants failed to complete the recording due to fussiness and crying (<italic>n</italic>&#x2009;=&#x2009;2) or technical problems (<italic>n</italic>&#x2009;=&#x2009;1), and were thus excluded from the longitudinal study. The remaining <bold>51 newborns</bold> (20 girls, 31 boys; age 2.39&#x2009;&#x00B1;&#x2009;1.17 d; range 1&#x2013;5 d) were followed longitudinally by means of the CDI questionnaires.</p>
<p>For the second and third measures of the study, parents of the infants who contributed EEG data at birth were requested to fill out vocabulary questionnaires when their children turned 12 and 18&#x2009;months. As is often the case in longitudinal studies, some of the participants did not contribute measures to all the assessments. A total of <bold>35 participants</bold> contributed at least one vocabulary questionnaire (at 12 and/or 18&#x2009;months), of which 27 participants contributed CDI data at 12&#x2009;months, and 30 participants at 18&#x2009;months. <xref ref-type="supplementary-material" rid="SM1">Supplementary Table S1</xref> presents the list of participants and the data points that they contributed longitudinally.</p>
<p>Of the 35 participants who contributed EEG recordings and vocabulary data, 6 were excluded due to poor EEG data quality in at least one of the language conditions of interest (French and English). Therefore, a final sample of <bold>29 participants</bold> contributed good quality EEG data at birth and were included in the prediction analyses: a subset of <bold>22 participants</bold> contributed CDI data at 12&#x2009;months, while a subset of <bold>27 participants</bold> contributed CDI data at 18&#x2009;months.</p>
</sec>
<sec id="sec4">
<label>2.2</label>
<title>Procedure</title>
<p><xref ref-type="fig" rid="fig2">Figure 2</xref> presents a timeline highlighting the three ages when data were collected: EEG data at birth, and CDI data at 12&#x2009;months and 18&#x2009;months.</p>
<p>For the first measure of the study, newborns were presented with naturally spoken sentences in three languages while their neural activity was simultaneously recorded using EEG. The recording sessions were conducted in a dimmed, quiet room at the Robert-Debr&#x00E9; Hospital in Paris, while newborns were comfortably asleep or at rest in their hospital bassinets. The stimuli were delivered bilaterally through two loudspeakers positioned on each side of the bassinet (<xref ref-type="fig" rid="fig2">Figure 2</xref>, EEG recording at birth) using the experimental software E-Prime. The sound volume was set to a comfortable conversational level (~65&#x2013;70&#x2009;dB). Participants were divided into 3 groups, each of which heard a different set of sentences: 17 newborns heard set 1, 17 newborns heard set 2, and 17 newborns heard set 3. <xref ref-type="supplementary-material" rid="SM1">Supplementary Table S2</xref> presents the three sets of sentences used in the study. Participants were presented with one sentence per language (French, English, Spanish), which was repeated 100 times to ensure sufficiently good data quality. The experiment consisted of 3 blocks, each containing the 100 repetitions of the test sentence in a given language; each block thus lasted around 7&#x2009;min. An interstimulus interval of random duration (between 1 and 1.5&#x2009;s) was introduced between sentence repetitions, and an interblock interval of 10&#x2009;s was introduced between language blocks (<xref ref-type="fig" rid="fig1">Figure 1A</xref>). The order of the languages was pseudo-randomized and approximately counterbalanced across participants. The entire recording session lasted about 21&#x2009;min.</p>
<p>For the second and third measures of the study, parents were asked to fill out the French version of the MacArthur-Bates Communicative Development Inventory (CDI) questionnaires (<xref ref-type="bibr" rid="ref13">Kern, 2007</xref>) when their child turned 12 and 18&#x2009;months. In each case they were asked to return the questionnaire before their child turned 13 and 19&#x2009;months, respectively, to ensure that the measurement would not exceed these age limits. To make it easier for parents to complete the questionnaires, I provided them with the short version of the CDI, which is one page long. The short version of the CDI has been shown to be as reliable as the original version for the English CDI (<xref ref-type="bibr" rid="ref7">Floccia et al., 2018</xref>). For the measurement at 12&#x2009;months I used the <italic>Words and Gestures</italic> CDI, which inquires about the child&#x2019;s babbling skills, provides a list of 83 words for parents to indicate whether the child understands and spontaneously produces them, and a list of 25 gestures for them to indicate whether the child makes them (e.g., shaking the head to say no). For the measurement at 18&#x2009;months I used the <italic>Words and Sentences</italic> CDI, which provides a list of 97 words for parents to indicate whether the child understands and spontaneously produces them, and inquires whether the child has started to combine words.</p>
</sec>
<sec id="sec5">
<label>2.3</label>
<title>Stimuli</title>
<p>At birth, I presented infants with sentences in three languages: their native language (French), a rhythmically similar unfamiliar language (Spanish), and a rhythmically different unfamiliar language (English). The stimuli consisted of sentences taken from the story <italic>Goldilocks and the Three Bears</italic>. Sentences were divided into three sets, where each set comprised the translation of a single utterance into the three languages (English, French, and Spanish). For instance, set 1 contained the following three sentences: <italic>The bears lived all together in a beautiful house</italic> (English); <italic>Les ours habitaient tous ensemble dans une maison</italic> (French); <italic>Los osos viv&#x00ED;an juntos en una casa</italic> (Spanish). The translations were slightly modified by adding or removing adjectives (or phrases) from certain sentences in order to match the duration and syllable count across languages within the same set. All sentences were recorded in mild infant-directed speech by a female native speaker of each language (a different speaker for each language), at a sampling rate of 44.1&#x2009;kHz. There were no significant differences between the sentences in the three languages in terms of minimum pitch, maximum pitch, pitch range, and average pitch. <xref ref-type="supplementary-material" rid="SM1">Supplementary Table S2</xref> presents detailed information about the 9 sentences used as stimuli (i.e., duration, syllable count, pitch), and <xref ref-type="supplementary-material" rid="SM1">Supplementary Figures S1, S2</xref> display the sentences&#x2019; time series and frequency spectra, respectively. Additionally, the amplitude and frequency modulation spectra as defined by <xref ref-type="bibr" rid="ref60">Varnet et al. (2017)</xref> are presented in <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure S3</xref>. The utterances were found to be similar in every spectral decomposition. 
The intensity of all recordings was adjusted to 77&#x2009;dB.</p>
</sec>
<sec id="sec6">
<label>2.4</label>
<title>EEG data acquisition</title>
<p>EEG data were recorded at birth with active electrodes and an acquisition system from Brain Products (actiCAP &#x0026; actiCHamp, Brain Products GmbH, Gilching, Germany). A 10-channel layout was used to acquire cortical responses from the following scalp positions: F7, F3, FZ, F4, F8, T7, C3, CZ, C4, T8 (<xref ref-type="fig" rid="fig1">Figure 1B</xref>). These recording locations were chosen to include those where auditory and speech-perception-related neural responses are typically observed in infants (<xref ref-type="bibr" rid="ref53">Stefanics et al., 2009</xref>; <xref ref-type="bibr" rid="ref56">T&#x00F3;th et al., 2017</xref>) (channels T7 and T8 were formerly labeled T3 and T4, respectively). An additional electrode was placed on each mastoid for online reference, and a ground electrode was placed on the forehead. Data were referenced online to the average of the two mastoid channels and were not re-referenced offline. Data were recorded at a sampling rate of 500&#x2009;Hz and filtered online with a high cutoff at 200&#x2009;Hz, a low cutoff at 0.01&#x2009;Hz, and an 8&#x2009;kHz (&#x2212;3&#x2009;dB) anti-aliasing filter. Electrode impedances were kept below 140&#x2009;k&#x03A9;.</p>
</sec>
<sec id="sec7">
<label>2.5</label>
<title>EEG data analysis</title>
<p>The EEG data were processed using custom Matlab&#x00AE; scripts. To extract the low-frequency activity of interest (delta and theta), the continuous EEG signals were band-pass filtered between 1 and 8&#x2009;Hz with a zero phase-shift Chebyshev filter. The filtered signals were then segmented into a series of 2,560-ms long epochs. Each epoch started 400&#x2009;ms before the utterance onset (corresponding to the pre-stimulus baseline) and contained a 2,160-ms long post-stimulus interval (corresponding to the duration of the shortest sentence). All epochs were submitted to a three-stage rejection process to exclude contaminated ones: (1) epochs with a peak-to-peak amplitude exceeding 150&#x2009;&#x03BC;V were rejected; (2) epochs with a standard deviation (SD) higher than 3 times the mean SD of all non-rejected epochs, or lower than one-third of it, were rejected; (3) the remaining epochs were visually inspected to remove any residual artifacts. Participants with fewer than 20 remaining epochs in a given condition after epoch rejection were excluded. Of the 35 participants who contributed EEG and CDI data, 6 were excluded due to poor data quality, which resulted in an insufficient number of non-rejected epochs in one of the language conditions of interest (French and English). Therefore, 29 participants contributed good-quality EEG data for the French and English conditions (<xref ref-type="supplementary-material" rid="SM1">Supplementary Table S1</xref>). The included participants contributed on average 41 epochs (SD: 13.14; range: 20&#x2013;79) for French, and 35 epochs (SD: 10.69; range: 20&#x2013;62) for English. The numbers of non-rejected epochs from the 29 participants were submitted to a two-tailed paired-samples t-test, which yielded no significant difference between the two language conditions [<italic>p</italic>&#x2009;=&#x2009;0.082].</p>
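The two automatic rejection stages can be sketched as follows (a minimal NumPy sketch, not the original Matlab code; stage 3, visual inspection, is manual and omitted, and the mean SD over non-rejected epochs is approximated here by the stage-1 survivors):

```python
import numpy as np

# `epochs` is (n_epochs, n_channels, n_samples) in microvolts; thresholds
# follow the text: 150 uV peak-to-peak, and SD outside [mean_SD/3, 3*mean_SD].
def reject_epochs(epochs, ptp_limit=150.0):
    # Stage 1: reject epochs whose peak-to-peak amplitude exceeds the limit
    # on any channel.
    ptp = epochs.max(axis=2) - epochs.min(axis=2)     # (n_epochs, n_channels)
    keep = (ptp <= ptp_limit).all(axis=1)
    # Stage 2: reject epochs whose SD falls outside [mean_SD / 3, 3 * mean_SD],
    # with mean_SD computed over the epochs surviving stage 1.
    sd = epochs[keep].std(axis=(1, 2))
    mean_sd = sd.mean()
    ok = (sd >= mean_sd / 3) & (sd <= 3 * mean_sd)
    return np.flatnonzero(keep)[ok]                   # indices of kept epochs

rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 10.0, size=(50, 10, 1280))   # 2,560 ms at 500 Hz
epochs[0, 0, 640] = 500.0                             # inject a large artifact
kept = reject_epochs(epochs)                          # epoch 0 is rejected
```

A participant would then be excluded if `len(kept)` fell below 20 in any condition.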
<p>The non-rejected epochs were subjected to time-frequency analysis to uncover stimulus-evoked oscillatory responses using the Matlab&#x00AE; toolbox &#x2018;WTools&#x2019; (<xref ref-type="bibr" rid="ref38">Parise and Csibra, 2013</xref>). With this toolbox, a continuous wavelet transform of each non-rejected epoch was performed using Morlet wavelets (3.5 cycles) at 1&#x2009;Hz intervals in the 1&#x2013;8&#x2009;Hz range. The full pipeline is described in detail in <xref ref-type="bibr" rid="ref4">Csibra et al. (2000)</xref> and <xref ref-type="bibr" rid="ref38">Parise and Csibra (2013)</xref>. Briefly, complex Morlet wavelets are computed at steps of 1&#x2009;Hz with a sigma of 3.5. The real and imaginary parts of the wavelets are computed separately as cosine and sine components, respectively. The signal is then convolved with each wavelet, and the absolute value of each complex coefficient is computed. This process resulted in a time-frequency map of spectral amplitude values (not power) per epoch.</p>
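A minimal sketch of this amplitude extraction, assuming a standard complex Morlet parameterization with 3.5 cycles (WTools&#x2019; exact wavelet normalization may differ):

```python
import numpy as np
from scipy.signal import fftconvolve

FS = 500.0  # sampling rate in Hz

def morlet(freq, n_cycles=3.5):
    """Complex Morlet wavelet at `freq` Hz (assumed parameterization)."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)          # temporal SD in s
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / FS)
    wav = np.exp(2j * np.pi * freq * t) * np.exp(-(t**2) / (2 * sigma_t**2))
    return wav / np.abs(wav).sum()                     # crude normalization

def tf_amplitude(signal, freqs=range(1, 9)):
    """Time-frequency map of spectral amplitude (not power): (n_freqs, n_samples).
    Convolve with each wavelet, then take the absolute value of the
    complex coefficients."""
    return np.array([np.abs(fftconvolve(signal, morlet(f), mode="same"))
                     for f in freqs])

t = np.arange(0, 2.56, 1.0 / FS)                       # one 2,560 ms epoch
sig = np.sin(2 * np.pi * 5.0 * t)                      # pure 5 Hz test tone
tfmap = tf_amplitude(sig)
peak_hz = 1 + int(np.argmax(tfmap[:, 400:880].mean(axis=1)))
print(tfmap.shape, peak_hz)                            # (8, 1280) 5
```

As expected, a pure 5 Hz tone yields maximal amplitude in the 5 Hz row of the map (edge samples are excluded from the peak search, since they are distorted by the convolution).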
<p>Time-frequency transformed epochs were then averaged for French and English separately. To remove the distortion introduced by the wavelet transform, the first and last 200&#x2009;ms of the averaged epochs were removed, resulting in 2,160&#x2009;ms long segments, including 200&#x2009;ms before and 1,960&#x2009;ms after stimulus onset. The averaged epochs were then baseline corrected using the mean amplitude of the 200&#x2009;ms pre-stimulus window as baseline, subtracting it from the whole epoch at each frequency. This process resulted in a time-frequency map of spectral amplitude values per condition and channel, at the participant level. The group mean (29 participants) of these time-frequency maps for channel F4 is presented in <xref ref-type="fig" rid="fig3">Figure 3A</xref> as an example.</p>
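The edge trimming and baseline subtraction described above can be sketched as (a NumPy sketch under the sampling assumptions of the text: 500 Hz, a 2,560 ms epoch starting 400 ms before onset):

```python
import numpy as np

FS = 500                                   # samples per second
TRIM = int(0.2 * FS)                       # 200 ms = 100 samples

def trim_and_baseline(tfmap):
    """tfmap: (n_freqs, n_samples) averaged amplitude map. The epoch starts
    400 ms before stimulus onset, so after trimming 200 ms from each edge,
    200 ms of pre-stimulus baseline remain at the start."""
    trimmed = tfmap[:, TRIM:-TRIM]                     # drop wavelet edge distortion
    baseline = trimmed[:, :TRIM].mean(axis=1, keepdims=True)
    return trimmed - baseline                          # subtract per frequency row

tf = np.full((8, 1280), 2.0)               # flat 2,560 ms map at 500 Hz
corrected = trim_and_baseline(tf)
print(corrected.shape)                      # (8, 1080), i.e., 2,160 ms
```

A flat map is corrected to zero everywhere, confirming that the baseline is removed from the whole epoch at each frequency.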
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Neural activation during speech processing at birth. <bold>(A)</bold> Average time-frequency response to French and English at channel F4. The time-frequency maps illustrate the mean spectral amplitude per condition from 1 to 8&#x2009;Hz. The color bar to the right of the figure shows the spectral amplitude scale of the maps. <bold>(B)</bold> P-map obtained by submitting the time-frequency responses to French and English to paired-samples t-tests (two-tailed). <bold>(C)</bold> Time-frequency regions where the absolute <italic>T</italic>-values exceed the critical threshold (|<italic>T</italic>-value|&#x2009;&#x003E;&#x2009;2.048). Red regions indicate higher activation for French, while blue regions indicate higher activation for English. The dashed rectangular box indicates the cluster exhibiting significant differences between French and English at channel F4.</p>
</caption>
<graphic xlink:href="fnhum-18-1370572-g003.tif"/>
</fig>
<p>Language discrimination between French (the native language) and English (the rhythmically different unfamiliar language) was assessed by submitting the spectral amplitude values from their time-frequency responses to paired-samples t-tests (two-tailed). <xref ref-type="fig" rid="fig3">Figure 3B</xref> displays the <italic>P</italic>-map for this analysis in channel F4, and <xref ref-type="fig" rid="fig3">Figure 3C</xref> highlights the time-frequency regions where the absolute <italic>T</italic>-values for this comparison exceed the critical threshold (|<italic>T</italic>-value|&#x2009;&#x003E;&#x2009;2.048). Cluster-level statistics were calculated, and nonparametric statistical testing was performed by calculating the <italic>p</italic>-value of the clusters under the permutation distribution (<xref ref-type="bibr" rid="ref25">Maris and Oostenveld, 2007</xref>), which was obtained by permuting the language labels in the original dataset 1,000 times. The sample size for these analyses was 29 participants.</p>
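The cluster-based permutation logic can be sketched as follows (a simplified sketch after Maris and Oostenveld, 2007, not the original code: cluster mass is taken as the summed t-values of threshold-exceeding, adjacent time-frequency points, and for a paired design permuting language labels reduces to randomly flipping the sign of each participant's difference map; the exact cluster and adjacency definitions used in the study are assumptions):

```python
import numpy as np
from scipy import ndimage, stats

def cluster_perm_test(diff_maps, t_crit=2.048, n_perm=1000, seed=0):
    """diff_maps: (n_subjects, n_freqs, n_times) French-minus-English maps.
    Returns the largest observed cluster mass and its permutation p-value."""
    rng = np.random.default_rng(seed)
    n = diff_maps.shape[0]

    def max_cluster_mass(maps):
        t = stats.ttest_1samp(maps, 0.0, axis=0).statistic
        labels, n_clust = ndimage.label(np.abs(t) > t_crit)
        if n_clust == 0:
            return 0.0
        return max(np.abs(t[labels == k].sum()) for k in range(1, n_clust + 1))

    observed = max_cluster_mass(diff_maps)
    flips = (rng.choice([-1.0, 1.0], size=(n, 1, 1)) for _ in range(n_perm))
    null = np.array([max_cluster_mass(diff_maps * f) for f in flips])
    return observed, (null >= observed).mean()

rng = np.random.default_rng(1)
maps = rng.normal(0.0, 1.0, size=(29, 8, 50))   # 29 subjects, 8 freqs, 50 times
maps[:, 3:5, 10:30] += 1.0                      # inject a real French > English effect
mass, p = cluster_perm_test(maps, n_perm=200)   # injected cluster is significant
```

The injected effect produces a large cluster whose mass far exceeds anything in the sign-flipped null distribution, so the permutation p-value is small.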
<p>Once significant clusters, i.e., time-frequency regions where neural responses to French and English are significantly different, had been identified (<xref ref-type="fig" rid="fig3">Figure 3C</xref>), the mean spectral amplitude in the cluster&#x2019;s region was computed for each language separately. A neural measure of language discrimination was obtained by calculating the mean amplitude difference between the two language conditions (French &#x2013; English) in the region of the significant cluster. This process yielded one discrimination measure per participant, which represents the candidate predictor of later language skills.</p>
</sec>
<sec id="sec8">
<label>2.6</label>
<title>Predicting language outcome</title>
<p>Measures of language development were obtained from the CDI questionnaires collected at 12 and 18&#x2009;months. Receptive vocabulary was assessed as the number of <italic>words understood</italic>, and expressive vocabulary was assessed as the number of <italic>words produced</italic> at each given age. Data from one infant were removed from analysis because expressive vocabulary at 12&#x2009;months was larger than 3 SDs above the mean of the same score in the group.</p>
<p>To investigate the predictive role of language discrimination at birth on later language development, I conducted a path analysis considering the neural activation difference between French and English as the independent variable, and vocabulary measures at 12 and 18&#x2009;months (words understood and words produced) as dependent variables. <xref ref-type="fig" rid="fig4">Figure 4</xref> depicts the relationships that were assessed. Additionally, to evaluate whether CDI data reliably track infants&#x2019; vocabulary growth, the predictive role of vocabulary measures at 12&#x2009;months on vocabulary measures at 18&#x2009;months was also evaluated. Three hypotheses were tested: (i) neural data at birth can predict vocabulary skills at 12&#x2009;months; (ii) neural data at birth can predict vocabulary skills at 18&#x2009;months; and (iii) vocabulary skills at 12&#x2009;months can predict vocabulary skills at 18&#x2009;months. Each hypothesis was evaluated with two comparisons: one predicting the number of words understood and one predicting the number of words produced. The Bonferroni correction was applied to adjust the original alpha value (<italic>&#x03B1;</italic>&#x2009;=&#x2009;0.05) for the multiple comparisons evaluating the same hypothesis (<italic>n</italic>&#x2009;=&#x2009;2), resulting in an adjusted alpha value (<italic>&#x03B1;</italic>&#x2009;=&#x2009;0.025) that was used to evaluate the obtained results. To test for outliers, the residuals and influential cases were investigated. Residuals were evaluated by assessing heteroskedasticity with the White test and the Breusch-Pagan test. To identify possible influential cases, Cook&#x2019;s distance and leverage values were computed.</p>
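The influence diagnostics can be sketched as follows (a minimal OLS sketch, not the SPSS output; the simulated `x` and `y` merely stand in for the discrimination measure and a CDI score, and the flagging rule matches the criterion used later in the text, leverage greater than twice the average, which for an OLS fit equals 2p/n):

```python
import numpy as np

def ols_diagnostics(x, y):
    """Simple OLS fit plus leverage (hat-matrix diagonal) and Cook's distance."""
    X = np.column_stack([np.ones_like(x), x])          # intercept + predictor
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    H = X @ np.linalg.inv(X.T @ X) @ X.T               # hat matrix
    lev = np.diag(H)
    p, n = X.shape[1], len(y)
    mse = resid @ resid / (n - p)
    cooks = resid**2 / (p * mse) * lev / (1 - lev)**2  # standard formula
    influential = np.flatnonzero(lev > 2 * lev.mean()) # leverage > 2x average
    return beta, lev, cooks, influential

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.3, size=29)                      # e.g., neural measure
y = 40 + 25 * x + rng.normal(0.0, 10.0, size=29)       # e.g., words understood
x[0] = 2.0                                             # one high-leverage case
beta, lev, cooks, influential = ols_diagnostics(x, y)  # case 0 is flagged
```

Since the average leverage is fixed at p/n, the extreme predictor value is flagged regardless of its residual; Cook's distance additionally weights leverage by the squared residual.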
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Diagram of path analysis assessing the relationship between language measures from birth to 18&#x2009;months. Models 1 to 4 assess the predictive role of language discrimination at birth on vocabulary measures at 12 and 18&#x2009;months. Models 5 and 6 assess the predictive relationship between vocabulary measures at 12 and 18&#x2009;months. The single-ended arrows represent the predictive relationships under evaluation, and the double-ended arrows illustrate the non-causal relationships between variables (correlation). The solid black arrows illustrate significant relationships, while dashed gray arrows illustrate non-significant relationships.</p>
</caption>
<graphic xlink:href="fnhum-18-1370572-g004.tif"/>
</fig>
<p>Additionally, to assess the relationship between language skills at a given age, Pearson&#x2019;s correlation coefficients (two-tailed) were computed between the number of words understood and the number of words produced at 12 and 18&#x2009;months, separately. All statistical analyses were carried out with SPSS 29 (IBM).</p>
</sec>
</sec>
<sec sec-type="results" id="sec9">
<label>3</label>
<title>Results</title>
<sec id="sec10">
<label>3.1</label>
<title>EEG data analysis</title>
<p>A time-frequency response to French and English was obtained for the 29 participants who contributed at least 20 non-rejected epochs per condition. <xref ref-type="fig" rid="fig3">Figure 3A</xref> presents the group mean time-frequency maps for the two conditions at channel F4. Neural activation differences between French and English were assessed by submitting their time-frequency responses to permutation testing involving paired-samples t-tests (two-tailed). <xref ref-type="fig" rid="fig3">Figure 3B</xref> presents the P-map for this comparison, and <xref ref-type="fig" rid="fig3">Figure 3C</xref> highlights the time-frequency regions where differences take place at channel F4. <xref ref-type="supplementary-material" rid="SM1">Supplementary Figure S4</xref> presents the results for all channels.</p>
<p>A significant cluster revealing neural activation differences between French and English was found at channel F4, ranging from 4 to 5&#x2009;Hz [<italic>t</italic> (28)&#x2009;=&#x2009;862.17; <italic>p</italic>&#x2009;=&#x2009;0.02]. In the cluster region, neural responses exhibit higher activation for French (the native language) than for English (the rhythmically different unfamiliar language), mainly at 5&#x2009;Hz, during the first half of the sentences. The maximum effect size, partial eta-squared (&#x03B7;<sub>p</sub><sup>2</sup>), for this significant cluster in channel F4 is 0.9794. These results were obtained for a subset of participants (<italic>n</italic>&#x2009;=&#x2009;29) from the original publication (<italic>n</italic>&#x2009;=&#x2009;40) investigating neural oscillations at birth (<xref ref-type="bibr" rid="ref37">Ortiz-Barajas et al., 2023</xref>), and they therefore reveal the same finding: theta activity in the human newborn brain is sensitive to rhythmic differences across languages, as it successfully distinguishes English, a stress-timed language, from French, a syllable-timed language (<xref ref-type="bibr" rid="ref44">Ramus et al., 1999</xref>).</p>
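For reference, partial eta-squared for a t-test follows from the t-value and degrees of freedom via the standard conversion &#x03B7;<sub>p</sub><sup>2</sup>&#x2009;=&#x2009;t&#x00B2;/(t&#x00B2;&#x2009;+&#x2009;df); a minimal sketch (the t of 36.5 is an illustrative back-calculated value, not a quantity reported in the text):

```python
def partial_eta_squared(t, df):
    """Standard conversion for t-tests: eta_p^2 = t^2 / (t^2 + df)."""
    return t**2 / (t**2 + df)

# With df = 28 (n = 29 participants), the critical t of 2.048 corresponds to
# a modest effect, while the maximum reported eta_p^2 of 0.9794 would require
# a per-point t of roughly 36.5 (illustrative back-calculation).
print(round(partial_eta_squared(36.5, 28), 4))   # 0.9794
```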
<p>The language discrimination measure, defined as the difference in neural activation between French and English, ranged from &#x2212;0.422 to 0.896 (mean&#x2009;=&#x2009;0.204, SD&#x2009;=&#x2009;0.298). <xref ref-type="supplementary-material" rid="SM1">Supplementary Table S3</xref> presents the language discrimination measure (Discrimination_Theta_F4_0m) for the 29 included participants.</p>
</sec>
<sec id="sec11">
<label>3.2</label>
<title>Predicting language outcome</title>
<p>Measures of language development were obtained by collecting information about children&#x2019;s receptive and expressive vocabulary at 12 and 18&#x2009;months. <xref ref-type="supplementary-material" rid="SM1">Supplementary Table S3</xref> presents the vocabulary measures (words understood, words produced) for the 29 included participants.</p>
<p>A path analysis was conducted to evaluate the predictive relationship between language discrimination at birth and language skills at 12 and 18&#x2009;months. Additionally, the predictive relationship between language measures at 12 and 18&#x2009;months was also evaluated to assess infants&#x2019; vocabulary growth. <xref ref-type="table" rid="tab1">Table 1</xref> presents the results of the linear regression models, and <xref ref-type="fig" rid="fig4">Figure 4</xref> depicts the standardized estimates of the path coefficients.</p>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Regression models assessing the prediction of language skills at 12 and 18&#x2009;months.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="top"><bold>Model</bold></th>
<th align="center" valign="top"><bold>Dependent variable</bold></th>
<th align="center" valign="top"><bold><italic>R</italic></bold></th>
<th align="center" valign="top"><bold><italic>R</italic> square</bold></th>
<th align="center" valign="top"><bold>df</bold></th>
<th align="center" valign="top"><bold><italic>F</italic></bold></th>
<th align="center" valign="top"><bold>Sig</bold></th>
<th align="center" valign="top"><bold>Independent variable</bold></th>
<th align="center" valign="top"><bold>Beta</bold></th>
<th align="center" valign="top"><bold>Sig</bold></th>
<th align="center" valign="top"><bold>Sample size</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="top">1</td>
<td align="center" valign="top">Comprehension_12m</td>
<td align="center" valign="top">0.484</td>
<td align="center" valign="top">0.234</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">6.110</td>
<td align="center" valign="top"><bold>0.023&#x002A;</bold></td>
<td align="center" valign="middle">Discrimination_0m</td>
<td align="center" valign="middle"><bold>0.484&#x002A;</bold></td>
<td align="center" valign="middle"><bold>0.023&#x002A;</bold></td>
<td align="center" valign="middle">22</td>
</tr>
<tr>
<td align="center" valign="top">2</td>
<td align="center" valign="top">Production_12m</td>
<td align="center" valign="top">0.147</td>
<td align="center" valign="top">0.022</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">0.444</td>
<td align="center" valign="top">0.513</td>
<td align="center" valign="middle">Discrimination_0m</td>
<td align="center" valign="middle">0.147</td>
<td align="center" valign="middle">0.513</td>
<td align="center" valign="middle">22</td>
</tr>
<tr>
<td align="center" valign="top">3</td>
<td align="center" valign="top">Comprehension_18m</td>
<td align="center" valign="top">0.408</td>
<td align="center" valign="top">0.167</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">4.996</td>
<td align="center" valign="top">0.035</td>
<td align="center" valign="middle">Discrimination_0m</td>
<td align="center" valign="middle">0.408</td>
<td align="center" valign="middle">0.035</td>
<td align="center" valign="middle">27</td>
</tr>
<tr>
<td align="center" valign="top">4</td>
<td align="center" valign="top">Production_18m</td>
<td align="center" valign="top">0.303</td>
<td align="center" valign="top">0.092</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">2.534</td>
<td align="center" valign="top">0.124</td>
<td align="center" valign="middle">Discrimination_0m</td>
<td align="center" valign="middle">0.303</td>
<td align="center" valign="middle">0.124</td>
<td align="center" valign="middle">27</td>
</tr>
<tr>
<td align="center" valign="top">5</td>
<td align="center" valign="top">Comprehension_18m</td>
<td align="center" valign="top">0.725</td>
<td align="center" valign="top">0.525</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">9.398</td>
<td align="center" valign="top"><bold>0.002&#x002A;</bold></td>
<td align="center" valign="middle">Comprehension_12m; Production_12m</td>
<td align="center" valign="middle"><bold>0.710</bold>; <break/>0.033</td>
<td align="center" valign="middle"><bold>0.001&#x002A;</bold>; <break/>0.859</td>
<td align="center" valign="middle">20</td>
</tr>
<tr>
<td align="center" valign="top">6</td>
<td align="center" valign="top">Production_18m</td>
<td align="center" valign="top">0.741</td>
<td align="center" valign="top">0.549</td>
<td align="center" valign="top">2</td>
<td align="center" valign="top">10.366</td>
<td align="center" valign="top"><bold>0.001&#x002A;</bold></td>
<td align="center" valign="middle">Comprehension_12m; Production_12m</td>
<td align="center" valign="middle">0.382;<break/> <bold>0.491</bold></td>
<td align="center" valign="middle">0.049; <break/><bold>0.015&#x002A;</bold></td>
<td align="center" valign="middle">20</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Models 1 to 4 use language discrimination at birth as predictor, while models 5 and 6 explore the predictive relationship between vocabulary measures at 12 and 18&#x2009;months.</p>
<p>The alpha value for these tests is <italic>&#x03B1;</italic> =&#x2009;0.025.</p>
</table-wrap-foot>
</table-wrap>
<p>When evaluating the predictive role of language discrimination at birth, a significant path coefficient was found for language comprehension at 12&#x2009;months (Beta&#x2009;=&#x2009;0.484, <italic>p</italic>&#x2009;=&#x2009;0.023, model 1). This significant linear relationship is illustrated in <xref ref-type="fig" rid="fig5">Figure 5A</xref>. In contrast, language discrimination at birth did not predict production skills at 12&#x2009;months (Beta&#x2009;=&#x2009;0.147; <italic>p</italic>&#x2009;=&#x2009;0.513, model 2), language comprehension at 18&#x2009;months (Beta&#x2009;=&#x2009;0.408; <italic>p</italic>&#x2009;=&#x2009;0.035, model 3), or language production at 18&#x2009;months (Beta&#x2009;=&#x2009;0.303; <italic>p</italic>&#x2009;=&#x2009;0.124, model 4). <xref ref-type="fig" rid="fig5">Figures 5B</xref>&#x2013;<xref ref-type="fig" rid="fig5">D</xref> illustrate the non-significant linear regressions evaluated for language production at 12&#x2009;months, and for language comprehension and production at 18&#x2009;months, respectively. When testing for outliers, the model assessing the prediction of production skills at 18&#x2009;months (model 4) exhibited heteroskedasticity according to the Breusch-Pagan test (Chi-Square&#x2009;=&#x2009;4.924, <italic>p</italic>&#x2009;=&#x2009;0.026). Additionally, 3 influential cases were identified in the models assessing the prediction of language skills at 18&#x2009;months (models 3 and 4), as they had leverage values greater than twice the average (leverage values&#x2009;=&#x2009;0.21, 0.16, and 0.19; average value&#x2009;=&#x2009;0.07). <xref ref-type="supplementary-material" rid="SM1">Supplementary Table S3</xref> highlights the influential cases in red, and <xref ref-type="fig" rid="fig5">Figures 5C</xref>,<xref ref-type="fig" rid="fig5">D</xref> identify them with blue circles. 
As post-hoc analyses, the 3 influential cases were removed and regression models were re-calculated (<xref ref-type="table" rid="tab2">Table 2</xref>, models 3&#x2032; and 4&#x2032;). Language discrimination at birth was found to significantly predict language comprehension (Beta&#x2009;=&#x2009;0.491; <italic>p</italic>&#x2009;=&#x2009;0.015, model 3&#x2032;) and language production (Beta&#x2009;=&#x2009;0.482; <italic>p</italic>&#x2009;=&#x2009;0.017, model 4&#x2032;) at 18&#x2009;months, after the 3 influential cases were removed. <xref ref-type="fig" rid="fig5">Figures 5E</xref>,<xref ref-type="fig" rid="fig5">F</xref> illustrate how these linear regressions, excluding the influential cases, predict language skills at 18&#x2009;months (models 3&#x2032; and 4&#x2032;).</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Linear regression between language discrimination at birth and: <bold>(A)</bold> language comprehension at 12&#x2009;months, <bold>(B)</bold> language production at 12&#x2009;months, <bold>(C)</bold> language comprehension at 18&#x2009;months, and <bold>(D)</bold> language production at 18&#x2009;months. Panels <bold>(E)</bold> and <bold>(F)</bold> illustrate the linear regressions from <bold>(C)</bold> and <bold>(D)</bold> respectively, after removing the outliers highlighted with blue circles.</p>
</caption>
<graphic xlink:href="fnhum-18-1370572-g005.tif"/>
</fig>
<table-wrap position="float" id="tab2">
<label>Table 2</label>
<caption>
<p><italic>Post-hoc</italic> regression models.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="top"><bold>Model</bold></th>
<th align="center" valign="top"><bold>Dependent variable</bold></th>
<th align="center" valign="top"><bold><italic>R</italic></bold></th>
<th align="center" valign="top"><bold><italic>R</italic> square</bold></th>
<th align="center" valign="top"><bold>df</bold></th>
<th align="center" valign="top"><bold><italic>F</italic></bold></th>
<th align="center" valign="top"><bold>Sig</bold></th>
<th align="center" valign="top"><bold>Independent variable</bold></th>
<th align="center" valign="top"><bold>Beta</bold></th>
<th align="center" valign="top"><bold>Sig</bold></th>
<th align="center" valign="top"><bold>Sample size</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="top">3&#x2032;</td>
<td align="center" valign="top">Comprehension_18m</td>
<td align="center" valign="top">0.491</td>
<td align="center" valign="top">0.241</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">6.988</td>
<td align="center" valign="top"><bold>0.015&#x002A;</bold></td>
<td align="center" valign="middle">Discrimination_0m</td>
<td align="center" valign="middle"><bold>0.491</bold></td>
<td align="center" valign="middle"><bold>0.015&#x002A;</bold></td>
<td align="center" valign="middle">24</td>
</tr>
<tr>
<td align="center" valign="top">4&#x2032;</td>
<td align="center" valign="top">Production_18m</td>
<td align="center" valign="top">0.482</td>
<td align="center" valign="top">0.233</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">6.676</td>
<td align="center" valign="top"><bold>0.017&#x002A;</bold></td>
<td align="center" valign="middle">Discrimination_0m</td>
<td align="center" valign="middle"><bold>0.482</bold></td>
<td align="center" valign="middle"><bold>0.017&#x002A;</bold></td>
<td align="center" valign="middle">24</td>
</tr>
<tr>
<td align="center" valign="top">5&#x2032;</td>
<td align="center" valign="top">Comprehension_18m</td>
<td align="center" valign="top">0.724</td>
<td align="center" valign="top">0.524</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">19.830</td>
<td align="center" valign="top"><bold>&#x003C;0.001&#x002A;</bold></td>
<td align="center" valign="middle">Comprehension_12m</td>
<td align="center" valign="middle"><bold>0.724</bold></td>
<td align="center" valign="middle"><bold>&#x003C;0.001&#x002A;</bold></td>
<td align="center" valign="middle">20</td>
</tr>
<tr>
<td align="center" valign="top">6&#x2032;</td>
<td align="center" valign="top">Production_18m</td>
<td align="center" valign="top">0.656</td>
<td align="center" valign="top">0.431</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">13.622</td>
<td align="center" valign="top"><bold>0.002&#x002A;</bold></td>
<td align="center" valign="middle">Production_12m</td>
<td align="center" valign="middle"><bold>0.656</bold></td>
<td align="center" valign="middle"><bold>0.002&#x002A;</bold></td>
<td align="center" valign="middle">20</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Models 3&#x2032; and 4&#x2032; represent post-hoc linear regressions of models 3 and 4 (<xref ref-type="table" rid="tab1">Table 1</xref>) after removing 3 influential cases.</p>
<p>Model 5&#x2032; and 6&#x2032; represent post-hoc linear regressions of models 5 and 6 (<xref ref-type="table" rid="tab1">Table 1</xref>) after removing the non-significant predictors.</p>
<p>The alpha value for these tests is <italic>&#x03B1;</italic>&#x2009;=&#x2009;0.025.</p>
</table-wrap-foot>
</table-wrap>
<p>To assess whether language abilities at 12&#x2009;months are representative of the developmental path that language acquisition will follow, I assessed the predictive relationship between vocabulary measures at 12 and 18&#x2009;months. The results show that language comprehension at 18&#x2009;months is significantly predicted by language comprehension at 12&#x2009;months (Beta&#x2009;=&#x2009;0.710; <italic>p</italic>&#x2009;=&#x2009;0.001, model 5), but not by language production at 12&#x2009;months (Beta&#x2009;=&#x2009;0.033; <italic>p</italic>&#x2009;=&#x2009;0.859, model 5). Similarly, language production at 18&#x2009;months is significantly predicted by language production at 12&#x2009;months (Beta&#x2009;=&#x2009;0.491; <italic>p</italic>&#x2009;=&#x2009;0.015, model 6), but not by language comprehension at 12&#x2009;months (Beta&#x2009;=&#x2009;0.382; <italic>p</italic>&#x2009;=&#x2009;0.049, model 6). <xref ref-type="table" rid="tab2">Table 2</xref> presents post-hoc regression analyses (models 5&#x2032; and 6&#x2032;) removing the non-significant predictors from models 5 and 6 (<xref ref-type="table" rid="tab1">Table 1</xref>). <xref ref-type="fig" rid="fig6">Figures 6A</xref>,<xref ref-type="fig" rid="fig6">B</xref> illustrate the significant linear relationship between vocabulary measures at 12 and 18&#x2009;months. Furthermore, <xref ref-type="fig" rid="fig6">Figures 6C</xref>,<xref ref-type="fig" rid="fig6">D</xref> illustrate the developmental trajectories for word comprehension and word production respectively, exhibiting a vocabulary growth that is consistent across participants. These results confirm that CDI questionnaires provided reliable measures of language growth in this sample.</p>
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p>Panels <bold>(A)</bold> and <bold>(B)</bold> illustrate the linear relationship between vocabulary measures at 12 and 18&#x2009;months, while <bold>(C)</bold> and <bold>(D)</bold> illustrate the trajectory of vocabulary growth from 12 to 18&#x2009;months at the participant level.</p>
</caption>
<graphic xlink:href="fnhum-18-1370572-g006.tif"/>
</fig>
<p>When evaluating the relationship between language comprehension and production within each age, a significant positive correlation was observed between the two measures at 18&#x2009;months (<italic>r</italic>&#x2009;=&#x2009;0.497, <italic>p</italic>&#x2009;=&#x2009;0.008, <italic>n</italic>&#x2009;=&#x2009;27), but not at 12&#x2009;months (<italic>r</italic>&#x2009;=&#x2009;0.310, <italic>p</italic>&#x2009;=&#x2009;0.160, <italic>n</italic>&#x2009;=&#x2009;22). These non-predictive relationships are depicted in <xref ref-type="fig" rid="fig4">Figure 4</xref> with double-sided arrows.</p>
</sec>
</sec>
<sec sec-type="discussion" id="sec12">
<label>4</label>
<title>Discussion</title>
<p>The current study investigated whether a neural measure of language discrimination at birth, defined as the difference in neural activation found when processing the prenatally heard language (French) and a rhythmically different unfamiliar language (English), could be used as a predictor of language outcome. Results revealed that differences in theta activity at birth, claimed to reflect rhythmic discrimination of French and English, predict language comprehension at 12&#x2009;months. Furthermore, post-hoc analyses after removing three outliers from the vocabulary data at 18&#x2009;months revealed that language discrimination at birth also predicts language comprehension and production at 18&#x2009;months. These findings suggest that the ability to recognize the native language and discriminate it from a rhythmically different unfamiliar language at birth can predict later language development.</p>
<p>When newborns discriminate their native language from a rhythmically different unfamiliar language, they perform two tasks: (1) they discriminate the acoustic features that differentiate the two rhythmic classes, and (2) they recognize the features of their native language (heard <italic>in utero</italic>). Therefore, a language discrimination task involving the native language is different from a discrimination task involving two unfamiliar languages (<xref ref-type="bibr" rid="ref1">Byers-Heinlein et al., 2010</xref>). Here, newborns discriminated their native language (French) from a rhythmically different unfamiliar language (English). This discrimination was reflected by activation differences in the theta band such that, at the group level, higher theta activation was exhibited for French than for English. Such activation differences could have originated from different activation profiles: (i) activation for French and no activation for English, (ii) no activation for French and suppression for English, or (iii) activation for both French and English, with higher activation for French. Findings from my previous study investigating neural oscillations during speech processing at birth, using a superset of the current dataset (<xref ref-type="bibr" rid="ref37">Ortiz-Barajas et al., 2023</xref>), revealed that theta activity during French and English processing was higher than at rest, pointing in the direction of situation (iii), where activation for both languages takes place and differences originate from higher activation to French. 
This supports the hypothesis that the modulation of theta activity might be one way for the newborn brain to encode speech rhythm (regardless of language familiarity), aiding in the discrimination of rhythmically different languages, and the recognition of the native language (<xref ref-type="bibr" rid="ref36">Ortiz Barajas et al., 2021</xref>; <xref ref-type="bibr" rid="ref35">Ortiz Barajas and Gervain, 2021</xref>; <xref ref-type="bibr" rid="ref37">Ortiz-Barajas et al., 2023</xref>).</p>
<p>Furthermore, theta activity in the newborn brain has also been found to exhibit increased long-range temporal correlations after stimulation with the prenatally heard language, indicating the early emergence of brain specialization for the native language (<xref ref-type="bibr" rid="ref24">Mariani et al., 2023</xref>). If stronger theta activation for French (as compared to English) reflects brain specialization for the native language, a discrimination measure reflecting this activation difference should predict infants&#x2019; later language abilities. Results from the current study revealed that larger discrimination measures at birth predict higher vocabulary measures at 12 and 18&#x2009;months, while lower discrimination measures predict lower later language skills. These findings suggest that language discrimination at birth represents an early measure of neural commitment to the native language that predicts its later developmental trajectory. Theta activity has been argued to support the processing of syllabic units in adults (<xref ref-type="bibr" rid="ref8">Ghitza and Greenberg, 2009</xref>). Findings from infant studies point in the same direction, as theta activity has been found to underlie language discrimination (<xref ref-type="bibr" rid="ref33">Nacar Garcia et al., 2018</xref>; <xref ref-type="bibr" rid="ref37">Ortiz-Barajas et al., 2023</xref>), suggesting that it might encode speech rhythm. Additionally, theta activity in the infant brain has also been found to synchronize to the speech envelope (<xref ref-type="bibr" rid="ref36">Ortiz Barajas et al., 2021</xref>), and the speech envelope carries rhythm (<xref ref-type="bibr" rid="ref46">Rosen, 1992</xref>). 
Since both the speech envelope and rhythm correlate with syllabic rate (<xref ref-type="bibr" rid="ref60">Varnet et al., 2017</xref>; <xref ref-type="bibr" rid="ref62">Zhang et al., 2023</xref>), it is reasonable to suggest that theta activity might encode syllabic units, and rhythm, by extracting relevant features from the speech envelope already at birth. If this is the case, the predictive power of the language discrimination measure at birth could be due to theta activity favoring the encoding of syllables in French (a syllable-timed language), which in turn would favor later word learning. This claim is supported by previous studies showing that tracking of stressed syllables at 10&#x2009;months (<xref ref-type="bibr" rid="ref27">Menn et al., 2022</xref>) and learning of disyllabic words at birth (<xref ref-type="bibr" rid="ref54">Suppanen et al., 2022</xref>) predict language abilities at 2&#x2009;years. Taken together, these results suggest that syllable encoding supports word segmentation and word learning, which in turn support language development. Newborns have been shown to have a universal sensitivity to syllables (<xref ref-type="bibr" rid="ref47">Sansavini et al., 1997</xref>; <xref ref-type="bibr" rid="ref35">Ortiz Barajas and Gervain, 2021</xref>); however, it cannot be established whether the larger theta activity observed here in prenatally French-exposed newborns reflects good encoding of syllabic units due to this inherent (universal) ability, or whether prenatal experience with French (a syllable-timed language) has strengthened this sensitivity. Future research testing the same stimuli with prenatally English-exposed newborns (English being a stress-timed language) will shed light on the role of theta activity in syllable encoding at birth.</p>
<p>When exploring the predictive role of language discrimination at birth on later language skills, a significant linear relationship was found with language comprehension at 12&#x2009;months, as well as with language comprehension and production at 18&#x2009;months (after removing outliers). These results (<xref ref-type="fig" rid="fig4">Figure 4</xref>) depict a language trajectory that is coherent and consistent along development: language scores at any given age predict language scores at a subsequent age. However, one exception was found for language production at 12&#x2009;months, which was not predicted by language discrimination at birth. This could be because at 12&#x2009;months language production is at its very beginning (<xref ref-type="fig" rid="fig2">Figure 2</xref>) and individual variability is low (<xref ref-type="fig" rid="fig5">Figures 5B</xref>, <xref ref-type="fig" rid="fig6">6D</xref>). This suggests that 12&#x2009;months is too early to measure language production as a descriptor of each individual&#x2019;s language developmental trajectory. This is supported by the fact that language production at 12&#x2009;months is not correlated with language comprehension at the same age, which, by contrast, does describe the language trajectory of participants. However, language production undergoes an accelerated growth around 18&#x2009;months (the vocabulary spurt) (<xref ref-type="bibr" rid="ref17">Kuhl, 2007</xref>) and becomes a better indicator of the language trajectory, as it correlates with language comprehension at the same age and can be predicted by language discrimination at birth.</p>
<p>In summary, the current study revealed a predictive relationship between a measure of theta activity during language discrimination at birth and later language outcome that merits further exploration and confirmation in future studies. These results point toward a developmental scenario in accordance with theoretical predictions as well as empirical findings: prenatal experience with speech mainly consists of language prosody, as maternal tissues filter out the higher frequencies but preserve the low-frequency components that carry prosody (<xref ref-type="bibr" rid="ref40">Pujol et al., 1991</xref>). Having experience with the prosody of their mother&#x2019;s language allows newborns to identify it and discriminate it from other rhythmically different languages at birth. Low-frequency neural activity (delta and theta) has been found to support speech processing at birth and to reflect rhythmic language discrimination, suggesting that it reflects the processing of prosody (<xref ref-type="bibr" rid="ref37">Ortiz-Barajas et al., 2023</xref>). Considering the relevance of low-frequency neural activity in speech processing at birth, as well as in adulthood (<xref ref-type="bibr" rid="ref9">Giraud and Poeppel, 2012</xref>; <xref ref-type="bibr" rid="ref28">Meyer, 2018</xref>), it is reasonable to hypothesize that it has a central role in language acquisition, as it not only describes speech processing at the time of measurement but also seems to describe the language developmental trajectory a child might follow.</p>
</sec>
<sec sec-type="data-availability" id="sec13">
<title>Data availability statement</title>
<p>The processed EEG data that support the findings of this study have been deposited in the OSF repository <ext-link xlink:href="https://osf.io/4w69p" ext-link-type="uri">https://osf.io/4w69p</ext-link>.</p>
</sec>
<sec sec-type="ethics-statement" id="sec14">
<title>Ethics statement</title>
<p>The studies involving humans were approved by the CER Paris Descartes ethics committee of the Paris Descartes University (currently, Universit&#x00E9; Paris Cit&#x00E9;). The studies were conducted in accordance with the local legislation and institutional requirements. Written informed consent for participation in this study was provided by the participants&#x2019; legal guardians/next of kin.</p>
</sec>
<sec sec-type="author-contributions" id="sec15">
<title>Author contributions</title>
<p>MCO-B: Investigation, Formal analysis, Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="sec16">
<title>Funding</title>
<p>The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work and publication have received the financial support of the AcAMEL project, financed by the Nouvelle Aquitaine Region and the CAPB (Communaut&#x00E9; d&#x2019;Agglom&#x00E9;ration du Pays Basque).</p>
</sec>
<ack>
<p>I would like to thank the Robert Debr&#x00E9; Hospital for providing access to the newborns, and all the families and their babies for their participation in this study. I sincerely acknowledge Lucie Martin and Anouche Banikyan for their help with infant testing; as well as Judit Gervain for valuable comments on the analysis and the manuscript.</p>
</ack>
<sec sec-type="COI-statement" id="sec17">
<title>Conflict of interest</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="sec18">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec sec-type="supplementary-material" id="sec19">
<title>Supplementary material</title>
<p>The Supplementary material for this article can be found online at: <ext-link xlink:href="https://www.frontiersin.org/articles/10.3389/fnhum.2024.1370572/full#supplementary-material" ext-link-type="uri">https://www.frontiersin.org/articles/10.3389/fnhum.2024.1370572/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Byers-Heinlein</surname> <given-names>K.</given-names></name> <name><surname>Burns</surname> <given-names>T. C.</given-names></name> <name><surname>Werker</surname> <given-names>J. F.</given-names></name></person-group> (<year>2010</year>). <article-title>The roots of bilingualism in newborns</article-title>. <source>Psychol. Sci.</source> <volume>21</volume>, <fpage>343</fpage>&#x2013;<lpage>348</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0956797609360758</pub-id>, PMID: <pub-id pub-id-type="pmid">20424066</pub-id></citation></ref>
<ref id="ref2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cantiani</surname> <given-names>C.</given-names></name> <name><surname>Riva</surname> <given-names>V.</given-names></name> <name><surname>Piazza</surname> <given-names>C.</given-names></name> <name><surname>Bettoni</surname> <given-names>R.</given-names></name> <name><surname>Molteni</surname> <given-names>M.</given-names></name> <name><surname>Choudhury</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Auditory discrimination predicts linguistic outcome in Italian infants with and without familial risk for language learning impairment</article-title>. <source>Dev. Cogn. Neurosci.</source> <volume>20</volume>, <fpage>23</fpage>&#x2013;<lpage>34</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.dcn.2016.03.002</pub-id>, PMID: <pub-id pub-id-type="pmid">27295127</pub-id></citation></ref>
<ref id="ref3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cristia</surname> <given-names>A.</given-names></name> <name><surname>Seidl</surname> <given-names>A.</given-names></name> <name><surname>Junge</surname> <given-names>C.</given-names></name> <name><surname>Soderstrom</surname> <given-names>M.</given-names></name> <name><surname>Hagoort</surname> <given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>Predicting individual variation in language from infant speech perception measures</article-title>. <source>Child Dev.</source> <volume>85</volume>, <fpage>1330</fpage>&#x2013;<lpage>1345</lpage>. doi: <pub-id pub-id-type="doi">10.1111/cdev.12193</pub-id>, PMID: <pub-id pub-id-type="pmid">24320112</pub-id></citation></ref>
<ref id="ref4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Csibra</surname> <given-names>G.</given-names></name> <name><surname>Davis</surname> <given-names>G.</given-names></name> <name><surname>Spratling</surname> <given-names>M. W.</given-names></name> <name><surname>Johnson</surname> <given-names>M. H.</given-names></name></person-group> (<year>2000</year>). <article-title>Gamma oscillations and object processing in the infant brain</article-title>. <source>Science</source> <volume>290</volume>, <fpage>1582</fpage>&#x2013;<lpage>1585</lpage>. doi: <pub-id pub-id-type="doi">10.1126/science.290.5496.1582</pub-id>, PMID: <pub-id pub-id-type="pmid">11090357</pub-id></citation></ref>
<ref id="ref5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Daikhin</surname> <given-names>L.</given-names></name> <name><surname>Raviv</surname> <given-names>O.</given-names></name> <name><surname>Ahissar</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Auditory stimulus processing and task learning are adequate in dyslexia, but benefits from regularities are reduced</article-title>. <source>J. Speech Lang. Hear. Res.</source> <volume>60</volume>, <fpage>471</fpage>&#x2013;<lpage>479</lpage>. doi: <pub-id pub-id-type="doi">10.1044/2016_JSLHR-H-16-0114</pub-id>, PMID: <pub-id pub-id-type="pmid">28114605</pub-id></citation></ref>
<ref id="ref6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ding</surname> <given-names>N.</given-names></name> <name><surname>Patel</surname> <given-names>A. D.</given-names></name> <name><surname>Chen</surname> <given-names>L.</given-names></name> <name><surname>Butler</surname> <given-names>H.</given-names></name> <name><surname>Luo</surname> <given-names>C.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2017</year>). <article-title>Temporal modulations in speech and music</article-title>. <source>Neurosci. Biobehav. Rev.</source> <volume>81</volume>, <fpage>181</fpage>&#x2013;<lpage>187</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neubiorev.2017.02.011</pub-id></citation></ref>
<ref id="ref7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Floccia</surname> <given-names>C.</given-names></name> <name><surname>Sambrook</surname> <given-names>T. D.</given-names></name> <name><surname>Delle Luche</surname> <given-names>C.</given-names></name> <name><surname>Kwok</surname> <given-names>R.</given-names></name> <name><surname>Goslin</surname> <given-names>J.</given-names></name> <name><surname>White</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>I: INTRODUCTION</article-title>. <source>Monogr. Soc. Res. Child Dev.</source> <volume>83</volume>, <fpage>7</fpage>&#x2013;<lpage>29</lpage>. doi: <pub-id pub-id-type="doi">10.1111/mono.12348</pub-id></citation></ref>
<ref id="ref8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ghitza</surname> <given-names>O.</given-names></name> <name><surname>Greenberg</surname> <given-names>S.</given-names></name></person-group> (<year>2009</year>). <article-title>On the possible role of brain rhythms in speech perception: intelligibility of time-compressed speech with periodic and aperiodic insertions of silence</article-title>. <source>Phonetica</source> <volume>66</volume>:<fpage>113</fpage>. doi: <pub-id pub-id-type="doi">10.1159/000208934</pub-id></citation></ref>
<ref id="ref9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Giraud</surname> <given-names>A.-L.</given-names></name> <name><surname>Poeppel</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Cortical oscillations and speech processing: emerging computational principles and operations</article-title>. <source>Nat. Neurosci.</source> <volume>15</volume>, <fpage>511</fpage>&#x2013;<lpage>517</lpage>. doi: <pub-id pub-id-type="doi">10.1038/nn.3063</pub-id>, PMID: <pub-id pub-id-type="pmid">22426255</pub-id></citation></ref>
<ref id="ref10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gross</surname> <given-names>J.</given-names></name> <name><surname>Hoogenboom</surname> <given-names>N.</given-names></name> <name><surname>Thut</surname> <given-names>G.</given-names></name> <name><surname>Schyns</surname> <given-names>P.</given-names></name> <name><surname>Panzeri</surname> <given-names>S.</given-names></name> <name><surname>Belin</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Speech rhythms and multiplexed oscillatory sensory coding in the human brain</article-title>. <source>PLoS Biol.</source> <volume>11</volume>:<fpage>e1001752</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pbio.1001752</pub-id>, PMID: <pub-id pub-id-type="pmid">24391472</pub-id></citation></ref>
<ref id="ref11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Howlin</surname> <given-names>P.</given-names></name></person-group> (<year>2003</year>). <article-title>Outcome in high-functioning adults with autism with and without early language delays: implications for the differentiation between autism and Asperger syndrome</article-title>. <source>J. Autism Dev. Disord.</source> <volume>33</volume>, <fpage>3</fpage>&#x2013;<lpage>13</lpage>. doi: <pub-id pub-id-type="doi">10.1023/A:1022270118899</pub-id>, PMID: <pub-id pub-id-type="pmid">12708575</pub-id></citation></ref>
<ref id="ref12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Junge</surname> <given-names>C.</given-names></name> <name><surname>Kooijman</surname> <given-names>V.</given-names></name> <name><surname>Hagoort</surname> <given-names>P.</given-names></name> <name><surname>Cutler</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Rapid recognition at 10 months as a predictor of language development</article-title>. <source>Dev. Sci.</source> <volume>15</volume>, <fpage>463</fpage>&#x2013;<lpage>473</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.1467-7687.2012.1144.x</pub-id>, PMID: <pub-id pub-id-type="pmid">22709396</pub-id></citation></ref>
<ref id="ref13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kern</surname> <given-names>S.</given-names></name></person-group> (<year>2007</year>). <article-title>Lexicon development in French-speaking infants</article-title>. <source>First Lang.</source> <volume>27</volume>, <fpage>227</fpage>&#x2013;<lpage>250</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0142723706075789</pub-id></citation></ref>
<ref id="ref14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kooijman</surname> <given-names>V.</given-names></name> <name><surname>Junge</surname> <given-names>C.</given-names></name> <name><surname>Johnson</surname> <given-names>E. K.</given-names></name> <name><surname>Hagoort</surname> <given-names>P.</given-names></name> <name><surname>Cutler</surname> <given-names>A.</given-names></name></person-group> (<year>2013</year>). <article-title>Predictive brain signals of linguistic development</article-title>. <source>Front. Psychol.</source> <volume>4</volume>, <fpage>1</fpage>&#x2013;<lpage>13</lpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2013.00025</pub-id>, PMID: <pub-id pub-id-type="pmid">23404161</pub-id></citation></ref>
<ref id="ref15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhl</surname> <given-names>P. K.</given-names></name></person-group> (<year>2000</year>). <article-title>A new view of language acquisition</article-title>. <source>Proc. Natl. Acad. Sci.</source> <volume>97</volume>, <fpage>11850</fpage>&#x2013;<lpage>11857</lpage>. doi: <pub-id pub-id-type="doi">10.1073/pnas.97.22.11850</pub-id>, PMID: <pub-id pub-id-type="pmid">11050219</pub-id></citation></ref>
<ref id="ref16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhl</surname> <given-names>P. K.</given-names></name></person-group> (<year>2004</year>). <article-title>Early language acquisition: cracking the speech code</article-title>. <source>Nat. Rev. Neurosci.</source> <volume>5</volume>, <fpage>831</fpage>&#x2013;<lpage>843</lpage>. doi: <pub-id pub-id-type="doi">10.1038/nrn1533</pub-id>, PMID: <pub-id pub-id-type="pmid">15496861</pub-id></citation></ref>
<ref id="ref17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhl</surname> <given-names>P. K.</given-names></name></person-group> (<year>2007</year>). <article-title>Cracking the speech code: how infants learn language</article-title>. <source>Acoust. Sci. Technol.</source> <volume>28</volume>, <fpage>71</fpage>&#x2013;<lpage>83</lpage>. doi: <pub-id pub-id-type="doi">10.1250/ast.28.71</pub-id></citation></ref>
<ref id="ref18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhl</surname> <given-names>P. K.</given-names></name> <name><surname>Conboy</surname> <given-names>B. T.</given-names></name> <name><surname>Coffey-Corina</surname> <given-names>S.</given-names></name> <name><surname>Padden</surname> <given-names>D.</given-names></name> <name><surname>Rivera-Gaxiola</surname> <given-names>M.</given-names></name> <name><surname>Nelson</surname> <given-names>T.</given-names></name></person-group> (<year>2008</year>). <article-title>Phonetic learning as a pathway to language: new data and native language magnet theory expanded (NLM-e)</article-title>. <source>Phil. Trans. Royal Soci. B</source> <volume>363</volume>, <fpage>979</fpage>&#x2013;<lpage>1000</lpage>. doi: <pub-id pub-id-type="doi">10.1098/rstb.2007.2154</pub-id>, PMID: <pub-id pub-id-type="pmid">17846016</pub-id></citation></ref>
<ref id="ref19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuhl</surname> <given-names>P.</given-names></name> <name><surname>Conboy</surname> <given-names>B.</given-names></name> <name><surname>Padden</surname> <given-names>D.</given-names></name> <name><surname>Nelson</surname> <given-names>T.</given-names></name> <name><surname>Pruitt</surname> <given-names>J.</given-names></name></person-group> (<year>2005</year>). <article-title>Early speech perception and later language development: implications for the critical period</article-title>. <source>Lang. Learn. Dev.</source> <volume>1</volume>, <fpage>237</fpage>&#x2013;<lpage>264</lpage>. doi: <pub-id pub-id-type="doi">10.1207/s15473341lld0103&#x0026;4_2</pub-id></citation></ref>
<ref id="ref20"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Leonard</surname> <given-names>L. B.</given-names></name></person-group> (<year>2014</year>). <source>Children with specific language impairment</source>. <publisher-loc>Cambridge, Massachusetts, U.S.</publisher-loc>: <publisher-name>The MIT Press</publisher-name>.</citation></ref>
<ref id="ref21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lepp&#x00E4;nen</surname> <given-names>P. H. T.</given-names></name> <name><surname>H&#x00E4;m&#x00E4;l&#x00E4;inen</surname> <given-names>J. A.</given-names></name> <name><surname>Salminen</surname> <given-names>H. K.</given-names></name> <name><surname>Eklund</surname> <given-names>K. M.</given-names></name> <name><surname>Guttorm</surname> <given-names>T. K.</given-names></name> <name><surname>Lohvansuu</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2010</year>). <article-title>Newborn brain event-related potentials revealing atypical processing of sound frequency and the subsequent association with later literacy skills in children with familial dyslexia</article-title>. <source>Cortex</source> <volume>46</volume>, <fpage>1362</fpage>&#x2013;<lpage>1376</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cortex.2010.06.003</pub-id>, PMID: <pub-id pub-id-type="pmid">20656284</pub-id></citation></ref>
<ref id="ref22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lohvansuu</surname> <given-names>K.</given-names></name> <name><surname>H&#x00E4;m&#x00E4;l&#x00E4;inen</surname> <given-names>J. A.</given-names></name> <name><surname>Ervast</surname> <given-names>L.</given-names></name> <name><surname>Lyytinen</surname> <given-names>H.</given-names></name> <name><surname>Lepp&#x00E4;nen</surname> <given-names>P. H. T.</given-names></name></person-group> (<year>2018</year>). <article-title>Longitudinal interactions between brain and cognitive measures on reading development from 6 months to 14 years</article-title>. <source>Neuropsychologia</source> <volume>108</volume>, <fpage>6</fpage>&#x2013;<lpage>12</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2017.11.018</pub-id>, PMID: <pub-id pub-id-type="pmid">29157996</pub-id></citation></ref>
<ref id="ref23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lorusso</surname> <given-names>M. L.</given-names></name> <name><surname>Cantiani</surname> <given-names>C.</given-names></name> <name><surname>Molteni</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>Age, dyslexia subtype and comorbidity modulate rapid auditory processing in developmental dyslexia</article-title>. <source>Front. Hum. Neurosci.</source> <volume>8</volume>:<fpage>313</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnhum.2014.00313</pub-id>, PMID: <pub-id pub-id-type="pmid">24904356</pub-id></citation></ref>
<ref id="ref24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mariani</surname> <given-names>B.</given-names></name> <name><surname>Nicoletti</surname> <given-names>G.</given-names></name> <name><surname>Barzon</surname> <given-names>G.</given-names></name> <name><surname>Ortiz Barajas</surname> <given-names>M. C.</given-names></name> <name><surname>Shukla</surname> <given-names>M.</given-names></name> <name><surname>Guevara</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2023</year>). <article-title>Prenatal experience with language shapes the brain</article-title>. <source>Sci. Adv.</source> <volume>9</volume>:<fpage>eadj3524</fpage>. doi: <pub-id pub-id-type="doi">10.1126/sciadv.adj3524</pub-id>, PMID: <pub-id pub-id-type="pmid">37992161</pub-id></citation></ref>
<ref id="ref25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maris</surname> <given-names>E.</given-names></name> <name><surname>Oostenveld</surname> <given-names>R.</given-names></name></person-group> (<year>2007</year>). <article-title>Nonparametric statistical testing of EEG- and MEG-data</article-title>. <source>J. Neurosci. Methods</source> <volume>164</volume>, <fpage>177</fpage>&#x2013;<lpage>190</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.jneumeth.2007.03.024</pub-id>, PMID: <pub-id pub-id-type="pmid">17517438</pub-id></citation></ref>
<ref id="ref26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mehler</surname> <given-names>J.</given-names></name> <name><surname>Jusczyk</surname> <given-names>P.</given-names></name> <name><surname>Lambertz</surname> <given-names>G.</given-names></name> <name><surname>Halsted</surname> <given-names>N.</given-names></name> <name><surname>Bertoncini</surname> <given-names>J.</given-names></name> <name><surname>Amiel-Tison</surname> <given-names>C.</given-names></name></person-group> (<year>1988</year>). <article-title>A precursor of language acquisition in young infants</article-title>. <source>Cognition</source> <volume>29</volume>, <fpage>143</fpage>&#x2013;<lpage>178</lpage>. doi: <pub-id pub-id-type="doi">10.1016/0010-0277(88)90035-2</pub-id></citation></ref>
<ref id="ref27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Menn</surname> <given-names>K. H.</given-names></name> <name><surname>Ward</surname> <given-names>E. K.</given-names></name> <name><surname>Braukmann</surname> <given-names>R.</given-names></name> <name><surname>Van Den Boomen</surname> <given-names>C.</given-names></name> <name><surname>Buitelaar</surname> <given-names>J.</given-names></name> <name><surname>Hunnius</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Neural tracking in infancy predicts language development in children with and without family history of autism</article-title>. <source>Neurobiol. Lang.</source> <volume>3</volume>, <fpage>495</fpage>&#x2013;<lpage>514</lpage>. doi: <pub-id pub-id-type="doi">10.1162/nol_a_00074</pub-id>, PMID: <pub-id pub-id-type="pmid">37216063</pub-id></citation></ref>
<ref id="ref28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meyer</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>The neural oscillations of speech processing and language comprehension: state of the art and emerging mechanisms</article-title>. <source>Eur. J. Neurosci.</source> <volume>48</volume>, <fpage>2609</fpage>&#x2013;<lpage>2621</lpage>. doi: <pub-id pub-id-type="doi">10.1111/ejn.13748</pub-id>, PMID: <pub-id pub-id-type="pmid">29055058</pub-id></citation></ref>
<ref id="ref29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mittag</surname> <given-names>M.</given-names></name> <name><surname>Larson</surname> <given-names>E.</given-names></name> <name><surname>Clarke</surname> <given-names>M.</given-names></name> <name><surname>Taulu</surname> <given-names>S.</given-names></name> <name><surname>Kuhl</surname> <given-names>P. K.</given-names></name></person-group> (<year>2021</year>). <article-title>Auditory deficits in infants at risk for dyslexia during a linguistic sensitive period predict future language</article-title>. <source>NeuroImage Clin.</source> <volume>30</volume>:<fpage>102578</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.nicl.2021.102578</pub-id>, PMID: <pub-id pub-id-type="pmid">33581583</pub-id></citation></ref>
<ref id="ref30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Molfese</surname> <given-names>D. L.</given-names></name></person-group> (<year>2000</year>). <article-title>Predicting dyslexia at 8 years of age using neonatal brain responses</article-title>. <source>Brain Lang.</source> <volume>72</volume>, <fpage>238</fpage>&#x2013;<lpage>245</lpage>. doi: <pub-id pub-id-type="doi">10.1006/brln.2000.2287</pub-id>, PMID: <pub-id pub-id-type="pmid">10764519</pub-id></citation></ref>
<ref id="ref31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Molinaro</surname> <given-names>N.</given-names></name> <name><surname>Lizarazu</surname> <given-names>M.</given-names></name> <name><surname>Lallier</surname> <given-names>M.</given-names></name> <name><surname>Bourguignon</surname> <given-names>M.</given-names></name> <name><surname>Carreiras</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Out-of-synchrony speech entrainment in developmental dyslexia</article-title>. <source>Hum. Brain Mapp.</source> <volume>37</volume>, <fpage>2767</fpage>&#x2013;<lpage>2783</lpage>. doi: <pub-id pub-id-type="doi">10.1002/hbm.23206</pub-id>, PMID: <pub-id pub-id-type="pmid">27061643</pub-id></citation></ref>
<ref id="ref32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moon</surname> <given-names>C.</given-names></name> <name><surname>Cooper</surname> <given-names>R. P.</given-names></name> <name><surname>Fifer</surname> <given-names>W. P.</given-names></name></person-group> (<year>1993</year>). <article-title>Two-day-olds prefer their native language</article-title>. <source>Infant Behav. Dev.</source> <volume>16</volume>, <fpage>495</fpage>&#x2013;<lpage>500</lpage>. doi: <pub-id pub-id-type="doi">10.1016/0163-6383(93)80007-U</pub-id></citation></ref>
<ref id="ref33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nacar Garcia</surname> <given-names>L.</given-names></name> <name><surname>Guerrero-Mosquera</surname> <given-names>C.</given-names></name> <name><surname>Colomer</surname> <given-names>M.</given-names></name> <name><surname>Sebastian-Galles</surname> <given-names>N.</given-names></name></person-group> (<year>2018</year>). <article-title>Evoked and oscillatory EEG activity differentiates language discrimination in young monolingual and bilingual infants</article-title>. <source>Sci. Rep.</source> <volume>8</volume>:<fpage>2770</fpage>. doi: <pub-id pub-id-type="doi">10.1038/s41598-018-20824-0</pub-id>, PMID: <pub-id pub-id-type="pmid">29426859</pub-id></citation></ref>
<ref id="ref34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nazzi</surname> <given-names>T.</given-names></name> <name><surname>Bertoncini</surname> <given-names>J.</given-names></name> <name><surname>Mehler</surname> <given-names>J.</given-names></name></person-group> (<year>1998</year>). <article-title>Language discrimination by newborns: toward an understanding of the role of rhythm</article-title>. <source>J. Exp. Psychol. Hum. Percept. Perform.</source> <volume>24</volume>, <fpage>756</fpage>&#x2013;<lpage>766</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0096-1523.24.3.756</pub-id>, PMID: <pub-id pub-id-type="pmid">9627414</pub-id></citation></ref>
<ref id="ref35"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Ortiz Barajas</surname> <given-names>M. C.</given-names></name> <name><surname>Gervain</surname> <given-names>J.</given-names></name></person-group> (<year>2021</year>). &#x201C;<article-title>The role of prenatal experience and basic auditory mechanisms in the development of language</article-title>&#x201D; in <source>Minnesota Symposia on Child Psychology</source>. eds. <person-group person-group-type="editor"><name><surname>Sera</surname> <given-names>M. D.</given-names></name> <name><surname>Koenig</surname> <given-names>M.</given-names></name></person-group>. <edition>1st</edition> ed. (<publisher-loc>Hoboken, NJ, United States</publisher-loc>: <publisher-name>Wiley</publisher-name>), <fpage>88</fpage>&#x2013;<lpage>112</lpage>.</citation></ref>
<ref id="ref36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ortiz Barajas</surname> <given-names>M. C.</given-names></name> <name><surname>Guevara</surname> <given-names>R.</given-names></name> <name><surname>Gervain</surname> <given-names>J.</given-names></name></person-group> (<year>2021</year>). <article-title>The origins and development of speech envelope tracking during the first months of life</article-title>. <source>Dev. Cogn. Neurosci.</source> <volume>48</volume>:<fpage>100915</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.dcn.2021.100915</pub-id>, PMID: <pub-id pub-id-type="pmid">33515956</pub-id></citation></ref>
<ref id="ref37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ortiz-Barajas</surname> <given-names>M. C.</given-names></name> <name><surname>Guevara</surname> <given-names>R.</given-names></name> <name><surname>Gervain</surname> <given-names>J.</given-names></name></person-group> (<year>2023</year>). <article-title>Neural oscillations and speech processing at birth</article-title>. <source>iScience</source> <volume>26</volume>:<fpage>108187</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.isci.2023.108187</pub-id>, PMID: <pub-id pub-id-type="pmid">37965146</pub-id></citation></ref>
<ref id="ref38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Parise</surname> <given-names>E.</given-names></name> <name><surname>Csibra</surname> <given-names>G.</given-names></name></person-group> (<year>2013</year>). <article-title>Neural responses to multimodal ostensive signals in 5-month-old infants</article-title>. <source>PLoS One</source> <volume>8</volume>:<fpage>e72360</fpage>. doi: <pub-id pub-id-type="doi">10.1371/journal.pone.0072360</pub-id>, PMID: <pub-id pub-id-type="pmid">23977289</pub-id></citation></ref>
<ref id="ref39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pefkou</surname> <given-names>M.</given-names></name> <name><surname>Arnal</surname> <given-names>L. H.</given-names></name> <name><surname>Fontolan</surname> <given-names>L.</given-names></name> <name><surname>Giraud</surname> <given-names>A.-L.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x03B8;-band and &#x03B2;-band neural activity reflects independent syllable tracking and comprehension of time-compressed speech</article-title>. <source>J. Neurosci.</source> <volume>37</volume>, <fpage>7930</fpage>&#x2013;<lpage>7938</lpage>. doi: <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2882-16.2017</pub-id>, PMID: <pub-id pub-id-type="pmid">28729443</pub-id></citation></ref>
<ref id="ref40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pujol</surname> <given-names>R.</given-names></name> <name><surname>Lavigne-rebillard</surname> <given-names>M.</given-names></name> <name><surname>Uziel</surname> <given-names>A.</given-names></name></person-group> (<year>1991</year>). <article-title>Development of the human cochlea</article-title>. <source>Acta Otolaryngol.</source> <volume>111</volume>, <fpage>7</fpage>&#x2013;<lpage>13</lpage>. doi: <pub-id pub-id-type="doi">10.3109/00016489109128023</pub-id></citation></ref>
<ref id="ref41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramus</surname> <given-names>F.</given-names></name></person-group> (<year>2003</year>). <article-title>Theories of developmental dyslexia: insights from a multiple case study of dyslexic adults</article-title>. <source>Brain</source> <volume>126</volume>, <fpage>841</fpage>&#x2013;<lpage>865</lpage>. doi: <pub-id pub-id-type="doi">10.1093/brain/awg076</pub-id>, PMID: <pub-id pub-id-type="pmid">12615643</pub-id></citation></ref>
<ref id="ref42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramus</surname> <given-names>F.</given-names></name> <name><surname>Ahissar</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <article-title>Developmental dyslexia: the difficulties of interpreting poor performance, and the importance of normal performance</article-title>. <source>Cogn. Neuropsychol.</source> <volume>29</volume>, <fpage>104</fpage>&#x2013;<lpage>122</lpage>. doi: <pub-id pub-id-type="doi">10.1080/02643294.2012.677420</pub-id>, PMID: <pub-id pub-id-type="pmid">22559749</pub-id></citation></ref>
<ref id="ref43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramus</surname> <given-names>F.</given-names></name> <name><surname>Hauser</surname> <given-names>M. D.</given-names></name> <name><surname>Miller</surname> <given-names>C.</given-names></name> <name><surname>Morris</surname> <given-names>D.</given-names></name> <name><surname>Mehler</surname> <given-names>J.</given-names></name></person-group> (<year>2000</year>). <article-title>Language discrimination by human newborns and by cotton-top tamarin monkeys</article-title>. <source>Science</source> <volume>288</volume>, <fpage>349</fpage>&#x2013;<lpage>351</lpage>. doi: <pub-id pub-id-type="doi">10.1126/science.288.5464.349</pub-id>, PMID: <pub-id pub-id-type="pmid">10764650</pub-id></citation></ref>
<ref id="ref44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ramus</surname> <given-names>F.</given-names></name> <name><surname>Nespor</surname> <given-names>M.</given-names></name> <name><surname>Mehler</surname> <given-names>J.</given-names></name></person-group> (<year>1999</year>). <article-title>Correlates of linguistic rhythm in the speech signal</article-title>. <source>Cognition</source> <volume>73</volume>, <fpage>265</fpage>&#x2013;<lpage>292</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S0010-0277(99)00058-X</pub-id>, PMID: <pub-id pub-id-type="pmid">10585517</pub-id></citation></ref>
<ref id="ref45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rivera-Gaxiola</surname> <given-names>M.</given-names></name> <name><surname>Klarman</surname> <given-names>L.</given-names></name> <name><surname>Garcia-Sierra</surname> <given-names>A.</given-names></name> <name><surname>Kuhl</surname> <given-names>P. K.</given-names></name></person-group> (<year>2005</year>). <article-title>Neural patterns to speech and vocabulary growth in American infants</article-title>. <source>Neuroreport</source> <volume>16</volume>, <fpage>495</fpage>&#x2013;<lpage>498</lpage>. doi: <pub-id pub-id-type="doi">10.1097/00001756-200504040-00015</pub-id>, PMID: <pub-id pub-id-type="pmid">15770158</pub-id></citation></ref>
<ref id="ref46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rosen</surname> <given-names>S.</given-names></name></person-group> (<year>1992</year>). <article-title>Temporal information in speech: acoustic, auditory and linguistic aspects</article-title>. <source>Philos. Trans. R. Soc. Lond. B Biol. Sci.</source> <volume>336</volume>, <fpage>367</fpage>&#x2013;<lpage>373</lpage>, PMID: <pub-id pub-id-type="pmid">1354376</pub-id></citation></ref>
<ref id="ref47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sansavini</surname> <given-names>A.</given-names></name> <name><surname>Bertoncini</surname> <given-names>J.</given-names></name> <name><surname>Giovanelli</surname> <given-names>G.</given-names></name></person-group> (<year>1997</year>). <article-title>Newborns discriminate the rhythm of multisyllabic stressed words</article-title>. <source>Dev. Psychol.</source> <volume>33</volume>, <fpage>3</fpage>&#x2013;<lpage>11</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0012-1649.33.1.3</pub-id></citation></ref>
<ref id="ref48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schaadt</surname> <given-names>G.</given-names></name> <name><surname>M&#x00E4;nnel</surname> <given-names>C.</given-names></name> <name><surname>Van Der Meer</surname> <given-names>E.</given-names></name> <name><surname>Pannekamp</surname> <given-names>A.</given-names></name> <name><surname>Oberecker</surname> <given-names>R.</given-names></name> <name><surname>Friederici</surname> <given-names>A. D.</given-names></name></person-group> (<year>2015</year>). <article-title>Present and past: can writing abilities in school children be associated with their auditory discrimination capacities in infancy?</article-title> <source>Res. Dev. Disabil.</source> <volume>47</volume>, <fpage>318</fpage>&#x2013;<lpage>333</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.ridd.2015.10.002</pub-id>, PMID: <pub-id pub-id-type="pmid">26479824</pub-id></citation></ref>
<ref id="ref49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schulte-K&#x00F6;rne</surname> <given-names>G.</given-names></name> <name><surname>Bruder</surname> <given-names>J.</given-names></name></person-group> (<year>2010</year>). <article-title>Clinical neurophysiology of visual and auditory processing in dyslexia: a review</article-title>. <source>Clin. Neurophysiol.</source> <volume>121</volume>, <fpage>1794</fpage>&#x2013;<lpage>1809</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.clinph.2010.04.028</pub-id>, PMID: <pub-id pub-id-type="pmid">20570212</pub-id></citation></ref>
<ref id="ref50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shaywitz</surname> <given-names>S. E.</given-names></name></person-group> (<year>1998</year>). <article-title>Dyslexia</article-title>. <source>N. Engl. J. Med.</source> <volume>338</volume>, <fpage>307</fpage>&#x2013;<lpage>312</lpage>. doi: <pub-id pub-id-type="doi">10.1056/NEJM199801293380507</pub-id></citation></ref>
<ref id="ref52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Song</surname> <given-names>J.</given-names></name> <name><surname>Iverson</surname> <given-names>P.</given-names></name></person-group> (<year>2018</year>). <article-title>Listening effort during speech perception enhances auditory and lexical processing for non-native listeners and accents</article-title>. <source>Cognition</source> <volume>179</volume>, <fpage>163</fpage>&#x2013;<lpage>170</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cognition.2018.06.001</pub-id>, PMID: <pub-id pub-id-type="pmid">29957515</pub-id></citation></ref>
<ref id="ref53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stefanics</surname> <given-names>G.</given-names></name> <name><surname>H&#x00E1;den</surname> <given-names>G. P.</given-names></name> <name><surname>Sziller</surname> <given-names>I.</given-names></name> <name><surname>Bal&#x00E1;zs</surname> <given-names>L.</given-names></name> <name><surname>Beke</surname> <given-names>A.</given-names></name> <name><surname>Winkler</surname> <given-names>I.</given-names></name></person-group> (<year>2009</year>). <article-title>Newborn infants process pitch intervals</article-title>. <source>Clin. Neurophysiol.</source> <volume>120</volume>, <fpage>304</fpage>&#x2013;<lpage>308</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.clinph.2008.11.020</pub-id>, PMID: <pub-id pub-id-type="pmid">19131275</pub-id></citation></ref>
<ref id="ref54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Suppanen</surname> <given-names>E.</given-names></name> <name><surname>Winkler</surname> <given-names>I.</given-names></name> <name><surname>Kujala</surname> <given-names>T.</given-names></name> <name><surname>Ylinen</surname> <given-names>S.</given-names></name></person-group> (<year>2022</year>). <article-title>More efficient formation of longer-term representations for word forms at birth can be linked to better language skills at 2 years</article-title>. <source>Dev. Cogn. Neurosci.</source> <volume>55</volume>:<fpage>101113</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.dcn.2022.101113</pub-id>, PMID: <pub-id pub-id-type="pmid">35605476</pub-id></citation></ref>
<ref id="ref55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tomblin</surname> <given-names>J. B.</given-names></name> <name><surname>Records</surname> <given-names>N. L.</given-names></name> <name><surname>Buckwalter</surname> <given-names>P.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Smith</surname> <given-names>E.</given-names></name> <name><surname>O&#x2019;Brien</surname> <given-names>M.</given-names></name></person-group> (<year>1997</year>). <article-title>Prevalence of specific language impairment in kindergarten children</article-title>. <source>J. Speech Lang. Hear. Res.</source> <volume>40</volume>, <fpage>1245</fpage>&#x2013;<lpage>1260</lpage>. doi: <pub-id pub-id-type="doi">10.1044/jslhr.4006.1245</pub-id>, PMID: <pub-id pub-id-type="pmid">9430746</pub-id></citation></ref>
<ref id="ref56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>T&#x00F3;th</surname> <given-names>B.</given-names></name> <name><surname>Urb&#x00E1;n</surname> <given-names>G.</given-names></name> <name><surname>H&#x00E1;den</surname> <given-names>G. P.</given-names></name> <name><surname>M&#x00E1;rk</surname> <given-names>M.</given-names></name> <name><surname>T&#x00F6;r&#x00F6;k</surname> <given-names>M.</given-names></name> <name><surname>Stam</surname> <given-names>C. J.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Large-scale network organization of EEG functional connectivity in newborn infants</article-title>. <source>Hum. Brain Mapp.</source> <volume>38</volume>, <fpage>4019</fpage>&#x2013;<lpage>4033</lpage>. doi: <pub-id pub-id-type="doi">10.1002/hbm.23645</pub-id>, PMID: <pub-id pub-id-type="pmid">28488308</pub-id></citation></ref>
<ref id="ref57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsao</surname> <given-names>F.</given-names></name> <name><surname>Liu</surname> <given-names>H.</given-names></name> <name><surname>Kuhl</surname> <given-names>P. K.</given-names></name></person-group> (<year>2004</year>). <article-title>Speech perception in infancy predicts language development in the second year of life: a longitudinal study</article-title>. <source>Child Dev.</source> <volume>75</volume>, <fpage>1067</fpage>&#x2013;<lpage>1084</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.1467-8624.2004.00726.x</pub-id>, PMID: <pub-id pub-id-type="pmid">15260865</pub-id></citation></ref>
<ref id="ref58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Zuijen</surname> <given-names>T. L.</given-names></name> <name><surname>Plakas</surname> <given-names>A.</given-names></name> <name><surname>Maassen</surname> <given-names>B. A. M.</given-names></name> <name><surname>Maurits</surname> <given-names>N. M.</given-names></name> <name><surname>Van Der Leij</surname> <given-names>A.</given-names></name></person-group> (<year>2013</year>). <article-title>Infant ERPs separate children at risk of dyslexia who become good readers from those who become poor readers</article-title>. <source>Dev. Sci.</source> <volume>16</volume>, <fpage>554</fpage>&#x2013;<lpage>563</lpage>. doi: <pub-id pub-id-type="doi">10.1111/desc.12049</pub-id>, PMID: <pub-id pub-id-type="pmid">23786473</pub-id></citation></ref>
<ref id="ref59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vander Ghinst</surname> <given-names>M.</given-names></name> <name><surname>Bourguignon</surname> <given-names>M.</given-names></name> <name><surname>Op De Beeck</surname> <given-names>M.</given-names></name> <name><surname>Wens</surname> <given-names>V.</given-names></name> <name><surname>Marty</surname> <given-names>B.</given-names></name> <name><surname>Hassid</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Left superior temporal gyrus is coupled to attended speech in a cocktail-party auditory scene</article-title>. <source>J. Neurosci.</source> <volume>36</volume>, <fpage>1596</fpage>&#x2013;<lpage>1606</lpage>. doi: <pub-id pub-id-type="doi">10.1523/JNEUROSCI.1730-15.2016</pub-id>, PMID: <pub-id pub-id-type="pmid">26843641</pub-id></citation></ref>
<ref id="ref60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Varnet</surname> <given-names>L.</given-names></name> <name><surname>Ortiz-Barajas</surname> <given-names>M. C.</given-names></name> <name><surname>Guevara Erra</surname> <given-names>R.</given-names></name> <name><surname>Gervain</surname> <given-names>J.</given-names></name> <name><surname>Lorenzi</surname> <given-names>C.</given-names></name></person-group> (<year>2017</year>). <article-title>A cross-linguistic study of speech modulation spectra</article-title>. <source>J. Acoust. Soc. Am.</source> <volume>142</volume>, <fpage>1976</fpage>&#x2013;<lpage>1989</lpage>. doi: <pub-id pub-id-type="doi">10.1121/1.5006179</pub-id>, PMID: <pub-id pub-id-type="pmid">29092595</pub-id></citation></ref>
<ref id="ref61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vouloumanos</surname> <given-names>A.</given-names></name> <name><surname>Werker</surname> <given-names>J. F.</given-names></name></person-group> (<year>2007</year>). <article-title>Listening to language at birth: evidence for a bias for speech in neonates</article-title>. <source>Dev. Sci.</source> <volume>10</volume>, <fpage>159</fpage>&#x2013;<lpage>164</lpage>. doi: <pub-id pub-id-type="doi">10.1111/j.1467-7687.2007.00549.x</pub-id></citation></ref>
<ref id="ref62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Zou</surname> <given-names>J.</given-names></name> <name><surname>Ding</surname> <given-names>N.</given-names></name></person-group> (<year>2023</year>). <article-title>Acoustic correlates of the syllabic rhythm of speech: modulation spectrum or local features of the temporal envelope</article-title>. <source>Neurosci. Biobehav. Rev.</source> <volume>147</volume>:<fpage>105111</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neubiorev.2023.105111</pub-id>, PMID: <pub-id pub-id-type="pmid">36822385</pub-id></citation></ref>
<ref id="ref63"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zoefel</surname> <given-names>B.</given-names></name> <name><surname>VanRullen</surname> <given-names>R.</given-names></name></person-group> (<year>2016</year>). <article-title>EEG oscillations entrain their phase to high-level features of speech sound</article-title>. <source>NeuroImage</source> <volume>124</volume>, <fpage>16</fpage>&#x2013;<lpage>23</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuroimage.2015.08.054</pub-id>, PMID: <pub-id pub-id-type="pmid">26341026</pub-id></citation></ref>
</ref-list>
</back>
</article>