The Association between Music and Language in Children

Music and language are two complex systems that specifically characterize the human communication toolkit.

  • child
  • communication
  • infants
  • language development

1. Introduction

Due to their similarities, the music-language link has received particular attention in research on infants and preschoolers [1]. Indeed, music and language are two hierarchical systems [2] in which small, discrete units (e.g., phonemes and notes) are combined by specific rules into higher-order structures (e.g., words, sentences, and musical compositions). Both systems are characterized by sequences of melodic and rhythmic patterns, relying on melody and rhythm in music and on prosody in language [1]. According to François & Schön [3], both systems rely on common perceptual and cognitive processes, such as sound identification and categorization and memory storage and retrieval. Finally, both systems represent the most powerful forms of human communication, with the same stimuli being perceived as either language or music depending on the listener’s interpretation [4].
The acquisition of both language and music has been argued to lie in a general principle: learners extract statistical patterns and regularities in the sound environment [5]. The implicit perceptual mechanism responsible for the incidental acquisition of structure in one’s environment has been termed statistical learning, and it is thought to be present before birth and throughout life. It has a crucial role in the development of many abilities, including learning the sound structure of the language and the music of the culture. Brown [6] suggested that both speech and musical phrases are “melodorhythmic” structures in which melody and rhythm are derived from three sources: (1) the acoustic properties of the fundamental units; (2) the sequential arrangement of such units in each phrase; and (3) expressive phrasing mechanisms that modulate the basic acoustic properties of the phrase for expressive emphasis and intention.
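The statistical-learning principle described above can be made concrete. In artificial-language experiments of the kind this literature builds on, transitional probabilities between syllables are high within words and drop at word boundaries, and that dip is the cue learners are thought to exploit. Below is a minimal sketch of that computation; the three syllable "words" and the stream are invented for illustration and are not taken from any cited study:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in a stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# A continuous stream concatenated from three invented "words",
# each made of three two-letter syllables:
words = ["tupiro", "golabu", "bidaku"]
random.seed(0)
stream = []
for _ in range(300):
    word = random.choice(words)
    stream += [word[i:i + 2] for i in (0, 2, 4)]

tps = transitional_probabilities(stream)
print(tps[("tu", "pi")])                   # within a word: always 1.0
print(round(tps.get(("ro", "go"), 0), 2))  # across a word boundary: roughly 1/3
```

A learner (or model) that tracks these probabilities can posit word boundaries wherever the transitional probability dips, recovering the "words" from the unsegmented stream without any explicit teaching.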
Some theoretical approaches support the hypothesis that language and music had a common precursor [6][7], known as protolanguage. Protolanguage has been defined as a language without syntax; it refers to either a holophrastic or an arbitrarily concatenated language [6]. Specifically, according to Mithen [7], there are two main approaches to the evolution of language regarding the nature of protolanguage. The first approach, known as “compositional” and associated with the theories of Bickerton and Jackendoff, suggests that words came before grammar and that it is the evolution of syntax that differentiates the vocal communication system of Homo sapiens from all those that went before. The second, alternative approach, described by Wray and Arbib, suggests that pre-modern communication consisted of “holistic” phrases, each of which had a unique meaning and could not be broken down into meaningful constituent parts [7]. In line with the holistic approach, Mithen [7] argues that these phrases also made extensive use of variation in pitch, rhythm, and melody to communicate information, express emotion, and induce emotion in other individuals. As such, both language and music have a common origin in a communication system defined by Mithen [7] as “Hmmmmm” and by Brown [6] as “musilanguage”, which was holistic, manipulative, multi-modal, musical, and mimetic. The philosopher Rousseau [8] argued that ancestral humans would have used a “protomusilanguage” and would have communicated by singing [9]. According to Brandt et al. [4], language is a type of music in which referential speech is embedded in a musical structure. This aligns with Darwin’s hypothesis of similarities between language and music, according to which human infants and young children are born with musical capabilities, while language evolved from an early musical communicative system [10][11].
Interestingly, a form of communication in which music and language overlap is infant-directed (ID) speech, also known as “motherese” [12][13]. ID speech is characterized by a higher pitch, a slower tempo, the repetition of shorter sequences with sustained pauses, and amplified rhythmic and melodic patterns specific to one’s native language [12][14][15]. Parents’ use of ID speech, with its exaggerated prosodic properties, is important in facilitating speech perception and language acquisition. All these characteristics of ID speech support infants’ vocal development and modulate the interaction between parent and child [15][16][17][18]. Kotilahti et al. [19] demonstrated that newborns show largely overlapping brain activations in response to ID speech and instrumental music.
Growing evidence suggests that several aspects of language and music processing in humans involve similar change detection processes, from acoustic changes to the statistical structure of a sequence of sounds [2]. François and Schön [3] argued that the detection of a change, strongly linked to the predictive abilities of the auditory system, can be considered a biological process involved in the statistical learning of music and language. Concerning the neural processes underlying music and language, research on patients with cerebral damage supports the idea that these two systems share resources. For example, Broca’s aphasia is also characterized by a failure to process musical syntax (for details, see [18]). Despite these similarities, some authors found opposite hemispheric dominance: for speech over singing in the left temporal lobe and for singing over speech in the right temporal lobe [20]. Furthermore, dissociations between music and language were found in patients with impaired processing of harmonic relations but preserved linguistic syntactic processing [1][21], or in patients with impaired grammatical processing of language but intact musical syntax [1][22]. Overall, these findings stress that the dissociations between music and language depend on structures and functions that make them two distinct and separate systems with unique features [23]. In this vein, Slevc and Patel [24] identified three critical differences between music and language regarding their meaning. First, the meaning evoked by music is far less specific than the meaning evoked by language: units of a language denote specific semantic concepts, whereas units of music pick out semantic concepts at a much coarser grain. Second, linguistic semantics is compositional, unlike musical semantics; whereas words can be combined in lawful ways to give rise to more complex meanings, units of music cannot be combined to convey propositions. Third, language exists primarily for communicative purposes, whereas music might better be conceived of as a form of expression rather than communication.
Beyond similarities or differences, other factors can affect the interactions between language and music. Indeed, both rely on genetic, cognitive (e.g., intelligence), extra-cognitive (e.g., personality traits), and environmental (e.g., socioeconomic status) factors, which produce high interindividual variability in their acquisition [25][26][27].
Nevertheless, there is agreement that music and language show a deep early entanglement as well as largely similar developmental trajectories in the early years of life.
Based on these premises, the present review aims to shed light on the relationship between musical and linguistic aspects in infants and school-aged children by reviewing the existing literature and investigating whether music and language are associated in the early stages of life.

2. The Relationship between Music and Language Development

Music and language represent two critical communicative systems that characterize human beings [1][2][4]. They involve the combination of an elementary set of sounds ordered in time according to rules, allowing the perception and production of complex and unlimited utterances or musical phrases [3]. Research emphasizes that musical activities are predictive of language development. For this reason, children are exposed at an increasingly early age to musical activities at home and in more structured environments [28][29]. Since music and language are composed of various distinct components, it is crucial to disentangle and understand the specific elements of one domain that are associated with components of the other domain.
The selected articles showed that rhythm is the musical component that predicts infants’ language development, mainly expressive and receptive language. Rhythm can be considered the structured arrangement of successive sound events over time, a primary parameter of musical structure [30][31]. A growing body of research in language development suggests that rhythmic components influence early linguistic production in children [32][33][34][35][36]. In particular, this evidence showed that infants use rhythmic cues to identify word-level units in speech [37]. Infants are responsive and sensitive to the rhythmic components of language and are able to distinguish among languages based on their rhythmic structures [4][38]. Several studies reported evidence about the rhythmic classes used to classify languages [39][40][41][42]. Specifically, the literature agrees on a three-way classification of languages according to their predominant rhythmic structure [43][44]: Romance languages, such as Italian or Spanish, have a syllable-based rhythm; most Germanic languages, such as English, Dutch, and German, have a stress-based rhythm; and languages such as Japanese have a mora-based rhythm. Nazzi et al. [40] showed that at five months, infants may prefer any language with the same rhythmic structure as their native language (for example, German and English). In contrast, 5-month-old infants discriminate pairs of languages from different rhythmic classes (e.g., English vs. Japanese; Italian vs. Japanese). This means that infants may not prefer their native language per se but rather the rhythmic characteristics of that language [4][45]. Moreover, infants’ early attention to rhythm (e.g., [42][46]) suggests that they absorb the sonic structure of their native language in much the same way that listeners absorb the structure of music [4]. Finally, it has been demonstrated that at 12 months, infants may already show a form of musical enculturation for rhythm.
Broadly in support of these findings, it has been suggested that rhythmic skills such as rhythm discrimination [47] and rhythm production may develop earlier than melodic abilities, which instead seem to emerge at approximately 3 to 4 years of age [1][48]. Interestingly, the rhythmic properties of music and language share some notable features: both are grouped into phrases marked by pauses and by differences in tone height and in the duration of beats and syllables [49]. Given the many parallels, it has often been proposed that shared cognitive or perceptual mechanisms are active in the acquisition (e.g., [50]) and/or processing [51] of music and language. Rhythm production skills [52][53][54][55] and pitch [56] turned out to be predictive of expressive grammatical abilities in schoolchildren [52][57]. Overall, this evidence suggests that if such shared mechanisms are active, then music stimulation should reasonably facilitate language development.
Beyond rhythm, melody represents an additional musical component affecting language during a child’s development, as language becomes more complex. Melody is described as patterns of pitched sounds unfolding over time, following cultural conventions and constraints [31]. Politimou et al. [1] provided, for the first time in preschoolers or any other developmental group, evidence of an association between melody perception and language grammar. These results strongly suggest that similar auditory perceptual mechanisms may be responsible for both melody perception and language grammar, at least at this stage of development [1]. Thus, according to Politimou et al. [1], in preschoolers rhythm and melody differentially predict phonological awareness and language grammar, respectively.
The strong link between music and language through different stages of development, from infancy to schoolchildren, suggests that these two systems share common competencies and neural resources. According to Cohrdes et al. [58], there are similarities in the syntactic processing between musical and linguistic domains. Their results suggest that the competency to integrate smaller units into a higher-level syntactic system builds upon the abilities to discriminate as well as reproduce smaller units such as phonemes and sounds, words and tonal phrases, and syllables and rhythmic phrases. Generally, this indicates that competencies on higher levels might build upon different competencies on lower levels, and this characteristic is common for both domains. This finding is in line with the theories according to which musical and linguistic development support a step-by-step acquisition of skills in both modalities [1][58][59][60]. Such distinct abilities rely on common learning mechanisms that are partly dissociable from domain-specific aspects. Regarding shared neural resources, several behavioral and neuroscientific studies both in children and adults have supported the idea of shared online processing [22][61][62][63][64][65][66][67], such as the bilateral frontal-temporal network [4][49]. Specifically, Schön et al. [66] revealed bilateral involvement of the middle and superior temporal gyri and the inferior and middle frontal gyri while listening to spoken words, sung words, and “vocalize”, i.e., singing melodies without words [66]. Thus, it has been hypothesized that music can have a privileged status in the infant’s brain, enabling them to acquire and strengthen linguistic abilities [4]. Therefore, future research should identify the mechanisms underlying the music and language link along the developmental trajectory [1].
Another interesting aspect that emerges from the selected studies is the primary role of ID speech in language development. In particular, evidence highlights that ID speech predicts children’s later language abilities, at about 2 years of age [28] and, for more complex language skills such as the grammatical structure of language, in 3- and 4-year-old children [1]. Additionally, these studies suggest that higher levels of home engagement with singing and music making and greater exposure to music can serve as scaffolding for acquiring verbal skills, greatly extending previous suggestions [1].
The current review provides evidence that music plays a critical role in language development in early life. The findings have at least three implications worth mentioning. First, infants show sensitivity to the rhythmic components of language, and this musical component turned out to be predictive of the development of expressive language components such as grammatical ability and phonological awareness. Second, the results of the selected articles showed that melodic ability seems to be associated with complex and refined linguistic abilities, such as the processing of emotional prosody in linguistic phrases. Third, this evidence suggests that music and language show a deep entanglement as well as largely similar developmental trajectories from the early years of life.
Despite these findings, note that articles investigating the effects of musical interventions such as music training, music therapy, and music rehabilitation were excluded. Although previous research has consistently addressed this topic, the decision is motivated by the complexity of music interventions, which involve too many additional variables to be accounted for, such as the approach and modality of the music, individual or group delivery, the professional figures involved, the frequency and duration of the interventions, and the setting (e.g., at home or at school). Although excluding articles focused on music interventions might have limited the scope of the review, the results provide reasonable evidence that language development benefits from musical skills. Thus, music has a central role in the comprehension of language development in the early stages of life.
An important consideration that deserves mention is the poor representation of non-Western languages. This introduces a bias towards Western languages and limits the generalizability of the findings on the relationship between music and language to other languages. According to the inclusion and exclusion criteria, no studies directly investigated this association in non-Western languages, which represents a significant limitation of research in this specific field.
Future research should investigate the specific role of music interventions on language development, considering the music-language link, as well as other factors that can affect the interactions between language and music, such as cognitive skills, personality traits, and environmental dimensions. Accordingly, future investigations of associations between musical and linguistic abilities should account for inter-individual variability.

References

  1. Politimou, N.; Dalla Bella, S.; Farrugia, N.; Franco, F. Born to Speak and Sing: Musical Predictors of Language Development in Pre-schoolers. Front. Psychol. 2019, 10, 948.
  2. Kraus, N.; Slater, J. Music and language: Relations and disconnections. Handb. Clin. Neurol. 2015, 129, 207–222.
  3. François, C.; Schön, D. Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: The role of musical practice. Hear. Res. 2014, 308, 122–128.
  4. Brandt, A.; Gebrian, M.; Slevc, L.R. Music and early language acquisition. Front. Psychol. 2012, 3, 327.
  5. Peretz, I.; Coltheart, M. Modularity of music processing. Nat. Neurosci. 2003, 6, 688–691.
  6. Brown, S. The ‘‘musilanguage” model of music evolution. In The Origins of Music; Wallin, N.L., Merker, B., Brown, S., Eds.; MIT Press: Cambridge, MA, USA; London, UK, 1999; pp. 271–300.
  7. Mithen, S. The Singing Neanderthals: The Origins of Music, Language, Mind and Body; Weidenfeld & Nicolson: London, UK, 2005.
  8. Rousseau, J.-J. Essai sur L’origine des Langues; Éditions La Passe du Vent: Vénissieux, France, 1781.
  9. Richter, J.; Ostovar, R. “It Don’t Mean a Thing if It Ain’t Got that Swing”—An Alternative Concept for Understanding the Evolution of Dance and Music in Human Beings. Front. Hum. Neurosci. 2016, 10, 485.
  10. Darwin, C.R. The Descent of Man, and Selection in Relation to Sex; John Murray: London, UK, 1871.
  11. Masataka, N. Music, evolution and language. Dev. Sci. 2007, 10, 35–39.
  12. Fernald, A. Four-month-old infants prefer to listen to motherese. Infant Behav. Dev. 1985, 8, 181–195.
  13. Fernald, A.; Kuhl, P. Acoustic determinants of infant preference for motherese speech. Infant Behav. Dev. 1987, 10, 279–293.
  14. Trehub, S.E.; Trainor, L.J.; Unyk, A.M. Music and speech processing in the first year of life. In Advances in Child Development and Behavior; Reese, H.W., Ed.; Academic Press: New York, NY, USA, 1993; Volume 24, pp. 1–35.
  15. Papadimitriou, A.; Smyth, C.; Politimou, N.; Franco, F.; Stewart, L. The impact of the home musical environment on infants’ language development. Infant Behav. Dev. 2021, 65, 101651.
  16. Ruzza, B.; Rocca, F.; Boero, D.L.; Lenti, C. Investigating the musical qualities of early infant sounds. Ann. N. Y. Acad Sci. 2003, 999, 527–529.
  17. Welch, G.F. Singing and Vocal Development in The Child as Musician: A Handbook of Musical Development; McPherson, G., Ed.; Oxford University Press: Oxford, UK, 2006; pp. 311–329.
  18. Patel, A.D.; Iversen, J.R.; Wassenaar, M.; Hagoort, P. Musical syntactic processing in agrammatic Broca’s aphasia. Aphasiology 2008, 22, 776–789.
  19. Kotilahti, K.; Nissilä, I.; Näsi, T.; Lipiäinen, L.; Noponen, T.; Meriläinen, P.; Huotilainen, M.; Fellman, V. Hemodynamic responses to speech and music in newborn infants. Hum. Brain Mapp. 2010, 31, 595–603.
  20. Callan, D.E.; Tsytsarev, V.; Hanakawa, T.; Callan, A.M.; Katsuhara, M.; Fukuyama, H.; Turner, R. Song and speech: Brain regions involved with perception and covert production. Neuroimage 2006, 31, 1327–1342.
  21. Peretz, I.; Kolinsky, R.; Tramo, M.; Labrecque, R.; Hublet, C.; Demeurisse, G.; Belleville, S. Functional dissociations following bilateral lesions of auditory cortex. Brain 1994, 117, 1283–1301.
  22. Slevc, L.R.; Reitman, J.G.; Okada, B.M. Syntax in music and language: The role of cognitive control. In Proceedings of the 35th Annual Conference of the Cognitive Science Society, Berlin, Germany, 31 July–3 August 2013; Cognitive Science Society: Austin, TX, USA, 2013; pp. 3414–3419.
  23. Albouy, P.; Benjamin, L.; Morillon, B.; Zatorre, R.J. Distinct sensitivity to spectrotemporal modulation supports brain asymmetry for speech and melody. Science 2020, 367, 1043–1047.
  24. Slevc, L.R.; Patel, A.D. Meaning in music and language: Three key differences: Comment on “Towards a neural basis of processing musical semantics” by Stefan Koelsch. Phys. Life Rev. 2011, 8, 110–111; discussion 125–128.
  25. Schellenberg, E.G. Music training and speech perception: A gene-environment interaction. Ann. N. Y. Acad. Sci. 2015, 1337, 170–177.
  26. Mendelsohn, A.L.; Klass, P. Early Language Exposure and Middle School Language and IQ: Implications for Primary Prevention. Pediatrics 2018, 142, e20182234.
  27. Fabbro, A.; Crescentini, C.; D’Antoni, F.; Fabbro, F. A pilot study on the relationships between language, personality and attachment styles: A linguistic analysis of descriptive speech. J. Gen. Psychol. 2019, 146, 283–298.
  28. Franco, F.; Suttora, C.; Spinelli, M.; Kozar, I.; Fasolo, M. Singing to infants matters: Early singing interactions affect musical preferences and facilitate vocabulary building. J. Child Lang. 2022, 49, 552–577.
  29. Herrera, L.; Hernández-Candelas, M.; Lorenzo, O.; Ropp, C. Music Training Influence on Cognitive and Language Development in 3 to 4 year-old Children. Rev. Psicodidáctica 2014, 19, 367–386.
  30. Dondena, C.; Riva, V.; Molteni, M.; Musacchia, G.; Cantiani, C. Impact of Early Rhythmic Training on Language Acquisition and Electrophysiological Functioning Underlying Auditory Processing: Feasibility and Preliminary Findings in Typically Developing Infants. Brain Sci. 2021, 11, 1546.
  31. Vuust, P.; Heggli, O.A.; Friston, K.J.; Kringelbach, M.L. Music in the brain. Nat. Rev. Neurosci. 2022, 23, 287–305.
  32. Jusczyk, P.W.; Cutler, A.; Redanz, N.J. Infants’ sensitivity to the predominant stress patterns of English words. Child. Dev. 1993, 64, 675–687.
  33. Morgan, J.L. Converging measures of speech segmentation in preverbal infants. Infant Behav. Dev. 1994, 17, 389–403.
  34. Morgan, J.L. A rhythmic bias in preverbal speech segmentation. J. Mem. Lang. 1996, 35, 666–688.
  35. Morgan, J.L.; Saffran, J.R. Emerging integration of sequential and suprasegmental information in preverbal speech segmentation. Child Dev. 1995, 66, 911–936.
  36. Newsome, M.; Jusczyk, P.W. Do infants use stress as a cue in segmenting fluent speech? In Proceedings of the 19th Boston University Conference on Language Development; MacLaughlin, D., McEwen, S., Eds.; Cascadilla Press: Boston, MA, USA, 1994.
  37. Echols, C.H.; Crowhurst, M.J.; Childers, J.B. The Perception of Rhythmic Units in Speech by Infants and Adults. J. Mem. Lang. 1997, 36, 202–225.
  38. Nazzi, T.; Bertoncini, J.; Mehler, J. Language discrimination by newborns: Toward an understanding of the role of rhythm. J. Exp. Psychol. Hum. Percept. Perform. 1998, 24, 756–766.
  39. Arvaniti, A. Acoustic features of Greek rhythmic structure. J. Phon. 1994, 22, 239–268.
  40. Nazzi, T. Du Rythme Dans L’acquisition et le Traitement de la Parole. Ph.D. Thesis, Ecole des Hautes Etudes en Sciences Sociales, Paris, France, 1997. Unpublished work.
  41. Shafer, V.L.; Shucard, D.W.; Jaeger, J.J. Electrophysiological indices of cerebral specialization and the role of prosody in language acquisition in three-month-old infants. Dev. Neuropsychol. 1999, 15, 73–109.
  42. Ramus, F.; Mehler, J. Language identification with suprasegmental cues: A study based on speech resynthesis. J. Acoust. Soc. Am. 1999, 105, 512–521.
  43. Abercrombie, D. Elements of General Phonetics; University of Edinburgh Press: Edinburgh, UK, 1967.
  44. Pike, K. The Intonation of American English; Univ. of Michigan Press: Ann Arbor, MI, USA, 1945.
  45. Friederici, A.D.; Friedrich, M.; Christophe, A. Brain responses in 4-month-old infants are already language specific. Curr. Biol. 2007, 17, 1208–1211.
  46. Ramus, F.; Nespor, M.; Mehler, J. Correlates of linguistic rhythm in the speech signal. Cognition 1999, 73, 265–292.
  47. Anvari, S.H.; Trainor, L.J.; Woodside, J.; Levy, B.A. Relations among musical skills, phonological processing, and early reading ability in preschool children. J. Exp. Child Psychol. 2002, 83, 111–130.
  48. Tafuri, J.; Villa, D. Musical elements in the vocalisations of infants aged 2–8 months. Br. J. Music Educ. 2002, 19, 73–88.
  49. Patel, A.D. Language, music, syntax and the brain. Nat. Neurosci. 2003, 6, 674–681.
  50. McMullen, E.; Saffran, J.R. Music and Language: A Developmental Comparison. Music Percept. 2004, 21, 289–311.
  51. Patel, A.D.; Iversen, J.R. The linguistic benefits of musical abilities. Trends Cogn. Sci. 2007, 11, 369–372.
  52. Swaminathan, S.; Schellenberg, E.G. Musical ability, music training, and language ability in childhood. J. Exp. Psychol. Learn. Mem. Cogn. 2020, 46, 2340–2348.
  53. Chern, A.; Tillmann, B.; Vaughan, C.; Gordon, R.L. New evidence of a rhythmic priming effect that enhances grammaticality judgments in children. J. Exp. Child Psychol. 2018, 173, 371–379.
  54. Canette, L.H.; Lalitte, P.; Bedoin, N.; Pineau, M.; Bigand, E.; Tillmann, B. Rhythmic and textural musical sequences differently influence syntax and semantic processing in children. J. Exp. Child Psychol. 2020, 191, 104711.
  55. Lee, Y.S.; Ahn, S.; Holt, R.F.; Schellenberg, E.G. Rhythm and syntax processing in school-age children. Dev. Psychol. 2020, 56, 1632–1641.
  56. Dolscheid, S.; Çelik, S.; Erkan, H.; Küntay, A.; Majid, A. Children’s associations between space and pitch are differentially shaped by language. Dev. Sci. 2022, 31, e13341.
  57. Gordon, R.L.; Shivers, C.M.; Wieland, E.A.; Kotz, S.A.; Yoder, P.J.; Devin McAuley, J. Musical rhythm discrimination explains individual differences in grammar skills in children. Dev. Sci. 2015, 18, 635–644.
  58. Cohrdes, C.; Grolig, L.; Schroeder, S. Relating Language and Music Skills in Young Children: A First Approach to Systemize and Compare Distinct Competencies on Different Levels. Front. Psychol. 2016, 7, 1616.
  59. Welch, G.F.; Howard, D.M.; Rush, C. Real-time Visual Feedback in the Development of Vocal Pitch Accuracy in Singing. Psychol. Music 1989, 17, 146–157.
  60. Dowling, W.J. The development of music perception and cognition. In The Psychology of Music, 2nd ed.; Deutsch, D., Ed.; Academic Press: London, UK, 1999; pp. 603–625.
  61. Koelsch, S.; Gunter, T.C.; Cramon, D.Y.V.; Zysset, S.; Lohmann, G.; Friederici, A.D. Bach speaks: A cortical “language-network” serves the processing of music. Neuroimage 2002, 17, 956–966.
  62. Jentschke, S.; Koelsch, S.; Sallat, S.; Friederici, A.D. Children with specific language impairment also show impairment of music-syntactic processing. J. Cogn. Neurosci. 2008, 20, 1940–1951.
  63. Fedorenko, E.; Patel, A.; Casasanto, D.; Winawer, J.; Gibson, E. Structural integration in language and music: Evidence for a shared system. Mem. Cognit. 2009, 37, 1–9.
  64. Jentschke, S.; Koelsch, S. Musical training modulates the development of syntax processing in children. Neuroimage 2009, 47, 735–744.
  65. Sammler, D.; Koelsch, S.; Ball, T.; Brandt, A.; Elger, C.E.; Friederici, A.D.; Grigutsch, M.; Huppertz, H.J.; Knösche, T.R.; Wellmer, J.; et al. Overlap of musical and linguistic syntax processing: Intracranial ERP evidence. Ann. N. Y. Acad. Sci. 2009, 1169, 494–498.
  66. Schön, D.; Gordon, R.L.; Campagne, A.; Magne, C.; Astésano, C.; Anton, J.L.; Besson, M. Similar cerebral networks in language, music and song perception. Neuroimage 2010, 51, 450–461.
  67. Kunert, R.; Willems, R.M.; Casasanto, D.; Patel, A.D.; Hagoort, P. Music and language syntax interact in Broca’s area: An fMRI study. PLoS ONE 2015, 10, e0141069.