Auditory Processing in Musicians: History

Musicians are reported to have enhanced auditory processing. The auditory processing elements evaluated were speech recognition in babble, the rhythmic advantage in speech recognition, short-term working memory, temporal resolution, and frequency discrimination threshold detection.

  • hearing
  • auditory processing
  • cognition
  • music

1. Background

Current research provides evidence of enhanced auditory processing in musicians compared to non-musicians. Capitalizing on this neuroplasticity-based improvement may lead to more focused auditory training for individuals with Auditory Processing Disorder (APD), with the aim of better results and faster rehabilitation. Neuroplasticity, in this case, is the nervous system adaptation resulting from an active response to auditory stimuli; it involves connectivity changes that support better performance, especially in related tasks [1]. Hearing is a prerequisite for communication, work, and learning for the average person, as well as an essential sense for every musician. Pure-tone audiometry, the gold standard for evaluating hearing, may miss aspects of hearing that are important for everyday life [2]. An audiological evaluation may include speech audiometry as well as tympanometry, stapedial reflexes, otoacoustic emissions, and auditory brainstem responses, depending on the symptoms and medical history of a given patient. Communication through the auditory modality requires intact temporal processing, speech-in-noise perception, working memory, and frequency discrimination [3,4]. Auditory processing happens at the level of the central auditory nervous system. Hearing (i.e., hearing sensitivity and auditory processing) contributes to the formation of cognition, and cognition contributes to hearing [4,5]. The superior auditory processing performance of musicians versus non-musicians is explained by the enhanced use and training of their hearing, emotion, and listening skills [6]. Musical training goes beyond auditory training, extending to reading and comprehending complex symbols and translating them into motor activity [7]. Of interest, recent research shows that frequency precision correlates more strongly with musical sophistication than with cognition [8].
The perception of music and speech is thought to be distinct, although the two share many acoustic and cognitive characteristics [9]. Pitch, timing, and timbre cues may be considered commonalities for auditory information transfer [10]. Memory and attention are cognitive skills required for both music and speech processing. Pitch is the psychoacoustic analogue of the frequency of a sound. Timing refers to specific turning points in the sound (for example, its beginning and its end), and timbre is multidimensional, including both spectral and temporal features. Musicians’ superior auditory processing is attributed to enhanced accuracy of neural sound encoding [9,11,12,13] as well as better cognitive function [14,15]. Musical practice involves experiencing specific sound components as well as integrating them jointly during performance. Extracting meaning from a complex auditory scene may be a skill transferable to tracking a talker’s voice in a noisy environment [16].
Musicians have an advantage over non-musicians in processing the pitch, timing, and timbre of music [17]. They demonstrate strengthened neural encoding of the timbre of their own instrument [18,19,20], but also show enhancements in processing speech [9,21,22,23,24,25] and non-verbal communication sounds [26]. Musical experience promotes a more accurate perception of meaningful sounds in communication contexts other than musical ones [9,12,23,27]. Music training is reported to change brain areas in a specific way that may be predicted by the performance requirements of the training instrument [28]. Musicians’ perceptual skills are influenced by the style of music they play [29,30].
Auditory processing [31] consists of mechanisms that analyze, preserve, organize, modify, refine, and interpret information from the auditory signal. The skills that support these mechanisms (auditory discrimination and temporal and binaural processing) are known as auditory processing elements. Temporal processing refers to auditory pattern recognition and the temporal aspects of audition, divided into four subcomponents: temporal integration, temporal resolution/discrimination (e.g., gap detection), temporal ordering, and temporal masking [32]. Binaural processing includes sound localization and lateralization, as well as auditory performance with challenging or degraded acoustic signals (including dichotic listening) [33]. Auditory discrimination involves the perception of acoustic stimuli in very rapid succession, which requires accuracy in the information carried to the brain [34,35]. These processes may affect phoneme discrimination, speech-in-noise comprehension, duration discrimination, rhythm perception, and prosodic distinction [36,37]. Temporal resolution, defined as the shortest period over which the ear can discriminate two signals [38], may be linked to language acquisition and cognition in both children [39,40,41,42] and adults [43,44,45,46].
The American Speech-Language-Hearing Association (ASHA) uses the term Central Auditory Processing Disorder (CAPD) to refer to deficits in the neural processing, including bottom–up and top–down neural connectivity [47], of auditory information in the Central Auditory Nervous System (CANS) that are not a consequence of cognition or higher-order language deficits [33]. Deficits in auditory information processing in the central nervous system (CNS) are demonstrated by poor performance in one or more elements of auditory processing [48]. (C)APD may coexist with, but is not derived from, dysfunction in other modalities. In some cases of CAPD, poor hearing and auditory comprehension are reported despite the absence of any substantial audiometric findings. Moreover, (C)APD can be associated with, co-exist with, or lead to difficulties in speech, language, attention, social, learning (e.g., spelling, reading), and developmental functions [33,49]. In the International Statistical Classification of Diseases and Related Health Problems, 11th edition (ICD-11), auditory processing disorder (APD) is classified under code AB5Y as a hearing impairment. (C)APD affects both children and adults, including the elderly [50], and it is linked to functional disorders beyond the cochlea [51,52]. According to the WHO [49], prevalence estimates of APD in children range from 2 to 10%, and the disorder can affect psychosocial development, academic achievement, social participation, and career opportunities.

2. Speech Perception in Noise

Speech perception in noise is at the core of auditory processing, as it is the most easily explained test, with a direct real-life counterpart. The temporal elements required to perceive speech may be similar to those needed for music, with rhythm thought to stand as a bridge between speech and music [52]. Highly trained musicians have been reported in some studies to perform better on different measures of speech in noise [22,52,53,54], although this advantage is not always present [55,56,57].
Consolidating the evidence for improved speech-in-noise perception in musicians may have rehabilitation implications for individuals with hearing impairment [55]. Research outcomes reveal that rhythm-perception benefits are present at different levels of speech, from words to sentences [58,59]. Percussionists were found to perform relatively better on sentences in noise than on words in noise, in contrast with vocalists, while significantly outperforming non-musicians [52]. There is limited research evaluating speech perception in noise among musicians of different musical styles.

3. Temporal Resolution

Auditory temporal processing is the processing of changes in the durational elements of sound within a specific time interval [50]. The ability of the auditory system to respond to rapid changes over time is a component of temporal processing called temporal resolution, which is linked to the perception of stop consonants in running speech [37,60].
Temporal processes are necessary for auditory processing and for the perception of rhythm, pitch, and duration, as well as for separating foreground from background [3,36]. Chermak and Musiek [37] highlighted the role of temporal processing across a range of language-processing skills, from phonemic and prosodic distinctions to ambiguity resolution. Temporal resolution underlies the discrimination of voiced from unvoiced stop consonants [61] and is clinically evaluated using the Gaps-In-Noise (GIN) or Random Gap Detection Test (RGDT) [62]. Evaluating an individual’s ability to perceive a millisecond-scale gap in noise, or between two pure tones, provides information on possible deficits in temporal resolution and can lead to better-shaped rehabilitation [50]. Older adults are generally found to have poorer (longer) gap-detection thresholds than younger adults [5].
Early exposure to frequent music training over a period of years improves timing ability across sensory modalities [63]. Musicians present better temporal resolution [64,65,66,67,68], and musicians of different instruments and styles have been found to have superior timing abilities compared to non-musicians [65,69]. Longer daily music training leads to a better gap-detection threshold [70]. Neuroplasticity resulting from music training yields enhanced temporal resolution in children, at levels comparable to those of adults [66]. To our knowledge, no research publications exist evaluating possible differences in temporal resolution across musicians of different musical styles.

4. Working Memory

Auditory and visual memory skills are enhanced in musicians and are linked with early, frequent, and formal musical training [4,22,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86]. In rare cases, no difference between musicians and non-musicians is documented [55]. A meta-analysis reported a medium effect on short-term and working memory in favor of musicians: the advantage was large for tonal stimuli, moderate for verbal stimuli, and small or null for visuospatial stimuli [82]. This points to an auditory-specific working memory advantage rather than a more general one. Working memory improves because auditory processing is enhanced through music education; hearing improves cognition.

5. Frequency Discrimination

During speech processing, pitch carries supra-linguistic characteristics that provide information on emotion and intent [87], as well as linguistic characteristics. Musicians outperform non-musicians in frequency discrimination [13,22,65,69,84,88,89,90,91,92,93,94]. This advantage has been hypothesized to contribute to the better speech-in-noise perception found in musicians [22]. Classical musicians were reported to have superior frequency discrimination abilities compared to those with a contemporary music (e.g., jazz, modern) background [95]. To our knowledge, no study has researched possible differences across musical styles that include Byzantine music.

6. Different Music Styles and Instruments

The musicians’ groups selected for the present study differ in style and music training. Byzantine music (BM), or Byzantine chant (BC), is the traditional ecclesiastical music of the Orthodox church. It is vocal music sung by one or more chanters [96], always monophonic in character and based on eight modes (“echos”) [97]. The chanters are usually male, and no musical instrument is involved apart from the human voice [98,99]. This contrasts with Western classical music, which is polyphonic and frequently includes male and female voices accompanied by instruments. Percussionists are extensively trained in rhythmic skills, timing, and physical flexibility, and those in this research were experienced in both tuned and untuned percussion.
The ordinary tuning system for Western music is 12-tone equal temperament, which subdivides the octave into 12 equal tones (semitones) [99]. By contrast, the BC tuning system divides the octave into 72 equal subdivisions, or “moria”, according to the Patriarchal Music Committee (PMC) [100]. Since the Western octave comprises 12 equal semitones, each semitone in BM corresponds to 6 moria [96]. The elementary tone (a minor second) consists of 100 logarithmically equal micro-intervals called cents; thus, the octave consists of 12 semitones × 100 cents = 1200 cents [101]. The PMC’s expert musicians indicate [100] that the smallest audible music interval is 1 moria, or about 16.7 cents, a considerably smaller interval than those used in classical music. Likewise, Sundberg [102] argues that an interval of 20 cents (1.2 moria) is hardly heard by a listener. In BM, each micro-interval differs from its neighbors by at least 2 moria [99], and the frequency steps made in Byzantine music, compared to Western music, may be as small as 1 Hz in the bass voice range.
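The interval arithmetic above can be sketched in a short script. The constants come from the text (12 semitones or 72 moria per octave, 1200 cents per octave); the function name and the 100 Hz example note are illustrative assumptions, chosen to fall in a bass voice range.

```python
# One octave spans 1200 cents (12 semitones x 100 cents).
OCTAVE_CENTS = 1200.0

WESTERN_DIVISIONS = 12    # 12-tone equal temperament: 12 semitones per octave
BYZANTINE_DIVISIONS = 72  # Byzantine chant (per the PMC): 72 equal "moria"

cents_per_semitone = OCTAVE_CENTS / WESTERN_DIVISIONS   # 100 cents
cents_per_moria = OCTAVE_CENTS / BYZANTINE_DIVISIONS    # ~16.7 cents, so 6 moria = 1 semitone

def step_in_hz(base_hz: float, cents: float) -> float:
    """Frequency change (in Hz) produced by moving `cents` above `base_hz`.

    Cents are logarithmic: an interval of n cents multiplies frequency
    by 2 ** (n / 1200).
    """
    return base_hz * (2 ** (cents / OCTAVE_CENTS)) - base_hz

print(round(cents_per_moria, 1))                      # 16.7
# A single moria above a 100 Hz bass note shifts the pitch by roughly 1 Hz,
# matching the ~1 Hz steps the text describes for the bass voice range.
print(round(step_in_hz(100.0, cents_per_moria), 2))   # 0.97
```

The computed one-moria step of about 0.97 Hz at 100 Hz illustrates why such micro-intervals are described as lying near the limits of audibility.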

This entry is adapted from the peer-reviewed paper 10.3390/healthcare11142027
