Human Functional Asymmetries for Social Stimuli

Although the population-level preference for the use of the right hand is the clearest example of behavioral lateralization, it represents only the best-known instance of a variety of functional asymmetries observable in humans. Interestingly, many such asymmetries emerge during the processing of social stimuli, in particular human bodies, faces and voices.

  • functional lateralization
  • sensory asymmetries
  • motor asymmetries
  • right-handedness
  • left-face bias
  • left-cradling bias
  • right ear advantage

1. Introduction

In humans, the population-level preference for the use of the right hand (around 90% of individuals being right-handed; e.g., see [1][2]) represents the clearest example of behavioral lateralization. However, it is only the best-known instance of a variety of functional asymmetries reported in humans, such as pseudoneglect [3], the right ear advantage (REA [4]), the left-face bias (LFB [5]), asymmetries in social touch [6] and turning behavior [7], among others. It is noteworthy that many such asymmetries are observed during the processing of social stimuli, in particular human bodies, faces and voices.

2. Auditory Asymmetries

Historically, the dichotic listening paradigm was the first procedure to disclose asymmetries in the perception of social stimuli. It is 60 years since Doreen Kimura discovered the existence of a REA when different linguistic stimuli are presented simultaneously to the two ears [4][8]. This presentation mode—the so-called dichotic listening (DL)—was initially proposed by Broadbent [9] to study attention. However, it was only with Kimura’s discovery of a REA for speech sounds that the paradigm was first applied to neuropsychology. The original DL studies, which presented series of three, four or five pairs of digits to be reported later, had the drawback of introducing order effects and involving working memory [10][11]. Consequently, the consonant–vowel (CV) syllable paradigm [12] was introduced, in which the stimuli typically consist of pairs of CV syllables composed of the six stop consonants /b/, /d/, /g/, /k/, /p/, /t/ and the vowel /a/ (e.g., /ba/-/pa/), recorded as natural voices. In each trial, the two syllables of a pair are presented simultaneously, one in each ear, and participants have to identify and report the stimulus perceived first or best. Typically, they report more stimuli presented to the right than to the left ear (namely, the REA; Figure 1a). A REA is also observed in the discrimination of sound duration for CV syllables [13]. The mechanism underlying this effect can be accounted for by the structural, or neuroanatomical, model suggested by Kimura [14][15]. This model states that the REA is a consequence of the organization of the cerebral auditory pathways, in which the contralateral pathway predominates over the ipsilateral one, in association with the specialization of the left temporal lobe for speech processing. It follows that the presentation of an auditory stimulus in one ear activates the contralateral auditory cortex more than the ipsilateral one [14][16][17], and that verbal stimuli presented to the right ear prevail over those presented to the left ear. The input from the left ear can be transferred across the corpus callosum from the contralateral auditory cortex to the ipsilateral one [18], but such a transfer entails a delay and an attenuation of the speech information. Besides the structural model, an attentional model has been proposed [19][20], according to which the perceptual asymmetry is due to a dynamic imbalance in hemispheric activation, the left hemisphere being more activated than the right one by verbal inputs [21][22]. However, both models emphasize the left-hemispheric specialization for verbal stimuli.
Figure 1. Examples of verbal (a) and emotional (b) dichotic listening (DL). Participants tend to report the stimulus presented to the right and left ear, respectively.
The magnitude of the REA may vary among different populations, but it is observed from childhood [23][24] to old age [25], in males and females [24][26] and in right- and left-handers [27][28]. In addition, the REA for verbal material is observed across different languages in bilingual individuals [29][30][31], further confirming that this bias is related to the left-hemispheric specialization for language.
Further evidence of a right ear preference for linguistic sounds came from a dichotic speech illusion paradigm, in which a white noise could be presented alone or simultaneously with a vowel in one of the two ears: a right ear preference was found both when the verbal stimulus was absent and when it was present, extending the REA for verbal processing from the perceptual to the illusory domain [32]. The presence of a REA was also observed in paradigms involving imagery, and specifically in studies in which participants were invited to imagine hearing a voice in one ear only [33][34][35]. Interestingly, a REA also seems to emerge in ecological conditions, with listeners orienting their head so as to offer the right ear to a speaker during verbal exchanges in noisy environments [36].
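To make the structure of the CV syllable paradigm described above more concrete, the following Python sketch enumerates the dichotic pairs that can be formed from the six stop consonants and summarizes ear-specific reports as a laterality index. The pairing scheme, the toy responses and the (R − L)/(R + L) index formula are illustrative assumptions rather than details taken from the studies cited above.

```python
from itertools import permutations

# The six stop consonants combined with /a/, as in the CV syllable paradigm.
SYLLABLES = ["ba", "da", "ga", "ka", "pa", "ta"]

def dichotic_pairs():
    """All ordered (left ear, right ear) pairs of different CV syllables.

    With six syllables this yields 30 dichotic trials; homonymic pairs
    (the same syllable in both ears) are excluded here, although some
    studies include them as control trials.
    """
    return list(permutations(SYLLABLES, 2))

def laterality_index(reports, trials):
    """Summarize ear-specific reports as a laterality index.

    `reports` holds the syllable reported on each trial, `trials` the
    corresponding (left, right) stimulus pairs. A positive value indicates
    a right ear advantage (REA). The formula (R - L) / (R + L) * 100 is a
    common convention, used here as an illustrative assumption.
    """
    left = sum(rep == trial[0] for rep, trial in zip(reports, trials))
    right = sum(rep == trial[1] for rep, trial in zip(reports, trials))
    return 100.0 * (right - left) / (right + left) if (right + left) else 0.0

if __name__ == "__main__":
    trials = dichotic_pairs()
    # Toy responses: report the right-ear syllable on two out of every three trials.
    responses = [t[1] if i % 3 else t[0] for i, t in enumerate(trials)]
    print(len(trials), "trials, laterality index:",
          round(laterality_index(responses, trials), 1))
```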

3. Visual Asymmetries

The second paradigm that revealed asymmetries in the perception of social stimuli is that of chimeric faces [37][38]. Levy et al. [38] photographed actors in smiling and neutral poses, cut the photographs along the midsagittal axis and finally juxtaposed an emotional hemiface to a neutral hemiface of the same actor. Each chimeric face obtained in this way was presented together with its mirror image, one stimulus above the other, and the observers were required to select the face that looked happier in the pair. The authors found that participants judged as more expressive the chimeras in which the emotional half was on the left side of the face from the observer’s point of view (and thus directly projected to the right hemisphere). The authors also reported that this left visual field (LVF) advantage was stronger in right-handers than in left-handers, revealing that handedness plays a role in hemispheric asymmetries for faces, as confirmed by subsequent studies (e.g., [39]). The main advantage of this paradigm is that it is a free-viewing task, so that the printed stimuli can be observed for as long as needed without affecting the LVF advantage. The chimeric face paradigm became a milestone in the field of hemispheric asymmetries for face processing [40], so that it was soon transformed into a computerized task, in which the presentation time of each stimulus is easily controllable, allowing for many experimental manipulations. For instance, the two chimeras can be presented either simultaneously, side by side [41], or one after the other, in the center of the screen [42]; response times can also be collected, further confirming the LVF advantage [43]. Facial features other than emotional expressions have been manipulated in the chimeric face paradigm, such as gender (female/male [44]), age (younger/older [5]) and ethnicity (e.g., Caucasian/Asian [45]). For instance, Chiang et al. [44] used a free-viewing chimeric face paradigm and showed that the LVF advantage emerges by 6 years of age and reaches a plateau at about 10 years of age, as regards both emotions and gender. Burt and Perrett [5] extended the evidence of a LVF bias in adults to facial age and attractiveness.
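As a concrete illustration of how such chimeric stimuli can be assembled, the following Python/NumPy sketch joins the left half of an emotional photograph with the right half of a neutral photograph of the same, pre-aligned face and derives the mirror image used as the second member of the pair. It is a minimal sketch of the general logic, not the actual procedure used by Levy et al. [38].

```python
import numpy as np

def make_chimera(emotional, neutral):
    """Join the left half of `emotional` with the right half of `neutral`.

    Both inputs are 2-D arrays (height x width) of the same, pre-aligned
    face; "left" means the left side from the observer's point of view.
    """
    if emotional.shape != neutral.shape:
        raise ValueError("faces must be aligned to the same shape")
    mid = emotional.shape[1] // 2
    return np.hstack([emotional[:, :mid], neutral[:, mid:]])

def chimera_pair(emotional, neutral):
    """Return a chimera and its mirror image, to be shown one above the other.

    In the free-viewing task, observers choose which of the two looks more
    expressive; a bias toward the version with the emotional half on the
    observer's left is the left visual field (LVF) advantage.
    """
    chimera = make_chimera(emotional, neutral)
    return chimera, np.fliplr(chimera)

if __name__ == "__main__":
    # Toy stand-ins for photographs: brighter values mark the "smiling" face.
    neutral = np.zeros((4, 6))
    smiling = np.ones((4, 6))
    top, bottom = chimera_pair(smiling, neutral)
    print(top)
    print(bottom)
```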
Despite the great importance of the chimeric face paradigm in the research on hemispheric asymmetries for faces, other paradigms can be exploited to the same end. Among these, the divided visual field (DVF) paradigm rests on the same neural assumptions as the chimeric face task, namely the contralateral projections of the human visual system [46]. In this paradigm, a stimulus is flashed in either the LVF or the right visual field (RVF) for less than 150 ms, which is about the minimum time needed to make a saccadic movement. In this way, a stimulus presented in the LVF or in the RVF is supposed to be directly projected to the right or left hemisphere, respectively (e.g., [47]). By means of such an experimental manipulation, the ability of one hemisphere to process a specific stimulus can be directly compared with that of the opposite hemisphere, allowing researchers to further confirm the LVF advantage for faces [48]. The same paradigm has also been exploited to investigate another hemispheric imbalance, namely that for positive vs. negative emotional valence: for instance, in an electroencephalography study [49], angry (negative valence) and happy (positive valence) faces were presented either unilaterally (LVF or RVF) or bilaterally (one in the LVF and the other in the RVF, simultaneously). Behavioral results supported the so-called valence hypothesis [50], according to which the right and left hemispheres are specialized for negative and positive emotions, respectively, but the event-related potentials (ERPs) confirmed a right-hemispheric dominance for all emotional stimuli (see also [51][52]), as assumed by the right hemisphere hypothesis for emotional stimuli [53][54]. This unexpected evidence parallels the contrasting results found in previous research on hemispheric asymmetries in emotion processing, with both theories receiving support from a number of studies (e.g., see [55]).
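The bookkeeping behind the DVF manipulation can be summarized in a few lines: a stimulus counts as lateralized only if it is flashed off-center for less than the time needed to initiate a saccade, and the hemisphere that receives it first is the one contralateral to the stimulated hemifield. The sketch below encodes exactly this logic; the 150 ms threshold comes from the text above, while the function and field labels are illustrative.

```python
SACCADE_LATENCY_MS = 150  # approximate minimum time to initiate a saccade

def receiving_hemisphere(visual_field: str, duration_ms: float) -> str:
    """Return the hemisphere assumed to receive a DVF stimulus first.

    The contralateral projection of the visual system means an LVF stimulus
    reaches the right hemisphere first, and vice versa. If the stimulus stays
    on screen long enough for an eye movement, lateralized presentation can
    no longer be guaranteed.
    """
    if duration_ms >= SACCADE_LATENCY_MS:
        raise ValueError("presentation too long: a saccade could re-center the stimulus")
    if visual_field == "LVF":
        return "right hemisphere"
    if visual_field == "RVF":
        return "left hemisphere"
    raise ValueError("visual_field must be 'LVF' or 'RVF'")

if __name__ == "__main__":
    print(receiving_hemisphere("LVF", 120))  # -> right hemisphere
    print(receiving_hemisphere("RVF", 120))  # -> left hemisphere
```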

4. Perceptual and Attentional Asymmetries for Human Bodies

In more recent years, the DVF paradigm was also introduced in the study of human body parts, and in particular hands. Specifically, the lateralized presentation of bodies and body parts has been suggested as a way to study hemispheric asymmetries in motor representations [56][57][58]. For instance, it has been shown that participants respond faster when left and right hand stimuli are presented to the ipsilateral hemifield/contralateral hemisphere than when they are presented to the contralateral hemifield/ipsilateral hemisphere [56]. Moreover, Parsons et al. [58] found that callosotomy patients were faster and more accurate in judging the laterality of both left and right hand stimuli when these were presented to the ipsilateral hemifield/contralateral hemisphere than when they were presented to the contralateral hemifield/ipsilateral hemisphere (similar results were observed in healthy controls). In agreement with such findings, de Lussanet et al. [57] suggested that each hemisphere contains better visuo-motor representations for the contralateral body side than for the ipsilateral body side. Specifically, these authors showed that—compared with leftward-facing point-light walkers (PLWs)—rightward-facing PLWs were recognized better in the RVF, whereas—compared with rightward-facing PLWs—leftward-facing PLWs were recognized better in the LVF. In other words, compared with PLWs facing toward the point of gaze, those facing away from the point of gaze appeared more vivid. Such a lateralized facing effect was explained by de Lussanet et al. [57] by proposing that the visual perception of lateralized body stimuli is facilitated when the corresponding visual and body representations are located in the same hemisphere (given the contralateral organization of both the visual and motor-somatosensory systems). Indeed, this is the case when a PLW faces away from the observer’s fixation point, so that a lateralized embodiment of the observed body is fostered because the hemibody seen in the foreground is processed by the sensory-motor cortex located on the same side as the visual cortex processing the stimulus (see the sketch at the end of this section). It should be noted that asymmetries in the perception of human bodies or body parts have also been reported in studies that do not resort to the DVF paradigm. Specifically, various studies investigating the perception of sport actions showed that the outcome of right-limb actions is anticipated better than that of left-limb actions [59][60][61][62][63][64][65]. As suggested by Hagemann [59] and Loffing et al. [61] (see also [60][62][63][65][66]), the ability to discriminate actions performed with the left hand is less developed than the ability to discriminate actions performed with the right hand. This is consistent with the advantage that left-handers and left-footers exhibit in several interactive sports [66][67][68][69][70][71][72][73][74][75][76][77][78][79][80][81][82][83].
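The lateralized facing effect discussed above can be restated as a simple rule: the hemisphere receiving the visual input and the hemisphere holding the sensorimotor representation of the foreground hemibody coincide precisely when the walker’s facing direction matches the hemifield of presentation, i.e., when it faces away from fixation. The following sketch spells out this correspondence as a paraphrase of the reasoning above, not as code from the cited studies.

```python
def visual_hemisphere(hemifield: str) -> str:
    """Hemisphere that first receives a stimulus shown in a given hemifield."""
    return {"LVF": "right", "RVF": "left"}[hemifield]

def foreground_hemibody(facing: str) -> str:
    """Body side nearest the observer for a walker seen in profile.

    A walker facing right shows the observer its right side, and vice versa.
    """
    return facing  # 'left' or 'right'

def embodiment_hemisphere(facing: str) -> str:
    """Hemisphere holding the sensorimotor map of the foreground hemibody
    (contralateral organization of the motor-somatosensory system)."""
    return {"left": "right", "right": "left"}[foreground_hemibody(facing)]

def same_hemisphere(hemifield: str, facing: str) -> bool:
    """True when visual input and foreground-hemibody representation share a
    hemisphere, i.e., when the walker faces away from fixation."""
    return visual_hemisphere(hemifield) == embodiment_hemisphere(facing)

if __name__ == "__main__":
    for hemifield in ("LVF", "RVF"):
        for facing in ("left", "right"):
            print(hemifield, facing, "-> same hemisphere:",
                  same_hemisphere(hemifield, facing))
```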

5. Asymmetries in Social Touch

Many other examples of behavioral asymmetries can be found in social interactions among humans, and are mainly observed in complex motor activities such as embracing, kissing and infant-holding, wherein the motor behavior reciprocally shared by two persons necessarily entails a sensory counterpart, social touch [6][84]. Relatively few studies have systematically investigated the first two instances of interactive social touch, showing a substantial rightward asymmetry for both embracing [85][86] and kissing [87], with the latter finding being considered more controversial (e.g., see [88]). As regards infant-holding, the left-cradling bias (LCB: the tendency to hold infants predominantly with the left rather than the right arm [89]; Figure 2) has received much more scholarly attention over the last 60 years. Although this lateralized behavior, differently from those reviewed above, refers to a motor rather than a perceptual asymmetry, it nonetheless entails dealing with a human social stimulus (the infant) and also seems to be related to perceptual asymmetries for social/emotional stimuli. Accordingly, a guiding thread can be traced between the aforementioned LVF advantage for faces, the higher social salience of infant facial features found in women than in men [90][91][92], and the left-sided infant positioning during cradling interactions being shown to a greater extent by women than by men [93]. First of all, it should be noted that a fairly robust LCB has been shown—regardless of assessment methodologies—both in left-handed women (and men, although to a lesser degree [94][95]) and in a mother affected by situs inversus with dextrocardia (i.e., a condition in which the heart is atypically placed in the right rather than the left side of the chest [96]). Therefore, the first two explanations proposed, namely the “handedness” (i.e., cradling infants with the non-dominant arm would free the dominant arm for other tasks [97]) and “heartbeat” (i.e., cradling infants on the left side would enhance the soothing effect of the mother’s heartbeat sound [89]) hypotheses, cannot be accepted as reliable accounts of the LCB. On the contrary, it is now believed that the LCB is due to a population-level right-hemispheric dominance for socio-emotional processing, as suggested by several studies carried out in this field over the last three decades (e.g., [98]). For example, Harris et al. [99] used the chimeric face paradigm to reveal the relationship between participants’ right-hemispheric specialization for processing facial emotion and their lateral cradling preference, as assessed by means of an imagination task. These authors found that participants who imagined holding the infant on the left side showed a stronger LVF advantage (i.e., judged as more expressive the chimeras in which the emotional half was on the left side of the face) compared with participants who imagined holding the infant on the right side. Bourne and Todd [100] confirmed this finding using the chimeric face paradigm as well, together with a life-like doll to assess participants’ cradling lateral preferences. Consistent findings were reported by Vauclair and Donnot [101], who used a similar methodology (chimeric face paradigm and doll cradling task), although only in women.
Figure 2. Example of the left-cradling bias (LCB).