Tracking Eye Movements as a Window on Language Processing

This entry overviews the pioneering experimental studies exploiting eye movement data to investigate language processing in real time. After examining how vision and language were found to be closely related, the discussion focuses on the evolution of eye-tracking methodologies used to investigate children’s language development. To conclude, the entry provides some insights into the use of eye-tracking technology for research purposes, focusing on data collection and data analysis.

  • visual world
  • eye-tracking
  • language processing
  • language acquisition
Until the 1970s, experimental studies on linguistic competence and processing relied exclusively on offline measures of comprehension. In classical psycholinguistic paradigms such as lexical decision [1] or sentence–picture verification tasks [2][3], participants are asked to evaluate the truthfulness of the linguistic input provided, either against pictures or against their word knowledge. In these paradigms, sentence comprehension is assessed by measuring participants’ response latencies and accuracy in expressing metalinguistic evaluations after being presented with the linguistic stimulus. However, while response choices and reaction times are behavioral measures that provide information on linguistic comprehension, such tasks do not tap into the real-time processing of spoken language and, as a consequence, reveal less about the speaker’s efficiency and knowledge.
Another paradigm is the visual world paradigm, an experimental methodology in which participants’ eye movements are recorded during listening tasks. Unlike the measures obtained with long-established psycholinguistic paradigms, eye movement data provide detailed information on the time course of language comprehension, as well as relevant insights into how visual and linguistic sources of information interact in real time. In a typical visual world set-up, participants are instructed to listen carefully to sentences and to look wherever they want on the screen, or to interact with objects or screen-based pictures (e.g., by moving them). The simplicity of such a set-up makes task execution extremely effortless, as it relies on the human tendency to look at relevant parts of the visual scene as critical words are mentioned. In fact, participants are not asked to do anything different from what they do in everyday life, when they automatically integrate visual or written and spoken sources of information (e.g., while listening to the news on TV). The unchallenging nature of visual world studies makes this experimental paradigm particularly suitable for investigating language comprehension in populations with language disorders such as aphasia [4][5] or developmental dyslexia [6][7][8][9][10], as well as in infants and young children [11][12][13].
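In practice, the eye movement record is commonly summarized as the proportion of looks directed to each picture in successive time bins aligned to the onset of the critical word. The following minimal sketch illustrates this kind of aggregation; the data format, column names, and bin width are hypothetical and shown only for illustration, not taken from any specific study or software package.

```python
# Minimal sketch (hypothetical data format): summarizing visual world data as the
# proportion of looks to the target picture in successive time bins after word onset.
import pandas as pd

# Each row is one eye-tracker sample: trial id, time (ms) relative to the onset of
# the critical word, and the region of interest fixated ("target", "competitor", ...).
samples = pd.DataFrame({
    "trial":   [1, 1, 1, 2, 2, 2],
    "time_ms": [0, 50, 100, 0, 50, 100],
    "roi":     ["competitor", "target", "target", "target", "target", "competitor"],
})

bin_width = 50  # ms; bin width is an analysis choice, not a fixed standard
samples["bin"] = (samples["time_ms"] // bin_width) * bin_width

# Proportion of samples falling on the target picture within each time bin.
prop_target = (
    samples.assign(on_target=samples["roi"].eq("target"))
    .groupby("bin")["on_target"]
    .mean()
)
print(prop_target)
```

Curves of this kind, plotted over time and compared across conditions or age groups, are what allow researchers to trace how quickly listeners direct their gaze to the referent of a spoken word.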
This entry offers a detailed overview of how the visual world paradigm can be used efficiently to assess linguistic comprehension in children. Section 2.1 will review the pioneering eye-tracking studies that led to the establishment of the visual world paradigm in psycholinguistic research. Section 2.2 will illustrate the main experimental procedures typically used in the visual world paradigm with both adult and child participants. The remainder of the entry will be devoted to the different eye-tracking methodologies exploiting the relation between language and vision to study online language processing in infants and children, namely the Preferential-Looking Paradigm (Section 3.1) and the Looking-While-Listening Task (Section 3.2). Specific limitations and advantages of these tasks will also be discussed. In conclusion, the entry will give some details about eye-tracking technology, focusing on data collection and data analysis, and discussing some methodological limitations (Section 3.3).