This entry introduces an integrated model of reading that situates the Lexical Quality Hypothesis (LQH) within the Reading Systems Framework (RSF). The LQH posits that skilled reading depends on high-quality lexical representations—precise and flexible mappings of orthographic, phonological, morpho-syntactic, and semantic features—stored in the mental lexicon. These representations facilitate automatic word identification, accurate meaning retrieval, and efficient word-to-text integration (WTI), forming the foundation of text comprehension. Extending this micro-level perspective, the RSF positions lexical quality (LQ) within a macro-level cognitive architecture where the lexicon bridges word identification and reading comprehension systems. The RSF integrates multiple knowledge systems (linguistic, orthographic, and general world knowledge) with higher-order processes (sentence parsing, inference generation, comprehension monitoring, and situation model construction), emphasizing the bidirectional interactions between lower-level lexical knowledge and higher-order text comprehension. Central to this model is WTI, a dynamic mechanism through which lexical representations are incrementally incorporated into a coherent mental model of the text. This integrated model carries important implications for theory refinement, empirical investigation, and evidence-based instructional practices.
Reading literacy, commonly defined as the ability to read and comprehend written texts, is widely recognized as a cornerstone of human development, shaping one’s academic achievement, career trajectories, and lifelong outcomes
[1]. Pervasive low literacy exacerbates existing socioeconomic disparities, with far-reaching social and economic consequences. Despite pressing educational needs and extensive intervention efforts, recent assessment data continue to reveal concerning trends. In the United States, results from the National Assessment of Educational Progress (NAEP) indicate that nearly 40% of fourth-grade and 33% of eighth-grade students fail to meet the basic reading benchmark—the highest failure rates recorded to date
[2]. Similarly, the Programme for International Student Assessment (PISA) reports that approximately 25% of 15-year-olds in Organisation for Economic Co-operation and Development (OECD) countries exhibit low reading proficiency
[3]. These alarming statistics underscore the importance of advancing our understanding of reading literacy to clarify why many learners struggle to achieve reading proficiency and to inform effective educational practices for students at risk of reading difficulties.
Within the construct of reading literacy, reading comprehension stands out as a central component, as the ultimate goal of learning to read is to construct meaning from written texts
[4]. Reading comprehension goes beyond simple decoding and draws on multiple linguistic and cognitive resources: readers must coordinate information across the word, sentence, and text levels to build coherent mental representations. Notably, a growing body of evidence suggests a dissociation between word decoding and reading comprehension. For instance, some readers identified as “poor comprehenders” can recognize printed words yet still struggle to derive meaning from texts
[4,5,6]. This phenomenon indicates that word identification alone is insufficient for successful reading comprehension. As reading difficulties have become more prevalent, research on reading literacy has expanded, and perspectives that focus primarily on word decoding have revealed clear limitations. While decoding skills undeniably contribute to reading comprehension, such perspectives pay less attention to the higher-order cognitive mechanisms engaged in text comprehension. A more comprehensive theoretical perspective is needed—one that integrates both lower- and higher-order processes. At the lower level, for instance, deficits in phonological awareness can undermine the accuracy and fluency of word decoding
[7]. Even when visual word recognition remains intact, reading comprehension may still be constrained if higher-level processes are disrupted. Importantly, this does not diminish the foundational role of decoding in reading, as higher-order comprehension processes largely rely on the quality of lexical representations. High-quality lexical representations provide a fundamental basis for text comprehension and contribute to individual differences in comprehension performance. Skilled readers can establish form–meaning mappings, move beyond local word and sentence boundaries, and integrate information into coherent representations of the text. In contrast, less-skilled readers often struggle to coordinate processes across these levels, which in turn constrains reading comprehension. In this sense, reading comprehension depends on both the quality of lexical representations and the engagement of higher-order cognitive operations throughout the progression from word-level decoding to text-level comprehension. Accordingly, fine-grained investigations into lexical representations and their underlying mechanisms are essential for understanding how successful text-level comprehension is achieved.
Several theoretical frameworks have been proposed to account for the mechanisms underlying reading comprehension. The Simple View of Reading (SVR)
[8,9,10] posits that reading comprehension is the product of decoding and oral language comprehension, whereas the Construction–Integration model (C-I)
[11,12,13,14] frames comprehension as an iterative process in which meaning is progressively constructed and integrated. These theories have laid a solid foundation for reading research, yet they also reveal significant limitations. The SVR provides only a coarse description of the interplay between decoding and language comprehension, paying little attention to the specific cognitive mechanisms that drive these processes. Conversely, although the C-I model illustrates the cognitive operations involved in text comprehension, it neglects the contribution of lexical quality (LQ) and leaves the bidirectional relations between lower- and higher-level processes unspecified. As a result, these models fail to explain how linguistic information is integrated across levels and overlook the detailed cognitive processes essential for constructing meaning from word to text. More broadly, existing models tend to emphasize the end product of comprehension (i.e., coherent mental representations) while giving less consideration to the real-time cognitive processes through which such representations are generated. To address these theoretical gaps, this entry introduces an integrated framework that situates the Lexical Quality Hypothesis (LQH)
[15] within the Reading Systems Framework (RSF)
[16]. The LQH highlights the contribution of high-quality lexical representations—precise and flexible mappings of orthographic, phonological, morpho-syntactic, and semantic properties, along with their bindings. The RSF, in turn, embeds these representations within multiple knowledge systems and links them to text comprehension through higher-order processes such as sentence parsing, inference generation, comprehension monitoring, and situation model construction. The integrated framework offers a more specific account of how word- and text-level mechanisms coordinate to enable successful reading comprehension and identifies the cognitive mechanism—Word-to-Text Integration (WTI)—that is essential to this process.