Klapproth, F.; Lippe, H.V.D. Gender Bias in Curriculum-Based Measurement. Encyclopedia. Available online: (accessed on 23 June 2024).
Gender Bias in Curriculum-Based Measurement

Curriculum-based measurement lets teachers improve students’ performance by responding immediately to achievement progress data. However, teachers are prone to making biased judgments about the students who provide those data.

Keywords: curriculum-based measurement; gender bias; gender stereotypes

1. Introduction

Teachers are increasingly turning to curriculum-based measurement (CBM) as a tool for monitoring student development in fundamental academic domains such as arithmetic, reading, spelling, and writing. In most cases, it involves administering brief, standardized tests at regular intervals to gauge pupils’ progress toward a long-term objective [1]. However, the effectiveness of CBM for raising student achievement appears to be mixed [2][3][4][5], even though it provides teachers with a strong framework for making evidence-based judgments about whether students need help, instruction needs to be revised, or teaching objectives need to be adjusted [6]. Research has shown that teachers have difficulty using progress data to inform and guide their instruction [7][8]. One reason why CBM alone does not result in better teaching is that the interpretation of progress data is affected by a number of factors [9][10][11][12]. These factors may be related to the lack of attention teachers devote to relevant aspects of the graph [13], may be connected with characteristics of the progress data itself or its presentation, or may relate to characteristics of the students being judged. For example, refs. [9][11][12] found that high data variability results in a relative overestimation of current trends. Ref. [14] showed that the presence of a trend line, which visually depicts the linear component of progress data points, reduces judgment errors. Moreover, ref. [11] demonstrated that pre-service teachers judged reading-fluency progress data obtained from girls more positively than the same data obtained from boys.

2. Gender Bias in Curriculum-Based Measurement 

2.1. Curriculum-Based Measurement

Curriculum-based measurement is a general term for assessment systems that track a student’s progress in learning within a particular academic subject. In order to determine whether students have achieved a learning goal or instead require extra support, it is necessary to frequently evaluate their abilities [15].
CBM entails the frequent administration of brief measures of performance in a chosen academic domain of interest (such as reading, writing, or mathematics). The student’s achievement is typically depicted in a graph showing the learning trajectory over a set time period. Teachers can use the graph to assess the efficacy of a lesson plan, a student’s mastery of a subject, or whether a student is expected to perform in accordance with pre-set learning objectives. CBM can be a useful tool for teachers to raise students’ performance by systematically responding to achievement data with instructional adjustments. However, when employing CBM, teachers frequently struggle to improve their instruction [2][3][4][5][8]. Although CBM graphs are constructed to facilitate teachers’ understanding of their students’ progress, comprehending them appears to be challenging [8][16]. One probable explanation for teachers’ lack of adequate response to the presentation of progress data is their inability to read and understand data accurately [17][18]. Even computer software designed to help teachers analyze graphs by presenting statistics such as the graph’s linear trend does not produce an adequate grasp of the progress data; moreover, teachers frequently do not use these statistics [16]. Instead, they more often rely on visual assessment of the data [14]. Visual inspection, however, is prone to inaccuracy [19][20], and as a result, teachers make errors when evaluating visible progress data. For instance, ref. [12] presented teachers with CBM graphs and assessed their ability to extract information from them; teachers were prone to ignore the relevant information and to focus on rather marginal details. Similar results were revealed by [13], who examined teachers’ eye movements while they judged CBM graphs. The quality of data interpretation also seems to depend on intensive support of teachers by researchers. Without such support, teachers are likely to use CBM data inconsistently and inappropriately [21]. According to [22], merely providing teachers with students’ data will not necessarily result in their using it, as long as they do not believe in the importance of these data.
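The linear-trend statistic mentioned above can be illustrated with a minimal sketch (the weekly scores are invented; actual CBM software packages may use other fitting or smoothing methods):

```python
# Illustrative ordinary-least-squares trend line for weekly CBM scores,
# the kind of statistic that graphing software overlays on a progress
# graph. All data here are hypothetical.

def trend_line(scores):
    """Return (slope, intercept) of the OLS line through equally
    spaced weekly scores (week 0, 1, 2, ...)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    # slope = covariance(week, score) / variance(week)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical weekly words-read-correctly scores for one student
weekly_wrc = [22, 25, 24, 29, 31, 30, 35, 37]
slope, intercept = trend_line(weekly_wrc)
print(f"estimated growth: {slope:.2f} words per week")
```

A teacher (or software) can compare the fitted slope against the growth rate implied by a pre-set learning goal to decide whether an instructional adjustment is warranted.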

2.2. Origins of CBM

CBM was invented in the 1970s by Deno and Mirkin [23]. It was developed within the field of special education with the aim of allowing teachers to formatively evaluate their instructional programs by successively using probes to test their students’ basic academic skills, so that they were able to describe the students’ growth in academic achievement. The invention of CBM was followed by a 5-year program of research conducted at the Institute for Research on Learning Disabilities at the University of Minnesota [24]. Since CBM offers a system for monitoring students’ attainment of academic goals and evaluating instructional programs, its use has been formalized among school districts in the United States. Today, U.S.-based norms and data management are available on internet platforms, e.g., (accessed on 10 December 2023).

2.3. Different Content Domains

CBM was developed initially to help teachers at the primary school level increase the achievement of students struggling to learn basic skills in reading, writing, and arithmetic [15]. Since many students struggle with reading [25], reading was one of the first domains in which CBM was applied. Reading CBM often consists of oral reading fluency [6], where students read aloud from a passage for a limited time (e.g., 1 min). To do this, students need to use a variety of literacy skills, for instance, decoding, vocabulary, and comprehension [26]. Teachers score reading CBM by first counting the total number of words attempted in 1 min, then counting the total number of errors, and finally subtracting the number of errors from the number of words, yielding the words-read-correctly (WRC) score [6].
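As a concrete illustration of the WRC scoring rule just described (with invented per-word markings, not data from the entry):

```python
# Sketch of the reading-CBM scoring rule: words read correctly (WRC)
# equals total words attempted minus total errors. The markings below
# are hypothetical.

def score_reading_probe(markings):
    """markings: one boolean per attempted word in the 1-minute probe
    (True = read correctly, False = error)."""
    attempted = len(markings)
    errors = markings.count(False)
    # WRC = total words attempted minus total errors
    return {"attempted": attempted, "errors": errors, "wrc": attempted - errors}

# A student attempts 12 words and misreads 2 of them
result = score_reading_probe([True] * 10 + [False] * 2)
print(result)  # {'attempted': 12, 'errors': 2, 'wrc': 10}
```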
As with reading, math is a skill that is essential for success in life. In parallel to reading CBM, math CBM has been developed to assess computation (e.g., [27]), with the majority of research and development focused on the primary school level [28]. Math CBM is conducted by having students answer computational problems for a certain amount of time (e.g., 2 min; [29]). When scoring math CBM, usually the number of correct digits is used [6].
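A minimal sketch of correct-digit scoring, assuming the common convention of matching digits by place value from the rightmost digit (conventions vary between scoring manuals, and the problems below are invented):

```python
# Sketch of correct-digit scoring for a math computation probe.
# Digits are compared by place value, aligned at the ones digit;
# this alignment convention is an assumption, and the probe items
# are hypothetical.

def correct_digits(student_answer, correct_answer):
    """Count digits of the student's answer that match the correct
    answer when both are aligned at the rightmost (ones) digit."""
    s = str(student_answer)[::-1]
    c = str(correct_answer)[::-1]
    return sum(1 for a, b in zip(s, c) if a == b)

# (correct answer, student's written answer) for a short probe
probe = [(56, 56), (63, 63), (75, 85)]
total = sum(correct_digits(student, correct) for correct, student in probe)
print(total)  # 5: 2 + 2 + 1 correct digits
```

Crediting partial answers digit by digit makes the score more sensitive to growth than simple right/wrong marking, which is why it is the usual metric for computation probes.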

2.4. Biases in Interpreting Progress Data

When teachers use CBM data to judge student achievement, several causes of bias have been identified. For instance, when progress data are highly variable, teachers find it challenging to predict the rate of progress accurately [9][12][30]. Ref. [30] showed that teachers tended to overestimate student progress when data variability was high. Similar results were obtained by [9][12], who found that pre-service teachers tended to overestimate the current trend. This result could be explained by the participants’ proclivity to identify trends in random patterns: peaks in progress data with a high level of random variability may suggest that those children will perform better than students with the same trend but less random variability and hence lower peaks.

2.5. A Gender Bias in CBM

Girls typically outperform boys in reading competency across countries and languages (e.g., [31][32][33][34][35]). Gender differences are also likely to occur in math: boys continue to outperform girls, with a wider disparity among the highest achievers, even though gender gaps in labor-market involvement and educational attainment have been narrowing [36][37].
Despite the relative advantage of boys in math, several studies in different countries have shown that, on average and across domains, girls outperform boys (e.g., [37]). Compared to boys, girls are more likely to display high-achieving developmental patterns (e.g., [38]).
Differences between boys and girls in achievement are usually reflected in differences in teachers’ assessment of their achievement [39]. However, gender-related differences in assessment might also arise from bias that is not based on achievement or skills. Boys’ lower reading proficiency and their relatively higher math achievement are discussed as being partially the product of a bias due to teachers’ gender stereotypes, which hold that reading is more appropriate for girls than for boys (e.g., [40]), whereas math is better suited to boys than to girls [35][41]. Gender stereotypes among teachers are accompanied by stereotypes about student motivation and working habits [42][43].

2.6. Gender Stereotypes as a Source of Gender Bias

Stereotypes can be defined as “shared […] beliefs about traits that are characteristic of members of a social category” [44] (p. 14). Thus, they are the result of categorizing individuals into groups based on supposed commonality. Stereotypes can serve as norms, affecting expectations and behavior toward members of a particular social group, and as schemas, enhancing social interactions with strangers [45]. These expectations are activated when a target is classified as belonging to a specific group [46][47].
According to dual-process theories of social judgment (e.g., [48]), people’s evaluations of other people take place along a continuum between two simultaneous processes. At one end of the spectrum, judgments are quick, effortless, automatic, and based on social categories (e.g., “girl”, “boy”, “immigrant”); at the other end, a slow, laborious, voluntarily initiated process is assumed to outweigh and enrich the automatic process by incorporating all pertinent information about the subject of the judgment.
If a person exhibits salient characteristics that are consistent with a certain stereotype, or if the judging person is unsure about the proper interpretation of the other person’s behavior, the use of stereotypical categories becomes more likely [49][50].
Gender stereotypes in particular cause female students to be seen as less talented than male students in all areas of science, whereas male students are considered inferior to female students in the domain of languages [51].

References


  1. Deno, S.L. Curriculum-based measurement: The emerging alternative. Except. Child. 1985, 52, 219–232.
  2. Ardoin, S.P.; Christ, T.J.; Morena, L.S.; Cormier, D.C.; Klingbeil, D.A. A systematic review and summarization of the recommendations and research surrounding curriculum-based measurement of oral reading fluency (CBM-R) decision rules. J. Sch. Psychol. 2013, 51, 1–18.
  3. Christ, T.J.; Zopluoglu, C.; Long, J.D.; Monaghen, B.D. Curriculum-based measurement of oral reading: Quality of progress monitoring outcomes. Except. Child. 2012, 78, 356–373.
  4. Espin, C.A.; van den Bosch, R.M.; van der Liende, M.; Rippe, R.C.A.; Beutick, M.; Langa, A.; Mol, S.E. A systematic review of CBM professional development materials: Are teachers receiving sufficient instruction in data-based decision-making? J. Learn. Disabil. 2021, 54, 256–268.
  5. Peters, M.T.; Förster, N.; Hebbecker, K.; Forthmann, B.; Souvignier, E. Effects of data-based decision-making on low-performing readers in general education classrooms: Cumulative evidence from six intervention studies. J. Learn. Disabil. 2021, 54, 334–348.
  6. Hosp, M.K.; Hosp, J.L.; Howell, K.W. The ABCs of CBM. A Practical Guide to Curriculum-Based Measurement; Guilford Press: New York, NY, USA, 2007.
  7. Raffe, C.P.; Loughland, T. “We’re not data analysts”: Teachers’ perspectives on factors impacting their use of student assessment data. Issues Educ. Res. 2021, 31, 224–240.
  8. Zeuch, N.; Förster, N.; Souvignier, E. Assessing teachers’ competencies to read and interpret graphs from learning progress assessment: Results from tests and interviews. Learn. Disabil. Res. Pract. 2017, 32, 61–70.
  9. Klapproth, F. Biased predictions of students’ future achievement: An experimental study on pre-service teachers’ interpretation of curriculum-based measurement graphs. Stud. Educ. Eval. 2018, 59, 67–75.
  10. Klapproth, F. Stereotype in der Lernverlaufsdiagnostik. In Stereotype in der Schule II; Glock, S., Ed.; Springer: Berlin, Germany, 2022; pp. 49–88.
  11. Klapproth, F.; Holzhüter, L.; Jungmann, T. Prediction of students’ reading outcomes in learning progress monitoring. Evidence for the effect of a gender bias. J. Educ. Res. Online 2022, 14, 16–38.
  12. Jungjohann, J.; Gebhardt, M.; Scheer, D. Understanding and improving teachers’ graph literacy for data-based decision-making via video intervention. Front. Educ. 2022, 7, 919152.
  13. Van den Bosch, R.M.; Espin, C.A.; Sikkema-de Jong, M.T.; Chung, S.; Boender, P.D.M.; Saab, N. Teachers’ visual inspection of curriculum-based measurement progress graphs: An exploratory, descriptive eye-tracking study. Front. Educ. 2022, 7, 921319.
  14. Van Norman, E.R.; Nelson, P.M.; Shin, J.-E.; Christ, T.J. An evaluation of the effects of graphic aids in improving decision accuracy in a continuous treatment design. J. Behav. Educ. 2013, 22, 283–301.
  15. Deno, S.L. Developments in curriculum-based measurement. J. Spec. Educ. 2003, 37, 184–192.
  16. Espin, C.A.; Waymann, M.M.; Deno, S.L.; McMaster, K.L. Data-based decision making: Developing a method for capturing teachers’ understanding of CBM graphs. Learn. Disabil. Res. Pract. 2017, 32, 8–21.
  17. Van den Bosch, R.M.; Espin, C.A.; Chung, S.; Saab, N. Data-based decision making: Teachers’ comprehension of curriculum-based measurement progress-monitoring graphs. Learn. Disabil. Res. Pract. 2017, 32, 46–60.
  18. Van den Bosch, R.M.; Espin, C.A.; Pat-El, R.J.; Saab, N. Improving teachers’ comprehension of curriculum-based measurement progress monitoring graphs. J. Learn. Disabil. 2019, 52, 413–427.
  19. Wilbert, J.; Bosch, J.; Lüke, T. Validity and judgment bias in visual analysis of single-case data. Int. J. Res. Learn. Disabil. 2021, 5, 13–24.
  20. Klapproth, F. Mental models of growth. In Culture and Development in Japan and Germany; Helfrich, H., Zillekens, M., Hölter, E., Eds.; Daedalus: Münster, Germany, 2006; pp. 141–153.
  21. Gesel, S.A.; LeJeune, L.M.; Chow, J.C.; Sinclair, A.C.; Lemons, C.J. A meta-analysis of the impact of professional development on teachers’ knowledge, skill, and self-efficacy in data-based decision-making. J. Learn. Disabil. 2021, 54, 269–283.
  22. Lai, M.K.; Schildkamp, K. Inservice teacher professional learning: Use of assessment in data-based decision-making. In Handbook of Human and Social Conditions in Assessment; Brown, G.T.L., Harris, L.R., Eds.; Routledge: Oxfordshire, UK, 2016; pp. 77–94.
  23. Deno, S.L.; Mirkin, P. Data Based Program Modification: A Manual; Leadership Training Institute for Special Education: Minneapolis, MN, USA, 1977.
  24. Tindal, G. Curriculum-based measurement: A brief history of nearly everything from the 1970s to the present. ISRN Educ. 2013, 2013, 958530.
  25. Mullis, I.V.S.; von Davier, M.; Foy, P.; Fishbein, B.; Reynolds, K.A.; Wry, E. PIRLS 2021. International Results in Reading; Boston College: Chestnut Hill, MA, USA, 2023.
  26. Fuchs, L.S.; Fuchs, D.; Hosp, M.K. Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Sci. Stud. Read. 2001, 5, 239–256.
  27. Fuchs, L.S.; Fuchs, D.; Compton, D.L.; Bryant, J.D.; Hamlett, C.L.; Seethaler, P.M. Mathematics screening and progress monitoring at first grade: Implications for responsiveness to intervention. Except. Child. 2007, 73, 311–330.
  28. Nelson, G.; Kiss, A.J.; Codding, R.S.; McKevett, N.M.; Schmitt, J.F.; Park, S.; Romero, M.E.; Hwang, J. Review of curriculum-based measurement in mathematics: An update and extension of the literature. J. Sch. Psychol. 2023, 97, 1–42.
  29. Christ, T.J.; Scullin, S.; Tolbize, A.; Jiban, C.L. Implications of Recent Research: Curriculum-Based Measurement of Math Computation. Assess. Eff. Interv. 2008, 33, 198–205.
  30. Nelson, P.M.; Van Norman, E.R.; Christ, T.J. Visual analysis among novices: Training and trend lines as graphic aids. Contemp. Sch. Psychol. 2017, 21, 93–102.
  31. McElvany, N.; Lorenz, R.; Frey, A.; Goldhammer, F.; Schilcher, A.; Stubbe, T.C. IGLU 2021. Lesekompetenzen von Grundschulkindern im Internationalen Vergleich und im Trend Über 20 Jahre; Waxmann: Münster, Germany, 2023.
  32. Mullis, I.V.S.; Martin, M.O.; Foy, P.; Hooper, M. PIRLS 2016: International Results in Reading; TIMSS & PIRLS International Study Center; Lynch School of Education; Boston College International Association for the Evaluation of Educational Achievement (IEA): Chestnut Hill, MA, USA, 2017.
  33. Manu, M.; Torppa, M.; Vasalampi, K.; Lerkkanen, M.-K.; Poikkeus, A.-M.; Niemi, P. Reading development from kindergarten to age 18: The role of gender and parental education. Read. Res. Q. 2023, 58, 505–538.
  34. Meissel, K.; Meyer, F.; Yao, E.S.; Rubie-Davies, C.M. Subjectivity of teacher judgments: Exploring student characteristics that influence teacher judgments of student ability. Teach. Teach. Educ. 2017, 65, 48–60.
  35. Carlana, M. Implicit stereotypes: Evidence from teachers’ gender bias. Q. J. Econ. 2019, 134, 1163–1224.
  36. OECD. Are Boys and Girls Equally Prepared for Life? 2014. Available online: (accessed on 14 April 2023).
  37. Tian, L.; Li, X.; Chen, X.; Huebner, E.S. Gender-specific trajectories of academic achievement in Chinese elementary school students: Relations with life satisfaction trajectories and suicidal ideation trajectories. Learn. Instr. 2023, 85, 101751.
  38. Fu, R.; Chen, X.; Wang, L.; Yang, F. Developmental trajectories of academic achievement in Chinese children: Contributions of early social-behavioral functioning. J. Educ. Psychol. 2016, 108, 1001.
  39. Hoge, R.D.; Coladarci, T. Teacher-based judgments of academic achievement: A review of literature. Rev. Educ. Res. 1989, 59, 297–313.
  40. Lorenz, G.; Gentrup, S.; Kristen, C.; Stanat, P.; Kogan, I. Stereotype bei Lehrkräften? Eine Untersuchung systematisch verzerrter Lehrererwartungen. Kölner Z. Für Soziologie Und Sozialpsychologie 2016, 68, 89–111.
  41. Cvencek, D.; Kapur, M.; Meltzoff, A.N. Math achievement, stereotypes, and math self-concepts among elementary-school students in Singapore. Learn. Instr. 2015, 39, 1–10.
  42. Glock, S.; Kleen, H. Gender and student misbehavior: Evidence from implicit and explicit measures. Teach. Teach. Educ. 2017, 67, 93–103.
  43. Jussim, L.; Eccles, J. Teacher expectations: II. Construction and reflection of student achievement. J. Personal. Soc. Psychol. 1992, 63, 947–961.
  44. Greenwald, A.G.; Banaji, M.R. Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychol. Rev. 1995, 102, 4–27.
  45. Schneider, D.J. The Psychology of Stereotyping; Guilford Press: New York, NY, USA, 2004.
  46. Macrae, C.N.; Milne, A.B.; Bodenhausen, G.V. Stereotypes as energy-saving devices: A peek inside the cognitive toolbox. J. Personal. Soc. Psychol. 1994, 66, 37–47.
  47. Van Knippenberg, A.; Dijksterhuis, A.; Vermeulen, D. Judgement and memory of a criminal act: The effects of stereotypes and cognitive load. Eur. J. Soc. Psychol. 1999, 29, 191–201.
  48. Fiske, S.T.; Neuberg, S.L. A continuum of impression formation, from category-based to individuating processes: Influences of information and motivation on attention and interpretation. Adv. Exp. Soc. Psychol. 1990, 23, 1–74.
  49. Campbell, D.T. Stereotypes and the perception of group differences. Am. Psychol. 1967, 22, 817–829.
  50. Muntoni, F.; Retelsdorf, J. Gender-specific teacher expectations in reading—The role of teachers’ gender stereotypes. Contemp. Educ. Psychol. 2018, 54, 212–220.
  51. Ellemers, N. Gender stereotypes. Annu. Rev. Psychol. 2018, 69, 275–298.