Potential Effects of Time Pressure on Intelligence Tests

Intelligence tests are often performed under time constraints for practical reasons, but the effects of time pressure on reasoning performance are poorly understood. This entry briefly reviews the major expected effects of time pressure: forcing participants to skip items, invoking a mental speed factor, constraining response times, qualitatively altering cognitive processing, affecting anxiety and motivation, and interacting with individual differences.

Keywords: intelligence; time pressure; Raven’s Advanced Progressive Matrices (APM); mental speed; item response times

1. Introduction

Tests of fluid intelligence (Gf) can be administered either untimed or with a time constraint (usually at the test level, but sometimes as an item-level deadline; e.g., Kyllonen et al. 2018). Any investigator interested in measuring fluid intelligence has to decide between these two options. The choice is not an easy one, as it depends on how exactly measurement will be affected by time pressure.
Raven’s matrices, as the test most representative of fluid intelligence (Carpenter et al. 1990), are a good illustration of the dilemma. On one hand, the test was explicitly designed to be completed untimed. John C. Raven (1938) noted that the progressive matrices “cannot be given satisfactorily with a time-limit”; John Raven (2008) remarked that “it would not make sense to set a time limit within which people have to show how high they can jump whilst also insisting that they start by jumping over the lowest bar. Clearly, the most able would not be able to demonstrate their prowess […] it also follows that it makes no sense to time the test”.
On the other hand, a long testing time is an obstacle in many situations: a few participants can prolong a testing session past the hour mark trying to solve every single item in Raven’s Advanced Progressive Matrices (APM), which is psychologically interesting but logistically troublesome. This quickly led investigators to experiment with time limits (e.g., Bolton 1955). Short forms were developed (Arthur and Day 1994; Bilker et al. 2012; Bors and Stokes 1998), various time limits were tested (Hamel and Schmittmann 2006), and norms were ultimately made available for different time limits (Raven et al. 1998). The end result is that, as with most intelligence tests (Wilhelm and Schulze 2002), Raven’s matrices are often administered with a time constraint in contemporary assessment.
Is imposing time pressure a good or a bad thing? Time pressure has a limited detrimental effect on discriminating power (a reasonable time limit still allows most participants to finish most items, save for the final and most difficult items, which tend to have low success rates anyway; e.g., Bolton 1955), on reliability (e.g., Bolton 1955; Poulton et al. 2022; see also Hong and Cheng 2019), and on the dimensional structure (Poulton et al. 2022) of Raven’s matrices. However, this limited impact on basic psychometric properties does not mean that versions with or without a time limit are equivalent (e.g., Davidson and Carroll 1945; Rindler 1979). A more important question is whether time pressure impacts the validity of the task.
Time pressure can constitute a major threat to validity (Lu and Sireci 2007); this point has been recognized for a long time (Cronbach 1949). A speeded version of Raven’s matrices tends to correlate very well with the same task performed without a time limit (Hamel and Schmittmann 2006), but this is not the only aspect of validity. Time pressure may affect the response processes that translate individual differences in reasoning ability into differences in performance (Borsboom et al. 2004; Borsboom and Mellenbergh 2007). In other words, if forcing participants to respond faster changes the way items are processed, in such a way that performance is less dependent on the reasoning processes the task is supposed to measure, then a time limit should not be used. A meta-analysis based on Raven’s matrices indicated that using a time limit substantially changes correlations between reasoning performance and other constructs, suggesting that response processes are indeed affected by time pressure (Tatel et al. 2020).
The literature has extensively covered various aspects of the effects of time pressure on response processes and validity in intelligence tasks (e.g., Kyllonen and Zu 2016). Six main potential effects of time pressure (and potential threats to task validity) can be listed: (1) preventing completion of certain items, (2) involving an additional contribution of mental speed, (3) constraining response times on items, (4) modifying aspects of cognitive processing of the items, (5) affecting psycho-affective variables such as test anxiety and motivation, and (6) differentially affecting individuals as a function of individual abilities (e.g., working memory). These potential effects of time pressure overlap to an extent (e.g., constraining response times may force qualitative changes in item processing).

2. Potential Effects of Time Pressure

2.1. Effect 1: Time Pressure Leads to Skipping Items

When performing an intelligence test under time pressure, some participants may lack enough time to finish the task. The task is then interrupted before completion, which means some items are never reached and never attempted by the participant, leading to a lower score. This means that a participant’s score no longer necessarily reflects their maximal level of reasoning performance (e.g., Goldhammer 2015), in the sense of the maximum number of problems they should have been able to solve given their level of intellectual ability (see also Raven 2008).
This effect of time pressure on the omission of some problems is the one most discussed in classic psychometrics. It constitutes the basis of statistics that aim to summarize the effects of speededness based on the number of items not reached by participants (e.g., Cronbach and Warrington 1951; Gulliksen 1950b; Stafford 1971). A similar rationale is implicit in factor analyses estimating a speededness factor based on the last items but not the first (Borter et al. 2020; Estrada et al. 2017), in factor analyses assigning a loading on the speededness factor that increases with item serial position (e.g., Schweizer and Ren 2013), in attempts to estimate processing speed based on the number of omitted items (e.g., Schweizer et al. 2019a), and in the finding of poorer model fit for later items (Oshima 1994).
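To make the logic of these not-reached summaries concrete, here is a minimal sketch in Python. The response matrix is invented, and the proportion-based index is a generic illustration of the rationale, not the exact statistic proposed in any of the cited papers:

```python
import numpy as np

def not_reached_counts(responses):
    """Count items not reached per examinee.

    `responses` is an (n_examinees x n_items) array where np.nan marks
    an item the examinee never reached (trailing missing responses).
    """
    n_items = responses.shape[1]
    reached = ~np.isnan(responses)
    # Last reached item = position of the final non-missing response.
    last_reached = np.where(reached.any(axis=1),
                            n_items - np.argmax(reached[:, ::-1], axis=1),
                            0)
    return n_items - last_reached

def speededness_proportion(responses):
    """Proportion of examinee-item pairs lost to the time limit."""
    return not_reached_counts(responses).sum() / responses.size

# Illustrative data: 3 examinees, 5 items; np.nan = not reached.
resp = np.array([[1, 0, 1, 1, 0],
                 [1, 1, 0, np.nan, np.nan],
                 [1, np.nan, np.nan, np.nan, np.nan]])
print(not_reached_counts(resp))      # [0 2 4]
print(speededness_proportion(resp))  # 6 / 15 = 0.4
```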
One major challenge with the omission of certain items is that it can interact with test-taking strategies. Indeed, some participants may deliberately decide to spend enough time on early problems, at the risk of running out of time and having to skip later items, whereas others may prefer to proceed quickly throughout the whole test (Goldhammer 2015; Semmes et al. 2011). These test-taking strategies may interact with individual differences, with more able participants being more skilled at managing their time and selectively speeding up or slowing down depending on item difficulty and remaining time (van der Linden 2009). It is also noteworthy that some participants may choose to keep a safety margin, leading them not to use all the time they have available and to finish a test or item before the deadline (see Bolsinova and Tijmstra 2015). Conversely, there may be individual catch-up phenomena, with participants speeding through early items but selectively slowing down later when they have time left on the clock. A toy simulation of this tradeoff between pacing strategies is sketched below.
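The following sketch contrasts two pacing strategies under a fixed time budget. Everything here is invented for illustration (the budget, the item time costs, and the logistic accuracy rule); it is not a model from the cited literature, only a demonstration that pacing trades items reached against accuracy per item:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(pace, budget=20.0, n_items=30, ability=0.0):
    """Attempt items in order until the time budget runs out.

    `pace` scales time spent per item; accuracy rises with time spent
    and falls with item difficulty (a purely illustrative rule).
    """
    difficulty = np.linspace(-1.5, 1.5, n_items)   # items get harder
    t, attempted, solved = 0.0, 0, 0
    for d in difficulty:
        cost = pace * np.exp(0.4 * d)              # harder items take longer
        if t + cost > budget:
            break                                  # remaining items not reached
        t += cost
        attempted += 1
        p_correct = 1 / (1 + np.exp(-(ability - d + pace)))
        solved += rng.random() < p_correct
    return attempted, solved

print(simulate(pace=1.0))  # careful: fewer items reached, more accurate on each
print(simulate(pace=0.5))  # rushed: all items reached, more errors per item
```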

2.2. Effect 2: Time Pressure Taps into a Speed Factor

Intelligence tests administered with a speed constraint tend to yield results that correlate well with an untimed version of the same task (Preckel et al. 2011; Vernon et al. 1985; Wilhelm and Schulze 2002), which suggests that despite shifting the focus from a pure power test to a mix of power and speed (Gulliksen 1950a), speededness does not radically alter the nature of the task. However, speeded intelligence tests tend to give rise to a speed factor in factor analysis (Ren et al. 2018; see also Estrada et al. 2017; Schweizer and Ren 2013), and there are indications that scores on a speeded reasoning test are a composite of unspeeded reasoning and processing speed (Wilhelm and Schulze 2002). Accordingly, taking participant speed into account can improve model fit in confirmatory factor analysis of speeded reasoning tasks (Schweizer and Ren 2013; Schweizer et al. 2019a, 2019b; see also Semmes et al. 2011; Wollack et al. 2003). More generally, speeded reasoning tasks tend to correlate better with other speeded than unspeeded measures (Wilhelm and Schulze 2002). These results all suggest that imposing a time limit in a matrix task invokes an additional contribution of mental speed.
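The composite logic is easy to demonstrate with a toy simulation (Python; the latent correlation and the loadings below are invented for illustration): when a timed score mixes in speed-related variance, it correlates more strongly with a separate speed measure than its untimed counterpart does, mirroring the pattern reported by Wilhelm and Schulze (2002).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Latent reasoning ability and mental speed, modestly correlated.
cov = [[1.0, 0.3], [0.3, 1.0]]
reasoning, speed = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

unspeeded  = reasoning + rng.normal(0, 0.5, n)                # pure power test
speeded    = reasoning + 0.6 * speed + rng.normal(0, 0.5, n)  # timed test
speed_test = speed + rng.normal(0, 0.5, n)                    # separate speed task

for name, score in [("unspeeded", unspeeded), ("speeded", speeded)]:
    r = np.corrcoef(score, speed_test)[0, 1]
    print(f"{name:>9} test vs. speed measure: r = {r:.2f}")
# Expected pattern: roughly r = .24 for the unspeeded test
# versus r = .57 for the speeded one, given these toy loadings.
```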
Some theorists may consider the involvement of mental speed as a good thing. Many studies have shown a substantial correlation between tests of mental speed and performance on reasoning tests (both speeded and unspeeded: Vernon et al. 1985; Vernon and Kantor 1986). For this reason, mental speed may be viewed as an instrumental ability that supports the operation of intelligence: faster participants may, for example, be better able to maintain information relevant to logical reasoning in working memory before it decays. Along those lines, mental speed has long been investigated as a possible contributor to individual differences in reasoning performance (e.g., Ackerman et al. 2002; Conway et al. 2002; Vernon 1983), as well as a contributor to the development of intelligence in childhood (Coyle 2013; Demetriou et al. 2013; Fry and Hale 1996, 2000; Kail and Salthouse 1994; Kail 2000, 2007) and its decrease in aging (Babcock 1994; Salthouse 1992, 1996).
Alternatively, some authors view processing speed as a fundamental component of intelligence (e.g., Vernon 1983): Jensen in particular speculated that processing speed could reflect basic differences at the neurological level, which could constitute a major underpinning of the general factor g (Jensen 1993, 1998). A related argument comes from the factor structure of intelligence: the Cattell–Horn–Carroll (CHC) theory of cognitive abilities explicitly includes speed factors as broad abilities under the general factor (McGrew 2009; Schneider and McGrew 2018; see also McGrew 2023). This view makes mental speed an integral part of intelligence as a construct, and if mental speed is part of what we mean by “intelligence”, then forcing participants to work quickly should just tap into an additional dimension of intelligence, leaving task validity unaltered or even enhanced.
This argument has multiple problems, however. First, the observed correlation between mental speed and intelligence does not necessarily imply an important causal status for mental speed (e.g., Schubert et al. 2018), and it is doubtful whether mental speed actually has real-life implications that make it worth measuring (Kyllonen and Zu 2016). Second, imposing a time limit and contaminating an intelligence test with speed-related variance can spuriously inflate correlations with other constructs also measured under time constraints (e.g., Ackerman et al. 2002; Engle and Kane 2004; Tatel et al. 2020). Third, although cognitive psychology often presents “mental speed” as a unitary ability, it is in fact a complex multidimensional construct (see Danthiir et al. 2005; Roberts and Stankov 1999; see also Draheim et al. 2019, for a discussion of measurement issues). As a result, the CHC theory comprises multiple factors related to speed: processing speed in simple cognitive tasks (Gs), reaction and decision speed for elementary single items (Gt), speed in motor activities (Gps), and rate and fluency of retrieval of information stored in long-term memory (Gr). The relation between these factors (e.g., do they form a superordinate speed factor?) is currently unclear (Schneider and McGrew 2018). Moreover, the speed at which a complex reasoning task can be performed does not map cleanly onto any CHC factor and probably taps into a mix of Gf and one or more speed factors (including Gs, but also Gt in certain tasks, and possibly Gr, which encompasses ideational fluency; see Schneider and McGrew 2018). Fourth, speed is not solely a question of ability: it also depends on motivation, personality, and an individual’s speed-accuracy tradeoff (Shaw et al. 2020). Lastly, it is not even certain that the speed factor that appears under time constraints actually represents mental speed: in some cases, it may also reflect individual ability and individual strategies to deal with the time pressure (Davison et al. 2012; Semmes et al. 2011), or a different construct altogether, such as a form of rule generation fluency (Verguts et al. 1999). In short, imposing a time limit on a reasoning task and invoking a speed factor make the measure less tractable overall.

2.3. Effect 3: Time Pressure Constrains Response Times

Time pressure naturally encourages speeding in the task and therefore constrains the amount of time that can be spent on a given item. This may or may not be viewed as a threat to validity, depending on whether a high speed of responding is taken as a reflection of high intelligence. As noted by Schneider and McGrew (2018), “the speed metaphor is often used in synonyms for smart (e.g., quick-witted)”. In this view, it is inherently desirable to solve intellectual problems more quickly: if two participants have the same accuracy, it makes intuitive sense to believe that the faster one is more intelligent (Thorndike et al. 1926). This approach considers speed an integral aspect of performance in the task. One way to take this into account is to use composite scores that combine accuracy and speed (e.g., Bruyer and Brysbaert 2011; Dennis and Evans 1996; another example is found in certain subtests of Wechsler scales, which give bonus points for quick answers) or to jointly model accuracy and response times (Goldhammer and Kroehne 2014; Klein Entink et al. 2009b).
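As a concrete example of such a composite, the Inverse Efficiency Score discussed by Bruyer and Brysbaert (2011) divides mean response time on correct trials by the proportion of correct responses. The sketch below implements it on invented data:

```python
import numpy as np

def inverse_efficiency_score(rts, correct):
    """Inverse Efficiency Score (see Bruyer and Brysbaert 2011):
    mean RT on correct trials divided by the proportion correct.
    Lower IES = better combined speed/accuracy performance."""
    rts = np.asarray(rts, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    pc = correct.mean()
    if pc == 0:
        return np.inf  # undefined when no response is correct
    return rts[correct].mean() / pc

# Two hypothetical participants with the same accuracy (80%):
# the faster one gets the lower (better) IES.
rt_fast = [4.1, 5.0, 3.8, 4.6, 4.9]
rt_slow = [8.2, 9.9, 7.7, 9.1, 9.8]
acc     = [1, 1, 1, 1, 0]
print(inverse_efficiency_score(rt_fast, acc))  # 4.375 / 0.8 ≈ 5.47
print(inverse_efficiency_score(rt_slow, acc))  # 8.725 / 0.8 ≈ 10.91
```

Like any composite, the IES implicitly weights speed against accuracy, and so it inherits the interpretive questions discussed in the rest of this section.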
With this perspective, the speed at which the response process is executed is an index of its effectiveness as much as the correctness of the response. Therefore, imposing a time limit and constraining time on task is not necessarily a problem (if the difficulties posed by problem complexity and limited time both challenge the same ability, then high-performing participants should be both faster and more accurate) and could even be viewed as an advantage (since a time limit constrains the response times of participants, this could make them more comparable in terms of accuracy: see Goldhammer 2015; see also Bolsinova and Tijmstra 2015).
However, this line of reasoning overlooks a critical aspect of solving complex intelligence tests: being fast is not necessarily a good thing. There are at least two ways to frame this idea. The first is to stress the fact that cognitive operations take time: limiting the amount of available time mechanically limits the number of operations that can be completed. Given that complex operations germane to fluid reasoning (such as rule induction) are constrained by simpler operations related to basic manipulation of information, time pressure is likely to affect complex operations to a greater extent (Salthouse 1996). The other important point is that speed is not only an index of effective reasoning: a low speed also reflects carefulness (Kyllonen and Zu 2016). In terms of cognitive processes, longer response times can largely reflect time spent for validation and evaluation of one’s response (Goldhammer and Klein Entink 2011); one study showed that participants who care more about the results tend to respond more slowly (Klein Entink et al. 2009a).
Empirical data have substantiated the idea that responding slowly can be positive. At the item level, an unpublished eye-tracking study of 159 participants showed that longer fixations on a matrix problem were associated with better performance, which suggests that taking the time for reflection is beneficial (de Winter et al. 2021). At the task level, RTs tend to be positively correlated with ability estimates, which means better participants tend to be slower (DiTrapani et al. 2016; Goldhammer and Klein Entink 2011; Klein Entink et al. 2009b; Partchev and De Boeck 2012). For fast responses in particular, speed is negatively correlated with success rate (Partchev and De Boeck 2012; note that this result was specific to Raven’s matrices and did not occur for a verbal analogies task).
Critically, the benefit of slow responding appears to depend on ability and difficulty (Goldhammer et al. 2014). Participants with a higher level of ability and/or motivation tend to modulate their RTs as a function of problem difficulty and spend much longer on difficult problems (Perret and Dauvier 2018; Gonthier and Roulin 2020; see also Tancoš et al. 2023), suggesting that these require substantially more time to be solved correctly. In line with this view, the relation between RTs and accuracy is negative for easy problems but becomes less negative (Dodonova and Dodonov 2013) or even positive for more difficult problems (Becker et al. 2016; Goldhammer et al. 2015). In terms of processing, it is likely that complex problems, which involve more logical rules and more components on which to apply these rules, require more time for constructing a correct answer. In short, responding slowly can also be characteristic of high performance, especially for difficult problems and high-ability participants. It is also worth recalling that not all groups respond at the same speed: forcing fast responses may be more detrimental to participants with a slower response speed, such as young children (Borter et al. 2020) and older adults (Salthouse 1996).
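The sign flip in the RT-accuracy relation can be reproduced in a toy generative model (Python; all coefficients are invented, and the model merely encodes the verbal account above rather than testing it): slow responses signal struggling on an easy item but invested solution time on a hard one.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
ability = rng.normal(size=n)

def rt_accuracy_correlation(difficulty, time_benefit):
    """Toy story: low-ability respondents take longer (struggle),
    and extra time helps only as much as `time_benefit` allows
    (assumed larger for harder items)."""
    log_rt = 0.5 * difficulty - 0.4 * ability + rng.normal(0, 0.5, n)
    logit = ability - difficulty + time_benefit * log_rt
    correct = rng.random(n) < 1 / (1 + np.exp(-logit))
    return np.corrcoef(log_rt, correct)[0, 1]

print("easy item:", round(rt_accuracy_correlation(-1.5, 0.0), 2))  # negative
print("hard item:", round(rt_accuracy_correlation(+1.5, 1.2), 2))  # near zero or positive
```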

2.4. Effect 4: Time Pressure Can Affect Cognitive Processing

Encouraging speeding when responding to a problem may conceivably affect cognitive processing, above and beyond limiting the amount of processing that can be performed. A few studies have even suggested that fast responses to an intelligence test involve a different ability or process than slow responses (Partchev and De Boeck 2012; DiTrapani et al. 2016), although no information was provided regarding the nature of this ability. There are multiple pathways by which cognitive processing could be affected.
At the item level, one way to conceptualize the possible effects of time pressure is to think of the response process in a reasoning task in terms of a drift-diffusion model (e.g., Frischkorn and Schubert 2018; Kang et al. 2022; Lerche et al. 2020; van der Maas et al. 2011). This class of models considers that when confronted with a problem, participants continuously accumulate evidence in a random walk process (modeled as a constant drift rate in the direction of the response, plus noise) until they reach a decision threshold. Encouraging participants to speed up their responding with a time limit could force them to lower their decision threshold, interfering with verification of their response as discussed in the previous section (Goldhammer and Klein Entink 2011; Klein Entink et al. 2009a; Kyllonen and Zu 2016). This would translate into faster RTs, lower accuracy, and lower confidence in one’s response.
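This threshold mechanism is straightforward to simulate. The sketch below implements a basic symmetric-bound diffusion process (plain Python/NumPy; the drift and threshold values are arbitrary choices for illustration) and shows that lowering the threshold speeds responses at the cost of accuracy:

```python
import numpy as np

rng = np.random.default_rng(3)

def diffusion_trial(drift, threshold, dt=0.01, sigma=1.0, max_t=30.0):
    """Random-walk evidence accumulation: constant drift plus Gaussian
    noise, until evidence crosses +threshold (correct) or -threshold."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold and t < max_t:
        evidence += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return t, evidence >= threshold

def summarize(threshold, n=2000, drift=0.8):
    trials = [diffusion_trial(drift, threshold) for _ in range(n)]
    rts, correct = zip(*trials)
    return np.mean(rts), np.mean(correct)

# Lowering the decision threshold (as time pressure might force
# participants to do) trades accuracy for speed.
for a in (2.0, 1.0):
    rt, acc = summarize(a)
    print(f"threshold {a}: mean RT = {rt:.2f}, accuracy = {acc:.2f}")
```

In this toy parameterization, halving the threshold roughly triples response speed while cutting accuracy from about .96 to about .83, matching the predicted tradeoff.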
Apart from a change of decision threshold, time pressure could also force participants to accumulate information at a higher rate. Based on the decision-making literature, this could translate into several effects in terms of cognitive processing (Johnson et al. 1993; see also Ben Zur and Breznitz 1981; Wright 1974), including acceleration (performing the same cognitive operations more quickly), filtration of information (considering less information before making a decision; see also Salthouse 1996), or a change of strategy (tackling the task in a qualitatively different way). Acceleration or filtration would translate into faster responses in the task and lower accuracy; filtration in particular could also translate into lower accuracy conditional on RT, i.e., lower accuracy for the same RT, owing to the qualitatively different nature of information processing.
As for changes of strategy, there has been little study of the effects of time pressure on strategy use in intelligence tests, but such effects seem especially likely. Participants in complex learning tasks tend to switch to faster or simpler strategies under time pressure (see Chuderski 2016); the same phenomenon is observed in mathematics tasks (Caviola et al. 2017) and is assumed to occur in working memory tasks (Friedman and Miyake 2004; Lépine et al. 2005; St Clair-Thompson 2007; Thomassin et al. 2015). In the context of a matrix task, a change of strategy could mean turning away from the effective constructive matching strategy (Chuderski 2016), which relies on the time-intensive process of reconstructing the correct answer by integrating all information in an item, to the less costly strategy of response elimination, which relies on testing each possible answer in turn to see if it seems to superficially fit the matrix (for a review, see Laurence and Macedo 2022; see also Bethell-Fox et al. 1984; Snow 1980). There is also substantial evidence that participants often adopt a strategy of rapid guessing under severe time constraints (Attali 2005; Jin et al. 2023; Schnipke and Scrams 1997; Schweizer et al. 2021), which would mean turning away from both constructive matching and response elimination. Critically, rapid guessing may not be constant across groups and across individuals (e.g., Must and Must 2013), providing another source of potential individual differences.
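In practice, rapid guessing is often screened with response-time thresholds before being modeled more formally. The sketch below shows a simple item-level threshold rule in the spirit of response-time-effort indices (Wise and Kong 2005); the 10% cutoff and the data are invented for illustration, and applied work often prefers the mixture-model approach of Schnipke and Scrams (1997) instead:

```python
import numpy as np

def flag_rapid_guesses(rt_matrix, fraction=0.10, floor=1.0):
    """Flag likely rapid guesses: responses faster than `fraction`
    of the item's mean RT, with an absolute floor in seconds.
    A simple threshold rule, not the exact procedure of any
    cited paper."""
    rt_matrix = np.asarray(rt_matrix, dtype=float)
    thresholds = np.maximum(fraction * rt_matrix.mean(axis=0), floor)
    return rt_matrix < thresholds  # (examinee x item) boolean flags

# Illustrative RTs in seconds, 4 examinees x 3 items.
rts = np.array([[35.0, 48.0, 60.0],
                [28.0, 41.0, 55.0],
                [ 2.1,  1.5,  0.9],   # likely rapid guesser
                [30.0, 44.0, 52.0]])
flags = flag_rapid_guesses(rts)
print(flags.mean(axis=1))  # per-examinee rapid-guess rate: [0. 0. 1. 0.]
```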
The effects of time pressure on cognitive processing of a given item may also go beyond what can be modeled at the item level: time pressure could also be expected to negatively affect learning, disrupting performance in a cumulative fashion over the course of the task. Learning is an important aspect of performance in Raven’s matrices: participants discover logical rules over simple items and then generalize them over more complex items presented later in the test (Ren et al. 2014; Verguts and De Boeck 2002), either explicitly or as a form of implicit or associative learning (Ren et al. 2014). One study has suggested that time pressure is detrimental to learning in a matrix task (Chuderski 2016), possibly because giving faster responses on early items leads participants to process logical rules more superficially, in a way that impedes transfer to more difficult items. This mechanism could contribute to selectively increasing the detrimental effect of time pressure on items presented towards the end of a test, although the particular design of this study (with participants completing two sets of items in succession, first without and then with time pressure) makes it difficult to know whether this effect would occur under more classic testing conditions.

2.5. Effect 5: Time Pressure Can Affect Anxiety and Motivation

Apart from direct effects due to the time restriction, it is also possible that the pressure itself has an effect on accuracy. Studies from the decision-making literature have suggested that participants perform worse under time pressure, not only when there is an actual time restriction (Cella et al. 2007) but also when there is a perceived time pressure, even in the absence of any time manipulation (DeDonno and Demaree 2008).
This phenomenon could be partly due to an effect of pressure on constructs related to intelligence: for instance, time pressure could decrease participant motivation to complete the task. One study showed that participants who had to complete a reasoning task under an explicit time pressure were less intrinsically motivated, as reflected in both lower ratings of interest and less time spent voluntarily engaging with the task materials after the end of the testing session (Amabile et al. 1976). Under this view, time pressure could also conceivably change the relation between performance and motivation (see Kuhn and Ranger 2015).
Perceived time pressure could also create stress or test anxiety in participants (e.g., Sussman and Sekuler 2022). This could interfere with performance in several ways, such as creating worrisome thoughts which use up resources in working memory (Eysenck and Calvo 1992; for other examples, see Ashcraft and Kirk 2001; Moran 2016), although this mechanism is disputed (Kellogg et al. 1999). This process has been mostly studied in the related contexts of academic achievement and math anxiety (Caviola et al. 2017) and may also occur with intelligence tests. Time pressure could also conceivably interact with individual differences in anxiety: in the case of math reasoning, removing time pressure is sometimes observed to selectively increase performance for more anxious participants (Plass and Hill 1986), although this is not always the case (Kellogg et al. 1999; see also Traub and Hambleton 1972).

2.6. Effect 6: Differential Effects of Time Pressure

Although time pressure does not seem to affect the relative position (rank-ordering) of participants to a large extent (Preckel et al. 2011; Vernon et al. 1985; Wilhelm and Schulze 2002), time pressure could still be expected to interact with individual differences in ability in absolute terms, so that the distance between high-ability and low-ability participants varies as a function of time pressure. A situation often observed in reasoning tasks is the choking under pressure effect, wherein imposing pressure (through instructions emphasizing the measurement of intelligence, added social pressure, dual tasking, etc.) creates a larger performance decrement for high-performing participants, especially those with high working memory capacity (WMC; Gimmig et al. 2006; for examples with math tests, see Beilock and Carr 2005; Beilock and DeCaro 2007). Choking under pressure could also occur with time pressure, decreasing the distance between low- and high-ability participants.
The same effect could occur with WMC instead of ability: time pressure has been observed to decrease the distance between low- and high-WMC participants (Colom et al. 2015), which could be problematic given that WMC is one of the major correlates of intelligence. On the other hand, the opposite effect has also been reported: it has been argued that speeded intelligence tests have higher correlations with WMC (Chuderski 2013, 2015; Tatel et al. 2020) because time pressure requires participants to integrate all information in working memory, leaving no time to decompose the problem. This would lead to time pressure increasing the distance between low- and high-ability participants. This finding, however, was not replicated in other studies (Colom et al. 2015; see also Ren et al. 2018).
Apart from WMC, there is suggestive evidence that time pressure could increase the relation between performance in Raven’s matrices and spatial abilities (Tatel et al. 2020). A differential effect of time pressure could also conceivably be found with other constructs, such as motivation: given that more motivated participants tend to spend longer on problems (e.g., Wise and Kong 2005), imposing a time pressure could selectively decrease the performance of participants with high motivation. Lastly, a differential effect could be found as a function of mental speed, and more generally as a function of age: time pressure could disproportionately affect younger children with low mental speed (Borter et al. 2020) and possibly older adults, although this is not necessarily the case in practice (Babcock 1994).
Given that high-ability participants tend to modulate their RTs to spend selectively more time on more difficult items (Gonthier and Roulin 2020; Perret and Dauvier 2018; Tancoš et al. 2023), all these possible differential effects might also be expected to interact with item difficulty: if time pressure affects high-ability participants to a larger extent, this may be even more true for the most difficult items. However, RT modulation in the face of difficulty is a relatively new topic in the literature, and this possibility has not been tested.

References

  1. Kyllonen, Patrick, Robert Hartman, Amber Sprenger, Jonathan Weeks, Maria Bertling, Kevin McGrew, Sarah Kriz, Jonas Bertling, James Fife, and Lazar Stankov. 2018. General fluid/inductive reasoning battery for a high-ability population. Behavior Research Methods 51: 507–22.
  2. Carpenter, Patricia A., Marcel A. Just, and Peter Shell. 1990. What one intelligence test measures: A theoretical account of the processing in the Raven Progressive Matrices Test. Psychological Review 97: 404–31.
  3. Raven, John C. 1938. Progressive Matrices. London: H. K. Lewis and Co.
  4. Raven, J. 2008. General introduction and overview: The Raven Progressive Matrices Tests: Their theoretical basis and measurement model. In Uses and Abuses of Intelligence: Studies Advancing Spearman and Raven’s Quest for Non-Arbitrary Metrics. Competency Motivation Project. EDGE 2000. Romanian Psychological Testing Services SRL. Edited by John Raven and Jean Raven. Unionville: Royal Fireworks Press.
  5. Bolton, Floyd B. 1955. Experiments with The Raven’s Progressive Matrices—1938. The Journal of Educational Research 48: 629–34.
  6. Arthur, Winfred, and David V. Day. 1994. Development of a Short form for the Raven Advanced Progressive Matrices Test. Educational and Psychological Measurement 54: 394–403.
  7. Bilker, Warren B., John A. Hansen, Colleen M. Brensinger, Jan Richard, Raquel E. Gur, and Ruben C. Gur. 2012. Development of Abbreviated Nine-Item Forms of the Raven’s Standard Progressive Matrices Test. Assessment 19: 354–69.
  8. Bors, Douglas A., and Tonya L. Stokes. 1998. Raven’s Advanced Progressive Matrices: Norms for First-Year University Students and the Development of a Short Form. Educational and Psychological Measurement 58: 382–98.
  9. Hamel, Ronald, and Verena D. Schmittmann. 2006. The 20-Minute Version as a Predictor of the Raven Advanced Progressive Matrices Test. Educational and Psychological Measurement 66: 1039–46.
  10. Raven, John, John C. Raven, and John H. Court. 1998. Raven Manual: Section 4, Advanced Progressive Matrices. Oxford: Oxford Psychologists Press.
  11. Wilhelm, Oliver, and Ralf Schulze. 2002. The relation of speeded and unspeeded reasoning with mental speed. Intelligence 30: 537–54.
  12. Poulton, Antoinette, Kathleen Rutherford, Sarah Boothe, Madeleine Brygel, Alice Crole, Gezelle Dali, Loren Richard Bruns Jr, Richard O. Sinnott, and Robert Hester. 2022. Evaluating untimed and timed abridged versions of Raven’s Advanced Progressive Matrices. Journal of Clinical and Experimental Neuropsychology 44: 73–84.
  13. Hong, Maxwell R., and Ying Cheng. 2019. Clarifying the Effect of Test Speededness. Applied Psychological Measurement 43: 611–23.
  14. Davidson, William M., and John B. Carroll. 1945. Speed and Level Components in Time-Limit Scores: A Factor Analysis. Educational and Psychological Measurement 5: 411–27.
  15. Rindler, Susan Ellerin. 1979. Pitfalls in assessing test speededness. Journal of Educational Measurement 16: 261–70.
  16. Lu, Ying, and Stephen G. Sireci. 2007. Validity Issues in Test Speededness. Educational Measurement: Issues and Practice 26: 29–37.
  17. Cronbach, Lee J. 1949. Essentials of Psychological Testing. New York: Harper and Brothers.
  18. Borsboom, Denny, Gideon J. Mellenbergh, and Jaap van Heerden. 2004. The Concept of Validity. Psychological Review 111: 1061–71.
  19. Borsboom, Denny, and Gideon J. Mellenbergh. 2007. Test validity and cognitive assessment. In Cognitive Diagnostic Assessment for Education: Theory and Applications. Edited by Jacqueline Leighton and Mark Gierl. Cambridge: Cambridge University Press, pp. 85–116.
  20. Tatel, Corey E., Zachary R. Tidler, and Phillip L. Ackerman. 2020. Process differences as a function of test modifications: Construct validity of Raven’s advanced progressive matrices under standard, abbreviated and/or speeded conditions—A meta-analysis. Intelligence 90: 101604.
  21. Kyllonen, Patrick C., and Jiyun Zu. 2016. Use of Response Time for Measuring Cognitive Ability. Journal of Intelligence 4: 14.
  22. Goldhammer, Frank. 2015. Measuring Ability, Speed, or Both? Challenges, Psychometric Solutions, and What Can Be Gained From Experimental Control. Measurement: Interdisciplinary Research and Perspectives 13: 133–64.
  23. Cronbach, Lee J., and W. G. Warrington. 1951. Time-limit tests: Estimating their reliability and degree of speeding. Psychometrika 16: 167–88.
  24. Gulliksen, Harold. 1950b. The reliability of speeded tests. Psychometrika 15: 259–69.
  25. Stafford, Richard E. 1971. The Speededness Quotient: A New Descriptive Statistic for Tests. Journal of Educational Measurement 8: 275–77.
  26. Borter, Natalie, Annik E. Völke, and Stefan J. Troche. 2020. The development of inductive reasoning under consideration of the effect due to test speededness. Psychological Test and Assessment Modeling 62: 344–58.
  27. Estrada, Eduardo, Francisco J. Román, Francisco J. Abad, and Roberto Colom. 2017. Separating power and speed components of standardized intelligence measures. Intelligence 61: 159–68.
  28. Schweizer, Karl, and Xuezhu Ren. 2013. The position effect in tests with a time limit: The consideration of interruption and working speed. Psychological Test and Assessment Modeling 55: 62–78.
  29. Schweizer, Karl, Siegbert Reiß, and Stefan Troche. 2019a. Does the Effect of a Time Limit for Testing Impair Structural Investigations by Means of Confirmatory Factor Models? Educational and Psychological Measurement 79: 40–64.
  30. Oshima, T. C. 1994. The Effect of Speededness on Parameter Estimation in Item Response Theory. Journal of Educational Measurement 31: 200–19.
  31. Semmes, Robert, Mark L. Davison, and Catherine Close. 2011. Modeling Individual Differences in Numerical Reasoning Speed as a Random Effect of Response Time Limits. Applied Psychological Measurement 35: 433–46.
  32. van der Linden, Wim J. 2009. Conceptual issues in response-time modeling. Journal of Educational Measurement 46: 247–72.
  33. Bolsinova, Maria, and Jesper Tijmstra. 2015. Can Response Speed Be Fixed Experimentally, and Does This Lead to Unconfounded Measurement of Ability? Measurement: Interdisciplinary Research and Perspectives 13: 165–68.
  34. Preckel, Franzis, Christina Wermer, and Frank M. Spinath. 2011. The interrelationship between speeded and unspeeded divergent thinking and reasoning, and the role of mental speed. Intelligence 39: 378–88.
  35. Vernon, Philip A., Sue Nador, and Lida Kantor. 1985. Reaction times and speed-of-processing: Their relationship to timed and untimed measures of intelligence. Intelligence 9: 357–74.
  36. Gulliksen, Harold. 1950a. Speed versus power tests. In Theory of mental tests. Edited by Harold Gulliksen. Hoboken: John Wiley & Sons Inc., pp. 230–44.
  37. Ren, Xuezhu, Tengfei Wang, Sumin Sun, Mi Deng, and Karl Schweizer. 2018. Speeded testing in the assessment of intelligence gives rise to a speed factor. Intelligence 66: 64–71.
  38. Schweizer, Karl, Siegbert Reiß, Xuezhu Ren, Tengfei Wang, and Stefan J. Troche. 2019b. Speed Effect Analysis Using the CFA Framework. Frontiers in Psychology 10: 239.
  39. Wollack, James A., Allan S. Cohen, and Craig S. Wells. 2003. A Method for Maintaining Scale Stability in the Presence of Test Speededness. Journal of Educational Measurement 40: 307–30.
  40. Vernon, Philip A., and Lida Kantor. 1986. Reaction time correlations with intelligence test scores obtained under either timed or untimed conditions. Intelligence 10: 315–30.
  41. Ackerman, Phillip L., Margaret E. Beier, and Mary D. Boyle. 2002. Individual differences in working memory within a nomological network of cognitive and perceptual speed abilities. Journal of Experimental Psychology: General 131: 567–89.
  42. Conway, Andrew R. A., Nelson Cowan, Michael F. Bunting, David J. Therriault, and Scott R. B. Minkoff. 2002. A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence. Intelligence 30: 163–83.
  43. Vernon, Philip A. 1983. Speed of information processing and general intelligence. Intelligence 7: 53–70.
  44. Coyle, Thomas R. 2013. Effects of processing speed on intelligence may be underestimated: Comment on Demetriou et al. (2013). Intelligence 41: 732–34.
  45. Demetriou, Andreas, George Spanoudis, Michael Shayer, Antigoni Mouyi, Smaragda Kazi, and Maria Platsidou. 2013. Cycles in speed-working memory-G relations: Towards a developmental–differential theory of the mind. Intelligence 41: 34–50.
  46. Fry, Astrid F., and Sandra Hale. 1996. Processing Speed, Working Memory, and Fluid Intelligence: Evidence for a Developmental Cascade. Psychological Science 7: 237–41.
  47. Fry, Astrid F., and Sandra Hale. 2000. Relationships among processing speed, working memory, and fluid intelligence in children. Biological Psychology 54: 1–34.
  48. Kail, Robert, and Timothy A. Salthouse. 1994. Processing speed as a mental capacity. Acta Psychologica 86: 199–225.
  49. Kail, Robert V. 2000. Speed of information processing: Developmental change and links to intelligence. Journal of School Psychology 38: 51–61.
  50. Kail, Robert V. 2007. Longitudinal Evidence That Increases in Processing Speed and Working Memory Enhance Children’s Reasoning. Psychological Science 18: 312–13.
  51. Babcock, Renée L. 1994. Analysis of adult age differences on the Raven’s Advanced Progressive Matrices Test. Psychology and Aging 9: 303–14.
  52. Salthouse, Timothy A. 1992. Influence of processing speed on adult age differences in working memory. Acta Psychologica 79: 155–70.
  53. Salthouse, Timothy A. 1996. The processing-speed theory of adult age differences in cognition. Psychological Review 103: 403–28.
  54. Jensen, Arthur R. 1993. Why Is Reaction Time Correlated With Psychometric g? Current Directions in Psychological Science 2: 53–56.
  55. Jensen, Arthur R. 1998. The g Factor: The Science of Mental Ability. Westport: Praeger Publishers/Greenwood Publishing Group.
  56. McGrew, Kevin S. 2009. CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence 37: 1–10.
  57. Schneider, W. Joel, and Kevin S. McGrew. 2018. The Cattell-Horn-Carroll theory of cognitive abilities. In Contemporary Intellectual Assessment: Theories, Tests, and Issues, 4th ed. Edited by Dawn P. Flanagan and Erin M. McDonough. New York: The Guilford Press.
  58. McGrew, Kevin S. 2023. Carroll’s Three-Stratum (3S) Cognitive Ability Theory at 30 Years: Impact, 3S-CHC Theory Clarification, Structural Replication, and Cognitive–Achievement Psychometric Network Analysis Extension. Journal of Intelligence 11: 32.
  59. Schubert, Anna-Lena, Dirk Hagemann, Gidon T. Frischkorn, and Sabine C. Herpertz. 2018. Faster, but not smarter: An experimental analysis of the relationship between mental speed and mental abilities. Intelligence 71: 66–75.
  60. Engle, Randall W., and Michael J. Kane. 2004. Executive Attention, Working Memory Capacity, and a Two-Factor Theory of Cognitive Control. In The Psychology of Learning and Motivation: Advances in Research and Theory. Edited by Brian H. Ross. Amsterdam: Elsevier Science, vol. 44, pp. 145–99.
  61. Danthiir, Vanessa, Richard D. Roberts, Ralf Schulze, and Oliver Wilhelm. 2005. Mental Speed: On Frameworks, Paradigms, and a Platform for the Future. In Handbook of Understanding and Measuring Intelligence. Edited by Oliver Wilhelm and Randall W. Engle. Thousand Oaks: Sage Publications, Inc., pp. 27–46.
  62. Roberts, Richard D., and Lazar Stankov. 1999. Individual differences in speed of mental processing and human cognitive abilities: Toward a taxonomic model. Learning and Individual Differences 11: 1–120.
  63. Draheim, Christopher, Cody A. Mashburn, Jessie D. Martin, and Randall W. Engle. 2019. Reaction time in differential and developmental research: A review and commentary on the problems and alternatives. Psychological Bulletin 145: 508–35.
  64. Shaw, Amy, Fabian Elizondo, and Patrick L. Wadlington. 2020. Reasoning, fast and slow: How noncognitive factors may alter the ability-speed relationship. Intelligence 83: 101490.
  65. Davison, Mark L., Robert Semmes, Lan Huang, and Catherine N. Close. 2012. On the Reliability and Validity of a Numerical Reasoning Speed Dimension Derived From Response Times Collected in Computerized Testing. Educational and Psychological Measurement 72: 245–63.
  66. Verguts, Tom, Paul De Boeck, and Eric Maris. 1999. Generation speed in Raven’s progressive matrices test. Intelligence 27: 329–45.
  67. Thorndike, Edward L., Elsie Oschrin Bregman, Margaret Vara Cobb, and Ella Woodyard. 1926. The Measurement of Intelligence. New York: Teachers College Bureau of Publications.
  68. Bruyer, Raymond, and Marc Brysbaert. 2011. Combining Speed and Accuracy in Cognitive Psychology: Is the Inverse Efficiency Score (IES) a Better Dependent Variable than the Mean Reaction Time (RT) and the Percentage Of Errors (PE)? Psychologica Belgica 51: 5–13.
  69. Dennis, Ian, and Jonathan St B. T. Evans. 1996. The speed-error trade-off problem in psychometric testing. British Journal of Psychology 87: 105–29.
  70. Goldhammer, Frank, and Ulf Kroehne. 2014. Controlling Individuals’ Time Spent on Task in Speeded Performance Measures: Experimental time limits, posterior time limits, and response time modeling. Applied Psychological Measurement 38: 255–67.
  71. Klein Entink, Rinke H., Jörg-Tobias Kuhn, Lutz F. Hornke, and Jean-Paul Fox. 2009b. Evaluating cognitive theory: A joint modeling approach using responses and response times. Psychological Methods 14: 54–75.
  72. Goldhammer, Frank, and Rinke H. Klein Entink. 2011. Speed of reasoning and its relation to reasoning ability. Intelligence 39: 108–19.
  73. Klein Entink, Rinke H., Jean-Paul Fox, and Willem J. van der Linden. 2009a. A Multivariate Multilevel Approach to the Modeling of Accuracy and Speed of Test Takers. Psychometrika 74: 21–48.
  74. de Winter, Joost C. F., Dimitra Dodou, and Yke B. Eisma. 2021. Calmly Digesting the Problem: Eye Movements and Pupil Size while Solving Raven’s Matrices. Unpublished preprint. Researchgate. October 6.
  75. DiTrapani, Jack, Minjeong Jeon, Paul De Boeck, and Ivailo Partchev. 2016. Attempting to differentiate fast and slow intelligence: Using generalized item response trees to examine the role of speed on intelligence tests. Intelligence 56: 82–92.
  76. Partchev, Ivailo, and Paul De Boeck. 2012. Can fast and slow intelligence be differentiated? Intelligence 40: 23–32.
  77. Goldhammer, Frank, Johannes Naumann, Annette Stelter, Krisztina Tóth, Heiko Rölke, and Eckhard Klieme. 2014. The time on task effect in reading and problem solving is moderated by task difficulty and skill: Insights from a computer-based large-scale assessment. Journal of Educational Psychology 106: 608–26.
  78. Perret, Patrick, and Bruno Dauvier. 2018. Children’s Allocation of Study Time during the Solution of Raven’s Progressive Matrices. Journal of Intelligence 6: 9.
  79. Gonthier, Corentin, and Jean-Luc Roulin. 2020. Intraindividual strategy shifts in Raven’s matrices, and their dependence on working memory capacity and need for cognition. Journal of Experimental Psychology: General 149: 564–79.
  80. Tancoš, Martin, Edita Chvojka, Michal Jabůrek, and Šárka Portešová. 2023. Faster ≠ Smarter: Children with Higher Levels of Ability Take Longer to Give Incorrect Answers, Especially When the Task Matches Their Ability. Journal of Intelligence 11: 63.
  81. Dodonova, Yulia A., and Yury S. Dodonov. 2013. Faster on easy items, more accurate on difficult ones: Cognitive ability and performance on a task of varying difficulty. Intelligence 41: 1–10.
  82. Becker, Nicolas, Florian Schmitz, Anja S. Göritz, and Frank M. Spinath. 2016. Sometimes More Is Better, and Sometimes Less Is Better: Task Complexity Moderates the Response Time Accuracy Correlation. Journal of Intelligence 4: 11.
  83. Goldhammer, Frank, Johannes Naumann, and Samuel Greiff. 2015. More is not Always Better: The Relation between Item Response and Item Response Time in Raven’s Matrices. Journal of Intelligence 3: 21–40.
  84. Frischkorn, Gidon T., and Anna-Lena Schubert. 2018. Cognitive Models in Intelligence Research: Advantages and Recommendations for Their Application. Journal of Intelligence 6: 34.
  85. Kang, Inhan, Paul De Boeck, and Ivailo Partchev. 2022. A randomness perspective on intelligence processes. Intelligence 91: 101632.
  86. Lerche, Veronika, Mischa von Krause, Andreas Voss, Gidon T. Frischkorn, Anna-Lena Schubert, and Dirk Hagemann. 2020. Diffusion modeling and intelligence: Drift rates show both domain-general and domain-specific relations with intelligence. Journal of Experimental Psychology: General 149: 2207–49.
  87. van der Maas, Han L. J., Dylan Molenaar, Gunter Maris, Rogier A. Kievit, and Denny Borsboom. 2011. Cognitive psychology meets psychometric theory: On the relation between process models for decision making and latent variable models for individual differences. Psychological Review 118: 339–56.
  88. Johnson, Eric J., John W. Payne, and James R. Bettman. 1993. Adapting to time constraints. In Time Pressure and Stress in Human Judgment and Decision Making. Edited by Ola Svenson and A. John Maule. New York: Springer.
  89. Ben Zur, Hasida, and Shlomo J. Breznitz. 1981. The effect of time pressure on risky choice behavior. Acta Psychologica 47: 89–104.
  90. Wright, Peter. 1974. The harassed decision maker: Time pressures, distractions, and the use of evidence. Journal of Applied Psychology 59: 555–61.
  91. Chuderski, Adam. 2016. Time pressure prevents relational learning. Learning and Individual Differences 49: 361–65.
  92. Caviola, Sara, Emma Carey, Irene C. Mammarella, and Denes Szucs. 2017. Stress, Time Pressure, Strategy Selection and Math Anxiety in Mathematics: A Review of the Literature. Frontiers in Psychology 8: 1488.
  93. Friedman, Naomi P, and Akira Miyake. 2004. The reading span test and its predictive power for reading comprehension ability. Journal of Memory and Language 51: 136–58.
  94. Lépine, Raphaëlle, Pierre Barrouillet, and Valérie Camos. 2005. What makes working memory spans so predictive of high-level cognition? Psychonomic Bulletin & Review 12: 165–70.
  95. St Clair-Thompson, Helen L. 2007. The influence of strategies on relationships between working memory and cognitive skills. Memory 15: 353–65.
  96. Thomassin, Noémylle, Corentin Gonthier, Michel Guerraz, and Jean-Luc Roulin. 2015. The Hard Fall Effect: High working memory capacity leads to a higher, but less robust short-term memory performance. Experimental Psychology 62: 89–97.
  97. Laurence, Paulo Guirro, and Elizeu Coutinho Macedo. 2022. Cognitive strategies in matrix-reasoning tasks: State of the art. Psychonomic Bulletin & Review 30: 147–59.
  98. Bethell-Fox, Charles E., David F. Lohman, and Richard E. Snow. 1984. Adaptive reasoning: Componential and eye movement analysis of geometric analogy performance. Intelligence 8: 205–38.
  99. Snow, Richard E. 1980. Aptitude processes. In Aptitude, Learning, and Instruction: Cognitive Process Analyses of Aptitude. Edited by Richard E. Snow, Pat-Anthony Federico and William E. Montague. Hillsdale: Erlbaum, vol. 1, pp. 27–63.
  100. Attali, Yigal. 2005. Reliability of Speeded Number-Right Multiple-Choice Tests. Applied Psychological Measurement 29: 357–68.
  101. Jin, Kuan-Yu, Chia-Ling Hsu, Ming Ming Chiu, and Po-Hsi Chen. 2023. Modeling Rapid Guessing Behaviors in Computer-Based Testlet Items. Applied Psychological Measurement 47: 19–33.
  102. Schnipke, Deborah L., and David J. Scrams. 1997. Modeling Item Response Times With a Two-State Mixture Model: A New Method of Measuring Speededness. Journal of Educational Measurement 34: 213–32.
  103. Schweizer, Karl, Dorothea Krampen, and Brian F. French. 2021. Does rapid guessing prevent the detection of the effect of a time limit in testing? Methodology: European Journal of Research Methods for the Behavioral and Social Sciences 17: 168–88.
  104. Must, Olev, and Aasa Must. 2013. Changes in test-taking patterns over time. Intelligence 41: 780–90.
  105. Ren, Xuezhu, Tengfei Wang, Michael Altmeyer, and Karl Schweizer. 2014. A learning-based account of fluid intelligence from the perspective of the position effect. Learning and Individual Differences 31: 30–35.
  106. Verguts, Tom, and Paul De Boeck. 2002. The induction of solution rules in Raven’s Progressive Matrices Test. The European Journal of Cognitive Psychology 14: 521–47.
  107. Cella, Matteo, Simon Dymond, Andrew Cooper, and Oliver Turnbull. 2007. Effects of decision-phase time constraints on emotion-based learning in the Iowa Gambling Task. Brain and Cognition 64: 164–69.
  108. DeDonno, Michael A., and Heath A. Demaree. 2008. Perceived time pressure and the Iowa Gambling Task. Judgment and Decision Making 3: 636–40.
  109. Amabile, Teresa M., William DeJong, and Mark R. Lepper. 1976. Effects of externally imposed deadlines on subsequent intrinsic motivation. Journal of Personality and Social Psychology 34: 92–98.
  110. Kuhn, Jörg-Tobias, and Jochen Ranger. 2015. Measuring Speed, Ability, or Motivation: A Comment on Goldhammer. Measurement: Interdisciplinary Research and Perspectives 13: 173–76.
  111. Sussman, Rachel F., and Robert Sekuler. 2022. Feeling rushed? Perceived time pressure impacts executive function and stress. Acta Psychologica 229: 103702.
  112. Eysenck, Michael W., and Manuel G. Calvo. 1992. Anxiety and Performance: The Processing Efficiency Theory. Cognition and Emotion 6: 409–34.
  113. Ashcraft, Mark H., and Elizabeth P. Kirk. 2001. The relationships among working memory, math anxiety, and performance. Journal of Experimental Psychology: General 130: 224–37.
  114. Moran, Tim P. 2016. Anxiety and working memory capacity: A meta-analysis and narrative review. Psychological Bulletin 142: 831–64.
  115. Kellogg, Jeffry S., Derek R. Hopko, and Mark H. Ashcraft. 1999. The Effects of Time Pressure on Arithmetic Performance. Journal of Anxiety Disorders 13: 591–600.
  116. Plass, James A., and Kennedy T. Hill. 1986. Children’s achievement strategies and test performance: The role of time pressure, evaluation anxiety, and sex. Developmental Psychology 22: 31–36.
  117. Traub, Ross E., and Ronald K. Hambleton. 1972. The Effect of Scoring Instructions and Degree of Speededness on the Validity and Reliability of Multiple-Choice Tests. Educational and Psychological Measurement 32: 737–58.
  118. Gimmig, David, Pascal Huguet, Jean-Paul Caverni, and François Cury. 2006. Choking under pressure and working memory capacity: When performance pressure reduces fluid intelligence. Psychonomic Bulletin & Review 13: 1005–10.
  119. Beilock, Sian L., and Thomas H. Carr. 2005. When High-Powered People Fail: Working memory and “choking under pressure” in math. Psychological Science 16: 101–5.
  120. Beilock, Sian L., and Marci S. DeCaro. 2007. From poor performance to success under stress: Working memory, strategy selection, and mathematical problem solving under pressure. Journal of Experimental Psychology: Learning, Memory, and Cognition 33: 983–98.
  121. Colom, Roberto, Jesús Privado, Luis F. García, Eduardo Estrada, Lara Cuevas, and Pei-Chun Shih. 2015. Fluid intelligence and working memory capacity: Is the time for working on intelligence problems relevant for explaining their large relationship? Personality and Individual Differences 79: 75–80.
  122. Chuderski, Adam. 2013. When are fluid intelligence and working memory isomorphic and when are they not? Intelligence 41: 244–62.
  123. Chuderski, Adam. 2015. The broad factor of working memory is virtually isomorphic to fluid intelligence tested under time pressure. Personality and Individual Differences 85: 98–104.
  124. Wise, Steven L., and Xiaojing Kong. 2005. Response Time Effort: A New Measure of Examinee Motivation in Computer-Based Tests. Applied Measurement in Education 18: 163–83.