Randall, A.; Ogland-Hand, J. Chance and Uncertainty in Integrated Assessment Models. Encyclopedia, 2 June 2023.
Chance and Uncertainty in Integrated Assessment Models

The spirited and continuing debate on the scientific status of integrated assessment models (IAMs) of global climate change raises many methodological issues. Here researchers address the nature of the uncertainties encountered and their treatment in the modeling literature.

methodology; deep uncertainty; computer simulation; integrated assessment modeling

1. Introduction

The debate on the scientific status of integrated assessment models (IAMs) of global climate change is spirited. For example, the Review of Environmental Economics and Policy 2017 symposium on climate-change IAMs [1][2][3] may leave the reader at a loss as to what should be believed. Metcalf and Stock [1] argue that complicated IAMs, while in need of continuing improvement, are essential to informed policy making concerning climate change; Pindyck [2] sees CC-IAMs as crucially flawed, fundamentally misleading, and in essence mere rhetorical devices; and Weyant [3] sees value in CC-IAMs especially for “if …, then …” analysis that explores the implications of alternative model structures, parameterizations, and driver settings (see also [4]). Among several kinds of challenges in modeling complex systems, we focus here on the nature of uncertainty and its treatment in IAMs.

2. The Distinction between Epistemic and Aleatory Uncertainty

Epistemic uncertainty. In a deterministic, non-chaotic system, there is by definition no role for chance, but there is the possibility of human ignorance. The perception of chance may arise from epistemic uncertainty: the incompleteness and imperfection of knowledge about how the system works. With no good model of the system, researchers may perceive arbitrariness or randomness in the data despite the determinism of the system that produced it. There are two kinds of epistemic uncertainty: structural and parametric. Structural uncertainty arises from imperfect mental models of the mechanisms involved. In IAMs, structural uncertainty may pertain to the complex interrelationships in the system under study (a concern amplified in complex systems modeling), and to matters familiar in other kinds of empirical/numerical work (e.g., functional forms of key relationships). Parametric uncertainty in deterministic systems arises because researchers lack the empirical knowledge to fully and accurately parameterize the system they are modeling.
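This contrast can be sketched numerically. In the toy example below (all numbers and functional forms are illustrative, not taken from any IAM), the data-generating system is perfectly deterministic; an analyst holding the wrong structural model perceives large "residual randomness", while the correct structure, once its parameter is estimated, removes the apparent chance entirely.

```python
import random

# Hypothetical illustration: a deterministic system y = a * x**2 with no noise.
# An analyst with the wrong structural model (y = b * x) sees "random" residuals
# even though the system involves no chance at all.
random.seed(0)
a_true = 1.5                        # unknown to the analyst: parametric uncertainty
xs = [random.uniform(0.0, 10.0) for _ in range(200)]
ys = [a_true * x**2 for x in xs]    # fully deterministic outputs

# Wrong structure: least-squares slope for y = b * x (structural uncertainty)
b_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
resid_wrong = [y - b_hat * x for x, y in zip(xs, ys)]

# Correct structure: least-squares fit of y = a * x**2 recovers a exactly
a_hat = sum(x**2 * y for x, y in zip(xs, ys)) / sum(x**4 for x in xs)
resid_right = [y - a_hat * x**2 for x, y in zip(xs, ys)]

def spread(resids):
    """Root-mean-square residual: the 'apparent randomness' left over."""
    return (sum(e * e for e in resids) / len(resids)) ** 0.5

print(spread(resid_wrong))   # large: determinism misread as randomness
print(spread(resid_right))   # ~0: the right model dissolves the "chance"
```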
Aleatory uncertainty. In a well-understood but stochastic system, there is, by definition, no epistemic uncertainty. Uncertainty is entirely aleatory: we face chance because we are not prescient. Despite knowing the relevant probabilities, we cannot know the next draw.
A system may exhibit both kinds of uncertainty. If the system is buffeted by chance and not well understood, statistical methods typically have difficulty isolating the respective contributions of epistemic and aleatory uncertainty. If the system is non-stationary, the drivers of regime shifts may have systematic properties but are likely also to be influenced by chance. There is no a priori reason to believe that the chance researchers encounter is entirely aleatory. Applying convenient stochastic specifications in this situation conflates more complex kinds of chance with ordinary risk. The crucial assumption, seldom given the attention it deserves, is that the system is fully understood or (equivalently) that the game is fully specified. Frequentist statistical logic, which interprets data on the occurrence or non-occurrence of specific events as outcomes of a stochastic process, is entirely about aleatory uncertainty. Probability is, to a frequentist, the frequency of a particular outcome if the experiment is repeated many times. Because so many statistical applications are aimed at learning about parameters and thereby reducing epistemic uncertainty, it is common in frequentist practice that some (reducible) epistemic uncertainty is analyzed, purists would say inappropriately, as aleatory [5].
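The frequentist reading just described can be illustrated with a minimal simulation, where a fair coin stands in for any fully specified stochastic system: the model is known exactly, all uncertainty is aleatory, and probability is recovered as long-run frequency.

```python
import random

# Sketch of the frequentist reading of probability: in a fully specified
# stochastic system (here, a fair coin), probability is the long-run
# frequency of an outcome over repeated trials. The model is known
# exactly, yet the next draw is not: the uncertainty is all aleatory.
random.seed(42)

def frequency_of_heads(n_trials: int) -> float:
    heads = sum(1 for _ in range(n_trials) if random.random() < 0.5)
    return heads / n_trials

for n in (100, 10_000, 1_000_000):
    print(n, frequency_of_heads(n))   # converges toward 0.5 as n grows
```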
Statisticians have long understood this dilemma. Carnap distinguished probability1 (credence, i.e., degree of belief) from probability2 (chance, which is mind-independent, objective, and defined in terms of frequency) [6]. Bayesian reasoning, being addressed to statements about the degree of belief in propositions, allows adjustment of probabilities in response to improved theories of how things work, better interpretations of empirical observations (e.g., better statistical models), and more observations. Decision theorists use probability to address imperfect knowledge as well as the indeterminism of the systems they study. Not surprisingly, many decision theorists are attracted to Bayesian approaches, in which less prominence is accorded to the distinction between aleatory and epistemic uncertainty. For each proposition, there is a prior belief, perhaps well informed by theory and/or previous observation but perhaps no more than a hunch. The prior belief is just the beginning: probabilities are adjusted repeatedly to reflect new evidence.
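A minimal sketch of this updating process, using the standard Beta-Binomial conjugate pair with purely illustrative numbers, shows how a prior degree of belief about an unknown success probability is revised as evidence accumulates and eventually dominated by the data:

```python
# A minimal sketch of Bayesian updating (Beta-Binomial conjugate pair;
# all numbers are illustrative). The prior encodes a degree of belief
# about an unknown success probability p; each batch of evidence
# shifts that belief.
alpha, beta = 2.0, 2.0             # prior: weakly centered on p = 0.5

def update(alpha, beta, successes, failures):
    """Posterior Beta parameters after observing new evidence."""
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

print(posterior_mean(alpha, beta))           # prior mean: 0.5
alpha, beta = update(alpha, beta, 7, 3)      # first batch of observations
print(posterior_mean(alpha, beta))           # belief shifts: 9/14 ~ 0.64
alpha, beta = update(alpha, beta, 70, 30)    # more evidence swamps the prior
print(posterior_mean(alpha, beta))           # 79/114 ~ 0.69
```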

3. Uncertainty Involves More Than Stochasticity

Uncertain circumstances include:
  • Risk—in classical risk, the decision maker (DM) faces stochastic harm. The relevant probabilities are known and stationary, but the outcome of the next draw is not. The uncertainty is all aleatory.
  • Ambiguity—the relevant probabilities are not known. Ambiguity piles epistemic uncertainty on top of ordinary aleatory uncertainty.
  • Deep uncertainty, gross ignorance, unawareness, etc.—the DM may not be able to enumerate possible outcomes, let alone assign probabilities. Inability to enumerate possible outcomes suggests a rather serious case of epistemic uncertainty, but aleatory uncertainty is likely to exacerbate the confusion. 
  • Surprises—in technical terms, the eventual outcome was not a member of the ex ante outcome set. The uncertainty that generates the possibility of a surprise is entirely epistemic: researchers failed to understand that the eventual outcome was possible. However, there likely are aleatory elements to its actual occurrence in a particular instance.
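The difference between the first two circumstances above can be made concrete with a toy decision problem (payoffs and probabilities are illustrative): under risk, a single known distribution yields one expected value; under ambiguity, only a set of candidate distributions is known, and a cautious criterion such as maxmin expected value scores an act by its worst case.

```python
# Toy contrast between risk and ambiguity (illustrative numbers only).
# Under risk the DM knows the distribution; under ambiguity the DM knows
# only a set of candidate distributions, and a cautious rule such as
# maxmin expected value evaluates the act against its worst case.
outcomes = [100.0, -50.0]          # payoff if the event occurs / does not

# Risk: a single known probability of the good outcome
p_known = 0.6
ev_risk = p_known * outcomes[0] + (1 - p_known) * outcomes[1]
print(ev_risk)     # 40.0

# Ambiguity: the probability is only known to lie in a set
candidate_ps = [0.4, 0.5, 0.6, 0.7]
evs = [p * outcomes[0] + (1 - p) * outcomes[1] for p in candidate_ps]
print(min(evs))    # maxmin criterion: worst-case expected value, 10.0
```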
Researchers may expect to encounter the above sources of epistemic and aleatory uncertainty, and two additional kinds of uncertainty: regime shifts and policy uncertainty. Regime shifts are imperfectly anticipated discrete changes in the systems under study. The uncertainty likely includes epistemic and aleatory components. The epistemic component includes failure to comprehend the properties of the particular complex system, but it is likely also that aleatory uncertainty adds noise to the signals in the data that, properly interpreted, might warn of impending regime shifts. A policy is a suite of driver settings intended to achieve desired outcomes, and decentralized agents experience policy uncertainty as epistemic (the “policy generator” works in ways not fully understood) but perhaps also aleatory if there are random influences on driver settings. Incomplete transparency muddies the perception of uncertainty and its attribution to epistemic and aleatory causes.
All of the above kinds of uncertainty may exist and affect the performance of the real-world system that researchers are modeling. There is recognition in the IAM literature that probabilities fail to represent uncertainty when ignorance is deep enough [7][8]. Some modelers have suggested treating epistemic uncertainties as intervals and propagating epistemic and aleatory uncertainties through the model to the system response quantities of interest [9].
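The interval treatment mentioned above can be sketched as a nested ("double-loop") Monte Carlo: the outer loop samples the epistemic interval, to which no probability distribution of its own need be attached, and the inner loop propagates the aleatory distribution through the model. The response function, interval bounds, and sample sizes below are illustrative assumptions, not drawn from [9].

```python
import random

# Sketch of propagating interval-valued epistemic uncertainty together
# with aleatory uncertainty through a toy response function. The
# parameter interval, shock distribution, and sample sizes are
# hypothetical stand-ins for an IAM's inputs.
random.seed(1)

def response(k, shock):
    """Toy system response quantity of interest."""
    return k * (1.0 + shock)

K_LOW, K_HIGH = 0.8, 1.2           # epistemic interval on parameter k
N_OUTER, N_INNER = 50, 2000        # epistemic draws x aleatory draws

bounds = []
for _ in range(N_OUTER):                       # outer loop: sweep the interval
    k = random.uniform(K_LOW, K_HIGH)
    draws = [response(k, random.gauss(0.0, 0.1)) for _ in range(N_INNER)]
    bounds.append(sum(draws) / N_INNER)        # aleatory mean at this k

# The result is not a single probability statement but a range of
# plausible response means induced by the epistemic interval.
print(min(bounds), max(bounds))    # roughly spans [0.8, 1.2]
```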


  1. Metcalf, G.E.; Stock, J.H. Integrated Assessment Models and the Social Cost of Carbon: A Review and Assessment of U.S. Experience. Rev. Environ. Econ. Policy 2017, 11, 80–99.
  2. Pindyck, R. The use and misuse of models for climate policy. Rev. Environ. Econ. Policy 2017, 11, 100–114.
  3. Weyant, J. Contributions of integrated assessment models. Rev. Environ. Econ. Policy 2017, 11, 115–137.
  4. Nordhaus, W. Estimates of the social cost of carbon: Concepts and results from the DICE-2013R model and alternative approaches. J. Assoc. Environ. Resour. Econ. 2014, 1, 273–312.
  5. O’Hagan, T. Dicing with the Unknown. Significance 2004, 1, 132–133. Available online: (accessed on 19 May 2023).
  6. Carnap, R. Logical Foundations of Probability; University of Chicago Press: Chicago, IL, USA; London, UK, 1950.
  7. Halpern, J. Reasoning about Uncertainty; MIT Press: Cambridge, MA, USA, 2003.
  8. Norton, J. Ignorance and indifference. Philos. Sci. 2008, 75, 45–68.
  9. Roy, C.; Oberkampf, W. A Complete Framework for Verification, Validation, and Uncertainty Quantification in Scientific Computing; AIAA Paper 2010-124; American Institute of Aeronautics and Astronautics, 2010.