The spirited and continuing debate on the scientific status of integrated assessment models (IAMs) of global climate change raises many methodological issues. Here we address the nature of the uncertainties encountered and their treatment in the modeling literature.
The debate on the scientific status of integrated assessment models (IAMs) of global climate change is spirited. For example, the Review of Environmental Economics and Policy 2017 symposium on climate-change IAMs may leave the reader at a loss as to what should be believed. Metcalf and Stock argue that complicated IAMs, while in need of continuing improvement, are essential to informed policy making concerning climate change; Pindyck sees climate-change IAMs (CC-IAMs) as crucially flawed, fundamentally misleading, and in essence mere rhetorical devices; and Weyant sees value in CC-IAMs especially for “if …, then …” analysis to explore the implications of alternative model structures, parameterizations, and driver settings (see also ). Among several kinds of challenges in modeling complex systems, we focus here on the nature of uncertainty and its treatment in IAMs.
2. The Distinction between Epistemic and Aleatory Uncertainty
Epistemic uncertainty. In a deterministic, non-chaotic system, there is by definition no role for chance, but there is the possibility of human ignorance. The perception of chance may arise from epistemic uncertainty: the incompleteness and imperfection of knowledge about how the system works. With no good model of the system, researchers may perceive arbitrariness or randomness in the data despite the determinism of the system that produced it. There are two kinds of epistemic uncertainty: structural and parametric. Structural uncertainty arises from imperfect mental models of the mechanisms involved. In IAMs, structural uncertainty may pertain to the complex interrelationships in the system under study (a concern amplified in complex systems modeling), and to matters familiar in other kinds of empirical/numerical work (e.g., functional forms of key relationships). Parametric uncertainty in deterministic systems arises when researchers lack the empirical knowledge to fully and accurately parameterize the system they are modeling.
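The epistemic flavor of perceived chance can be illustrated with a toy deterministic recurrence, here a linear congruential generator chosen purely for illustration: to an observer who lacks the model, the output stream looks random; to one who knows the constants, every draw is exactly predictable.

```python
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    """A fully deterministic recurrence: x_{k+1} = (a*x_k + c) mod m.
    Without the model (a, c, m), the stream looks random; with it,
    every 'draw' is predictable, so the perceived chance is purely
    epistemic, not aleatory."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)  # scale to [0, 1)
    return out

# Two runs from the same seed are identical: no chance is involved.
print(lcg(42, 5))
```

The same point holds for any deterministic system: the "randomness" resides in the observer's ignorance of the generating mechanism, not in the mechanism itself.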
Aleatory uncertainty. In a well-understood but stochastic system, there is, by definition, no epistemic uncertainty. Uncertainty is entirely aleatory: we face chance because we are not prescient. Despite knowing the relevant probabilities, we cannot know the next draw.
A system may exhibit both kinds of uncertainty. If the system is buffeted by chance and not well understood, statistical methods typically have difficulty isolating the contributions of epistemic and aleatory uncertainty to this unsatisfactory state of affairs. If the system is non-stationary, the drivers of regime shifts may have systematic properties but are likely also to be influenced by chance. There is no a priori reason to believe that the chance researchers encounter is entirely aleatory. Applying convenient stochastic specifications in this situation conflates more complex kinds of chance with ordinary risk. The crucial assumption, seldom given the attention it deserves, is that the system is fully understood or (equivalently) that the game is fully specified. Frequentist statistical logic, being addressed to the interpretation of data about the occurrence or not of specific events as the outcome of a stochastic process, is entirely about aleatory uncertainty. Probability is, to a frequentist, the frequency of a particular outcome when the experiment is repeated many times. Because so many statistical applications are aimed at learning about parameters and so reducing epistemic uncertainty, it is common in frequentist practice that some (reducible) epistemic uncertainty is analyzed, purists would say inappropriately, as aleatory.
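The frequentist notion of probability as long-run frequency can be sketched in a few lines; the simulation below (with an illustrative success probability of 0.3) repeats a chance experiment and reports the relative frequency of success, which converges toward the underlying aleatory probability even though no single draw is predictable.

```python
import random

def long_run_frequency(p, n_trials, seed=0):
    """Frequentist probability: the relative frequency of an outcome
    as the same chance experiment is repeated many times."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p for _ in range(n_trials))
    return hits / n_trials

# The frequency stabilizes near the underlying (aleatory) probability
# as the number of repetitions grows; the next draw remains unknowable.
for n in (10, 1_000, 100_000):
    print(n, long_run_frequency(0.3, n))
```

Note that this interpretation presupposes the experiment is fully specified: the code takes p as known, which is exactly the assumption that breaks down when epistemic uncertainty is present.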
Statisticians have long understood this dilemma. Carnap distinguished probability₁ (credence, i.e., degree of belief) from probability₂ (chance, which is mind-independent, objective, and defined in terms of frequency)
. Bayesian reasoning, being addressed to statements about the degree of belief in propositions, allows adjustment of probabilities in response to improved theories of how things work, better interpretations of empirical observations (e.g., better statistical models), and more observations. Decision theorists use probability to address imperfect knowledge as well as the indeterminism of the systems they study. Not surprisingly, many decision theorists are attracted to Bayesian approaches, where less prominence is accorded to the distinction between aleatory and epistemic uncertainty. For each proposition there is a prior belief, perhaps well informed by theory and/or previous observation but perhaps no more than a hunch. The prior belief is just the beginning: probabilities are adjusted repeatedly to reflect new evidence.
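A minimal sketch of this updating cycle, assuming a Bernoulli proposition with a conjugate Beta prior (the batches of evidence are invented for illustration, not drawn from any dataset):

```python
from fractions import Fraction

def update_beta(alpha, beta, successes, failures):
    """Conjugate Beta-Bernoulli update: the posterior is again a Beta,
    with the evidence simply added to the prior pseudo-counts."""
    return alpha + successes, beta + failures

# Start from a vague prior Beta(1, 1), i.e. uniform: "no more than a hunch".
alpha, beta = 1, 1

# Each batch of new evidence shifts the degree of belief.
for successes, failures in [(3, 1), (2, 2), (7, 1)]:
    alpha, beta = update_beta(alpha, beta, successes, failures)

# Posterior mean alpha / (alpha + beta): the current degree of belief
# that the next trial succeeds, given all evidence so far.
posterior_mean = Fraction(alpha, alpha + beta)
print(alpha, beta, round(float(posterior_mean), 3))  # → 13 5 0.722
```

As the paragraph above notes, the prior here is only a starting point: each pass through the loop revises the belief, and with enough evidence the posterior is dominated by the data rather than the hunch.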
3. Uncertainty Involves More Than Stochasticity
Uncertain circumstances include:
Risk—in classical risk, the decision maker (DM) faces stochastic harm. The relevant probabilities are known and stationary, but the outcome of the next draw is not. The uncertainty is all aleatory.
Ambiguity—the relevant probabilities are not known. Ambiguity piles epistemic uncertainty on top of ordinary aleatory uncertainty.
Deep uncertainty, gross ignorance, unawareness, etc.—the DM may not be able to enumerate possible outcomes, let alone assign probabilities. Inability to enumerate possible outcomes suggests a rather serious case of epistemic uncertainty, but aleatory uncertainty is likely to exacerbate the confusion.
Surprises—in technical terms, the eventual outcome was not a member of the ex ante outcome set. The uncertainty that generates the possibility of a surprise is entirely epistemic: researchers failed to understand that the eventual outcome was possible. However, there likely are aleatory elements to its actual occurrence in a particular instance.
Researchers may expect to encounter the above sources of epistemic and aleatory uncertainty, and two additional kinds of uncertainty: regime shifts and policy uncertainty. Regime shifts are imperfectly anticipated discrete changes in the systems under study. The uncertainty likely includes epistemic and aleatory components: the epistemic component includes failure to comprehend the properties of the particular complex system, but it is also likely that aleatory uncertainty adds noise to the signals in the data that, properly interpreted, might warn of impending regime shifts. A policy is a suite of driver settings intended to achieve desired outcomes. Decentralized agents experience policy uncertainty as epistemic—the “policy generator” works in ways not fully understood—but perhaps also as aleatory if there are random influences on driver settings. Incomplete transparency muddies the perception of uncertainty and its attribution to epistemic and aleatory causes.
All of the above kinds of uncertainty may exist and affect the performance of the real-world system that researchers are modeling. There is recognition in the IAM literature that probabilities fail to represent uncertainty when ignorance is deep enough 
. Some modelers have suggested treating epistemic uncertainties as intervals and propagating epistemic and aleatory uncertainties through the model to the system response quantities of interest.
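One way to sketch that proposal is a “double loop”: the outer loop sweeps an epistemically uncertain parameter over its interval without assigning probabilities to values within it, while the inner loop samples the aleatory shock by Monte Carlo. The response function, interval, and shock distribution below are hypothetical placeholders, not drawn from any published IAM.

```python
import random

def response(damage_coeff, shock):
    # Hypothetical system response: a deterministic structure perturbed
    # by an aleatory shock; damage_coeff is epistemically uncertain.
    return damage_coeff * (1.0 + shock)

def double_loop(epistemic_interval, n_aleatory=10_000, seed=0):
    """Outer loop: endpoints of the epistemic interval (no probabilities
    assigned). Inner loop: Monte Carlo over the aleatory shock, whose
    distribution is taken as known."""
    rng = random.Random(seed)
    shocks = [rng.gauss(0.0, 0.1) for _ in range(n_aleatory)]
    means = []
    for coeff in epistemic_interval:  # e.g. low and high endpoints
        draws = [response(coeff, s) for s in shocks]
        means.append(sum(draws) / len(draws))
    return min(means), max(means)  # interval of expected responses

lo, hi = double_loop((0.8, 1.2))
print(lo, hi)
```

The output is an interval of response distributions rather than a single distribution: the aleatory component is summarized probabilistically inside each sweep, while the epistemic component is carried through as a bound, reflecting the view that deep ignorance should not be collapsed into a probability.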