A frequently asked question when using climate model outputs for decision support is how to select and combine them. The full "ensemble of opportunity" available from climate model archives is often beyond the scope of most applications, raising the question of how many models to use and which ones. Uncertainty in climate scenarios comes from (i) the drivers of anthropogenic change, including greenhouse gases, aerosols, and land use change; (ii) the sensitivity of the climate system to these changes; and (iii) natural unforced variability. There are several ways to account for uncertainty in climate scenarios depending on the phenomenon, time horizon, and scale of interest. Considering which climate aspects are most relevant to a given decision-making process can lead to a more tailored and relevant discussion of these uncertainties.
Uncertainty arising from anthropogenic forcing is less important than other sources before mid-century, but projections under different forcing scenarios diverge substantially by the late 21st century.
Uncertainty arising from climate sensitivity, and from how anthropogenic change manifests at regional scales, is model dependent and a significant source of spread among projections.
Uncertainty arising from internal variability can be examined by comparing ensemble members run with a common model and experiment (i.e., model runs that differ only in their initial conditions). This uncertainty can be particularly important for near-term projections and for variables with a low signal-to-noise ratio (e.g., precipitation).
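The idea of separating a forced signal from internal variability using initial-condition ensemble members can be sketched as follows. This is an illustrative example with synthetic data, not output from any actual model; the array shapes, trend, and noise level are assumptions chosen for demonstration.

```python
import numpy as np

# Hypothetical example: annual precipitation anomalies (mm) from five
# initial-condition ensemble members of a single model and experiment.
# Rows = ensemble members, columns = years. The data are synthetic.
rng = np.random.default_rng(42)
forced_trend = np.linspace(0.0, 20.0, 30)                      # shared forced signal
members = forced_trend + rng.normal(0.0, 15.0, size=(5, 30))   # plus internal noise

# The ensemble mean estimates the forced response; the spread across
# members at each year estimates internal (unforced) variability.
forced_estimate = members.mean(axis=0)
internal_spread = members.std(axis=0, ddof=1)

# A simple signal-to-noise ratio: forced change over the period divided
# by the typical internal spread. Low values mean internal variability
# dominates, as is common for near-term precipitation projections.
signal = forced_estimate[-1] - forced_estimate[0]
noise = internal_spread.mean()
print(f"signal-to-noise ratio: {signal / noise:.2f}")
```

With more ensemble members, the estimate of the forced response sharpens while the spread remains an estimate of irreducible internal variability.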
Model evaluation has been conducted to assess how well CMIP5 models simulate the historical climate of the Pacific Northwest (Rupp et al., 2013). This evaluation asks how well models simulate (i) monthly temperature and precipitation over the 20th century, (ii) spatial patterns over the northeast Pacific and western North America, (iii) teleconnection of the El Niño-Southern Oscillation to temperature and precipitation over the Pacific Northwest, (iv) interannual to decadal variability of precipitation and temperature over the Pacific Northwest, and (v) seasonal temperature trends. This analysis yields the set of model rankings illustrated below and described in further detail in Rupp et al. (2013).
Figure 1: Models ranked according to normalized error score from EOF analysis of 18 performance metrics. The ranking is based on the first 6 principal components (filled blue circles). Open symbols show the models' error scores using the first 4, the first 5, and all principal components (PCs). The best-scoring model has a normalized error score of 0.
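A loose sketch of this style of EOF-based ranking is shown below. This is not the exact procedure of Rupp et al. (2013); the synthetic error metrics, the distance-based score, and the normalization are assumptions made for illustration only.

```python
import numpy as np

# Hypothetical sketch: each model has scores on several performance
# metrics; a principal component (EOF) analysis reduces these to a few
# leading components, and models are scored by distance in PC space
# from a hypothetical "perfect" model (zero error on every metric).
rng = np.random.default_rng(0)
n_models, n_metrics = 8, 18
errors = rng.random((n_models, n_metrics))   # stand-in error metrics, 0 = perfect

# Center the metrics across models and take the leading EOFs via SVD.
anomalies = errors - errors.mean(axis=0)
_, _, vt = np.linalg.svd(anomalies, full_matrices=False)
n_pcs = 6
pcs = anomalies @ vt[:n_pcs].T               # project models onto leading PCs

# Project the perfect model into the same PC space, score each model by
# its distance from it, and normalize so the best model scores 0.
perfect = (np.zeros(n_metrics) - errors.mean(axis=0)) @ vt[:n_pcs].T
distances = np.linalg.norm(pcs - perfect, axis=1)
scores = (distances - distances.min()) / (distances.max() - distances.min())
ranking = np.argsort(scores)                 # model indices, best first
print(ranking)
```

Varying the number of retained PCs (4, 5, 6, or all), as in Figure 1, gives a sense of how robust the ranking is to that choice.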
While evaluating model credibility is useful, studies have shown that climate projections from a randomly chosen set of models yield results similar to those from the best-performing models. It may therefore not be necessary to cull or weight models before integrating them into a decision-making process.
One should also recognize that even using all available model outputs (the ensemble of opportunity) will not encompass the full range of potential futures.
At the same time, relying on only a few projections is not recommended, as it undersamples the range of plausible outcomes. Instead, we advocate considering at least 10 models in analyses.
Guidelines from Mote et al. (2011)
Mote et al. (2011) suggest the following guidelines for using climate model outputs in impacts and climate diagnostics research:
Understand to which aspects of climate your problem or decision is most sensitive (e.g., which climate variables, which statistical measures of these variables, and at what space and time scales).
Determine which climate projection information is most appropriate for the problem or decision (e.g., variables, scales in space and time).
Understand the limitations of the method you select.
Obtain climate projections based on as many simulations, representing as many models and emissions scenarios, as possible. We recommend using at least 10 models.
It may be worth the effort to evaluate the relevant variables against observations, if only to be cognizant of model biases, but recognize that most studies have found little or no benefit from culling or weighting model outputs.
Understand that regional climate projection uncertainty stems from uncertainties about (1) the drivers of change (e.g., greenhouse gases, aerosols), (2) the response of the climate system to those drivers, and (3) the future trajectory of natural variability.
Use the ensemble to characterize consensus not only about the projected mean but also about the range and other aspects of variability.
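The last guideline, characterizing ensemble consensus beyond the mean, can be sketched as below. The projected changes are invented numbers for illustration; the summary statistics (percentile range, sign agreement) are common choices, not a prescribed method.

```python
import numpy as np

# Hypothetical sketch: projected end-of-century precipitation change (%)
# from a 12-model ensemble (values invented for illustration). Summarize
# consensus with the ensemble mean, the spread (10th-90th percentile
# range), and model agreement on the sign of change.
changes = np.array([+4.1, -1.3, +6.8, +2.0, -0.5, +3.3,
                    +7.9, +1.1, -2.4, +5.0, +0.7, +3.8])

mean_change = changes.mean()
p10, p90 = np.percentile(changes, [10, 90])
agree_sign = np.mean(np.sign(changes) == np.sign(mean_change))

print(f"ensemble mean: {mean_change:+.1f}%")
print(f"10th-90th percentile range: {p10:+.1f}% to {p90:+.1f}%")
print(f"fraction of models agreeing on sign: {agree_sign:.0%}")
```

Reporting the range and agreement alongside the mean makes clear whether models concur on the direction of change even when they disagree on its magnitude.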