“JC comment: To me, the emails argue that there is insufficient traceability of the CMIP model simulations for the IPCC authors to conduct a confident attribution assessment and at least some of the CMIP3 20th century simulations are not suitable for attribution studies. The Uncertainty Monster rests its case (thank you, hacker/whistleblower).”

12/15/11, “Hegerl et al. react to the Uncertainty Monster paper,” Judith Curry, Climate Etc., JudithCurry.com

Hegerl et al. comment

Curry and Webster (2011) discuss the important topic of uncertainty in climate research. While we agree that it is very important that uncertainty is estimated and communicated appropriately, their discussion of the treatment of uncertainty in the IPCC assessment reports regarding attribution is inaccurate in a number of important respects.

IPCC has placed high priority on communicating uncertainty (Moss and Schneider, 2000; Mastrandrea et al., 2010, 2011). Since detection of climate change and attribution of causes deals with distinguishing ‘signals’ or ‘fingerprints’ of climate change from climate variability, an approach requiring substantial use of statistics (see Hegerl et al., 2007), this area of research has always placed high priority on estimating uncertainties appropriately. Hence the chapter on attributing causes to climate change of IPCC AR4 (Hegerl et al., 2007) discusses the uncertainty in its findings in detail, including in an overview table where remaining uncertainties are explicitly listed for each finding. In this brief comment we will limit our focus to the four key errors and misunderstandings in Curry and Webster (2011) regarding the treatment of uncertainty in the detection and attribution chapter of IPCC AR4:

1) The authors claim that ‘The 20th century aerosol forcing used in most of the AR4 model simulations relies on inverse calculations of optical properties to match climate model simulations with observations’ and thus claim ‘apparent circular reasoning’. This is incorrect. The inverse estimates of aerosol forcing are derived from observationally based analyses of temperature and are compared in Chapter 9 with “forward” estimates calculated directly from understanding of the emissions in order to determine whether the two are consistent. But it is critical to understand that such inverse estimates are an output of attribution analyses, not an input, and thus the claim of ‘circular reasoning’ is wrong. The aerosol forcing used in 20C3M (see http://www-pcmdi.llnl.gov/projects/cmip/ann_20c3m.php) climate model simulations was based on forward calculations using emission data (Boucher and Pham, 2002; references in Randall et al., 2007). Further, detection and attribution methods determine whether model-simulated temporal and spatial patterns of change (referred to as ‘fingerprints’) that are expected in response to changes in external forcing are present in observations. For example, the aerosol fingerprint shows a spatial and temporal pattern of near-surface temperature changes that varies between hemispheres and over time (see Hegerl et al., 2007). The solar fingerprint shows a vertical pattern of free atmosphere temperature changes that has warming throughout the atmosphere, unlike the observed pattern of warming in the troposphere and cooling in the stratosphere, and also has a distinct temporal pattern, particularly on longer timescales. These patterns make the response to solar and aerosol forcing distinguishable (with uncertainties) from that due to greenhouse gas forcing. The amplitude of those fingerprint patterns is estimated from observations.
Therefore, attribution of the dominant role of greenhouse gases in the warming of the past half-century is not sensitive to the uncertainties in the magnitude of aerosol forcing, or of other forcings, such as solar forcing. If the observed response were (at a given significance level) consistent with a smaller aerosol signal, balanced by a smaller greenhouse gas signal than that used in the models, then the results from fingerprint studies would include these possibilities within their statistical uncertainty ranges. Thus, attribution studies sample the range of possible forcings and responses much more completely than climate models do (Kiehl, 2007). Also, the IPCC AR4 assessment carefully explores other possible explanations, such as solar forcing alone, and finds that ‘it is very likely that greenhouse gases caused more global warming over the last 50 years than changes in solar irradiance’, based on studies exploring a range of solar forcing estimates and using a range of data (Hegerl et al., 2007). Such studies also attribute the warming in the first half of the 20th century to a combination of external natural and anthropogenic forcing and internal climate variability (table 9.4). Thus, Curry and Webster misrepresent the role of forcing magnitude uncertainties in attribution, and do not appreciate the level of rigour with which physically plausible alternative explanations of the recent climate change are explored.
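[Ed. note: The fingerprint-scaling idea described above amounts to a regression of observations onto model-derived response patterns. The sketch below is not the IPCC's or any published study's code; the fingerprints, noise level, and true scalings are all made up for illustration, and real studies use total or generalized least squares with a noise covariance estimated from control runs rather than plain OLS.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # flattened space-time "fingerprint" vector length (hypothetical)

# Hypothetical model-derived fingerprints (not real CMIP output):
ghg = np.linspace(0.0, 1.0, n)                 # smooth warming pattern
aer = -np.sin(np.linspace(0, np.pi, n)) * 0.4  # cooling pattern, different shape

# Synthetic "observations": true scalings of 1.0 on each, plus internal variability
obs = 1.0 * ghg + 1.0 * aer + rng.normal(0, 0.1, n)

# Estimate the scaling factors beta by least squares
X = np.column_stack([ghg, aer])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)

# Residual-based standard errors give a rough uncertainty range:
# detection = beta significantly above 0; attribution = beta consistent with 1
resid = obs - X @ beta
sigma2 = resid @ resid / (n - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
print("scalings:", beta, "std errors:", se)
```

Because the two fingerprints have different shapes, the regression can separate their amplitudes, which is why a smaller aerosol signal balanced by a smaller greenhouse gas signal shows up inside the statistical uncertainty range rather than being assumed away.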

2) ‘.. no traceable account is given in the AR4 of how the likelihood assessment in the attribution statement was reached’: Expert open reviews are designed to ensure that the steps taken during the AR4 were clear to attribution experts. An explanation of how the assessment was obtained is given in the introduction to the chapter, and includes a description of how the overall expert assessment is based on technical results and an assessment of their robustness, downgraded to account for remaining uncertainties (section 9.1.2, second-to-last paragraph). The detailed assessment of the causes of a variety of observed climate changes, including the results from published studies, the remaining uncertainties, and the overall assessment is given in table 9.4, which extends over more than 3 printed pages. However, improving the communication of such material to the broader audience of scientists who are not directly involved in attribution studies is also an important goal, and this exchange shows that it can be improved.

3) ‘The high likelihood of the imprecise ‘most’ seems rather meaningless’: We disagree. The likelihood describes the assessed probability that ‘most’, i.e. more than 50%, of the warming is due to the increase in greenhouse gases. This statement has a clear meaning and an associated uncertainty, although explicitly listing ‘>50%’ in the text to ensure that no misunderstandings are possible could be helpful in future work.

4) The authors claim that ‘Figure 9.4 of the IPCC AR4 shows that all models underestimate the amplitude of variability of periods of 40-70 years’. This is an incorrect conclusion because Curry and Webster do not appear to have considered the uncertainties that were presented in the chapter. The figure (figure 9.7, not figure 9.4 of the assessment) clearly shows that the simulated variability of annual global mean temperature on time scales of 40-70 years is consistent with the variability estimated from observations, given uncertainty in spectral estimates. Detection and attribution methods account for the contribution by internal climate variability to observed climate changes. Since the estimates of climate variability that are used for this purpose are generally obtained from climate model data, the chapter also contains a detailed discussion of the reliability of climate model variability for detection and attribution. The chapter states that detection and attribution methods yield an estimate of the internally generated climate variability in observations and palaeoclimatic reconstructions (see section 9.3.4) that is not explained by forcing. This ‘residual’ is comparable to the variability generated by climate models, and the patterns of variability in models reproduce modes of climate variability that are observed (see chapter 8). The remaining uncertainty in our estimates of internal climate variability is discussed as one of the reasons the overall assessment has larger uncertainty than individual studies (see, e.g. table 9.4).”
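[Ed. note: The "uncertainty in spectral estimates" point can be illustrated with a quick Monte Carlo sketch. The AR(1) parameters below are hypothetical, not fitted to any real observations or CMIP3 output; the point is only that with roughly a century of annual data, just one or two Fourier frequencies fall in the 40-70 year band, so periodogram estimates there scatter over about an order of magnitude even when the underlying climate is identical.]

```python
import numpy as np

rng = np.random.default_rng(1)
years, runs = 125, 200   # ~20th-century record length; Monte Carlo realizations
phi, sigma = 0.6, 0.15   # hypothetical AR(1) parameters for annual T anomalies

# Simulate many realizations of the same stochastic "climate"
x = np.empty((runs, years))
x[:, 0] = rng.normal(0, sigma, runs)
for t in range(1, years):
    x[:, t] = phi * x[:, t - 1] + rng.normal(0, sigma, runs)

# Periodogram of each realization
freqs = np.fft.rfftfreq(years, d=1.0)             # cycles per year
power = np.abs(np.fft.rfft(x, axis=1)) ** 2 / years

# Average power in the 40-70 year band: only one or two Fourier
# frequencies fall in this band, so the estimate has very few
# degrees of freedom and enormous realization-to-realization scatter
band = (freqs > 1 / 70) & (freqs < 1 / 40)
band_power = power[:, band].mean(axis=1)
lo, hi = np.percentile(band_power, [5, 95])
print(f"5-95% range of band power spans a factor of {hi / lo:.1f}")
```

Under these assumptions the spread across realizations at 40-70 year periods is roughly a factor of ten, which is why single spectral estimates from observations and individual model runs can differ widely while still being statistically consistent.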

Curry/Webster response
We would like to thank the authors of the Comment (Hegerl et al. 2011), all of whom played leadership roles in the IPCC AR4, for their interest in our paper (Curry and Webster 2011). The authors are correct that since the Third Assessment Report, the IPCC has placed a high priority on communicating their conclusions about uncertainty. Our paper raises the issue of how the IPCC nonetheless, in the AR4, again fell short in this priority, as well as in investigating and judging uncertainty. Hegerl et al. focus on the section in our paper on “Uncertainty in attribution of climate change,” which addresses the IPCC AR4 conclusion regarding attribution: “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
We are encouraged that Hegerl et al. acknowledge the importance of improving traceability—a recommendation made by the InterAcademy Council (IAC 2010) as well. We believe an independent person or group—and not just members of the small community of attribution experts—should be able to understand how the result came to be and to walk through the decision process and achieve the same result. The IPCC should consult with the larger scientific and engineering community experienced in traceability standards to determine what is meant by IPCC’s traceability guidelines, and what kind of traceability is actually suitable for the IPCC assessments. Beyond the quote we provided in our article, the IAC Review provides a starting point for a description of what is suitable: “… it is unclear whose judgments are reflected in the ratings that appear in the Fourth Assessment Report or how the judgments were determined. How exactly a consensus was reached regarding subjective probability distributions needs to be documented.”
Some fields (e.g. medical science, computer science, engineering) have stringent traceability requirements, particularly for products and processes that are mission critical or have life-and-death implications. We expect the level and type of traceability required of IPCC will be related to the complexity of the subject matter and the criticality of the final product. Increasing traceability in its assessment reports will enhance both accountability and openness of IPCC.
Hegerl et al. state: “The remaining uncertainty in our estimates of internal climate variability is discussed as one of the reasons the overall assessment has larger uncertainty than individual studies.” Translating this uncertainty in internal climate variability (among the many other sources of uncertainty) into a “very likely” likelihood assessment is exactly what was not transparent or traceable in the AR4 attribution statement. We most definitely “do not appreciate the level of rigor with which physically plausible non-greenhouse gas explanations of the recent climate change are explored,” for reasons that were presented in our paper. In our judgment, the types of analyses referred to and the design of the CMIP3 climate model experiments that contributed to the AR4 do not support a high level of confidence in the attribution. Hegerl et al. take issue with our statement “The high likelihood of the imprecise ‘most’ seems rather meaningless.”
Hegerl et al.’s proposal to add “>50%” to the attribution statement might have improved communication of uncertainty on this point. Nonetheless, this small change would still fall short of addressing the problems our article described (and quoted from assessment users) about the fundamental difference between 51% and 99% attribution. Hegerl et al. object to our statement in the original manuscript: “Figure 9.7 of the IPCC AR4 shows that all models underestimate the amplitude of variability of periods of 40-70 years,” on the basis that we do not consider the uncertainties presented in the chapter. Figure 9.7 is presented on a log-log scale, and the magnitudes of the uncertainties for both the model simulations and the observations are approximately a decade (a factor of 10).
Considering uncertainty, a more accurate statement of our contention would have been: “The large uncertainties in both the observations and model simulations of the spectral amplitude of natural variability preclude a confident detection of anthropogenically forced climate change against the background of natural internal climate variability.”

Acknowledgements. Comments from the Denizens of Climate Etc. at judithcurry.com are greatly appreciated. Particular thanks to Steve Mosher, John Carpenter and Pekka Perila on traceability.
[ClimateGate CRU, approx. 2007] Emails
subject: Re: 5AR runs next iteration- reply by 26th
to: Karl Taylor <taylor13@llnl.gov>
Hi all,
about attribution in some house subcommittee, I was very happy to be able to resort to Tim’s argument that the model runs were older than the heat uptake data and therefore, there was no secret tuning in the 2001 ocean attribution
So using the 20th c for tuning is just doing what some people have long suspected us of doing…and what the nonpublished diagram from NCAR showing correlation between aerosol forcing and sensitivity also suggested. Slippery slope… I suspect Karl is right and our clout is not enough to prevent the modellers from doing this if they can. We do lose the ability, though, to use the tuning variable
for attribution studies.
Should we ask to admit in their submission what variables were considered when tuning, and if any climate change data were considered and at what temporal and spatial representation (global mean trend?), and advise that we will not be able to use those models for any future attribution diagrams? That would at least lay it in the open…
Karl Taylor wrote:
> Hi Peter and all,
> There will clearly be different perspectives on this.  A model
> developer will want to make use of all available observational
> information to help decide whether his model is realistic or not.
> We can envision two candidate models that appear equivalent in most
> respects, but one fails to produce ENSO’s.  The developer would choose
> the one that simulated ENSO.
> Likewise, suppose two candidate models were identical in most
> respects, but one could accurately simulate the climate of the 20th
> century (when all forcings were included), whereas the second had a
> very low global sensitivity and produced too little warming.  The
> developer would again want to choose the model that reproduced the
> observed trends.  In fact this model would probably produce a better
> estimate when forced by future emissions scenarios too (because,
> presumably, its sensitivity is closer to the truth).
> It would be hard to argue that information about 20th century trends
> shouldn’t be used in model development.
> I agree that this may rule out attribution studies (following the
> established approaches), but wouldn’t we have to argue that
> attribution studies are more important that model projections to
> convince the groups not to consider trends in the model development
> cycle?
> cheers,
> Karl
>> Hello everybody,
>> change experiments should be run as part of the model development
>> process, ie whether model developers should test their model against
>> climate change as they are developing their model. I think it might be
>> worthwhile us developing and expressing a view on this as we don’t want
>> to risk getting into a position where attribution results in AR5 are
>> undermined by the development and model tuning procedure adopted by
>> modelling centres.
>> Also I don’t think you quite captured the point that another reason for
>> separating out the ghg response from the response to other forcings is
>> to aid understanding, as we are finding out in trying to understand the
>> precipitation response. I think that requesting ALL, GHG, and NAT
>> ensembles would be the basic set.
>> Best wishes,
>> Peter
>> On Fri, 2007-05-18 at 10:33 -0400, Gabi Hegerl wrote:
>>> Hi all.
>>>  From your comments, I assembled a word file with our suggestions on
>>> the 5AR run
>>> proposal, but I am not sure
>>> I caught it all completely. Also, I had a chat with Jerry yesterday,
>>> and he said getting
>>> suggestions of what should be stored will be useful at this point.
>>> My plan is to communicate this with Jerry when we are done with it,
>>> and then propose
>>> it at the WGCM meeting.
>>> I drew a strawman list of what I could think of in 3 minutes, and am
>>> asking you to
>>> add to it. Its all in track changes, so dont hesitate to go wild
>>> (but please keep in mind that
>>> we need to restrict data requests to something you think you will
>>> work with in the next
>>> years, since it is a fair amount of effort from the modelling
>>> centres to haul the data over
>>> etc, and the more we request, the more likely it is that only few
>>> ensemble members etc
>>> get sent…)
>>> Karl, I am cc’ing you since your perspective would be useful

JC comment: To me, the emails argue that there is insufficient traceability of the CMIP model simulations for the IPCC authors to conduct a confident attribution assessment and at least some of the CMIP3 20th century simulations are not suitable for attribution studies. The Uncertainty Monster rests its case (thank you, hacker/whistleblower).
Ed. note: Most bolding was added by me. The “JC comment” appears as you see near the end of Dr. Curry’s post. I placed it additionally at the top of this post. This article by Dr. Curry came to my attention in the course of doing research on Hegerl. UN IPCC author Hegerl says, “IPCC has placed high priority on communicating uncertainty,” but this isn’t what US taxpayers are told. We’re told human CO2 is a life and death matter, killing every second, settled science, that Americans are the biggest murderers, and therefore must write a check for millions each day in perpetuity.

This was declared “settled science” in the US by George Bush #1 when he mandated CO2 danger and action in US law in 1990, commanded 13 federal agencies to act to solve global change, and had US taxpayers write checks for climate research all over the world for endless global “changes.” CO2 is mentioned near the end, in section 204. The CO2 screaming going on today is an act. Most US politicians know they’re just trying to steal a lot more money from defenseless chump taxpayers. And that doesn’t begin to address the billions being stolen yearly from Americans via regulation. The vast majority of laws impacting the US economy today are not decided by Congress but by regulatory agencies. Keeping most Americans unaware of all this has worked out great. The mafia wish they’d thought of a cash business this good.

Politicians and profiteers get to demonize America, say we’re selfish because we don’t pay enough and we need to sign a “treaty.” They don’t mention that even if CO2 terror does exist, US CO2 has plunged and is no longer a factor in the global number. 90% of US politicians should be in jail. Americans are in a worse situation today vs their “rulers” than they were vs British occupying troops in 1776. The Bush family and Fox News are the country’s 2 biggest problems.
They’ve merged with the radical left to combine forces against what they view as the biggest enemy: ordinary Americans, the ‘Silent Majority’ types.
$18.5 billion worth of climate regulations were issued in 2012 alone. No need for Congress. “The vast majority of ‘laws’ governing the United States are not passed by Congress but are issued as regulations.”

“In 2011, the US Congress passed a total of 81 new ‘laws’ while government agencies issued 3,807 new regulations.” Congress is irrelevant.
At the outset of 2013 a substantial portion of America finds itself un-represented, while Republican leaders increasingly represent only themselves. By the law of supply and demand, millions of Americans (arguably a majority) cannot remain without representation. Increasingly the top people in government, corporations, and the media collude and demand submission as did the royal courts of old. This marks these political orphans as a “country class.” …