RealClimate: Watching the detections

The detection and the attribution of climate change are based on fundamentally different frameworks and shouldn’t be conflated.

We read about and use the phrase ‘detection and attribution’ of climate change so often that it can seem like a single word, ‘detectionandattribution’, which might lead some to think it is a single concept. But it’s not.

Formally, the IPCC definitions are relatively clear:

Detection of change is defined as the process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense, without providing a reason for that change. An identified change is detected in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small, for example, <10%.

Attribution is defined as the process of evaluating the relative contributions of multiple causal factors to a change or event with a formal assessment of confidence.

IPCC SR1.5 Glossary

Detection is therefore based on a formal null hypothesis test (can we reject the null hypothesis that climate is stationary?), while attribution is a Bayesian statement about how much change is expected from what cause. They don’t formally have that much to do with each other!

Note too that detection requires knowledge only of the expected climate in the absence of the effect you are trying to detect, while attribution also needs knowledge of the expected climate with the effect included.

Historically, these steps were performed sequentially: first a change was statistically detected, and then, once there was a clear change, an attribution study was performed to see why. This makes sense if you want to avoid chasing a lot of false positives (i.e. finding non-random causes for things that turn out to be random noise). There is, however, a fundamental problem here, to which I’ll return below. But first, an easy example:

Global mean surface temperatures

With respect to the global mean temperature, the detection of climate change was quite fraught, moving from Hansen’s 1988 declaration, through the backlash, to the tentative consensus that emerged after the 1995 Second Assessment Report (itself subject to a barrage of rejectionism focused on exactly this point). However, in hindsight, we can conclude that the global mean surface temperature signal came out of the ‘noise’ of natural climate variability (i.e. it was detected) sometime in the early 1980s. One way to show this uses a specific kind of climate model simulation (run with lots of climate models) together with the observational data. We look at a set of simulations (an ensemble) run with only the natural drivers (the sun, volcanoes, orbital changes etc.). The difference between this ensemble (which incorporates uncertainty from internal variability and some structural issues) and the observational data has grown over time, and the point at which the observations depart from the ensemble spread (at some level of confidence) is the point at which a change can be detected relative to an unperturbed climate. Compare the black line with the green uncertainty band in the figure below:

Figure SPM2b from IPCC AR6: the change in global surface temperature from 1850 to 2020 in the observations (black) and two sets of model simulations, one with all forcings (which tracks the observations) and one with natural forcings only (which doesn’t).
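To make that logic concrete, here is a minimal sketch in Python, with synthetic numbers standing in for the natural-only ensemble and the observations (the ensemble size, variability, and signal shape are all illustrative assumptions, not CMIP output):

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1850, 2021)

# Synthetic stand-ins (illustrative numbers only): a natural-forcings-only
# ensemble with internal variability but no long-term trend, and
# "observations" with a slowly emerging anthropogenic signal.
n_members = 50
natural = rng.normal(0.0, 0.15, size=(n_members, years.size))
signal = 1.2 / (1.0 + np.exp(-(years - 1985) / 25.0))   # degC, slow ramp
obs = signal + rng.normal(0.0, 0.15, size=years.size)

# Detection: the year after which the observations stay outside the
# 5-95% spread of the natural-only ensemble.
lo, hi = np.percentile(natural, [5, 95], axis=0)
inside = np.nonzero((obs >= lo) & (obs <= hi))[0]
if inside.size and inside[-1] + 1 < years.size:
    print("Sustained departure from the natural envelope from",
          years[inside[-1] + 1])
else:
    print("No sustained departure detected")
```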

A second set of climate model simulations can be run with all of the factors we think of as important – the natural drivers of course, but also the anthropogenic changes (greenhouse gas changes, air pollution, ozone depletion, irrigation, land use change etc.). Like the observations, this ensemble starts to clearly diverge from the natural-drivers-only ensemble in the 1980s, and does a reasonable job of tracking the observations subsequently, suggesting that it is a more accurate representation of the real world than the original null hypothesis.

Note, however, that there are clear differences in the mean temperatures between the two ensembles starting in around 1920. With enough simulations, this difference is significant, even if it is small compared to internal variability. Thus we can statistically attribute trends in SAT to anthropogenic forcings for some 60 years before the anthropogenic effect was officially detected!

Another way of stating this is that, by the time a slowly growing signal is loud enough to be heard, it has been contributing to the noise for a while already!
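A toy illustration of that point, with an assumed 0.05ºC forced signal buried in 0.15ºC of internal variability (both numbers invented for the example): the difference is invisible in any single realisation, but becomes highly significant once enough ensemble members are averaged.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative 1920s-style numbers: a small forced signal against
# much larger internal variability.
forced_signal, internal_sd = 0.05, 0.15   # degC, assumed

for n_members in (5, 50, 500):
    natural = rng.normal(0.0, internal_sd, n_members)
    all_forcings = rng.normal(forced_signal, internal_sd, n_members)
    t, p = stats.ttest_ind(all_forcings, natural)
    print(f"{n_members:4d} members per ensemble: p = {p:.3f}")
# With enough members, the small forced difference between the two
# ensemble means becomes statistically unambiguous.
```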

Patrick Brown had a nice animation of this point a couple of years ago.

[As an aside, this features in one of those sleights-of-hand folks like Koonin like to play, where they pretend that anthropogenic greenhouse gases had no impact until the signal was large enough to detect.]

To summarise, uncertainty in detection depends on the internal variability (the ‘noise’) relative to the strength of the signal (the larger the noise relative to the signal, the later the detection will be), whereas the uncertainty in attribution depends on the structural uncertainty in the models. The internal variability, which plagues detection, can always (theoretically) be averaged away in an attribution if the models are run enough times (or for enough time). Even small impacts can be attributed in this way in a statistically significant sense, even if they might not be practically significant.
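The averaging works like any standard error: the noise in an ensemble mean shrinks as the square root of the number of runs. A quick numerical illustration (assuming, purely for illustration, 0.15ºC of internal variability in a single run):

```python
import numpy as np

sigma = 0.15   # internal variability of one run (degC), assumed
for n in (1, 10, 100, 400):
    # noise in the N-member ensemble mean ~ sigma / sqrt(N)
    print(f"N = {n:3d} runs -> ensemble-mean noise ~ {sigma/np.sqrt(n):.3f} degC")
```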

Extreme events

As we’ve discussed many times previously (e.g. Going to Extremes, Extreme Metrics, Common fallacies in attributing extremes, extremes in AR6 etc.), we both expect, and have increasingly found, growing influences of anthropogenic change on many kinds of extremes – heat waves, intense rainfall, drought intensity etc. (CarbonBrief keeps great track of these studies). However, because of the rarity and uniqueness of particular extremes, it can be very hard to see trends – for instance, the UK had its first 40ºC day recently – and there is no time-series trend if you only have a single (unprecedented) point!

Trends can be seen if lots of events can be collated – for instance, in rainfall extremes or heat waves – over large areas (as discussed in Chapter 11 of AR6). For instance, if we aggregate the area covered by 2 and 3 sigma summer temperature extremes, we see a clear trend, and a signal can be detected (after about the year 2000):

Land area affected by NH summer heat extremes at the 2 and 3 sigma level (IPCC AR6 Chapter 11, Box 11.4).
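A schematic version of that aggregation, using a synthetic grid of unit-variance summer anomalies plus an assumed common warming trend (none of these numbers come from AR6):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2021)
n_cells = 2000                                   # pretend NH land grid cells

# Synthetic summer anomalies: unit internal variability per cell (so the
# thresholds really are 2 and 3 sigma) plus a slow, shared warming trend.
trend = 0.02 * (years - years[0])                # degC/yr, assumed
anoms = trend[:, None] + rng.normal(0.0, 1.0, size=(years.size, n_cells))

frac2 = (anoms > 2.0).mean(axis=1)   # fraction of area above 2 sigma
frac3 = (anoms > 3.0).mean(axis=1)   # fraction of area above 3 sigma
for y in (1950, 1980, 2000, 2020):
    i = np.searchsorted(years, y)
    print(f"{y}: {100*frac2[i]:5.1f}% of area > 2 sigma, "
          f"{100*frac3[i]:4.1f}% > 3 sigma")
```

Individually, almost no grid cell has a detectable trend, but the aggregated exceedance area climbs out of the noise much sooner.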

However, most of the other claims related to extremes are attributional, and not detections in the sense defined above. This should not, however, be shocking, since it was clear in the surface temperature example that attributional statements are possible (and correct) long before a trend has come out of the noise.

There is one other point worth mentioning. Taken alone, an extreme defined by a record exceedance (the rainfall during Hurricane Harvey, the temperature in the recent UK heatwave, etc.) by definition doesn’t give a trend. However, if you lower the threshold, more events will be seen, and if there is a trend in the underlying causes, at some threshold a trend will become clear. For instance, for the number of days with UK peak temperatures above 39ºC, or 38ºC, or 37ºC etc., there are perhaps not enough events to see a trend, but eventually there will be a threshold, somewhere between the mean temperature (which we know is rising) and the record, where the data will be sufficient for a confident statement about trends to be made.
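Here is a sketch of that thought experiment with synthetic daily maxima (the warming rate, variability, and thresholds are all assumed for illustration, not fitted to UK data):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1960, 2023)

# Synthetic summer daily maxima: 92 days per year, slowly warming mean.
mean_t = 21.0 + 0.03 * (years - years[0])        # degC, assumed warming
daily = rng.normal(mean_t[:, None], 3.5, size=(years.size, 92))

for threshold in (40.0, 37.0, 34.0, 31.0, 28.0):
    counts = (daily > threshold).sum(axis=1)     # exceedance days per year
    slope = np.polyfit(years, counts, 1)[0]
    print(f">{threshold}C: {counts.sum():5d} total days, "
          f"trend {slope:+.3f} days/yr")
```

At the record-level threshold there are essentially no events (and no trend to fit), while at progressively lower thresholds the sample fills in and the upward trend becomes unmistakable.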

This state of affairs can (of course) be used to mislead. For instance, readers will be aware that trends in local variables or over short time frames are noisier than the global or regional long-term means, and there will always be someone somewhere who tries to rebut the clear global trends with a cherry-picked short-term or local data series (“You say the globe is warming? Well, what about a single month’s trend at this island in Japan that you’d never heard of until just now?” Real example!). For extremes, then, it’s a common pattern to attempt to rebut a claim about the attribution of a specific event with a claim that the local trend in similar events is not significant. As in the previous example, it’s not even wrong. Both of these things can be true – the intensity of an event can have been ‘juiced’ by anthropogenic climate change, and these kinds of events can be rare enough locally not to show (as yet) a statistically significant trend. The latter does not contradict the former.

[One further aside: many claims about trends use a simple linear regression. However, the uncertainty estimates from standard linear regression require that the residuals from the trend are Gaussian. This is rarely the case for time series of episodic extremes, even if they come from the tails of a standard Gaussian distribution, and so the stated uncertainties on such calculated trends are almost always wrong – if they are given at all. You are better off using some form of Poisson regression within a generalized linear model – easy in R, perhaps not so straightforward in Excel?]
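For concreteness, here is a minimal sketch of that Poisson GLM in Python (via statsmodels, rather than R), fitted to synthetic annual exceedance counts whose expected rate grows log-linearly with time, which is exactly the assumption the model encodes:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
years = np.arange(1960, 2023)

# Synthetic annual counts of threshold exceedances: the expected rate
# grows log-linearly with time (assumed numbers, for illustration).
rate = np.exp(-1.0 + 0.03 * (years - years.mean()))
counts = rng.poisson(rate)

X = sm.add_constant(years - years.mean())        # intercept + centred year
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])
# exp(trend coefficient) is the multiplicative change in the expected
# count per year, with an uncertainty that respects the count nature
# of the data (unlike ordinary least squares on the raw counts).
```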

Extreme impacts

How then should we think about the impacts, particularly going into the future? How can policy-makers make sense of clear attributions and global or regional detections, but statistically insignificant (so far) detections at the local scale? Assuming that policy-makers are interested in what is likely to happen (under various scenarios and with appropriate uncertainties), it should be clear that the attributional framework is more meaningful, and that the timing of the detection of local trends is a mathematical sideshow.

But, but, but…

Hold on, some might argue, surely the detection step is necessary before the attribution makes sense? Isn’t the whole point of detection to ensure that you don’t spend unnecessary effort trying to attribute what might just be noise?

Let’s break down this argument though. If you know nothing about a system other than a single historical time series, from which you can extract the underlying variability, you can assess the probability that a trend as large as the one observed might happen by chance (under various assumptions). But now someone tells you that you have clear evidence that this data was drawn from a distribution that is definitely changing. That changes the calculation (in Bayesian terms, it changes the prior). The point is that your time series of climate extremes is not isolated from the rest of the physical system in which it’s embedded. If there is a clear trend in temperatures above 36ºC, and also in temperatures above 37ºC, but the data are too sparse to independently conclude the same for temperatures above 38ºC or 39ºC, this should still inform your opinion about the growing chances of temperatures over 40ºC – even if that has only happened once! It is the attributional calculation that gives you exactly what you need – an estimate of the increase in probability that this extreme will be reached – not the detection; that’s just candy.
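The first calculation in that paragraph – the chance that a trend as large as the one observed arises from purely stationary noise – can be sketched with a simple Monte Carlo test (all numbers synthetic and illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def trend(y):
    """Least-squares slope of a series against its index."""
    return np.polyfit(np.arange(y.size), y, 1)[0]

# "Observed" series (a synthetic stand-in): a small trend plus noise.
obs = 0.01 * np.arange(60) + rng.normal(0.0, 0.2, 60)
obs_trend = trend(obs)

# Null hypothesis: stationary noise with the same variability. How often
# does pure chance produce a trend at least as large as the observed one?
null_trends = np.array([trend(rng.normal(0.0, 0.2, 60))
                        for _ in range(10_000)])
p = (np.abs(null_trends) >= abs(obs_trend)).mean()
print(f"observed trend {obs_trend:+.4f}/step, "
      f"p-value under stationarity: {p:.3f}")
```

Knowing independently that the underlying distribution is shifting is exactly the kind of information this stationary null test cannot use, which is why the detection framing alone undersells what we know.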

Ok, that may be so, but the change in the odds of an event derived from the attribution comes from a model, no? How reliable is that? This is actually a fair question. How robust the changes in fractional attribution (or return times) are is an open question. It is hard to evaluate the tail of a distribution in a model with limited observational data. But when you get similar results with multiple and diverse models, applied to more and more examples across the globe, that helps build credibility. Longer time series and (unfortunately) growing signals will also reduce the uncertainty – as will better aggregation across regions for similar extremes – but conceivably we won’t know for a while whether the chances of a rare event have gone up 5-fold or 10-fold.
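A minimal sketch of how such a probability ratio is computed, assuming (unrealistically simply) Gaussian fits to the counterfactual and factual climates – real studies typically use extreme-value distributions, and all numbers here are invented:

```python
from scipy import stats

# Assumed Gaussian fits to summer maxima in a counterfactual
# (natural-only) and a factual (all-forcings) climate.
mu0, mu1, sd = 36.0, 37.2, 1.5      # degC, illustrative values
threshold = 40.0                    # the extreme of interest

p0 = stats.norm.sf(threshold, loc=mu0, scale=sd)   # tail prob, counterfactual
p1 = stats.norm.sf(threshold, loc=mu1, scale=sd)   # tail prob, factual
print(f"P(counterfactual) = {p0:.2e}, P(factual) = {p1:.2e}")
print(f"probability ratio ~ {p1/p0:.1f}x")
# Small changes in the fitted tail move this ratio a lot, which is why
# pinning down 5-fold vs 10-fold is genuinely hard.
```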

That the odds have increased (for heatwaves, intense rain, drought intensity etc.) is, however, no longer in doubt.



