Climate Modelling: Rubbish In, More Rubbish Out: Michael Kile

https://quadrant.org.au/opinion/doomed-planet/2023/05/climate-modelling-rubbish-in-rubbish-out/

Complexity and perplexity go together like a horse and carriage, or in this case, the climate and a modeller. When probability claims masquerade as genuine predictions, when international agencies and governments promote alarmism at every opportunity, and when confirmation bias distorts the search for truth, the outcome is today’s witch’s brew of “climate change” hyperbole and “save-the-planet” activism, now disrupting every aspect of life.

Consider the World Meteorological Organization’s press release of May 17, 2023: Global temperatures set to reach new records in the next five years. It warned that:

Global temperatures are likely to surge to record levels in the next five years, fuelled by heat-trapping greenhouse gases and a naturally occurring El Niño event, according to a new update issued by the World Meteorological Organization (WMO). 

There is a 66% likelihood that the annual average near-surface global temperature between 2023 and 2027 will be more than 1.5°C above pre-industrial levels for at least one year.  There is a 98% likelihood that at least one of the next five years, and the five-year period as a whole, will be the warmest on record.

A temporary reprieve from a french-fry fate is possible. But hold the champagne. The world is still going to exceed 1.5°C “with increasing frequency”. Unless we prostrate ourselves with more fervour at the altar of NetZero it could become permanent. Whatever happens, like Rick and Ilsa in Casablanca, we will always have Paris.

WMO’s Secretary-General Professor Petteri Taalas:

This report does not mean that we will permanently exceed the 1.5°C level specified in the Paris Agreement, which refers to long-term warming over many years. However, WMO is sounding the alarm that we will breach the 1.5°C level on a temporary basis with increasing frequency.

A warming El Niño is “expected to develop in the coming months”, he continues. So, dear reader, mark your calendar. It will “combine with human-induced climate change” and “push global temperatures into uncharted territory”. This will have “far-reaching repercussions for health, food security, water management and the environment. We need to be prepared.”

Have modellers so mastered the arcane art of calculating probabilities that they can now conjure up such precise short-term predictions? Apparently so, as the WMO press release went on to claim:

There is only a 32% chance that the five-year mean will exceed the 1.5°C threshold, according to the Global Annual to Decadal Climate Update produced by the United Kingdom’s Met Office, the WMO lead centre for such predictions.

The chance of temporarily exceeding 1.5°C has risen steadily since 2015, when it was close to zero.  For the years between 2017 and 2021, there was a 10% chance of exceedance.

Hold it right there. There is apparently “a 98% likelihood that at least one of the next five years, and the five-year period as a whole, will be the warmest on record.” Yet there is only a 32% chance that the global mean temperature over this period will exceed the 1.5°C threshold. The probability of a fair coin landing on heads is 50%. Interesting.
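
For what the arithmetic is worth, the 66% and 32% figures quoted above describe different events: any single year exceeding 1.5°C, versus the average of all five years exceeding it. The toy calculation below, in Python, shows how far apart those two probabilities can sit. Every number in it is invented, and the assumption that the five years are independent and normally distributed is mine, not the Met Office’s; it is a sketch of the arithmetic only, not their method.

# Toy illustration (not the Met Office method): the chance that at least one
# year exceeds a threshold can differ sharply from the chance that the
# five-year mean exceeds it. All numbers below are invented.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 200_000
threshold = 1.5                       # degrees C above pre-industrial (illustrative)
annual_mean, annual_sd = 1.37, 0.15   # assumed distribution of annual anomalies

# Five independent annual anomalies per simulation: a strong simplification,
# since real forecasts include trends, El Nino and year-to-year correlation.
anoms = rng.normal(annual_mean, annual_sd, size=(n_sims, 5))

p_any_year = np.mean((anoms > threshold).any(axis=1))
p_five_year_mean = np.mean(anoms.mean(axis=1) > threshold)

print(f"P(at least one year > {threshold} C): {p_any_year:.2f}")
print(f"P(five-year mean    > {threshold} C): {p_five_year_mean:.2f}")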

A paper published in 2018 concluded:

For the period 2017 to 2021 we predict a 38% and 10% chance, respectively, of monthly or yearly temperatures exceeding 1.5 °C, with virtually no chance of the 5-year mean being above the threshold.

We cannot directly assess the reliability of forecasts of the probability of exceeding 1.5 °C because this event has not yet occurred in the observations. — Predicted Chance That Global Warming Will Temporarily Exceed 1.5 °C, Geophysical Research Letters, October 12, 2018

The authors, nevertheless, would update their forecasts “every year to provide policy makers with advanced warning of the evolving probability and duration of future warming events.” What if the planet does exceed an arbitrary number selected by UN agencies in Paris? Would the world be more afraid – or resigned – than it is today? What if the global “climate”, whatever that is, is beyond human control? Surely that’s the biggest elephant in the greenhouse. The MSM, predictably, trumpeted the WMO’s alarmism, with the ABC dutifully if selectively repeating the claim of a “98 per cent chance one of the next five years would be the hottest ever”.

As for the timing, cometh the hour, cometh the prediction. The WMO Global Annual to Decadal Update 2023-2027 was released just five days before the 19th session of the World Meteorological Congress began in Geneva this week. (See WMO events). One of WMO’s top priorities is implementing the UN Early Warnings for All Initiative. On World Meteorological Day last March, the UN Secretary-General announced “a new call to action to ensure every person on Earth is protected by MHEWS (Multi-hazard early warning systems) within five years: the Early Warning Systems Initiative (EWS4ALL)”.

As mentioned above, the UK Met Office acts as the WMO Lead Centre for Annual to Decadal Climate Prediction. It now has to dance the MHEWS tango, as do 145 ensemble members from 11 institutes engaged in this global exercise. Too many cooks tend to spoil the broth. Perish the thought, but perhaps too many modellers are trying to predict the unpredictable: natural and climatic variability. Not so, says the WMO: “Retrospective forecasts, or hindcasts, covering the period 1960-2018 are used to estimate forecast skill. Confidence in forecasts of global mean temperature is high since hindcasts show very high skill in all measures.” Accurately forecasting the future, however, is surely a bigger challenge than hindcasting the past.
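
To make “hindcast skill” concrete, here is a minimal sketch, with synthetic data, of the kind of score usually quoted: the correlation between retrospective forecasts and observations of global-mean temperature. It illustrates the metric only; it is not the WMO’s or the Met Office’s verification code, and the data are invented.

# Rough sketch of how hindcast skill is often summarised (correlation between
# retrospective forecasts and observations). Synthetic data only; this is not
# the WMO/Met Office verification procedure.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1960, 2019)

# Invented "observed" global-mean anomalies: warming trend plus internal variability.
observed = 0.015 * (years - 1960) + rng.normal(0.0, 0.12, years.size)

# Invented "hindcasts": the same signal seen imperfectly by a model.
hindcast = observed + rng.normal(0.0, 0.10, years.size)

corr = np.corrcoef(observed, hindcast)[0, 1]
rmse = np.sqrt(np.mean((hindcast - observed) ** 2))
print(f"anomaly correlation: {corr:.2f}, RMSE: {rmse:.2f} C")

Note that in a setup like this much of the correlation comes from the shared long-term warming trend, which is one reason a glowing hindcast score does not automatically translate into skill at predicting the year-to-year departures that a forecast is actually asked to deliver.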

The new update includes a section evaluating forecasts for the previous five years (page 16). A mixed bag of outcomes indeed. The “ensemble” models, for example, did not “capture” the “cold anomalies in Antarctica and eastern Asia”. And so on and so forth. As for complexity, there’s never been any shortage of it in the climate change space. According to the UK Met Office:

The evolution of climate in the near term, out to a decade or two ahead, is the combination of natural climate variability and human-forced climate change. Changes in natural variability are large enough from one decade to the next to temporarily exacerbate or counter underlying anthropogenic trends [presumably assumed in model simulations].

How the two phenomena are quantified remains a mystery, at least to me. Even after reading the World Climate Research Programme’s material on its Grand Challenge of Near-Term Climate Prediction I am, alas, none the wiser. For some reason, the Grand Challenges ended last year, possibly because too many experts had COVID or apocalypse-fatigue syndrome. Hardly surprising, given this ambitious “concept note”.
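
In caricature, the separation is often done by treating the smooth, externally forced component as a fitted trend and calling the residual “natural variability”. The sketch below, with invented data, shows that logic. Real attribution studies use large ensembles of model runs rather than a least-squares line, so this is an illustration of the idea only, not how any modelling centre actually does it.

# Caricature of separating "forced" change from "natural variability": fit a
# smooth trend and call the residual internal variability. Synthetic data;
# the numbers and the linear-trend assumption are mine.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2024)
temps = (0.012 * (years - 1950)                          # invented forced warming
         + 0.1 * np.sin(2 * np.pi * (years - 1950) / 11)  # invented cyclical variability
         + rng.normal(0.0, 0.1, years.size))              # invented weather noise

coeffs = np.polyfit(years, temps, 1)   # "forced" component approximated by a line
forced = np.polyval(coeffs, years)
natural = temps - forced               # everything else, by assumption

print(f"fitted trend: {coeffs[0] * 10:.3f} C per decade")
print(f"std of residual 'natural variability': {natural.std():.3f} C")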

One aspect most modellers seem to agree on is that “decadal predictions need to take into account both initial conditions of the climate system as well as the evolution of long-term forcings.” (See Fig. 2 (Box 11.1) in AR5-WG1.) The Barcelona Supercomputing Center describes its dilemma here and here:

Certain limitations, such as imperfect parameterizations and inaccurate initial conditions, introduce biases in the climate models, i.e. cause them to have differences with the observations. All models exhibit to some extent biases.

Furthermore:

at sub-seasonal to interannual time scales, climate predictability is thought to arise significantly from the knowledge of initial conditions. Initializing climate models with observationally-based estimates is a very challenging task scientifically, but also technically.

Accurate near-term predictions from climate models rely, among others, on a realistic specification of initial conditions. The problem is simple to state, but difficult to address for two reasons: (1) the observational coverage is sparse, (2) climate models “live” in their preferred state.
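
One standard response to models “living” in their preferred state is a lead-time-dependent drift correction: estimate the mean hindcast error at each forecast lead time and subtract it from new forecasts. The sketch below, with invented numbers, illustrates the mechanics; it is a generic post-processing step, not the Barcelona Supercomputing Center’s actual code.

# Generic sketch of lead-time-dependent drift (bias) correction for initialised
# forecasts: estimate the mean hindcast-minus-observation error at each lead
# time and subtract it from a new forecast. Synthetic numbers throughout.
import numpy as np

rng = np.random.default_rng(3)
n_hindcasts, n_leads = 50, 5

observations = rng.normal(0.0, 0.1, size=(n_hindcasts, n_leads))
# The model drifts towards its own preferred state as lead time grows.
drift = 0.05 * np.arange(1, n_leads + 1)
hindcasts = observations + drift + rng.normal(0.0, 0.05, size=(n_hindcasts, n_leads))

bias_per_lead = (hindcasts - observations).mean(axis=0)   # estimated drift at each lead

new_forecast = rng.normal(0.0, 0.1, n_leads) + drift      # a fresh, drifting forecast
corrected = new_forecast - bias_per_lead                   # drift-corrected forecast

print("estimated drift per lead:", np.round(bias_per_lead, 3))
print("corrected forecast      :", np.round(corrected, 3))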

Such perplexity is not new. A lot of folk have had it, including the late Stephen Schneider (1945-2010). A distinguished environmental researcher at Stanford University’s Woods Institute, he was an author for four early IPCC assessment reports, a “core member” for two of them. When the UK Royal Society published a commemorative volume of essays in 2010, Seeing Further – The Story of Science and The Royal Society, it included this one by Schneider: “Confidence, Consensus and the Uncertainty Cops: Tackling Risk Management in Climate Change.” At the time, he was perplexed by the “significant uncertainties” that “bedevil components of the science”, “plague projections of climate change and its consequences”, and challenge the traditional scientific method of directly testing hypotheses (‘normal’ science). Schneider’s solution: to change ‘the culture of science’ by developing a language that would convey the gravity of the situation “properly” to policy makers.

As climate uncertainty was (and for me still is) so intractable — and incomprehensible to the public — Schneider introduced the rhetoric of risk management – “framing a judgement about acceptable and unacceptable risks” – and pseudo-probability. While he claimed he was “uncomfortable” with this “value judgement” approach, he was even “more uncomfortable ignoring the problems altogether because they don’t fit neatly into our paradigm of ‘objective’ falsifiable research based on already known empirical data.”

He proposed a new subjective paradigm of “surprises” in global climate scenarios, one with “perhaps extreme outcomes or tipping points which lead to unusually rapid changes of state”, while admitting that

by definition, very little in climate science is more uncertain than the possibility of ‘surprises’.

According to Schneider,

despite the worry that discussions of surprises and non-linearities could be taken out of context by extreme elements in the press and NGOs [but apparently not by the IPCC], we were able to include a small section on the need for both more formal and subjective treatments of uncertainties and outright surprises in the IPCC Second Assessment Report in 1995.

As a result the very last sentence of the IPCC Working Group 1 1995 Summary for Policy Makers addresses the abrupt non-linearity issue. This made much more in-depth assessment in subsequent IPCC reports possible, simply by noting [that is assuming, not proving] that: ‘When rapidly forced, non-linear systems are especially subject to unexpected behaviour.’

This was a pivotal moment in the history of climate alarmism. Schneider had smuggled a Trojan horse into the IPCC, with a contrived “language for risk” inside. It was a language derived from his personal (and the IPCC’s) “value frame” and was adopted in subsequent reports. They now had, he wrote triumphantly, “licence to pursue risk assessment of uncertain probability but high consequence possibilities in more depth; but how should we go about it?” How, indeed?

It took a long time for him to “negotiate” agreement with climate scientists on precise “numbers and words” in the Third Assessment Report cycle.

There were some people who still felt they could not apply a quantitative scale to issues that were too speculative or ‘too subjective’ for real scientists to indulge in ‘speculating on probabilities not directly measured’. One critic said: ‘Assigning confidence by group discussion, even if informed by the available evidence, was like doing seat-of-the-pants statistics over a good beer.’

Schneider’s Royal Society essay nevertheless concluded: “Despite the large uncertainties in many parts of the climate science and policy assessments to date, uncertainty is no longer a responsible justification for delay.”

How can one argue that the more uncertain a phenomenon is, the greater the risk to us and the planet? Yet they did, and they are still doing it today.

That said, at least Schneider was sceptical about modelling:

“There are many scientists who dispute that it is only humans controlling the climate thermostat,” he wrote. “Heat exchanges from the tropics to the poles, ocean currents of countless durations and size, changing amounts of heat from the sun, all operate in a chaotic non-linear manner to make climate modelling a largely fruitless, if politically necessary, activity.”

As for his “own personal value position”, Schneider stated it emphatically in this 2003 paper: What is the Probability of “Dangerous” Climate Change?

Given the vast uncertainties in both climate science and impacts estimations, we should slow down the rate at which we disturb the climate system — that is, reduce the likelihood of “imaginable conditions for surprise”. This can both buy time to understand better what may happen — a process that will take many more decades of research, at least — and to develop lower-cost decarbonization options so that the costs of mitigation can be reduced well below those that would occur if there were no policies in place to provide incentives to reduce emissions and invent cleaner alternatives.  Abating the pressure on the climate system, developing alternative energy systems, and otherwise reducing our overconsumption (see Arrow et al., 2003) are the only clear “insurance policy” we have against a number of potentially dangerous irreversibilities and abrupt nonlinear events.

To deal with such questions, the policy community needs to understand both the potential for surprises and how difficult it is for integrated assessment models (IAMs) to credibly evaluate the probabilities of currently imaginable “surprises,” let alone those not currently envisioned.

As for “abrupt nonlinear events” that would “qualify as dangerous anthropogenic interference with the climate system,” use your imagination. It’s easy to find an “extreme weather event” in natural variability. They happen somewhere in the world almost every day. Bamboozling folk with data or simulations that may, or may not, describe reality can be fun too. Astrologers, readers of entrails and other prognosticators made a lucrative living from it, even when their so-called facts and predictions were “value-laden” and riddled with confirmation bias.

Schneider again:

Whether a few generations of people demanding higher material standards of living and using the atmosphere as an unpriced sewer to more rapidly achieve such growth–oriented goals is “ethical” is a value-laden debate that will no doubt heat up as greenhouse gas builds up…and references to the “precautionary principle” will undoubtedly mark this debate.

The rest is history: the history of how dodgy “post-normal” science joined up with a pseudo-scientific “precautionary principle” to corrupt the UN, IPCC and WMO and, despite the “vast uncertainties”, ultimately created the NetZero decarbonising monster that is disrupting countries — and energy markets — everywhere on the bogus pretext of “fighting climate change”.