Covid vs. Climate Modeling: Cloudy With a Chance of Politics
By Eric Felten

https://www.realclearinvestigations.com/articles/2020/06/04/covid_vs_climate_modeling_cloudy_with_a_chance_of_politics_123891.html

COVID-19 has proved to be a crisis not only for public health but for public policy. As credentialed experts, media commentators, and elected officials have insisted that ordinary men and women heed “the science,” the statistical models cited by scientists to predict the spread of contagion and justify the lockdown of the national economy have proven to be far off-base.

Gov. Andrew Cuomo of New York complained this week about the “guessing business” experts had presented to him dressed up as scientific fact: “All the early national experts [said]: Here’s my projection model. Here’s my projection model,” Cuomo said. “They were all wrong. They were all wrong.”

Neil Ferguson of Imperial College London, whose computer modeling of the coronavirus predicted up to 2.2 million U.S. deaths, has since resigned.

A computer model produced by statisticians at Imperial College London had an outsized effect on government policy, predicting up to 2.2 million American deaths from the new coronavirus and as many as 9.6 million people requiring hospitalization. Instead, emergency rooms and hospital beds in all but the few hardest hit cities remained empty; rather than being overwhelmed by cases, many doctors and nurses found themselves out of work.

As the staggering social and economic costs of shutdown have become painfully clear, the failure of the models to accurately anticipate what would happen is raising questions about their use to justify life-altering public policies.

If computer models projecting the near-term future of an epidemic were so wrong, what does that mean for the far more complicated computer models predicting the far-off future of the entire planet?

As Texas Sen. John Cornyn tweeted: “After #COVID-19 crisis passes, could we have a good faith discussion about the uses and abuses of ‘modeling’ to predict the future? Everything from public health, to economic to climate predictions. It isn’t the scientific method, folks.”

Gov. Andrew Cuomo of New York on computer models: “They were all wrong. They were all wrong.”

Scientific American sought to dismiss such concerns in an April 15 article headlined “Climate Science Deniers Turn to Attacking Coronavirus Models.” While not exactly defending the methodology used in the models, the article said they were wrong “because millions of Americans responded to pleas for social distancing.” It then invoked newer models that would also prove to be wrong – forecasting only 60,000 U.S. deaths; there are now more than 107,000 – before defending the original alarmist numbers with what almost sounds like an argument for the politicization of science from the coronavirus to climate change: “Health experts say the models worked the way they were supposed to — by providing a glimpse into a dire future that was partially averted because of collective action.”

Building complex models is both a science and an art. It requires vast amounts of data representing a range of factors that might influence a particular question. To predict the spread of COVID-19, for example, researchers need reliable data on a wide range of factors, including how infectious the virus is, how it is transmitted, and how much of the population is susceptible to the worst outcomes. They have to assign a weight to each factor in the model, and then crunch the numbers with powerful computers to produce probabilities of possible outcomes.
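For readers who want to see the mechanics, here is a minimal sketch in Python of the kind of compartmental calculation such models build on. It is illustrative only: a toy SIR-style model with made-up parameter values, not the Imperial College or IHME code. But it shows how heavily the output depends on the assumptions fed in.

```python
# Illustrative only: a minimal SIR-style compartmental model, not the Imperial
# College or IHME code. Parameter values are hypothetical placeholders chosen to
# show how assumptions drive the output, not to match COVID-19.

def simulate_sir(population, initially_infected, r0, infectious_days, days):
    """Step a simple susceptible-infected-recovered model forward one day at a time."""
    beta = r0 / infectious_days          # transmission rate per day
    gamma = 1.0 / infectious_days        # recovery rate per day
    s = population - initially_infected  # susceptible
    i = float(initially_infected)        # currently infected
    r = 0.0                              # recovered
    peak_infected = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak_infected = max(peak_infected, i)
    return peak_infected, r

# Small changes in the assumed inputs (R0, infectious period) move the outputs a lot.
for r0 in (2.0, 2.4, 3.0):
    peak, total = simulate_sir(population=330_000_000, initially_infected=100,
                               r0=r0, infectious_days=5, days=365)
    print(f"R0={r0}: peak infected ~{peak:,.0f}, total ever infected ~{total:,.0f}")
```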

Models may be helpful in thinking about the results of various policies. But they are easily oversold as providing answers with mathematical certainty. Writing in the BMJ (formerly the British Medical Journal), Devi Sridhar, a professor of public health at Edinburgh University, and Maimuna Majumder, a computational epidemiologist affiliated with Harvard Medical School, chide the “modeling community” for failing to make the limitations of models clear. Sridhar and Majumder call for transparency about the assumptions modelers make and clarity about how much the predictions shift when even small changes are made to the assumptions. Most of all, they urge humility about just how uncertain such models are.

Dr. Anthony Fauci, with President Trump: “They don’t tell you anything. You can’t really rely upon models.”

In an article in the Annals of Internal Medicine – “Caution Warranted: Using the Institute for Health Metrics and Evaluation Model for Predicting the Course of the COVID-19 Pandemic” – three prominent British and American researchers warned against thinking computer calculations could replace sound data and independent judgment.

“This appearance of certainty is seductive,” they wrote. That false sense of certainty is particularly seductive “when the world is desperate to know what lies ahead.”

Their critique was withering. The flaws they found in the model from the Institute for Health Metrics and Evaluation at the University of Washington included several dubious assumptions: that social distancing would play out the same way everywhere, for one, and that curves could be expected to follow the same general patterns from country to country. Evidence of how the disease had spread – the essential data – was sketchy, plagued by “inconsistent and poor reporting.” When the projections were revised, the magnitude of the changes revealed “substantial volatility.”

Volatile predictions are inherently uncertain. But model-makers have presented their work with the impression of specificity. On March 27, for example, IHME predicted the number of COVID-19 deaths in New York would very likely be between 5,167 and 26,444. A rounded number – say, 10,000 – would have conveyed the ballpark nature of their guesstimate. Instead, the number the University of Washington group published was the very exact 10,243. As one statistician told RealClearInvestigations, the IHME projections suffer from the “fallacy of misplaced concreteness.”

The Imperial College London model also suffered from uncertainty over what factors cause the disease to spread. Consider musical concerts. As states, counties and cities in the U.S. attempt to reopen gradually, the last on the list to be liberated are likely to be live performances that entail “mass gatherings.” And yet, go back to the Imperial College London study – from the “response team” that did so much to stampede the U.K. into lockdown – and one finds this assessment of the danger of crowds: “Stopping mass gatherings is predicted to have relatively little impact because the contact-time at such events is relatively small compared to the time spent at home, in schools or workplaces and in other community locations such as bars and restaurants.”

Respected scientists questioned not only the epidemiologists’ efforts, but the very value of such models. “I’ve spent a lot of time on the models,” Dr. Anthony Fauci reportedly told his colleagues on the White House’s pandemic task force. “They don’t tell you anything. You can’t really rely upon models.”

Mike Hulme of the University of Cambridge: Computer models “appear to offer authoritative and quantified predictions of the future. This is as true for climate change as it is for a pandemic.”

And yet we do.

We are impressed with models in part because of their intellectual provenance: They are “created by some of the cleverest people and often rely on some of the most advanced monitoring or simulation technologies available to us,” according to Mike Hulme, a professor at the University of Cambridge and editor of last year’s “Contemporary Climate Change Debates: A Student Primer.” If one is in need of an oracle, models “appear to offer authoritative and quantified predictions of the future,” he says. “This is as true for climate change as it is for a pandemic.”

Climate modeling and virus transmission modeling have certain similarities, says Hulme. “In both cases models are alluring, claiming to offer a glimpse of the future denied to mere mortals,” he told RealClearInvestigations. “Politicians easily get dazzled by them. People easily confuse precision — models are good at that! — with accuracy — models are rarely accurate.” Which is why Hulme praises as wise the decision-maker who isn’t “sucked into the gravitational force fields of models.”

There are also differences: Climate models have a leg up on the COVID models if only because they’ve been tested for 20 to 30 years, and revised and adjusted accordingly, says Hulme. The COVID modelers have been working with inconsistent, “gappy” data and untried assumptions. And yet even with the decades of effort that have gone into climate modeling, modelers struggle to predict phenomena such as regional rainfall.

“The key message,” Hulme tells RCI, “is not to mistake model-land for the real world. They are two separate places.” All models are wrong, he says, but some are useful. “Models are far better as tools to help us think with than they are as truth oracles. We must not think that models have some privileged access to ‘the future.’ That would likely lead to some very poor decisions.”

A core challenge for models is the sheer number of variables that must be taken into account. With climate, that includes the amounts of greenhouse gases such as carbon dioxide, nitrous oxide, and methane, but also soot and sulfur aerosols and the activity of the sun. And then there are interactions to be accounted for. Not only do aerosols themselves affect how much sunlight reaches Earth, they also affect the formation of clouds, which in turn reflect sunlight out of the atmosphere.

Judith Curry: With every climate submodel added, the possibility of error compounds, multiplying the chance that the main model veers off target.

If it is hard to model a single phenomenon, it is exponentially more difficult when a given model contains submodels, each with its own uncertainties. “Each time you add a new submodel, you are adding new degrees of freedom to the system with new feedbacks,” says Judith Curry, former chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology. “Then when you couple the new submodel to the larger model, you add additional degrees of freedom to each variable that the new submodel connects with.” In other words, with every submodel added the possibility of error compounds, multiplying the chance that the main model veers off target. “This issue,” Curry says, “remains at the heart of many of the problems and uncertainties in global climate models.”
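A toy calculation illustrates the point. The sketch below is not a climate model; it simply chains together a hypothetical number of “submodels,” each contributing its own assumed small error, and shows how the spread of possible outputs grows as more of them are coupled.

```python
# Illustrative toy, not an actual climate model: propagate uncertainty through a
# chain of coupled "submodels," each contributing its own hypothetical error.
import random

def run_chain(num_submodels, relative_error=0.05, trials=10_000):
    """Monte Carlo estimate of the output spread after chaining submodels,
    each multiplying the signal by a factor with ~5% assumed uncertainty."""
    outputs = []
    for _ in range(trials):
        value = 1.0
        for _ in range(num_submodels):
            value *= random.gauss(1.0, relative_error)
        outputs.append(value)
    mean = sum(outputs) / trials
    spread = (sum((x - mean) ** 2 for x in outputs) / trials) ** 0.5
    return mean, spread

# The more submodels are coupled, the wider the range of possible outputs.
for n in (1, 5, 10, 20):
    mean, spread = run_chain(n)
    print(f"{n:2d} coupled submodels: output spread ~{spread:.3f}")
```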

The accuracy of climate model predictions also depends on assumptions made about future human behavior. If a model is alarming, it’s worth checking whether it has built in an unlikely eventuality, such as that by the end of the century industry will be burning five times the coal it is now. But predictions of human behavior are inherently uncertain. Whether disease or climate, a modeler has to anticipate the social issues that come into play. Did sudden mass joblessness and government-enforced social isolation help create tinderbox conditions leading to nationwide rioting, looting and arson? Crowds in the streets don’t maintain social distancing, which means that mass protests could affect the number of people who become infected by COVID-19.

There is no one climate model. The models used by the United Nations Intergovernmental Panel on Climate Change rely on different assumptions but have been in the ballpark of observed warming. But there is regional variation in warming, and many models have predicted rising temperatures in Antarctic waters and a loss of sea ice there.

Jagadish Shukla: Epidemiological models “are empirical models driven by incomplete data; climate models are based on fundamental laws of physics and thermodynamics.”

Some climate scientists deny that COVID-19 models and climate models are anything alike. They stand by the rigor of climate simulations while agreeing the disease projections are flawed: “There are fundamental differences between epidemiological models and climate models,” George Mason University climatology professor Jagadish Shukla told RealClearInvestigations. “The former are empirical models driven by incomplete data; climate models are based on fundamental laws of physics and thermodynamics.”

But many researchers vigorously defend the coronavirus models.  “Right answers are not what epidemiological models are for,” Zeynep Tufekci wrote in The Atlantic. “When an epidemiological model is believed and acted on, it can look like it was false.” What matters in this view is that a model spur action to change the outcome, not that it does anything so mundane as describe the real world.

Neither epidemic nor climate models attempt merely to predict what will happen. Instead, they set out to project what will happen if people do or don’t change their behavior in response to the models. Modelers aren’t exactly incentivized to be modest about the worst-case scenarios. As one accomplished academic statistician told RealClearInvestigations, “Part of the process is to scare people to get them to take things seriously.”

Physicist Lenny Smith, professor of statistics at the London School of Economics, says that many climate models operate on time frames beyond the lifespan of the modelers. Often, he says, “we can’t see the outcome being modeled for 150 years.” By contrast, some of the COVID-19 models are being used to predict what will happen the next day. A group of Australian and American data scientists has been comparing the real coronavirus data of a given day against the IHME predictions made the day before. Put to the test, the model has proved to have little predictive value.
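Such a day-ahead check is straightforward in principle. The sketch below uses made-up numbers, not the actual IHME figures, to show the kind of comparison involved: count how often the observed value lands inside the interval the model published the day before.

```python
# A sketch of the check described above: compare each day's observed deaths with
# the interval the model published the day before. All numbers here are invented
# for illustration; they are not the IHME figures.

observed       = [612, 655, 580, 731, 698]   # hypothetical daily deaths
predicted_low  = [400, 640, 610, 560, 720]   # hypothetical lower bounds
predicted_high = [800, 900, 640, 700, 950]   # hypothetical upper bounds

hits = sum(lo <= obs <= hi
           for obs, lo, hi in zip(observed, predicted_low, predicted_high))
print(f"Observations inside the previous day's interval: {hits}/{len(observed)}")
```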

“COVID models are more easily evaluated, since they are making short-term predictions,” says Judith Curry. “Climate models are making predictions for decades into the future,” she says. “By the time the climate change is actually realized, there will have been several generations of new climate models.”

Sally Cripps, statistician: “The data science community in particular needs a little more humility. It needs to hose down claims about Big Data being a crystal ball.”

In other words, the models get adjusted along the way, creating an appearance of accuracy. It’s not done chaotically, as the epidemic model revisions have been, but rather in a systematic way that has been kept somewhat undercover. The climate model revisions are called “tuning” and were discussed in “The Art and Science of Climate Model Tuning” by Frédéric Hourdin and a dozen other climatologists in the Bulletin of the American Meteorological Society in 2017.

“[T]uning is often seen as an unavoidable but dirty part of climate modeling,” they write, “an act of tinkering that does not merit recording in the scientific literature.” The tinkering consists of “adjusting the values” of the submodels after the fact, bringing “the solution as a whole into line with aspects of the observed climate.” The tinkering is not advertised, the scientists admit, because of “concern that explaining that models are tuned, may strengthen the arguments of those claiming to question the validity of climate change projections.”

To make those adjustments, some climate modelers follow theories; some use observations; some just make a “back-of-the-envelope estimate.” But it isn’t done randomly: Hourdin et al. write, “[S]ome models are explicitly, or implicitly, tuned to better match the 20th century warming.”
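A toy example conveys the idea of tuning. The sketch below is a hypothetical stand-in, not any group’s actual climate code: a one-parameter model is “tuned” by choosing whichever parameter value best reproduces a made-up observed warming record.

```python
# Toy illustration of "tuning": pick the value of one free parameter that makes a
# simple model best match an observed record. The model and numbers are
# hypothetical stand-ins, not any group's actual climate code.

observed_warming = [0.1, 0.2, 0.35, 0.55, 0.8]    # hypothetical warming (deg C)
forcing          = [0.5, 1.0, 1.5,  2.0,  2.5]    # hypothetical forcing index

def model(sensitivity):
    """A one-parameter stand-in model: warming proportional to forcing."""
    return [sensitivity * f for f in forcing]

def sum_squared_error(sensitivity):
    """Mismatch between the model's output and the observed record."""
    return sum((m - o) ** 2 for m, o in zip(model(sensitivity), observed_warming))

# Grid-search the parameter; the "tuned" value is whichever best fits history.
candidates = [s / 100 for s in range(10, 61)]
tuned = min(candidates, key=sum_squared_error)
print(f"Tuned sensitivity: {tuned} (error {sum_squared_error(tuned):.4f})")
```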

Whether it’s epidemiology, climate, or economics, says Sally Cripps of the University of Sydney, modelers need to “acknowledge and explain the uncertainty” in their enterprise. “The data science community in particular needs a little more humility,” she says. “It needs to hose down claims about Big Data being a crystal ball, and instead use the data to understand what we don’t know. That is the way forward.”
