The Greatest Scientific Fraud Of All Time — Part XXIV
Francis Menton

https://www.manhattancontrarian.com/blog/2019-8-14-the-greatest-scientific-fraud-of-all-time-part-xxiv

The Greatest Scientific Fraud Of All Time is the fraud committed by the keepers of official world temperature records, by which they intentionally adjust early-year temperature records downward in order to support assertions that dangerous human-caused global warming is occurring and that the most recent year or month is the “hottest ever.” The assertions of dangerous human-caused global warming then form the necessary predicate for tens of billions of dollars of annual spending going to academic institutions; to the “climate science” industry; to wind, solar and other alternative energy projects; to electric cars; and on and on. In terms of real resources diverted from productive to unproductive activities based on falsehoods, this fraud dwarfs any other scientific fraud ever conceived in human history.

This is Part XXIV of my series on this topic. To read Parts I through XXIII, go to this link.

The previous posts in this series have mostly focused on particular weather stations, comparing the currently-reported temperature history for each station with previously-reported data. For example, the very first post in this series, from July 2013, looked at one of my favorite stations, the one located in Central Park in New York City. Somehow, the early-year temperatures reported for the month of July for that very prominent station had been substantially adjusted downward, thus notably enhancing a previously-slight warming trend:

[Chart: Central Park average July temperatures, previously-reported versus currently-reported data]

Go through the various posts in this series to find dozens more such examples.

But how exactly are these downward adjustments accomplished? Just what are the games that they are playing?

Close observers of this subject have long recognized that the principal issue is something called “homogenization.” The custodians of the temperature records — principally two U.S. government agencies called NOAA and NASA — are quite up-front in declaring that they engage in “homogenization” of the temperature data. Here is an explanation from NOAA justifying the changes made in producing the latest version (version 4) of their world surface temperature series, known as the Global Historical Climatology Network (GHCN). Excerpt:

Nearly all weather stations undergo changes in the circumstances under which measurements are taken at some point during their history. For example, thermometers require periodic replacement or recalibration and measurement technology has evolved over time. . . . “Fixed” land stations are sometimes relocated and even minor temperature equipment moves can change the microclimate exposure of the instruments. In other cases, the land use or land cover in the vicinity of an observing site can change over time, which can impact the local environment that instruments are sampling even when measurement practice is stable. All of these different modifications to the circumstances of recording near surface air temperature can cause systematic shifts in temperature readings from a station that are unrelated to any real variation in local weather and climate. Moreover, the magnitude of these shifts (or “inhomogeneities”) can be large relative to true climate variability. Inhomogeneities can therefore lead to large systematic errors in the computation of climate trends and variability not only for individual station records, but also in spatial averages.

That sounds legitimate, doesn’t it? Of course, they completely slide over the fact that “homogenization” means that essentially all temperatures as now reported have been adjusted to some greater or lesser degree; and they definitely never mention the fact that the practical result of the so-called “homogenization” process is significant downward adjustments in earlier-year temperatures. So, without the adjustments, is there a real underlying warming trend? Or is the whole “warming” thing just an artifact of the adjustments? And are the adjustments as actually implemented fair or not? How do you know that they are not using the adjustment process to reverse-engineer the warming trend that they need to keep the “climate change” gravy train going?
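For what it's worth, the problem NOAA describes is real enough in principle. Here is a toy illustration of my own (made-up numbers, not anything from NOAA) showing how a single undetected station change can turn a record with no warming at all into one with an apparent warming trend:

```python
# Toy illustration (not NOAA code): an abrupt station change (an
# "inhomogeneity") in an otherwise trendless record biases a naive
# linear-trend estimate.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2020)

no_warming = np.zeros(years.size)                 # the true climate: flat
weather_noise = rng.normal(0.0, 0.3, years.size)  # year-to-year weather

# Hypothetical station move in 1960 to a microsite reading 1.0 C warmer.
station_move = np.where(years >= 1960, 1.0, 0.0)

observed = no_warming + weather_noise + station_move

# Naive least-squares trend over the full record, in degrees C per century.
slope_per_century = np.polyfit(years, observed, 1)[0] * 100
print(f"True trend: 0.00 C/century; naive fit: {slope_per_century:+.2f} C/century")
```

The dispute, then, is not over whether such breaks exist; it is over what the correction process does to the overall record.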

Making things far worse, the adjusters do not disclose the details of their adjustment methodology. Here is what they say about their methodology on the NOAA web page:

In GHCNm v4, shifts in monthly temperature series are detected through automated pairwise comparisons of the station series using the algorithm described in Menne and Williams (2009). This procedure, known as the Pairwise Homogenization Algorithm (PHA), systematically evaluates each time series of monthly average surface air temperature to identify cases in which there is an abrupt shift in one station’s temperature series (the “target” series) relative to many other correlated series from other stations in the region (the “reference” series). The algorithm seeks to resolve the timing of shifts for all station series before computing an adjustment factor to compensate for any one particular shift. These adjustment factors are based on the average change in the magnitude of monthly temperature differences between the target station series with the apparent shift and the reference series with no apparent concurrent shifts.

Not very enlightening.
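To give at least a rough idea of what that paragraph is describing, here is a stripped-down sketch of the pairwise idea. This is my own simplification, emphatically not NOAA's actual code (the real algorithm rests on formal changepoint tests across many correlated station pairs, per Menne and Williams (2009)): find the point where a target series shifts relative to a reference series, then move the earlier segment so that the average target-minus-reference difference matches on both sides of the break.

```python
# Stripped-down sketch of the pairwise idea quoted above -- NOT NOAA's
# actual Pairwise Homogenization Algorithm, which uses formal changepoint
# tests across many correlated station pairs (Menne and Williams, 2009).
import numpy as np

def detect_shift(target, reference):
    """Crude changepoint estimate: the index where the target-minus-reference
    difference series splits into two segments with the most different means."""
    diff = target - reference
    best_idx, best_gap = None, 0.0
    for i in range(10, diff.size - 10):            # skip the record's edges
        gap = abs(diff[i:].mean() - diff[:i].mean())
        if gap > best_gap:
            best_idx, best_gap = i, gap
    return best_idx, best_gap

def adjust_for_shift(target, reference, idx):
    """Shift the pre-break segment so the mean target-minus-reference
    difference is the same before and after the detected break."""
    diff = target - reference
    factor = diff[idx:].mean() - diff[:idx].mean()
    adjusted = target.copy()
    adjusted[:idx] += factor       # earlier years move; later years stay put
    return adjusted

# Hypothetical example: a station reading 1.0 C high relative to a stable
# neighbor until a 1960 relocation. The adjustment lowers its pre-1960 values.
rng = np.random.default_rng(1)
years = np.arange(1900, 2020)
reference = rng.normal(15.0, 0.3, years.size)
target = reference + np.where(years < 1960, 1.0, 0.0) + rng.normal(0.0, 0.1, years.size)

idx, gap = detect_shift(target, reference)
adjusted = adjust_for_shift(target, reference, idx)
print(f"Break detected at {years[idx]}, size {gap:.2f} C")
```

Note the design choice buried in the adjustment step: the correction is applied to the earlier segment so that current readings are left untouched. Whatever else one thinks of the method, that is why the visible footprint of homogenization is always a change in early-year temperatures.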

Anyway, a guy named Tony Heller, who runs a web site called The Deplorable Climate Science Blog, has just come out with a video that explains very simply and graphically how this “homogenization” process is used to lower early-year temperatures and thus create artificial warming trends. The video is only about 8 minutes long, and well worth your time.

The video focuses on the region of Buenos Aires, Argentina, to demonstrate the process. In that area, there are only three temperature stations with long-term records going back as far as the early 1900s. One is called the Buenos Aires Observatory, and is located right in the middle of downtown Buenos Aires. Obviously, this station has been subject to substantial warming caused by what is known as the “urban heat island” — the buildup of asphalt and concrete and air conditioning and heating and so forth in the area immediately surrounding the thermometer. The other two sites — Mercedes and Rocha — are located some distance from town, in areas far from urban buildup.

Heller shows both originally-reported and adjusted data for each of the three sites. It won’t surprise you. The Buenos Aires Observatory site shows strong warming in the originally-reported data. The other two sites show no warming in the originally-reported data. The “homogenization” changes have adjusted all the thermometers to reflect a common trend of warming. For downtown Buenos Aires, the changes have somewhat reduced the originally-reported warming. For the other two stations, the changes have introduced major warming trends that did not exist at all in the originally-reported data. The trend has been inserted without changing the ongoing reporting of current data, which necessarily means that it has been introduced by reducing the earlier-year temperatures.

In short, the bad data from downtown Buenos Aires has been used to contaminate the good data from the other two sites, and to create a false warming trend.
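To make the arithmetic of that contamination concrete, here is a toy demonstration with synthetic numbers (not Heller's actual data, and deliberately cruder than the real algorithm) of what happens when a flat rural series is pulled toward an urban series that warms steadily, with present-day values held fixed:

```python
# Toy demonstration with synthetic data (not the actual Buenos Aires,
# Mercedes, or Rocha records): pulling a flat rural series toward a
# UHI-warming urban series while holding present-day values fixed can
# only be accomplished by lowering the rural station's early years.
import numpy as np

years = np.arange(1900, 2020)

rural = np.full(years.size, 17.0)                  # flat: no real warming
urban = 17.0 + 1.5 * (years - 1900) / 119.0        # +1.5 C of urban drift

# "Homogenize" rural halfway toward urban, then re-anchor so the most
# recent year is unchanged -- mimicking adjustments applied back in time.
blended = 0.5 * rural + 0.5 * urban
homogenized = blended - (blended[-1] - rural[-1])

def trend_per_century(series):
    return np.polyfit(years, series, 1)[0] * 100

print(f"rural raw trend:         {trend_per_century(rural):+.2f} C/century")
print(f"rural homogenized trend: {trend_per_century(homogenized):+.2f} C/century")
print(f"1900 value: raw {rural[0]:.2f} C -> adjusted {homogenized[0]:.2f} C")
```

The raw rural series has a trend of exactly zero; the “homogenized” version warms, and every bit of that warming comes from pushing the early years down while the latest readings stay exactly where they were.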

Is there anything honest about this? In my opinion, there is no possibility that the people who program the “homogenization” adjustments do not know exactly what effect their adjustments will have on the temperature trends as reported to the public. In other words, it is an intentional deception. Heller asserts that the data from downtown Buenos Aires is obviously tainted by the urban heat island effect, and that the correct thing to do (if you were actually trying to come up with a temperature series to detect atmospheric warming) would be to discard the Buenos Aires data and use the unadjusted readings from the other two stations. I have to say that I agree with him.
