Don't Tell Anyone, But We Just Had Two Years Of Record-Breaking Global Cooling


Forum Staff
Apr 2013
La La Land North
Then explain the link on the lake study
Easy. Look at the one graph in the "paper". Do a least squares on it. It has a positive slope. Warming. No matter how much they try to wiggle and dance.
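That least-squares check takes about five lines. A sketch with made-up anomaly numbers standing in for the graph in question (I don't have the paper's actual data):

```python
import numpy as np

# Hypothetical yearly anomalies: a slight upward drift plus wiggles.
# The values are invented for illustration, not the lake study's data.
years = np.arange(2000, 2015)
anoms = 0.02 * (years - 2000) + np.array(
    [0.05, -0.03, 0.04, 0.00, -0.02, 0.06, 0.01, -0.04,
     0.03, 0.02, -0.01, 0.05, 0.00, 0.04, -0.02])

# Ordinary least squares: the slope is the leading coefficient
# of a degree-1 polynomial fit.
slope, intercept = np.polyfit(years, anoms, 1)
print(f"trend: {slope:+.4f} degC/yr")  # positive slope -> warming
```

If the slope comes out positive, the series is warming over the fitted span, however much the presentation tries to suggest otherwise.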
Feb 2007
Global Warming Basics: Trend Games

Posted on April 15, 2016 | 15 Comments


Quite recently the author of a post at the WUWT blog felt the need to tell us that there has been “No Statistically Significant Satellite Warming For 23 Years.” For those who don’t already know, WUWT is one of the most prominent blogs that deny the danger of man-made climate change. He shows you this:

This isn’t made-up data like in my illustrative example, it’s satellite data for the temperature in the lower troposphere (the bottom several miles of the atmosphere). The flat line has a slope which is the bottom of the confidence interval (basically, zero). His “point” is that maybe the slope is zero — not warming. Maybe it’s not warming — yay!

But he doesn’t even show the data that came before, the data that establish a trend which is upward, above zero, with statistical significance. He also doesn’t say (and maybe isn’t even aware) that the top of the confidence interval includes the trend that existed beforehand, so there’s really not sufficient evidence to conclude that the trend changed — but that’s a question you can only ask if you have access to data from beforehand. Which we do.

Maybe he actually believes it. Maybe he doesn’t realize that when you’re investigating a trend, whether or not it’s meaningful, you don’t get to ignore what you’ve already seen. If data since 1993 were all we had from satellite measurements, that would be all we could include. But we have more, and acting as though it doesn’t exist is bad practice.

Returning to my made-up sample data, ignoring context amounts to making a model of the trend that looks like this:

It completely isolates the two time spans, before and after, achieving a “not statistically significant” result for the last 15 years. Take a good look at the trend line in red: it’s a discontinuous trend, what I often call a “broken” trend. When it comes to global temperature, a broken trend is non-physical. Perhaps the trend really did change, that’s physically sensible, but it should at least be continuous, without some sudden jump from one value to another. If we use math to fit a model which does include a trend change at that time, but requires it to be continuous, we get this:
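In code, the broken model is nothing more than two unrelated straight-line fits. A sketch with invented numbers (a steady trend plus one warm spike at the chosen break year, loosely imitating 1998) shows the jump it produces:

```python
import numpy as np

# Made-up yearly series: a steady 0.02 degC/yr trend with a one-off
# warm spike at the break year. These numbers are illustrative only.
years = np.arange(1979, 2016)
temps = 0.02 * (years - 1979)
temps[years == 1998] += 0.3  # the spike that invites a cherry-pick

break_year = 1998
early = years < break_year
late = ~early

# "Broken" trend: two completely independent straight-line fits.
s1, b1 = np.polyfit(years[early], temps[early], 1)
s2, b2 = np.polyfit(years[late], temps[late], 1)

# The two lines need not meet at the break; that gap is the
# non-physical discontinuity.
jump = (s2 * break_year + b2) - (s1 * break_year + b1)
print(f"early slope {s1:+.4f}, late slope {s2:+.4f}, jump {jump:+.3f} degC")
```

Even though the underlying trend never changed, the second segment's slope comes out smaller and the two fitted lines disagree at the break.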

It no longer even “looks like” the trend has recently been flat. And when we check the stats, the confidence interval for the last-15-years trend no longer includes zero; recent warming actually is statistically significant.
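The continuous trend-change model is easy to fit too: add a “hinge” regressor that is zero before the break, so the slope can change but the line cannot jump. A sketch with the same kind of invented numbers as before:

```python
import numpy as np

# Continuous "hinge" model: one intercept, one base slope, and a slope
# *change* at the break year, so the fitted line stays continuous.
# The data are made up: a steady trend plus a spike at the break year.
years = np.arange(1979, 2016).astype(float)
temps = 0.02 * (years - 1979)
temps[years == 1998.0] += 0.3

t0 = 1998.0
X = np.column_stack([
    np.ones_like(years),           # intercept
    years - years[0],              # base trend
    np.clip(years - t0, 0, None),  # extra slope after the break, 0 before
])
beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
base_slope, slope_change = beta[1], beta[2]
late_slope = base_slope + slope_change
print(f"trend after {t0:.0f}: {late_slope:+.4f} degC/yr")
```

Because the hinge term is zero at the break itself, the fitted line is continuous by construction, and the post-break slope stays clearly positive.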

Only by ignoring what came before, allowing for a non-physical “broken” trend, can you claim “not statistically significant” for the last 15 years. But those who deny the danger of man-made climate change do exactly that: they ignore context and create models with a broken trend. Maybe, perhaps even probably, they’re not aware of precisely what they’re doing, or of why it’s nonsense. But given that they’re unaware of how to do it right, how much should we rely on their claims?

There’s another crucial aspect to their common habit of ignoring, even concealing, the data that put things in proper context. When they choose a time at which to start, they don’t do it because there’s a good laws-of-physics reason, or because they have solid statistical evidence that it’s when the trend really did change. They choose their start time specifically because it gives them the result they want. There’s a name for that process: it’s called cherry-picking. It has a profound effect on the statistics; you’re much, much more likely to get the result you want if you’re allowed to pick and choose from a large number of options (just like you’re much more likely to win the lottery if you buy a lot of lottery tickets than if you only buy one).
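You can see the scan-every-start-date effect in a toy example. The numbers are invented: a fixed warming trend plus a deterministic wiggle standing in for natural variability such as El Niño:

```python
import numpy as np

# A cherry-picker gets to try every possible start year and report
# whichever trailing trend looks weakest. Synthetic data: fixed trend
# plus a deterministic wiggle (a stand-in for natural variability).
years = np.arange(1979, 2016).astype(float)
wiggle = 0.1 * np.sin(0.9 * np.arange(years.size))
temps = 0.02 * (years - years[0]) + wiggle

def trailing_slope(start):
    m = years >= start
    return np.polyfit(years[m], temps[m], 1)[0]

full = trailing_slope(years[0])
# Require at least 10 points so the "trend" isn't pure noise.
candidates = {int(y): trailing_slope(y) for y in years[:-10]}
best_pick = min(candidates, key=candidates.get)
print(f"full-record trend {full:+.4f} degC/yr, "
      f"cherry-picked start {best_pick}: {candidates[best_pick]:+.4f}")
```

The cherry-picked trailing trend is never steeper than the weakest available option, by construction; the more start dates you are free to scan, the weaker a "trend" you can report from data whose real trend never changed.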

Sometimes it’s done innocently by those who aren’t aware of what they’re doing or how it affects the reliability of their conclusions. Far too often, it’s done deliberately, by those who either know better, or should know better, just how much it poisons the validity of their analysis and how misleading it is. When they have an agenda, political or ideological, they’re willing. We shouldn’t be.

In case you’re wondering how this impacts real-world data rather than the made-up data I used for my examples, let’s see what happens when we take a close-up of the data used by the poster at WUWT, the satellite data for lower-troposphere temperature (TLT) from Remote Sensing Systems (RSS). Here’s all of it (as of this writing), not just the cherry-picked time span shown on WUWT:

The thick, straight red line shows the estimated trend when you use all the data. The thick broken blue line shows what you get if you split it into two pieces because you’re going to show the later stuff only and declare the trend is “not statistically significant.” Notice how, by breaking the trend at a pre-selected moment, we can make both segments seem to be warming more slowly than the entire time span. That’s some masterful cherry-picking.

If we allow a trend change at the moment the WUWT author selected, but require the trend to be continuous rather than broken, we get this:

I’ve plotted the “trend-change” line in blue as before, but this time I plotted it as a thick dashed line. That’s because it’s so close to the “no-trend-change” line (in red) that if I used a solid line, they’d be on top of each other and you wouldn’t be able to see them both.

This whole idea that Earth was warming but isn’t any more, that recent data show maybe there’s no more global warming, is a sham. Presenting evidence to support it requires the kind of trend games that make analysis unreliable: non-physical broken trends, failure to show context, and especially careful selection of start times for the specific purpose of getting a desired result — cherry-picking, one of the most common and favorite techniques of those who deny the danger of man-made climate change.
Temperature data is not “the biggest scientific scandal ever”
Do we have to go through this every year?

John Timmer - 2/9/2015, 12:41 PM

Over the weekend, another editor pointed me to this piece in The Telegraph in which columnist Christopher Booker calls scientists' handling of the temperature data "the biggest science scandal ever." The same piece also appeared in a discussion today and was sent in via the reader-feedback form. So, it seemed worth looking into.

Doing so caused a bit of a flashback—to January 2013, specifically. That was the last time that the previous year had been declared the warmest on record, an event that apparently prompts some people to question whether we can trust the temperature records at all.

The culprit that time was Fox News, but the issue was the same: the raw data from temperature measurements around the world aren't just dumped into global temperature reconstructions as-is. Instead, they're processed first. To the more conspiracy minded, you can replace "processed" with "fraudulently manipulated to make it look warmer."

Why do they have to be processed at all? Because almost none of the records are continuous. Weather stations have moved, they've changed the time of day at which the temperature-of-record is taken, and they've replaced old thermometers with more modern equipment. All of these events create discontinuities in the record of each location, and the processing is used to get things into alignment, creating a single, unified record.
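As a toy illustration of that alignment step (nothing like the real pairwise homogenization algorithms, and with invented numbers), imagine a station that reads 0.5°C too low after a move, and a nearby reference record used to estimate the break:

```python
import numpy as np

# Toy homogenization: a station is moved in 1995 and afterwards reads
# 0.5 degC too low. Using a nearby reference series, estimate the step
# from the mean station-minus-reference difference before vs. after
# the move, then remove it. All numbers here are made up.
years = np.arange(1980, 2011).astype(float)
climate = 0.015 * (years - years[0])   # shared regional signal
reference = climate.copy()             # idealized neighbor record
station = climate.copy()
station[years >= 1995] -= 0.5          # non-climatic step change

diff = station - reference
before = diff[years < 1995].mean()
after = diff[years >= 1995].mean()
step = after - before                  # estimated break size

adjusted = station.copy()
adjusted[years >= 1995] -= step        # align the two segments
print(f"estimated step {step:+.2f} degC")
```

In this idealized case the estimated step recovers the artificial 0.5°C offset exactly, and the adjusted record matches the regional signal again; real methods have to cope with noisy references and multiple breaks, but the principle is the same.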

Does it work? The team behind the Berkeley Earth project performed a different analysis in which they didn't process to create a single record and instead treated the discontinuities as breaks that defined separate temperature records. Their results were indistinguishable from the normal analysis.

We knew this already; we knew it two years ago when Fox published its misguided piece. But our knowledge hasn't stopped Booker from writing two columns using hyped terms like "scandal" and claiming the public is being "tricked by flawed data on global warming." All of this is based on a few posts by a blogger who has gone around cherry-picking a handful of temperature stations and claiming the adjustments have led to a warming bias.

Why would Booker latch on to this without first talking to someone with actual expertise in temperature records? A quick look at his Wikipedia entry shows that he has a lot of issues with science in general, claiming that things like asbestos and second-hand smoke are harmless, and arguing against evolution. So, this sort of immunity to well-established evidence seems to be a recurring theme in his writing.

But the whole thing demonstrates two annoying aspects of the climate debate. The first is that, when people don't like the records that the human-driven warming is setting, they start to argue the record keeping is invalid. Booker has decided to repeat an attack on the temperature data that underlies it, but others have attacked the statistical analysis of those temperatures, suggesting the scientists were hiding something. One of those scientists, Gavin Schmidt, helpfully pointed out that they hid the statistics so well that they were visualized in a slide used at the press conference announcing the record.

The second aspect is that people like Booker (and the blogger whose work he's promoting) repeatedly try to take advantage of the public's limited attention to this topic. I happen to be aware of things like Berkeley Earth and the same arguments surfacing in 2013 simply because I covered them at the time and therefore read up on them in detail. The public won't have that knowledge, so Booker's claims can sound like a damaging revelation—and completely new.

They're not. But I'll bet that if 2015 sets a temperature record, I'll be able to rerun this story with little more than the names changed.
Climatologists have manipulated data to REDUCE global warming
Climatologists are continually accused by political activists of fiddling with the data to make global warming look stronger for political purposes.

A typical scam is to show a few stations that have been adjusted upwards and act as if that is typical. For example, recently The Telegraph article, "The fiddling with temperature data is the biggest science scandal ever", wrote about someone comparing

temperature graphs for three weather stations in Paraguay against the temperatures that had originally been recorded. In each instance, the actual trend of 60 years of data had been dramatically reversed, so that a cooling trend was changed to one that showed a marked warming.
Three, I repeat: 3 stations. For comparison, global temperature collections contain thousands of stations. CRUTEM contains 4,842 quality stations and Berkeley Earth collected 39,000 unique stations. No wonder some are strongly adjusted up, just as some happen to be strongly adjusted down. In fact it would be easy to present a station where the raw data shows a decreasing trend of several degrees being adjusted upwards, but then the reader might start to wonder whether the raw data is really better.

What these people do not tell their readers is that the average trend over all stations is only adjusted upwards slightly. That would put things too much in perspective. What these people do not tell their readers is why these adjustments are made. That might make some think that they may make sense. What these people normally do not tell their readers is how these adjustments are made. That would not sound sufficiently arbitrary and conspiratorial.

Last year we had similar scams about two stations in Australia and the stations in New Zealand.

In an internet poll, 88% of the readers of the abysmal Telegraph piece answered "yes" to the question: "Has global warming been exaggerated by scientists?"

I hope that after reading this post, these 88% will agree that they have been conned by The Telegraph, and that scientists have actually made global warming smaller.

Zeke Hausfather, an independent researcher who works with Berkeley Earth, made a beautiful series of plots to show the size of the adjustments.

The first plot is for the land surface temperature from climate stations. The data is from the Global Historical Climatology Network dataset (GHCNv3) of NOAA (USA). Their method to remove non-climatic effects (homogenization) is well validated and recommended by the homogenization community.

They adjust the trend upwards. In the raw data the trend is 0.6°C per century since 1880 while after removal of non-climatic effects it becomes 0.8°C per century. See below. But it is far from changing a cooling trend into strong warming.

(In case you believe many national weather services are also in the conspiracy: a small part of the GHCNv3 raw data was already homogenized before they received it.)

Not many people know, however, that the sea surface temperature trend is adjusted downward. That does not fit the narrative of WUWT & Co. It seems even many scientists did not know this. These downward adjustments happen to be about the same size, but go in the other direction. See below the sea surface temperature of the Hadley Centre (HadSST3) of the UK Met Office.

Being land creatures, people do not always realise how big the ocean is. Thus if you combine these two temperature signals, taking the areas of land and ocean into account, you get the result below. The net effect of the adjustments is a reduction of global warming.
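The combination is simple area weighting. In the sketch below, the land trends are the ones quoted in this post; the ocean numbers are illustrative placeholders with a downward adjustment of about the same size:

```python
# Back-of-envelope area weighting of land and ocean temperature trends.
LAND_FRACTION = 0.29   # roughly 29% of Earth's surface is land
OCEAN_FRACTION = 0.71

land_raw, land_adj = 0.6, 0.8    # degC/century, as quoted in the post
# Ocean numbers are assumed for illustration: a downward adjustment
# of similar size to the land's upward one.
ocean_raw, ocean_adj = 0.7, 0.5

global_raw = LAND_FRACTION * land_raw + OCEAN_FRACTION * ocean_raw
global_adj = LAND_FRACTION * land_adj + OCEAN_FRACTION * ocean_adj
print(f"raw {global_raw:.3f} vs adjusted {global_adj:.3f} degC/century")
```

Because the ocean covers more than twice the area of the land, its downward adjustment outweighs the land's upward one, and the adjusted global trend comes out smaller than the raw one.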

It is pure coincidence that this happens; the reasons for the two sets of adjustments are entirely different.

The land surface temperature trend has to be adjusted up because old temperatures were often too high due to insufficient protection against warming by the sun and possibly because the siting of the stations improved. There are likely more reasons.

The sea surface temperatures are adjusted downward because old measurements were made by taking a bucket of water out of the ocean, and the water cooled by evaporation during the temperature measurement. Furthermore, modern measurements are made at the water inlet of the engine, and the hull of the ship warms the water a little before it is measured.

But while it is a pure coincidence and while other datasets may show somewhat different numbers (the BEST adjustments are smaller), the downward adjustment does clearly show that climatologists do not have an agenda to exaggerate global warming. Like all reasonable people already knew. That would still be true if the adjustments had happened to go upward.
Sep 2018
cleveland ohio
Changes in weather are just that, weather; trends over decades and centuries are global warming.
Sep 2018
cleveland ohio
you cherry pick
Cherry picking, suppressing evidence, or the fallacy of incomplete evidence is the act of pointing to individual cases or data that seem to confirm a particular position while ignoring a significant portion of related cases or data that may contradict that position. It is a kind of fallacy of selective attention, the most common example of which is the confirmation bias.[1][2] Cherry picking may be committed intentionally or unintentionally. This fallacy is a major problem in public debate.[3]

The term is based on the perceived process of harvesting fruit, such as cherries. The picker would be expected to only select the ripest and healthiest fruits. An observer who only sees the selected fruit may thus wrongly conclude that most, or even all, of the tree's fruit is in a likewise good condition. This can also give a false impression of the quality of the fruit (since it is only a sample and is not a representative sample).

Cherry picking has a negative connotation as the practice neglects, overlooks or directly suppresses evidence that could lead to a complete picture.

A concept sometimes confused with cherry picking is the idea of gathering only the fruit that is easy to harvest, while ignoring other fruit that is higher up on the tree and thus more difficult to obtain (see low-hanging fruit).

Cherry picking can be found in many logical fallacies. For example, the "fallacy of anecdotal evidence" tends to overlook large amounts of data in favor of that known personally, "selective use of evidence" rejects material unfavorable to an argument, while a false dichotomy picks only two options when more are available. Cherry picking can refer to the selection of data or data sets so a study or survey will give desired, predictable results which may be misleading or even completely contrary to reality.[4]
Sep 2015
Brown Township, Ohio
Climatologists need government research grants to thrive. Take away their free money and they wither and die on the vine.

edit: Cherry picking has a multitude of definitions. One example is cherry-picking troubleshooting as used by ignorant and inefficient electricians, and I use "electrician" loosely. Divide and conquer is the best kind of electrical troubleshooting, of which there are two kinds: top down or bottom up.