Posted by Curt on 29 January, 2017 at 8:25 am. 2 comments already!


Robert Tracinski:

I recently wrote about the wretched reporting on the claim that 2016 was the “hottest year on record,” using as my main example a New York Times article by Justin Gillis that gave his readers none of the relevant numbers they could use to evaluate that claim. None of them. If you search for the actual numbers, you will eventually find that the effect they are claiming, the actual amount by which this year was hotter than previous years, is smaller than the margin of error in the data.

Shortly afterward, I got a revealing response from Gillis. I’ll fill in all the details for you, because the whole thing is an important case study in why you can’t trust mainstream reporting on global warming. But let’s just cut to the chase. When I asked him why he didn’t include the basic numbers we need to understand his story, he gave me this reply:

So if I understand this correctly, a reporter from the New York Times is telling me that his readers are too dumb to understand numbers.

I don’t believe this for a minute, and not just because I’ve lived through 30 years of New York Times readers telling me how terribly intelligent and sophisticated they are. The newspaper actually does have an educated audience, and more to the point, if its readers lack knowledge on a subject, the reporters are there to analyze the issues and explain them. That’s supposed to be their job.

But this exchange with Gillis started with him telling us that he doesn't think it's his job. As far as he's concerned, the data is somebody else's department. He points out that there was also an "infographic" associated with the article—prepared by and credited to somebody else—and that if we cared to peruse it, we could "positively drown yourself in numbers."

Take a look for yourself.

In this infographic, we get a plot of monthly temperatures, with each dot representing a different month, going all the way back to 1880. Only six months out of the entire 137-year history are individually labeled, only two of them since 1990—February and March of 2016, which represent the tail end of a strong El Niño, a naturally occurring, temporary warm cycle. From that graph, could you actually reconstruct any meaningful data? Could you reconstruct averages for one year versus another, even approximately?

Don’t get out your ruler, it’s a rhetorical question.

The other graphic is even more useless for our purposes. It represents monthly temperatures as spirals emanating from the center of a circle and overlapping on top of each other, making it even harder for anyone to separate out one year from another or discern the exact amount of difference between them. So appealing to these graphs to say "Here are my numbers" is no help whatsoever.

And where are the error bars? It is common for scientists to represent the range of error in their measurements by presenting a measurement not just as a single point, but as a bar covering an entire range. Not just “1.04 degrees,” but “somewhere between 0.94 and 1.14 degrees.” Every scientific measurement has a limit to its precision, based on the instruments and methods that are used. For a long time, temperature measurements were collected, not by some super-precise digital apparatus, but by having human beings walk up to a thermometer and visually read off the temperature from it and write it down. The size of the thermometer, the limits of human eyesight, and differences between individuals—one person might be more scrupulously precise than another—all mean that you have to make allowance for an inherent inaccuracy in the measurements.
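The logic of an error bar can be made concrete with a short sketch. This is purely illustrative Python with made-up numbers, not drawn from any real dataset: the point is simply that a measurement is an interval, not a point, and two measurements are only distinguishable if their intervals fail to overlap.

```python
# Illustrative sketch: treat each measurement as an interval
# (value +/- margin of error) rather than a single point.
# The numbers below are made up for illustration.

def interval(value, margin):
    """Return the (low, high) range implied by a margin of error."""
    return (value - margin, value + margin)

def distinguishable(a, b):
    """Two measurements are distinguishable only if their
    uncertainty intervals do not overlap at all."""
    return a[1] < b[0] or b[1] < a[0]

year_a = interval(1.04, 0.10)   # "1.04 degrees" really means 0.94 to 1.14
year_b = interval(1.05, 0.10)   # a 0.01-degree "record" within the noise

print(distinguishable(year_a, year_b))  # False: the ranges overlap
```

With overlapping ranges, "hotter by 0.01 degrees" is a claim the measurements cannot actually support.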

Yet in that first New York Times graph, the monthly temperatures are represented by tiny little circles that represent a range of perhaps a few hundredths of a degree—much, much less than the actual margin of error in the data. This conveys a sense of false precision.

A graph is not the same thing as data. It is a picture of data. It’s easy to draw that picture in a way that is impossible for the reader to translate back to actual numbers, or in a way that is misleading. For example, by adjusting the scale on the graph, it’s easy to make small differences look big. You can make hundredths of a degree, which are statistically meaningless in this case, look like they really mean something.

Pushed a little on this, Gillis conceded that “there is no one number” for last year’s average global temperature, because it “depends on which of the five datasets you care to inspect,” and he went on to point to other complications. So because there are a lot of numbers that he could have presented, he decided to give us none?

This is, pretty obviously, a dodge. His original article did not tell us that the numbers are complicated and that they vary depending on who is collecting the measurements. His original article simply hyperventilated about how amazingly hot it is. All the complications are just his fallback position when challenged.

I agree that the data is complicated. If you really want to dig into it, you have to look at things like this.


But you, John Q. Public, should not have to wade through all of that. As I put it in my exchange with Gillis: “There’s a lot of data, and it’s complex? If only there were people whose job is to explain data to the public.” Those people are called “science journalists.” Or would be if there were any.

So let me take a moment to do Gillis’s job for him and present and explain a little of the data to you.

In my previous article, I already pointed to the one set of data that was actually reported more or less properly, with straightforward numbers and a margin of error. The numbers from the British Met Office (Meteorological Office) were reported on the same day as Gillis's article and showed a difference in average temperature between 2015 and 2016 of 0.01C, with a margin of error of 0.10C, ten times larger. So the accurate headline is not "2016 Breaks Record for Hottest Year Ever," but "Last Year's Temperatures Indistinguishable from Previous Year." It is crushingly boring, but truthful.

Gillis’s report was supposedly about two different sets of numbers produced by NASA and NOAA. If you hack through this lovely table, you find that the difference between the two years in NASA’s GISS Surface Temperature Analysis is 0.12C. It’s slightly more (0.18C) if you use the “meterological year” that follows the seasons and goes from December to November. But that’s not what most articles were reporting, so let’s stick with the regular calendar year. If you dig through this FAQ—isn’t this fun?—you find that NASA claims a margin of error for recent measurements of plus or minus 0.05C and for older measurements plus or minus 0.10C. That’s a bit dubious, as I’ll explain in a bit, but NASA admits, in nicely passive bureaucratese, that “accurate error estimates are hard to obtain.” So there’s some margin of error in their margin of error.

The data from NOAA, the National Oceanic and Atmospheric Administration, is less dramatic. Last year surpassed 2015 by only 0.04C. I couldn’t find a clear labeling of the margin of error for this number, but a description from 2014 gives it as plus or minus 0.09C. It’s certainly hard to imagine that any of these numbers are remotely accurate enough to make 0.04C a significant difference.
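The comparisons above are simple arithmetic, and can be laid out in one short sketch. It uses only the figures quoted in this article (for NASA, the plus-or-minus 0.05C "recent measurements" margin); note as a caveat that the uncertainty of a difference between two measured years is actually larger than either single year's margin, so this comparison is generous to the "record year" claim.

```python
# Year-over-year warming (2016 minus 2015) vs. the stated margin of
# error for each dataset, using only figures quoted in the article.
# Caveat: the uncertainty of a *difference* between two measured years
# is larger than either single year's margin, so this test is generous
# to the "record year" claim.
datasets = {
    "Met Office": {"diff": 0.01, "margin": 0.10},
    "NASA GISS":  {"diff": 0.12, "margin": 0.05},  # recent-data margin
    "NOAA":       {"diff": 0.04, "margin": 0.09},  # from a 2014 description
}

for name, d in datasets.items():
    exceeds = d["diff"] > d["margin"]
    print(f"{name:10s}: diff {d['diff']:.2f}C vs margin +/-{d['margin']:.2f}C"
          f" -> exceeds margin: {exceeds}")
```

Only the NASA difference clears its stated margin, and that is precisely the margin the author flags as dubious.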

Oh, and since we’re drawing from all these different sets of numbers, we might as well throw in measurements of temperatures higher in the atmosphere taken by weather satellites. For the satellite data, a set known as UAH (after the University of Alabama in Huntsville) shows no particular warming trend for a very long time.

