New Zealand’s NIWA scientists accused of CRU-style fudging of data


H/T to PatVann

“Warmists” have room to celebrate… in an ironic sort of way. “Man-made” global warming indeed appears to have been confirmed. As put so indelicately to “warmist” ears in a paper released yesterday by the Climate Science Coalition:

We have discovered that the warming in New Zealand over the past 156 years was indeed man-made, but it had nothing to do with emissions of CO2—it was created by man-made adjustments of the temperature. It’s a disgrace.

Yes, folks… in addition to last weekend’s unintended and embarrassing release of internal emails and data from CRU discussing their deliberate data manipulation to “hide the decline” of temperatures, there is now a second climate change group – NIWA (New Zealand’s National Institute of Water and Atmospheric Research) – under assault for manipulating temperature measurements.

iwishart over at TBR.cc has the latest breaking news.

The New Zealand Government’s chief climate advisory unit NIWA is under fire for allegedly massaging raw climate data to show a global warming trend that wasn’t there.

The scandal breaks as fears grow worldwide that corruption of climate science is not confined to just Britain’s CRU climate research centre.


In New Zealand’s case, the figures published on NIWA’s [the National Institute of Water and Atmospheric Research] website suggest a strong warming trend in New Zealand over the past century:

[Graph: NIWA’s adjusted New Zealand temperature record]

The caption to the photo on the NIWA site reads:

From NIWA’s web site — Figure 7: Mean annual temperature over New Zealand, from 1853 to 2008 inclusive, based on between 2 (from 1853) and 7 (from 1908) long-term station records. The blue and red bars show annual differences from the 1971–2000 average, the solid black line is a smoothed time series, and the dotted [straight] line is the linear trend over 1909 to 2008 (0.92°C/100 years).

Shall we have a look at the raw data, pooh-poohed as so irrelevant by the warmist crowd?

[Graph: NIWA’s raw New Zealand temperature data]

This picture of actual temperatures whistles a different tune altogether.

According to the Climate Science Coalition:

What did we find? First, the station histories are unremarkable. There are no reasons for any large corrections. But we were astonished to find that strong adjustments have indeed been made.

About half the adjustments actually created a warming trend where none existed; the other half greatly exaggerated existing warming. All the adjustments increased or even created a warming trend, with only one (Dunedin) going the other way and slightly reducing the original trend.

The shocking truth is that the oldest readings have been cranked way down and later readings artificially lifted to give a false impression of warming, as documented below. There is nothing in the station histories to warrant these adjustments and to date Dr Salinger and NIWA have not revealed why they did this.

One station, Hokitika, had its early temperatures reduced by a huge 1.3°C, creating strong warming from a mild cooling, yet there’s no apparent reason for it.

The comparison between the raw data and the adjusted data was made possible after a colleague of Dr. Jim Salinger – formerly with NIWA – provided the information, after years of direct requests to Salinger himself went unanswered. Salinger started this graph in the 1980s when he was at – yep, you guessed it – CRU (the Climatic Research Unit at the University of East Anglia, UK).

For an even more interesting contrast, here is just one of the seven graphs provided in the Climate Science Coalition’s paper linked above, showing the raw data compared to the adjusted data. The graph below is for the Auckland station. [Navigate to the first link above, and to pages five through eight, to see them all.]

[Graph: Auckland temperature anomalies, raw vs. adjusted]

NIWA’s Chief Climate Scientist, Dr David Wratt, has responded… defending the adjustments to the data and stating that “Warming over New Zealand through the past century is unequivocal.”

NIWA’s analysis of measured temperatures uses internationally accepted techniques, including making adjustments for changes such as movement of measurement sites. For example, in Wellington, early temperature measurements were made near sea level, but in 1928 the measurement site was moved from Thorndon (3 metres above sea level) to Kelburn (125 m above sea level). The Kelburn site is on average 0.8°C cooler than Thorndon, because of the extra height above sea level.

Such site differences are significant and must be accounted for when analysing long-term changes in temperature. The Climate Science Coalition has not done this.

NIWA climate scientists have previously explained to members of the Coalition why such corrections must be made. NIWA’s Chief Climate Scientist, Dr David Wratt, says he’s very disappointed that the Coalition continue to ignore such advice and therefore to present misleading analyses.

Yup… another pissing contest it seems.

What remains somewhat of a mystery to most of us is just why “raw data” needs to be “adjusted” at all. Mind you, it’s not that many a pseudo-climate-scientist hasn’t tried to explain it all away with sundry reasons. But none of them lend credibility to the manipulative massaging itself.

For example, here’s NIWA’s explanation for data manipulation.

Where there is an overlap in time between two records (such as Wellington Airport and Kelburn), it is a simple matter to calculate the average offset and adjust one site relative to the other.

Wellington Airport is +0.79°C warmer than Kelburn, which matches well with measurements in many parts of the world for how rapidly temperature decreases with altitude.

Thorndon (closed 31 Dec 1927) has no overlap with Kelburn (opened 1 Jan 1928). For the purpose of illustration, we have applied the same offset to Thorndon as was calculated for the Airport.

The final “adjusted” temperature curve is used to draw inferences about Wellington temperature change over the 20th century. The records must be adjusted for the change to a different Wellington location.
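
[Mata Musing: To make the mechanics concrete, here is a minimal sketch, in Python, of how such an overlap-based offset adjustment could be computed. The station names and numbers below are hypothetical placeholders, not NIWA’s actual code or data: average the difference between two sites over the years both report, then shift the older record by that offset.]

# Illustrative sketch only: hypothetical station names and values,
# not NIWA's actual data or method implementation.

def mean_offset(site_a, site_b):
    """Average difference (site_a minus site_b) over the years both sites report."""
    common = sorted(set(site_a) & set(site_b))
    if not common:
        raise ValueError("no overlapping years to compute an offset from")
    return sum(site_a[y] - site_b[y] for y in common) / len(common)

def splice(older, newer, offset):
    """Subtract the offset from the older record, then append the newer record."""
    adjusted = {year: temp - offset for year, temp in older.items()}
    adjusted.update(newer)
    return dict(sorted(adjusted.items()))

# Hypothetical annual mean temperatures (°C): a low site and a higher, cooler site.
airport = {1960: 13.1, 1961: 13.0, 1962: 13.2}                  # overlaps Kelburn
kelburn = {1960: 12.3, 1961: 12.2, 1962: 12.4, 1963: 12.5}

offset = mean_offset(airport, kelburn)   # about +0.8 °C, the lower site being warmer
# NIWA says Thorndon (no overlap with Kelburn) was adjusted by the airport's offset
# by analogy; roughly 0.8 °C over about 122 m of height difference is close to the
# standard lapse rate of about 0.65 °C per 100 m.
thorndon = {1925: 13.0, 1926: 12.9, 1927: 13.1}                 # hypothetical early record
combined = splice(thorndon, kelburn, offset)
print(f"offset = {offset:+.2f} °C")
print(combined)

The point the Coalition contests, of course, is not whether such an offset calculation is possible, but whether the particular adjustments NIWA applied were warranted by the station histories.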

Okay… so this is akin to when I want to know the temperature in my yard: fergit the thermometer on the property. Instead we’ll take some average of a few stations *not* located in my back yard, adjust them for different measurement times and dates, fill in some missing readings with a formula, and voila… that’s the temperature in my back yard.

Once my eyes stopped glazing over at the science mumbo jumbo that defies all logic, I stumbled upon a paper published in 2001 in the International Journal of Climatology (published online in Wiley Interscience, copyright 2002 by the Royal Meteorological Society). The title? PROBLEMS IN EVALUATING REGIONAL AND LOCAL TRENDS IN TEMPERATURE: AN EXAMPLE FROM EASTERN COLORADO, USA

This 14-page paper documents the accuracy and results of raw and adjusted data from 11 measurement stations in Colorado, exploring the potential shortcomings of many climate-change studies.

There are two common problems with many climate change studies. First, many analyses are made using single (or a few) weather stations in conjunction with small-scale experiments (e.g. Alward et al., 1999) or observational studies of particular natural areas (e.g. Singer et al., 1998, Williams et al., 1996).

A problem arises when the authors (or the readers) generalize regional patterns from single stations, single seasons, or a few parameters over a short duration. For example, Alward et al. (1999) innocently described increasing minimum temperatures of 0.12 °C/year since 1970 at one weather station at the Central Plains Experiment Range (CPER) in northeastern Colorado. The journal editor requested altering the title of the paper to ‘Grasslands and Global Nocturnal Warming’. Melillo (1999) used the results of the Alward et al. (1999) as further evidence that the Central Grasslands in the USA and the Earth were warming.

Escalating the issue further, the news media report on this work sensationalized that ‘Global warming could mean trouble for ranchers on the plains of Colorado and New Mexico’ (Associated Press). This chain of events may not be uncommon, and few stop to ask whether neighbouring stations show similar long-term trends.

A second common problem in climate-change studies occurs when national or global-scale coverages of coarse-grained (e.g. 3.75° latitude/longitude grid interval) general circulation model (GCM) simulations are used to infer possible regional climate trends. Several co-authors of this paper confess to falling into this easy trap in proposal writing and simplifying introduction sections in popular articles. For example, Stohlgren et al. (1995) stated that ‘current projections for a double-CO2 climate in the next 50 years generally show a warming of 3–4 °C’, based on GCM simulations published by Wilson and Mitchell (1987) and Houghton et al. (1990). More recently, as part of a national assessment of the effects of global change, the Canadian Climate Center and Hadley Centre GCMs were used to show possible scenarios for the Central Great Plains (D. Ojima, unpublished report, 1997). For all of Colorado, the models simulated a uniform increase in minimum temperatures of 5–6 °C (Canadian model) or 1–2 °C (Hadley model) from 1990 to 2090. For maximum temperatures, the Canadian model simulated increases of 5–6 °C for northeastern Colorado and 6–7 °C increases for southeastern Colorado over the next century, whereas the Hadley model simulated modest increases of 2–3 °C uniformly over most of the ten-state region.

~~~

These two problems may not be severe if spatial variation in climate is minimal. However, if several climate stations in a relatively homogeneous landscape do not behave similarly for several weather parameters, then extrapolating regional trends from single (or a few) sites, or generalizing regional trends from coarse-scale climate models will not be correct.

~~~

Spatial and temporal variations are enormous and they cannot be ignored. Given the spatial variation in the magnitude and direction of change for the parameters tested (Tables III–VI), and highlighted by the geographical anomalies of neighbouring sites (Figure 2), we believe that no one site evaluated in this study is ‘typical’ of the region. The alarming trend in minimum temperature of 0.12 °C/year reported by Alward et al. (1999) for CPER since 1970 was four times higher than the average of the other ten stations used in this study over the same period (0.03 °C/year) and 12 times higher than the long-term (80+ years) average rate (0.01 °C/year). The site was also atypical in trends in summer temperatures and growing-season days.

Extrapolation of results from one site to another site, or from one site to the region, would be highly suspect. It may be that clusters of weather stations in other regions behave reasonably the same for most parameters and over all time periods, but our results suggest otherwise.

Climate researchers working at one or a few sites must clearly state the limitations of the data for extrapolation, including the high probability that some portion of the climate data may be a geographic anomaly (Figure 2), as well as understand sources of incongruity in single-station records (e.g. changes in station location, instrumentation, and local environment). Prudent researchers wishing to make regional extrapolations, or simply wishing to measure how typical or representative their primary study site is, may want to increase greatly the number of stations examined. Readers and users of single-site studies are likewise cautioned about extrapolating results from one site to the region and larger domains.

~~~

The fact that there is so little agreement between the modelled historic record and observed temperature records at the regional level (Doherty and Mearns, 1999) may be due to the magnitude of spatial variation. Discrepancies commonly ran 3 to 9 °C comparing observed and simulated values regionally smoothed from a grid width of 0.5° interpolated to a 2° grid. Temperature increases in rapidly urbanizing Fort Collins were typically three times higher than in rural sites (Table III), exaggerating the regional trend (Gallo and Owen 1998, Gallo et al., 1999). Likewise, local land-use effects from irrigated agriculture have been linked to a cooling effect in the summer (Stohlgren et al., 1998, Chase et al., 1999). Present GCMs lack the ability to simulate the complex climate patterns and anomalies in the simple topography but heterogeneous landscape of eastern Colorado. We propose that whenever GCM simulations are used for regional purposes, the results must be presented with some estimate of spatial accuracy of results, some comparison with historic data, and some level of uncertainty clearly presented, or they will be of limited use in making management and policy decisions at local and regional scales.

The public and the media should be more sceptical about long-term climate projections for local and regional use. Simple averaged projections of average temperature change from 100 years ago until today would not have adequately described changing temperatures in eastern Colorado. Sadly, even the best climate projections of today for 100 years from now will likely suffer a similar fate.
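
[Mata Musing: The single-station pitfall the Colorado authors describe is easy to demonstrate numerically. Here is a minimal sketch, in Python, fitting an ordinary least-squares trend to each of several made-up station series (not the paper’s data) and comparing one fast-warming site against the average of its neighbours.]

# Illustrative only: made-up annual series, not the Colorado paper's data.

def ols_slope(years, temps):
    """Ordinary least-squares slope of temps (°C) against years, i.e. °C per year."""
    n = len(years)
    year_mean = sum(years) / n
    temp_mean = sum(temps) / n
    num = sum((y - year_mean) * (t - temp_mean) for y, t in zip(years, temps))
    den = sum((y - year_mean) ** 2 for y in years)
    return num / den

years = list(range(1970, 2000))
stations = {
    # One station warming fast, its neighbours barely moving (hypothetical).
    "lone_site":   [10.0 + 0.12 * (y - 1970) for y in years],
    "neighbour_1": [10.0 + 0.03 * (y - 1970) for y in years],
    "neighbour_2": [10.0 + 0.02 * (y - 1970) for y in years],
    "neighbour_3": [10.0 + 0.04 * (y - 1970) for y in years],
}

slopes = {name: ols_slope(years, temps) for name, temps in stations.items()}
neighbours = [slope for name, slope in slopes.items() if name != "lone_site"]
print(f"lone site: {slopes['lone_site']:.3f} °C/yr")
print(f"neighbour average: {sum(neighbours) / len(neighbours):.3f} °C/yr")

A regional claim built only on the lone site would report a trend four times the neighbourhood average, which is exactly the kind of extrapolation the paper warns against.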

This section, on pages 15-16, where they analyzed the raw data and then attempted to apply various corrections common in “climate science,” explains plainly that these corrections – meant to increase the accuracy of the raw data – did not result in a better-quality dataset.

We attempted to apply the time of observation adjustments using the paper by Karl et al. (1986). The actual implementation of this procedure is very difficult, so, after several discussions with NCDC personnel familiar with the procedure, we chose instead to use the USHCN database to extract the time of observation adjustments applied by NCDC. We explored the time of observation bias and the impact on our results by taking the USHCN adjusted temperature data for 3 month seasons, and subtracted the seasonal means computed from the station data adjusted for all except time of observation changes in order to determine the magnitude of that adjustment. An example is shown here for Holly, Colorado (Figure 1), which had more changes than any other site used in the study.

[Mata Musing: See document for associated graph]

What you would expect to see is a series of step function changes associated with known dates of time of observation changes. However, what you actually see is a combination of step changes and other variability, the causes of which are not all obvious. It appeared to us that editing procedures and procedures for estimating values for missing months resulted in computed monthly temperatures in the USHCN differing from what a user would compute for that same station from averaging the raw data from the Summary of the Day Cooperative Data Set.
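
[Mata Musing: The differencing described above is straightforward to picture. Here is a minimal sketch, in Python with hypothetical numbers rather than USHCN data, of isolating the size of one adjustment by subtracting two versions of the same seasonal record: the fully adjusted series minus the series adjusted for everything except time-of-observation changes.]

# Illustrative only: hypothetical seasonal means (°C), not USHCN data.

fully_adjusted = {              # all adjustments applied, including time of observation
    ("1985", "DJF"): -1.2, ("1985", "MAM"): 8.4,
    ("1986", "DJF"): -0.9, ("1986", "MAM"): 8.7,
}
all_but_tob = {                 # same seasons, every adjustment except time of observation
    ("1985", "DJF"): -1.5, ("1985", "MAM"): 8.1,
    ("1986", "DJF"): -1.2, ("1986", "MAM"): 8.4,
}

# What remains after the subtraction should, ideally, be just the time-of-observation step.
tob_magnitude = {
    season: round(fully_adjusted[season] - all_but_tob[season], 2)
    for season in fully_adjusted
}
print(tob_magnitude)

If the editing and gap-filling procedures were neutral, this difference would look like a clean step change at each documented observation-time change; the authors report a mix of steps and other, unexplained variability instead.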

This simply points out that when manipulating and attempting to homogenize large data sets, changes can be made in an effort to improve the quality of the data set that may or may not actually accomplish the initial goal.

Apparently, when models, simulations, and/or adjustments and corrections were compared to actual measurements, there was very little agreement with the temperatures actually recorded. If models, simulations and adjustments are so flawed and unpredictable – even using 11 stations (what was NZ using… seven??) – what most of us can take away is that we really know jack sheeeeet about the supposed accuracy obtained by even well-intended data manipulation and averaging.

Meaning, of course, that forecasting the weather and climate is pretty much like rolling the dice at the craps table.

One thing is certain: both NIWA’s and CRU’s actions point to a concerted effort to manufacture a rising warming trend… the real temps and the amount of sampling be damned. This is a wholly separate argument from the validity of AGW. This is a matter of the declining morality of science, and its unholy alliance with politicians.

When the very economic survival of developing nations, and even of the third-world countries that depend upon wealthier countries to give them a leg up, hinges on accuracy, we can be sure we have none of it in the “climate science” realm.


Eagerly awaiting the next domino…..

The AGWWCCBS (Anthropogenic Global Warming Wall of Climate Change Bull Shit) is coming down.

My old B-Law teacher always used a great west Texas phrase for the worst of the liars and scoundrels: “SORRY”. Not as in remorse, as in worthless. As in, “Joe Scientist, you are SORRY for manipulating this data.”

For an excellent critique of junk science, I would point the reader to the wonderful speeches of Michael Crichton at http://www.crichton-official.com/speeches.html. The following quotes are from his talk on “Aliens Cause Global Warming”:

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.

Also:

Nobody believes a weather prediction twelve hours ahead. Now we’re asked to believe a prediction that goes out 100 years into the future? And make financial investments based on that prediction? Has everybody lost their minds?

Stepping back, I have to say the arrogance of the modelmakers is breathtaking. There have been, in every century, scientists who say they know it all. Since climate may be a chaotic system-no one is sure-these predictions are inherently doubtful, to be polite. But more to the point, even if the models get the science spot-on, they can never get the sociology. To predict anything about the world a hundred years from now is simply absurd.

Look: If I was selling stock in a company that I told you would be profitable in 2100, would you buy it? Or would you think the idea was so crazy that it must be a scam?

Great thoughts.

If these scientists could manipulate data like that, what can stop others from changing voting numbers, for example? Say, in the next elections?

Check this link out from the hacked emails… You’ll note in particular that Salinger, Renwick and Mullan are in the short list in this email where one of the ‘bad’ papers is being group-dissected:

http://www.eastangliaemails.com/emails.php?eid=988&filename=1248790545.txt

Kiwi NIWA folks up to their necks in fudging…

@URI: “Next election?” That’s exactly what happened in the Minnesota senate race in 2008.

@RatDog said: “arrogance of the modelmakers is breathtaking. There have been, in every century, scientists who say they know it all. “

You stole my thunder. I plan to hit that same point in an upcoming post. Remember how early astronomers argued that the earth was the center of the Solar System with the Sun and planets revolving around our globe?

They made elaborate models to prove their point, but the models always came up short. So instead of re-examining their “science” they attacked those with a different view as heretics and tried to destroy them.

Sort of similar to what is happening today.

And Gore and his friends dare to call us non-believers “deniers” or “flat earthers.”

The fallout begins in New Zealand. Several MPs have relinquished their “frontbench” positions over this. It’s having a huge effect there, and the voters have noticed. The comments are revealing.

http://www.abc.net.au/news/stories/2009/11/26/2754654.htm

Obama’s Climate ‘czar’ Carol Browner says hacked e-mails don’t change anything:

http://tinyurl.com/ya53g6j

I believe the new term is “reality deniers” for the warmists. On Michael Crichton, check this:

http://www.libertyparty.org/node/604

Good response to his work and genius.

Patvann, that link is

http://www.abc.net.au/news/stories/2009/11/26/2754654.htm

Australia, not New Zealand…

New Zealand’s carbon tax (the ETS) has already been passed into law, just last week… we are sheep… (I am a Kiwi).

Hackers and techies will be needed to prove to the world that Liberals are a bunch of shit. I heard Russians are really good at it, and so are the Chinese. The latter would be more interested in “monitoring” our next election so that they can get their money back.

Thank you, Fred. It’s all coming on so fast and furious, I’m getting my islands confused. 😉

@Maggie
Carol is one of the linchpins of this whole debacle. She’s been there since the beginning of all this back in the early 90’s under Clinton. She turned the EPA into the mess it is today. She needs to be “Beckatized”.

Here’s one for ya… While she is an avid socialist and greenie, and has a record of destroying documents, she is also married to a powerful oil-industry lobbyist.

An interesting look from American Thinker too:

http://tinyurl.com/yjdskkn

Fact is, a lot of people need to make a lot of noise about this. Limbaugh spent this week of shows ‘yelling’ about this. Beck has covered it too. (Don’t know about Hannity, as I don’t listen to him too much anymore).

The Tea Parties need to start covering this too and demanding open investigations here in this country and around the world by independent panels of scientists.

That Australia is a big island, or is it?