New ClimateGate Discovery: Hiding The Decline


I’m not surprised that the MSM is nowhere to be found on ClimateGate, but where are the scientists? Their journals and magazines? Why are they silent, especially after this latest bombshell of a discovery:

One can only imagine the angst suffered daily by the co-conspirators, who knew full well that the “Documents” sub-folder of the CRU FOI2009 file contained more than enough probative program source code to unmask CRU’s phantom methodology.

In fact, there are hundreds of IDL and FORTRAN source files buried in dozens of subordinate sub-folders. And many do properly analyze and chart maximum latewood density (MXD), the growth parameter commonly utilized by CRU scientists as a temperature proxy, from raw or legitimately normalized data. Ah, but many do so much more.

Skimming through the often spaghetti-like code, the number of programs which subject the data to a mixed bag of transformative and filtering routines is simply staggering. Granted, many of these “alterations” run from benign smoothing algorithms (e.g. omitting rogue outliers) to moderate infilling mechanisms (e.g. estimating missing station data from that of closely surrounding stations). But many others fall into the precarious range from the highly questionable (removing MXD data which demonstrate poor correlations with local temperature) to the downright fraudulent (replacing MXD data entirely with measured data to reverse a disorderly trend-line).

In fact, workarounds for the post-1960 “divergence problem”, as described by both RealClimate and Climate Audit, can be found throughout the source code. So much so that perhaps the most ubiquitous programmer’s comment (REM) I ran across warns that the particular module “Uses ‘corrected’ MXD – but shouldn’t usually plot past 1960 because these will be artificially adjusted to look closer to the real temperatures.”

~~~

Clamoring alarmists can and will spin this until they’re dizzy. The ever-clueless mainstream media can and will ignore this until it’s forced upon them as front-page news, and then most will join the alarmists on the denial merry-go-round.

But here’s what’s undeniable: If a divergence exists between measured temperatures and those derived from dendrochronological data after (circa) 1960, then discarding only the post-1960 figures is disingenuous to say the least. The very existence of a divergence betrays a potentially serious flaw in the process by which temperatures are reconstructed from tree-ring density. If it’s bogus beyond a set threshold, then any honest man of science would instinctively question its integrity prior to that boundary. And only the lowliest would apply a hack in order to produce a desired result.

And to do so without declaring as such in a footnote on every chart in every report in every study in every book in every classroom on every website that such a corrupt process is relied upon is not just a crime against science, it’s a crime against mankind.

Indeed, miners of the CRU folder have unearthed dozens of email threads and supporting documents revealing much to loathe about this cadre of hucksters and their vile intentions. This veritable goldmine has given us tales ranging from evidence destruction to spitting on the Freedom of Information Act on both sides of the Atlantic. But the now irrefutable evidence that alarmists have indeed been cooking the data for at least a decade may just be the most important strike in human history.

That’s not the only “correction” that was applied to the data….check this out:

In 2 other programs, briffa_Sep98_d.pro and briffa_Sep98_e.pro, the “correction” is bolder by far. The programmer (Keith Briffa?) entitled the “adjustment” routine “Apply a VERY ARTIFICAL correction for decline!!” And he/she wasn’t kidding. Now, IDL is not a native language of mine, but its syntax is similar enough to others I’m familiar with, so please bear with me while I get a tad techie on you.
Here’s the “fudge factor” (notice the brash SOB actually called it that in his REM statement):

yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7, 2.5,2.6,2.6,2.6,2.6,2.6]*0.75         ; fudge factor

These two lines of code establish a 20-element array (yrloc) consisting of the year 1400 (a base year, though I’m not sure why it’s needed here) and 19 years between 1904 and 1994 in half-decade increments. Then the corresponding “fudge factor” (from the valadj array) is applied to each interval. As you can see, not only are temperatures biased to the upside later in the century (though certainly prior to 1960), but a few mid-century intervals are biased slightly lower. That, coupled with the post-1930 restatement we encountered earlier, would imply that in addition to an embarrassing false decline experienced with their MXD after 1960 (or earlier), CRU’s “divergence problem” also includes a minor false incline after 1930.
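For the curious, here is a short, hedged reconstruction of what happens next in the file — not a verbatim excerpt. Only yrloc and valadj come from the two lines above; the year and density arrays (timey and densall here) are stand-ins I invented for illustration. The idea is simply that the 20 offsets get interpolated onto the full year axis and added to the density series:

timey     = findgen(591) + 1404.              ; assumed year axis (1404-1994), made up for illustration
densall   = randomn(seed, 591) * 0.2          ; stand-in for the real MXD density series
yearlyadj = interpol(valadj, yrloc, timey)    ; stretch the 20 offsets across every year
densadj   = densall + yearlyadj               ; proxy series now nudged toward the instrumental record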

Fudge factor indeed….factors that were put in to show warming when it didn’t exist.

AJStrata:

I have seen inklings of how bad the CRU code is and how it produces just garbage. It defies the garbage in-garbage out paradigm and moves to truth in-garbage out. I get the feeling you could slam this SW with random numbers and a hockey stick would come out the back end. There are no diurnal corrections for temperature readings, there are all sorts of corrupted, duplicated and stale data, there are filters to keep data that tells the wrong story out, and there are create_fiction subroutines which create raw measurements out of thin air when needed. There are modules which cannot be run for the full temp record because of special code used to ‘hide the decline’.

This is a smoking gun… but the media is nowhere to be found. Now that our media takes it upon itself to provide cover for a certain political party and to push an agenda instead of actually reporting the news, is it any wonder the MSM is dying a slow death? Look at Ed Driscoll’s comparison, written about the ACORN story, of how the MSM is treating the email release of these global warming hacks versus other releases of sensitive information:

…Orrin Judd spots this staggering moment of hypocrisy from the New York Times’ Andrew C. Revkin of their “Dot Earth” blog on Friday:

The documents appear to have been acquired illegally and contain all manner of private information and statements that were never intended for the public eye, so they won’t be posted here.

And they don’t contain any obvious state military secrets either, unlike, say, the Pentagon Papers during the Vietnam War or, more recently, the secrets of the War on Terror, or any of a number of other leaked documents the Times has cheerfully rushed to print.

Back in 2006, when his paper disclosed the previously confidential details of the SWIFT program, which was designed to trace terrorists’ financial assets, New York Times executive editor Bill Keller said on CBS’s Face the Nation, “one man’s breach of security is another man’s public relations.” Of course, much like the rest of the media circling the wagons with ACORN, it’s not at all surprising that the Times circles the wagons when it’s necessary to save the public face of their fellow liberals.

Incidentally, Tom Maguire explains the perfect way to square the circle:

If Hannah Giles and James O’Keefe are done tormenting ACORN maybe they can figure out how to pose as underaged climate researchers…

Heh, indeed.™

Related: “LA Times Changes Its Mind: Science Doesn’t Matter On Climate Bill.”

Update: At the Weekly Standard, Michael Goldfarb adds, “As a journalist, there is no greater glory than publishing materials that were not meant to be published”:

If I could, I would only publish emails and documents that were never meant to see the light of day — though, unlike the New York Times, I draw the line at jeopardizing the lives of American troops rather than jeopardizing the contrived “consensus” on global warming.

And of course, the Times has those priorities exactly reversed. But then, for the Gray Lady, small government Republicans are “Stalinists”, but actual totalitarian governments are worthy of emulation and respect.

Curious how the release of sensitive documents is a-ok when it fits the MSM’s agenda, eh?

36 Comments

Question to our CO2 haters.

If the humans pumped 1 billion tons of CO2 into the atmosphere each year, and the earth stopped adding to it, how many years will it take to increase the proportion of CO2 in the atmosphere from 0.038% to 0.04%?

Answer: 25,000 years, because the atmosphere weighs 5,000,000,000,000,000 (five quadrillion) tons.

Stare down the demons and they become demons no more.

The falsification (and/or fabrication) of data is a breach of basic scientific principles. Any scientist, regardless of discipline, needs to live with the data they produce and report. Of course, some data are rejected if they are outliers, but that still requires an explanation of why they are rejected in the first place. If not, skepticism of the entire data set is invited. This is taught repeatedly at the undergraduate level. At the graduate and post-grad level, it is not reinforced as much because it is assumed the lesson has been taught and learned.

In this whole episode, the exchange of e-mails shows the “climatologists” involved were engaged in scientific fraud by “fixing data” to meet a preconceived conclusion. It didn’t matter that the data showed the opposite. It didn’t matter that it was contrary to what they learned as students. It didn’t matter that they were flat-out lying. What did matter was keeping the grants and other forms of financial support flowing into their projects. I don’t find it particularly surprising that some of these so-called scientists were employed by governmental agencies here (like NCAR) and elsewhere (like the UK).

So far, the MSM is more interested in the e-mail folders/files being hacked. Of course, it would have been better if all this was revealed another way, but it would come out sooner or later when someone got to bragging … just like any criminal would do.

“Curious how the release of sensitive documents is a-ok when it fits the MSM’s agenda, eh?”

No, not curious. WRONG!!! It’s ridiculous how the media can’t figure out why it’s dying and why a MAJORITY of Americans don’t read them anymore. There is no honor in journalism anymore. If my child comes to me and says they want to major in journalism, I will think them replaced by an alien and deny they’re my child. Not really, but they’ve really tried hard to make themselves go the way of the dinosaur.

This PWNING is worthy of Andrew Breitbart.

I think I’ll buy a new Challenger Hemi and do burnouts all-day to celebrate, LOL

Don’t get a new car for burnouts. Get something old with a big damn carburetor and no computers or traction control. A Chevy 350 bored .030-over with a crank from a 400, some 10-to-1 forged pistons, totaling 383 cubic inches. Add aluminum racing heads, and a long-duration/high-lift roller cam. It should dyno out at 505 horse @ 6200 rpm and 480 ft/lb at 4200 rpm. Stuff it into a ’70 Camaro with a 3.55 posi rear, and a modern 5-speed manual trans. Also go ahead and upgrade the brakes and suspension. Then grin a lot at the clown-car driving bunch.

http://i635.photobucket.com/albums/uu80/Patvann/70Camaro.jpg

http://i635.photobucket.com/albums/uu80/Patvann/PatsSmallBlockChevrolet.jpg

Here is the problem with your vast leftist media conspiracy (MSM nowhere to be found):

You dropped the ball. The emails first appeared on the skeptic site “the Airvent” (http://noconsensus.wordpress.com/2009/11/19/leaked-foia-files-62-mb-of-gold/) on Thursday, Nov. 19. The leftist, conspiratorial NYT published an online story the very next day, here:

CNN even got to it before you did…what gives?

And another thing:

You post this:

But here’s what’s undeniable: If a divergence exists between measured temperatures and those derived from dendrochronological data after (circa) 1960, then discarding only the post-1960 figures is disingenuous to say the least. The very existence of a divergence betrays a potentially serious flaw in the process by which temperatures are reconstructed from tree-ring density. If it’s bogus beyond a set threshold, then any honest man of science would instinctively question its integrity prior to that boundary. And only the lowliest would apply a hack in order to produce a desired result.

And here is the abstract from an old paper on the matter by the “tricksters” in question. Take note: they agree (Phil Jones does) that the divergence exists! Why were you not aware of this?

Letters to Nature

Nature 391, 678-682 (12 February 1998) | doi:10.1038/35596; Received 14 May 1997; Accepted 11 November 1997

Reduced sensitivity of recent tree-growth to temperature at high northern latitudes

K. R. Briffa1, F. H. Schweingruber2, P. D. Jones1, T. J. Osborn1, S. G. Shiyatov3 & E. A. Vaganov4

1. Climatic Research Unit, University of East Anglia, Norwich NR4 7TJ, UK
2. Swiss Federal Institute of Forest, Snow and Landscape Research, Zürcherstrasse 111, CH-8903, Birmensdorf, Switzerland
3. Institute of Plant and Animal Ecology, Ural Branch of the Russian Academy of Sciences, 8 Marta Street, Ekaterinburg 620219, Russia
4. Institute of Forest, Siberian Branch of the Russian Academy of Sciences, Krasnoyarsk, Russia

Correspondence and requests for materials should be addressed to K. R. Briffa (e-mail: k.briffa@uea.ac.uk).


Tree-ring chronologies that represent annual changes in the density of wood formed during the late summer can provide a proxy for local summertime air temperature1. Here we undertake an examination of large-regional-scale wood-density/air-temperature relationships using measurements from hundreds of sites at high latitudes in the Northern Hemisphere. When averaged over large areas of northern America and Eurasia, tree-ring density series display a strong coherence with summer temperature measurements averaged over the same areas, demonstrating the ability of this proxy to portray mean temperature changes over sub-continents and even the whole Northern Hemisphere. During the second half of the twentieth century, the decadal-scale trends in wood density and summer temperatures have increasingly diverged as wood density has progressively fallen. The cause of this increasing insensitivity of wood density to temperature changes is not known, but if it is not taken into account in dendroclimatic reconstructions, past temperatures could be overestimated. Moreover, the recent reduction in the response of trees to air-temperature changes would mean that estimates of future atmospheric CO2 concentrations, based on carbon-cycle models that are uniformly sensitive to high-latitude warming, could be too low.

PV, Nancy Pelosi is going to be upset if she sees that mechanical stuff in your garage. She thought you might be interested in a Prius. It has……oh well.

I am interested in a Prius… At target practice for my Big Mac. 🙂

(Happy Thanksgiving my friend.)

PV, now I know what they are good for. Toyota has soiled its nest anyway. Best Thanksgiving wishes to you and yours, my friend.

PS PV, tree rings vary quite a bit depending on their exposure to the sun, north slope, south slope, mucho runoff, poco runoff, thin soil, rich soil, peat, muskeg, glacial till, etc., but I am sure those guys wouldn’t cherry-pick their data. Just a few notes from a guy who has felled a lot of big trees. You want tree-ring data, give me the grant, I’ll get you whatever silly data you require, but expert scientific observation doesn’t come cheap.

Of course you’re right about the trees. Funny how I’ve recently read those very concerns. 😉 You can now call yourself a scientist, like some Russians did. (They wrote a paper about their little hikes, and were awarded PhDs for it.)

The hard-working field researchers know this about trees, as do the folks who control the data. But those same folks instructed the programmers to ignore/modify that little aspect of the circle-of-life, then proceeded to tell us we’re all gonna die because we might be .2 degrees warmer in 20 years unless we pay the UN and give up some sovereignty. They also used that data to pretend the medieval warming period didn’t happen, to the extent that they needed it not to happen, so that the late ’90s looked like the hottest ever. Tangled webs all over the place…

BTW. If you write a proposal to the right people, you can be paid several grand a day, (helicopters!) and maybe get some elk-loin to feast on atop it all. PETA might even turn a blind eye because you’ll be seen as on the “correct” side of “science” by them.

Then again, you probably enjoy clear-conscience-sleeping at night . 🙂

@Patvann: I’ve really appreciated your analysis of this data because it’s been very level-headed and unbiased, but I can’t make heads or tails of your accusation here. I’ve re-read the exchange several times and it looks pretty clear that Caspar is going to create an environmental model; Tim, not knowing the model itself, will apply simulated instrumental noise to it; and then it will be released to the public. Both Tim and Caspar, as challenge organizers, are precluded from participating. In fact, as these e-mails originate from 2006, the challenge has already been initiated; you can read about it here, and it clearly states who is involved in each sub-class of the hidden data, including Caspar in the model group itself.

Now, my point isn’t to getcha on some misinterpretation, but I do think this is an example of the knee-jerk assumptions others have been making in looking at these e-mails and claiming that use of the word “trick” or “hidden data” or whatever is damning of the entire field.

My other question is INRE the larger point that Curt made, which is that certain code snippets include very blatant “fudge-factors” and biasing. Have you seen anything to suggest that this code was actually used in the published analysis and not just an experiment? For instance, I personally have plenty of code which analyzes how data should look given certain model assumptions … but there’s a huge leap between doing such experiments (pretty standard) and actually submitting their results for publication (unethical). This actually shouldn’t be hard to check: Nature has a very open and unequivocal policy on data release, so simply comparing the output of the fudged programs to the published data should be quite clear.

Personally, I don’t really have any stake in the GW debate one way or the other (frankly, I’m surprised that it’s somehow lumped in to one political party versus the other) – I do think these e-mails can shed some light on the process, but only if they’re looked at in an unbiased way.

Triz, INRE your comment:

My other question is INRE the larger point that Curt made, which is that certain code snippets include very blatant “fudge-factors” and biasing. Have you seen anything to suggest that this code was actually used in the published analysis and not just an experiment?

Examples are readily available in the New Zealand uproar, easily seen when comparing the actual raw data to their published data.

As far as I can tell, the NZ data is an entirely separate issue and is not mentioned in conjunction with this code or even the e-mails in general … what bearing does it have on the accusations Curt made?

As for the NZ data itself, you yourself have explained the corrections, as has NIWA (here), and they’re nothing out of the ordinary. I would trust the CSC in their accusations more if this weren’t a years-old issue that they’re now re-hashing to spin off of the CRU affair: when Treadgold is actually pressed he just about admits it. However, if you feel that their corrections were bogus, all of the data in that study was made public and easily accessible, so you should submit a letter to their journal.

Feel free to interpret it for yourself, but it looked to me like everyone will know what each other is doing. I don’t think this was a release-to-public (or publication) data-set, but a “contest” of sorts between their group and others to test their different software code against a given. I have not read the link you provided in any detail yet, (it’s midnight here in Calif, and I’m going to bed soon) but I shall in the morning. I do agree it’s important to test code against a known value, especially if new data-points are being added over time.

I am quite able to accept that I may be wrong in what I think I see here. I may have even got some of the e-mails out of time-sequence and failed to see context correctly, but in the large scheme of all this, it’s very minor one way or the other compared to my other finds.

As to your second question, I have many, many instances of them shaping data and code for a desired outcome. Most blatantly for talks and releases to the public, (less so for publication.) But even their methods of review and publication are suspect. (I’ve never heard of being able to choose the reviewers within the pubs, for instance, nor have the editor “help” a submitter along.)

The “process” keeps making me think of sausage…

Read around the site, as I have been posting excerpts here and there. I am in the midst of composing what I hope to be a comprehensive and unbiased look at all of this for a Reader-Post later this week.
-Reading through over a thousand e-mails one at a time is tiring. They initially all came in Notepad format, and I had a bunch of teenagers put them into Word and add dates for me to be able to sort them more easily (an initial culling was done to remove “dumb” ones). I’ve also been through 70-some-odd documents, PDFs and files that are separate from the 12 years’ worth of internal communication. I am doing this through my “engineer’s eyes”, and have definitely been trying to stay objective. (I have seen some good science going on within it, too.) In short, this isn’t easy to do in the relatively short time-period I’ve set for myself.

Welcome to the site, but be warned that we don’t hide which side of the “Isle” we prefer. 🙂

The point Curt was making is that the scientists are taking raw data and altering it for their publications to show a trend that is not in the raw data, Trizzlor. That has direct bearing on Curt’s thread here.

Nor was the NZ raw data provided in their publication; it was denied for years by Salinger until just a few days ago, when a colleague finally released it. Additionally, they simply use “accepted” data manipulation formulas… sans any other reason than that they’re there… in order to achieve the incline.

This blind acceptance of unexplainable reasoning for adjustments *is* the problem. The only explanation that can be reached is that it is designed to create warming where there is none… as Treadgold notes in his statement… “…we say what is true: that no warming is evident until after the adjustments.”

Secondly, I provided NIWA’s statements as to the morphed data in my own NZ post. I did *not* say that explanation had an ounce of credibility. It becomes a current issue because, like the CRU, they have steadfastly refused to release the data until now, and make adjustments merely to show activity that simply doesn’t exist. It’s called a “pattern” and a “trend”… but not in warming, in deception.

Just to add, Trizzlor…. in my NZ post I had put a link to a published paper that was examining the integrity of these various data sets of adjustments using 11 stations in Colorado.

Their bottom line can be summed up in a few paragraphs on pg 15-166 of that 14-page abstract of the over-400-page publication, where they were discussing trying various adjustments.

We attempted to apply the time of observation adjustments using the paper by Karl et al. (1986). The actual implementation of this procedure is very difficult, so, after several discussions with NCDC personnel familiar with the procedure, we chose instead to use the USHCN database to extract the time of observation adjustments applied by NCDC. We explored the time of observation bias and the impact on our results by taking the USHCN adjusted temperature data for 3 month seasons, and subtracted the seasonal means computed from the station data adjusted for all except time of observation changes in order to determine the magnitude of that adjustment. An example is shown here for Holly, Colorado (Figure 1), which had more changes than any other site used in the study.

See document for associated graph

What you would expect to see is a series of step function changes associated with known dates of time of observation changes. However, what you actually see is a combination of step changes and other variability, the causes of which are not all obvious. It appeared to us that editing procedures and procedures for estimating values for missing months resulted in computed monthly temperatures in the USHCN differing from what a user would compute for that same station from averaging the raw data from the Summary of the Day Cooperative Data Set.

This simply points out that when manipulating and attempting to homogenize large data sets, changes can be made in an effort to improve the quality of the data set that may or may not actually accomplish the initial goal.
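For readers who want to see what that differencing amounts to, here is a minimal sketch with made-up numbers (the array names and values are mine, not the paper’s): subtract the series adjusted for everything except time of observation from the fully adjusted series, and what is left is the implied TOB adjustment — the quantity the authors plot for Holly, Colorado.

years       = findgen(50) + 1950.                            ; made-up years
all_but_tob = replicate(12.0, 50) + randomn(seed, 50)*0.3    ; stand-in: adjusted for everything except TOB
fully_adj   = all_but_tob + 0.2*(years ge 1976)              ; pretend a 0.2 C TOB step was applied in 1976
tob_adj     = fully_adj - all_but_tob                        ; the residual is the implied TOB adjustment
plot, years, tob_adj, xtitle='Year', ytitle='Implied TOB adjustment (C)'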

In other words, this whole adjustment nonsense has pretty much been shown to be an exercise in futility, unstable, and inaccurate. Yet it is “accepted” as corrections… and few, if any, of the pro-AGW papers actually compare the raw data in their publications. They only use the corrected data as their foundation.

Bad juju

Your questioning of the “Challenge” and the subsequent link has bugged me, primarily because of the logical way you put it. So in light of this, I went “a-hunting”, and sure enough, I saw/assumed something that was not “there”. In looking through them again this morning, I went through the messages that preceded that one by number, not by the dates my “staff” had re-labeled them with (per what’s on the mail headers). Sure enough, a couple are out of order, so the context was not apparent.

I have a big ego, not a fragile one, so here is my correction with a public mea culpa, and its associated evidence from e-mail # 1151577820:

On 23.06.2006 19:23, “Caspar Ammann” wrote:

Hi Tim,

just back from the various trips and meetings, most recently
Breckenridge and the CCSM workshop until yesterday. This coincided with
the release of the NRC report…

Thanks Tim for getting in touch with Simon and Eduardo. And I would
think it would be excellent if you would be on the reconstruction side
of things here. We really need to make sure that all the reconstruction
groups (the ones that show up in the spaghetti-graph) also provide
reconstructions for the Challenge. By the way, Mike Mann is fine with
the participation of the german group in this as he has spoken now
favorably on the project.

I think the separation you point at is absolutely crucial. So, as I
indicated in Wengen, I would suggest that we could organize a small
group of modelers to define the concepts of the experiments, and then
make these happen completely disconnected from standard data-centers. A
Pseudo-Proxy group should then develop concepts of how to generate
pseudo-proxy series and tell the modelers where they need what data. But
what they do is not communicated to the modelers. Based

The underlying concept as well as the technical procedure of how we
approach the pseudo-proxies should be made public, so that everybody
knows what we are dealing with. We could do this under the PAGES-CLIVAR
intersection umbrella to better ensure that the groups are held separate
and to give this a more official touch. Below a quick draft, we should
iterate on this and then contact people for the various groups.

So long and have a good trip to Norway,
Caspar

Here a very quick and simple structural draft we can work from: (all
comments welcome, no hesitations to shoot hard!)

Primary Goals:

– cross-verification of various emulations of same reconstruction
technique using same input data
– comparison of skill at various time scales of different techniques if
fed with identical pseudo-proxy data
– sensitivities of hemispheric estimates to noise, network density
– identify skill of resolving regional climate anomalies
– isolate forced from unforced signal
– identify questionable, non-consistent proxies
– modelers try to identify climate parameters and noise structure over
calibration period from pseudo-proxies

Number of experiments:

– available published runs
– available unpublished, or available reordered runs
– CORE EXPERIMENTS OF CHALLENGE: 1-3 brand new experiments
one experiment should look technically realistic: trend in
calibration, and relatively reasonable past (very different phasing)
one experiment should have no trend in calibration at all, but
quite accentuated variations before
one could have relatively realistic structure but contains a
large landuse component (we could actually do some science here…)

Pseudo-Proxies and “instrumental-data”:

– provide CRU-equivallent instrumental data (incl. some noise) that is
degrading in time
– provide annually resolved network of pseudo proxies ((we could even
provide a small set of ~5 very low resolution records with some
additional uncertainty in time))
– 2 networks: one “high” resolution (100 records), one “low” resolution
(20), though only one network available for any single model experiment
to avoid “knowledge-tuning”, or through time separation: first 500-years
only low-red, then second 500-years with both.
– pseudo-proxies vary in representation in climate (temperature, precip,
combination), time (annual, seasonal) and space (grid-point, small region)

Organization of three separate and isolated groups, and first steps:

– Modeler group to decide on concept of target climates, forcing series.
Provide only network information to Proxy-Group (People? Ammann, Zorita,
Tett, Schmidt, Graham, Cobb, Goosse…).
– Pseudo-proxy group to decide on selection of networks, and
representation of individual proxies to mimic somewhat real world
situation, but develop significant noise (blue-white-red) concepts,
non-stationarity, and potential “human disturbance” (People? Brohan,
Schweingruber, Wolff, Thompson, Overpeck/Cole, Huybers, Anderson, …).
– Reconstruction group getting ready for input file structures: netCDF
for “instrumental”, ascii-raw series for pseudo-proxy series. Decide
common metrics and reconstruction targets given theoretical pseudo-proxy
network information. (People: everybody else)

Direct science from this: (important!)

– Forced versus internal variations in climate simulations (Modelers)
– Review and catalog of pseudo-proxy generation: Noise and stationarity
in climate proxy records, problems with potential human/land use
influence (Proxy Group)
– Detection methods and systematic uncertainty estimates (Reconstruction
Group)
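For anyone lost in the jargon of that draft: a “pseudo-proxy” is generally just a model temperature series deliberately degraded with noise, so that reconstruction methods can be tested against a known answer. A minimal sketch of the idea, entirely synthetic and not taken from the CRU files:

n          = 1000
true_temp  = smooth(randomn(seed, n), 25, /edge_truncate)      ; stand-in for a "known" model temperature run
pseudoprox = true_temp + randomn(seed, n)*stddev(true_temp)    ; proxy = signal plus comparable noise
plot, true_temp
oplot, pseudoprox, linestyle=1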

Thanks for keeping me on my toes. This topic is serious enough that, unlike some of the scientists involved, I intend to wholly accept corrections without taking them personally.

Some entries have no nebulous qualities whatsoever, though, other than the caveat that this is from someone “outside” the inner circle, but still very involved in data-gathering for the “Community” (as they call themselves).

Mail # 0912633188 (personal info deleted/lines bolded by me)

From: Bob Keeland
To: ITRDBFOR
Subject: Re: verification and uniformitarianism
Date: Wed, 2 Dec 1998 16:13:08 -0700
Reply-to: grissino

Frank is correct in that we need to define ‘abrupt climatic change’ or
even just ‘climate change.’

Using Jim’s Schulman Grove example suppose that the area supported a
stand of bristlecone pine 9,000 or more years ago, hence the scattered
remnants. Either a major catastrophic event or a fluctuation in climate
(call it climate change if you want) resulted in conditions that killed
the mature trees and eliminated any further recruitment for up to 1,000
years. This site may be near the limits of recruitment and with a major
(or minor perhaps) change in climate it could easily be beyond the
limits of recruitment. About 8,000 years ago climate again became
favorable for bristlecone pine recruitment and a new stand(s) developed
and have existed ever since. Some or most of the material remaining
from the original stand may be buried down in the valley, or the
original stand may have been small or sparse. The amount of time
between the loss of the original stand and the beginning of the new
stand would depend on the period of unfavorable weather and the amount
of time needed for bristlecone pine to re-invade the area. I am out on
a limb here, so to speak, as I am somewhat ignorant of prehistoric
climate patterns for the area and of bristlecone pine ecology, but this
seems like a relatively reasonable scenario.

I guess that my point is that climate continues to fluctuate within
broad bounds. Everything that we are now calling ‘climate change’ is
well within the bounds observed within the prehistoric record of climate
fluctuations. Do we call any variation ‘climate change’ or should we
limit the term climate change to anything considered to be caused by
humans? To my mind it is not so much what we call it, but rather that
we keep a clear idea of what we are actually talking about.

Bob Keeland
USGS, National Wetlands Research Center
Lafayette, LA

-Notice the Date: AKA: “The hottest year evah!” (sarc)

Allow me to add one that is indicative of Jones’ attempts to limit peer-reviewed dissenting opinions. [Emphasis added by me] This link courtesy of a James Taranto WSJ article and the content of the mails found at the site that has sorted some of the archives.

From: Phil Jones
To: “Michael E. Mann”
Subject: HIGHLY CONFIDENTIAL
Date: Thu Jul 8 16:30:16 2004

Mike,
Only have it in the pdf form. FYI ONLY – don’t pass on. Relevant paras are the last 2 in section 4 on p13. As I said it is worded carefully due to Adrian knowing Eugenia for years. He knows the’re wrong, but he succumbed to her almost pleading with him to tone it down as it might affect her proposals in the future ! I didn’t say any of this, so be careful how you use it – if at all. Keep quiet also that you have the pdf.

The attachment is a very good paper – I’ve been pushing Adrian over the last weeks to get it submitted to JGR or J. Climate. The main results are great for CRU and also for ERA-40. The basic message is clear – you have to put enough surface and sonde obs into a model to produce Reanalyses. The jumps when the data input change stand out so clearly. NCEP does many odd things also around sea ice and over snow and ice. The other paper by MM is just garbage – as you knew. De Freitas again. Pielke is also losing all credibility as well by replying to the mad Finn as well – frequently as I see it.

I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is !
Cheers
Phil

Mike,
For your interest, there is an ECMWF ERA-40 Report coming out soon, which shows that Kalnay and Cai are wrong. It isn’t that strongly worded as the first author is a personal friend of Eugenia. The result is rather hidden in the middle of the report. It isn’t peer review, but a slimmed down version will go to a journal. KC are wrong because the difference between NCEP and real surface temps (CRU) over eastern N. America doesn’t happen with ERA-40. ERA-40 assimilates surface temps (which NCEP didn’t) and doing this makes the agreement with CRU better. Also ERA-40’s trends in the lower atmosphere are all physically consistent where NCEP’s are not – over eastern US.

I can send if you want, but it won’t be out as a report for a couple of months.
Cheers
Phil

Prof. Phil Jones
Climatic Research Unit Telephone +44 (0) 1603 592090
School of Environmental Sciences Fax +44 (0) 1603 507784
University of East Anglia
Norwich Email p.jones@xxxxxxxxx.xxx
NR4 7TJ
UK

@Patvann: Thanks for the candid response; I’m not trying to bust your balls on this, only to say that when you’re hunting through decades of in-the-know emails, a lot of references to “tricks” or “hidden data” or “generated bias” are going to be perfectly explainable false-positives. If anything, this experience has made me re-evaluate how I write e-mails to colleagues and I was surprised to find how many times I use phrases like “hide/rid of the noise” which are perfectly justifiable but look nefarious out of context. Obviously right now this is not of much consequence, but once you put together a final report, I would make a clear distinction between what’s unequivocal and what’s speculation (perhaps literally split into two sections) so that some ass like me doesn’t trash the whole thing because of one misinterpretation.

As to your second question, I have many, many instances of them shaping data and code for a desired outcome. Most blatantly for talks and releases to the public, (less so for publication.) But even their methods of review and publication are suspect. (I’ve never heard of being able to choose the reviewers within the pubs, for instance, nor have the editor “help” a submitter along.)

I also come from an engineering background but am working in biology, and it is absolutely the norm to attach a letter to the editor with your submission that lists who would make a good or bad reviewer. The justification is that this field requires very specific background knowledge for each concentration, and a standard “biologist” wouldn’t be able to review properly. On the other end, the reviewer gets to see the author list and is supposed to report any conflicts of interest (though I’ve heard tales of reviewers holding up papers because they were submitting something similar). Likewise, super-impact journals like Nature often work with the submitting group to position a paper better or at the right time – I’ve personally seen a colleague’s submission held up for an issue so that it could come out back-to-back with another person’s work. In short, yes it’s a hot-dog factory, but I really can’t imagine it working as the “benevolent anarchy” that people often assume it is.

On the whole, it seems to me there are two main points you’re trying to make: A) that these scientists are fiercely competitive and political, and B) that there are forced modifications to the data which improperly affect the outcome to promote the warming theory. Point A is not going to be of much surprise to anyone who works in academia and probably won’t make a big splash; examples like the one MataHarley shows look pretty damning, but, judging by the response that Mann and RealClimate had to this passage, their tactic is just to claim lack of context until people stop caring. Point B can actually have an important effect on the field if you make a strong connection between the modified code and the published data – just my two cents, but this is where I would spend most of my time and leave point A as an entertaining fly-on-the-wall addendum.

~~~

@MataHarley: You and I (and the rest of the scientific community) can disagree on whether these adjustments were done properly or not, but adjusting to the mean across multiple sites when the unadjusted correlation is consistent (as you see between Kelburn and Airport) is not an unexplainable assumption unless you want to challenge the paradigm of taking averages altogether. The CSC’s report that the adjustments to the raw data appear to come from a mixture of sources rather than one doesn’t mean anything unless they can show that the sources do not exist (i.e. USHCN cannot provide them). Their charge that the data was fudged and my accusation that angels have been drinking the mercury in the thermometers are equally valid in the absence of direct causal evidence. Taking that unproven accusation and an unrelated snippet of code that no one knows the application of, and claiming a “pattern of deception”, is only going to convince the pre-convinced.

EDIT: I just noticed the updated RealClimate post which addresses many of the points you guys bring up – either by saying that the work is old/was never used, or that the published papers actually do discuss the points that the emails purport to hide.

Triz: You and I (and the rest of the scientific community) can disagree on whether these adjustments were done properly or not, but adjusting to the mean across multiple sites when the unadjusted correlation is consistent (as you see between Kelburn and Airport) is not an unexplainable assumption unless you want to challenge the paradigm of taking averages altogether.

And indeed I do, Triz. In my comment #20 above, I point out a study looking specifically at the way of averaging across 11 Colorado stations (also linked in my NIWA post). Please note that NIWA’s “averaging” was across only 7 stations.

This study was a 2001 paper published in the International Journal of Climatology, published online in Wiley Interscience, and copyrighted in 2002 by the Royal Meteorological Society. Shall we consider it “peer reviewed”? Well, perhaps not in Phil Jones’s opinion….

To pull a few of the highlighted excerpts from their conclusions:

…It appeared to us that editing procedures and procedures for estimating values for missing months resulted in computed monthly temperatures in the USHCN differing from what a user would compute for that same station from averaging the raw data from the Summary of the Day Cooperative Data Set.

This simply points out that when manipulating and attempting to homogenize large data sets, changes can be made in an effort to improve the quality of the data set that may or may not actually accomplish the initial goal.

I don’t know how a potential “peer review” document can say any more kindly that most of these “adjustments” are totally unpredictable. Per their findings, shall we consider such corrections absolute?

I think not. And another thing they said specifically… the fewer the stations, the less accurate the averaging adjustments.

However, this paper has been concertedly ignored since 2001. Any surprise there? It doesn’t fall into the lockstep mentality the IPCC’s agenda-driven “scientists” want from their colleagues.

UPDATED response, Triz: INRE your comment:

EDIT: I just noticed the updated RealClimate post which addresses many of the points you guys bring up – either by saying that the work is old/was never used, or that the published papers actually do discuss the points that the emails purport to hide.

I particularly love the “doesn’t stand the test of time” and “no one can do that” INRE redefining what is peer review. If a burglar confesses to attempted B&E, yet doesn’t succeed, is he any less culpable? The overt attempts to diss any papers of dissent have a long history, and didn’t start with the CRU database. It just proved that it was a desired goal… direct from the donkey’s mouth… achievable or not.

Quite simply, that attitude has no place in science, let alone in science that carries such economic and development repercussions, with the associated willy-nilly int’l legislation they want to pass. And if you’ll notice, even RC agrees that discussions of destroying materials on the FOIA demand list are unconscionable. However, that goes to the point in my paragraph above… if these are their desired goals, are they to be any less culpable and suspect in their “consensus” science findings?

These two points aptly demonstrate the reasoning for my “pattern of deception” remark. Successful at the deception or not, there is a pattern of attempting or wishing to implement such. And if the science is so all-fired convincing on its own, why?

@MataHarley: I think RCs point is that, when the e-mails state that a concern should be “hidden” but the actual publications actually address those concerns directly, it is more likely that the emails were taken out of context than that any devious activity was intended. This seems sound given that the statement “Redefine the peer-reviewed literature!” doesn’t actually make any sense, and the papers discussed actually are bad papers.

I’m sorry, Triz. “Bad papers”? In whose opinion but theirs?

Thank you for the hints/advice. Sticking with the “provable” goes much farther than exaggeration.

After finding out that RealClimate is the DIRECT mouthpiece of Hadley, I doubled the amount of “salt” applied.

@MataHarley: I’ve had a chance to read the paper you mention and I genuinely think that they are attacking a different problem from what is going on in NZ. First, from the abstract:

It is unlikely that one or a few weather stations are representative of regional climate trends, and equally unlikely that regionally projected climate change from coarse-scale general circulation models will accurately portray trends at sub-regional scales. However, the assessment of a group of stations for consistent more qualitative trends (such as the number of days less than −17.8 °C, such as we found) provides a reasonably robust procedure to evaluate climate trends and variability.

The point being that A) A few sub-regional weather stations are not representative of regional trends B) Regional trends cannot be used to infer sub-region scales C) Grouping stations together for qualitative trends is “reasonably robust”. Now, digging into the paper (what fun, by the way) shows that their main claim is that the trends at each site are significantly different from each other (Table II). In other words, Ft. Collins shows a significant increasing trend while Holly shows a significant decreasing trend, so it certainly doesn’t make any sense to just average the two sites together. Or as they put it: “The magnitude of spatial variation in this relatively homogeneous region far exceeds the ‘main effect’ of any average projected climate change.” This is a fair conclusion to reach from the Colorado data.

Now, let’s go back to the NZ plots: compare the distributions for Airport and Kelburn; these two sites are perfectly correlated – every rise and fall in Airport corresponds to an identical one in Kelburn. This kind of correlation is entirely different from the CO analysis, and it’s perfectly reasonable to normalize the sites to the mean. Moreover, Wellington isn’t some kind of minor outlier – by CSC’s own analysis it is the site with the strongest post-normalization change.

The most egregious fudging, however, is what CSC did in their own report. As detailed in the various critiques, CSC literally took Thorndon and Kelburn, two sites that were at different locations, and treated them as if they were a single “Wellington” distribution. As the previous link explains: “Look again at Treadgold’s graph. He makes no distinction between the blue and green lines — he just joins them up. Temps before the mid-20s were recorded at Thorndon, near sea level, but then the recording station moved to Kelburn at 125 m above sea level.” Whatever you may think of NIWA’s adjustments, Treadgold’s analysis is utterly bogus and clearly creates a false decline across two sites by pretending that they are a single continuous site.
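To make that concrete, here is a minimal sketch with invented numbers (nothing below is NIWA or CSC data): splice a sea-level record onto one from a site roughly 125 m higher without any offset and you manufacture a step that looks like cooling; shift the earlier site to the later site’s level first and the step disappears. Whether 0.8 °C is the right offset is a separate argument, but this is the kind of adjustment being debated.

thorndon = replicate(13.0, 25) + randomn(seed, 25)*0.2   ; invented: warmer, near-sea-level site
kelburn  = replicate(12.2, 25) + randomn(seed, 25)*0.2   ; invented: cooler site ~125 m higher
naive    = [thorndon, kelburn]                           ; raw splice: a spurious -0.8 C step appears
adjusted = [thorndon - 0.8, kelburn]                     ; remove the site offset before joining
plot, naive
oplot, adjusted, linestyle=1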

Of course, Treadgold has succeeded in one thing, and that is to piggy-back off the CRU hack to shed doubt on the completely unrelated NZ data, which in turn creates a “pattern of deception”. In reality, there has yet to be any demonstrative link between the “fudged” CRU experiments and actual published IPCC data.

Triz, perhaps you didn’t read my NIWA post in total. Had you done so, you could have gone directly to CSC’s report and read their words instead of taking the twisted version from an AGW-proponent blog site.

Also, had you done so, you could check out the various comparison charts that CSC put together. I had only provided Auckland’s chart from this compiled version of raw, adjusted, and adjustment differences in my post. But since you are discussing Wellington, I’ve provided the CSC compiled graph for Wellington below, plus some excerpts from the CSC paper.

As CSC notes, older readings have been adjusted way down, and later readings adjusted way up, in order to create a “trend” where there is no warming trend. Now, basic adjustments are one thing, but extreme adjustments sans good reason, merely to display a non-existent trend, are another.

From pg 4 of the CSC paper direct, and not some naysayer blogger….

Six of the seven stations have had their past (pre-1950) data heavily adjusted downwards. In all six cases this significantly increased the overall trend. The trend at the remaining station, Dunedin, was decreased, but the reduction was not as great as the increases in the other six.

This graph helps to picture the differences. Note that, after adjustment, every station shows a warming trend, although, originally, three showed cooling and one (Lincoln) showed no trend. In every case, apart from Dunedin, a warming trend was either created or increased. It is highly unlikely this has happened by mere chance, yet to date Dr Salinger and NIWA refuse to reveal why they did it.

~~~

The following graphs dramatically show the effect of the adjustments NIWA applied to the raw temperature readings. The important thing to note is the difference in the slopes of the two trend lines, unadjusted (blue) and adjusted (red). When the slope becomes a climb, or gets steeper, from left to right it means they created warming or made it stronger.


‘splain that wide range of adjustment in the later years, please. If so, you’ll be a step ahead of NIWA, which doesn’t give a plausible explanation beyond saying they use “accepted” adjustment formulas.

Now, INRE your comment:

As detailed in the various critiques, CSC literally took Thorndon and Kelburn, two sites that were at different locations, and treated them as if they were a single “Wellington” distribution.

I have no idea what CSC paper you are discussing. If you search the PDF document I linked to the CSC report (relinked in the 4th paragraph of this comment) in my NIWA post (linked above), you’ll find no references anywhere in the paper to the Airport, Kelburn or Thorndon stations. Their paper focused on the unexplainable anomalies of seven stations:

• Auckland (1853)
• Masterton (1906)
• Wellington (1862)
• Hokitika (1866)
• Nelson (1862)
• Lincoln (1863)
• Dunedin (1852)

What kind of straw man defense is that??

~~~

The 2001 Colorado study revealed the instability of averaging and adjustments in general… that even well-intended data manipulation does not accurately portray what the actual raw data readings would be at the station itself.

Thus these are two different issues, but related in the sense that any adjustments applied that are not within logical reason (i.e. between two stations with identical trends and similar measurement patterns) result in a questionable dataset.

This is the reason I call *all* adjustments and corrections into question, Triz. How do we know what formula they applied, and why? Most especially in the case of CRU, where the original data was revised and merged into the IPCC database foundation… now conveniently unavailable to check for initial errors.

CSC has pointed out just such unexplainable adjustments, and NIWA has not sufficiently explained them but to say – paraphrased – “hey, it’s what we all use”.

Not good enuf.

I cross posted this from another thread as relevant, Triz.

~~~

[Comment by] mathman

I am a mathematician, and I teach statistics. Allow me to choose from a set of data the values that prove my point, and I can prove anything. Statistics has no validity if there is any tampering with the raw data. Statistics, as a mathematical skill, has as a basis the presumption that the purpose of computation is to infer information from the raw data.

A poll is taken. 60% of the respondents belong to one political party, where that political party represents 45% of the population. This is called a skewed poll, and is of no value.

Any textbook on statistics will tell you this; the text will even tell you the various means of assuring that the sample is truly random and includes no bias.

What I am saying about statistical computations is established fact; even non-mathematicians can verify that sampling and computation must be based on unbiased samples.
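To put numbers on that poll example (the weights below are my own arithmetic, not mathman’s): if one party is 45% of the population but 60% of the sample, each of its respondents has to be down-weighted, and everyone else up-weighted, before the raw percentages mean anything.

; Worked version of the skewed-poll example above (the 45%/60% figures come
; from the comment; the weights are my own arithmetic: population share / sample share).
print, 0.45 / 0.60   ; over-sampled party: each respondent should count as 0.75
print, 0.55 / 0.40   ; everyone else: each respondent should count as 1.375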

I am also a scientist. True science investigates phenomena without pre-ordained results. Government-funded science is by definition an oxymoron; to obtain a grant one must specify in advance the results one will obtain using the grant money. Fulfilling a government grant means making the committee which makes the grants happy, by checking that the conclusion matches the specified goal of the research. This method of investigation will discover nothing.
None of the great scientific discoveries of the past were pre-ordained. All of them were gotten by the acute observer seeing something unexpected and following that unexpected event to a new conclusion. Penicillin. Teflon. Carbon rings. Pasteurization. Relativity. I could go on for pages.

What is so sickening about AGW is the systematic effort to bias the data. To cherry-pick certain trees out of a larger sample, then claim that the trees (selected for their bias) are representative, is fraud.

To discard data because it is not reflective of the desired conclusion is fraud.

To use a high-pass filter to discard low temperatures and include high temperatures is fraud.
To systematically report the disappearing ice around the North Pole (when, in fact, said ice cyclically expands and contracts in its extent) is fraud.

To introduce one’s movie by using a fabricated view of a collapsing ice sheet from a commercial movie, presented as real, is fraud.

To publish one’s new book and include as the flyleaf an illustration of the Earth with many hurricanes (airbrushed in, including on the equator) is fraud.

The truly sick part of the story is that the fraud is justified by the nobility of its purpose: one New World Order, the grand dream of Karl Marx imposed upon all, so that all can enjoy living under tyranny.

AGW is not science. AGW is a hoax, a fraud, a vast chicanery foisted off on unsuspecting people, for the intended purpose of world domination.

And that is wrong.

I have no idea what CSC paper you are discussing. If you search the PDF document I linked to the CSC report (relinked in the 4th paragraph of this comment) in my NIWA post (linked above), you’ll find no references anywhere in the paper to the Airport, Kelburn or Thorndon stations. Their paper focused on the unexplainable anomalies of seven stations:

• Auckland (1853)
• Masterton (1906)
• Wellington (1862)
• Hokitika (1866)
• Nelson (1862)
• Lincoln (1863)
• Dunedin (1852)

What kind of straw man defense is that??

You’ve hit on my point exactly. The data that the CSC paper conveniently leaves out is that the Wellington station actually consists of three different stations (Thorndon, Kelburn, and Airport) that have been adjusted to the mean to create one distribution. In his table on “Wellington Temperature anomaly” (pg. 6), Treadgold hides this fact by pretending that the three stations (represented as three colors by NIWA) are actually a single station, represented by a single blue “NIWA Unadjusted” line. This is absolutely bogus: no one in their right mind would just concatenate three different stations to form one distribution, without even making a note of it anywhere in the paper. Treadgold is doing exactly what the Colorado paper suggests not to do – blindly merging multiple data points. The jump at 1930 is then easily explained, because Thorndon was adjusted down to the mean and Kelburn/Airport were adjusted up/down to the mean and cancel each other out.

’splain that wide range of adjustment in the later years, please. If so, you’ll be a step ahead of NIWA, which doesn’t give a plausible explanation beyond saying they use “accepted” adjustment formulas.

Not to sound like a jerk, but perhaps you should actually read the NIWA explanation. They explicitly document the fact that there are three stations not one; that two of them are extremely well correlated in the overlap; and that their temperature differences are consistent with their altitude differences.
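As a sanity check on the altitude point (my own back-of-the-envelope arithmetic, not NIWA’s or CSC’s): at the standard environmental lapse rate of roughly 6.5 °C per kilometre, the ~125 m difference between the Thorndon and Kelburn sites works out to about 0.8 °C, which is the size of the offset in question.

; Back-of-the-envelope lapse-rate check (my arithmetic, not NIWA's or CSC's):
; ~6.5 C per km over a ~125 m altitude difference.
print, 6.5 * 0.125   ; ~0.81 C expected cooling at the higher site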

Sorry for the delayed response, trizzlor. Can you say ears and alligators? The chaos that constitutes my life of late…. LOL

INRE Wellington. I was actually under the assumption it was four stations. But perhaps you are correct it’s only three.

You seem to have a problem with the combined “adjustment” treating the stations as one. That’s odd, as NIWA does the same. So it’s only a problem when NZCSC does it? I’m quite sure that’s not what you mean to convey.

NZCSC’s John McLean discussed Wellington in his article yesterday:

NIWA gave the example of an adjustment due to a change of location in 1928 from Thorndon, on the northern edge of Wellington city, to Kelburn, about 1km away but 120m higher, which NIWA claimed meant an average of 0.8°C cooler but which appears to be based on simplistic assumptions.

Sure enough this shift appears on the NZCSC’s graph of the difference between original and adjusted Wellington temperatures, but so too does an unexplained adjustment that slides in linear fashion across 1910-15, when it totals about 0.3°C, and a series of unexplained irregular adjustments since 1970. A close check of other graphs of the difference reveals a number of distinct steps that could be associated with a change of station location – as many as four for Lincoln.

It appears that in some cases new and old observation stations operated simultaneously for a period that was perhaps long enough to sensibly calculate the average variation. In other cases though, one station ceased operation and simultaneously another started at a new location, so how was the variation between those sites determined?

Adjustments for station relocations are reflected as a consistent difference between the original and adjusted temperature but several stations also exhibit extended periods of unexplained irregular differences between the original and adjusted temperatures. These bring to mind the “trick” described in the CRU emails but perhaps NIWA can account for the irregularities.

In reality it needs to do more than that if it is to recover any credibility; it needs to fully describe all adjustments to the original data so that its calculations can be independently verified. There’s an enormous difference between no change in average temperature and what it claims is a rise of 0.92°C over the last 100 years, which it blames on human activity and is above the IPCC’s global average.

In short, NIWA’s adjustments remain an enigma. And, in fact, NIWA’s David Wratt consistently says that the impact of global warming is likely to be felt less in NZ because it’s surrounded by oceans.

My… that sure contradicts NIWA’s adjusted graphs, which show warming where there is none…

Perhaps this contradiction with the trend in the NIWA-adjusted temperature data can be resolved by the graph of NIWA-adjusted temperature for New Zealand, in which it is clear that the overall trend is largely driven by temperature change between 1850 and 1950, with a flat trend in later years. The IPCC’s 2007 report said that human activity had very little influence on temperatures prior to 1950 and far more influence after that year, which leaves NIWA the difficult task of explaining its belief that human activity has driven the increase in what it claims is the New Zealand average temperature.

Coming on top of Climategate and the finding by Anthony Watts that 80% of temperature observation stations in the USA are not sited in accordance with quality standards but often near local heat sources, this claim by the NZCSC further raises the question of the accuracy of temperature monitoring and its subsequent processing.

And this comes back to my main point. The accuracy of both the monitors and the adjustment models/formulas needs to be questioned. Where the measurements are taken, and the methods of adjustment – methods that have NOT been “peer-reviewed” but are simply given a pass – are all highly suspicious. There’s no more “trust me” granted in light of the emergence of these corrupt sideshows in the backrooms.

And yes I “read” NIWA’s response and – as a matter of fact – had provided the most current version of it at the time of my NIWA post. I suppose what you want is for me to read it and blindly accept their commentary sans suspicion – this despite their repeated refusals to respond to NZCSC’s documented requests for clarification on their select adjustments.

Therefore yup… you sounded like a jerk.

One more Wellington/correction piece for you to mull over, triz. And let me say that I do consider you a well-reasoned commenter. You and I have had our bouts in the past over group vs. individual insurance, and the pre-existing conditions bit. You did a bit of research and learned something new. So I like that you are open to things when presented well.

So let me add one of Anthony Watts’ comments INRE Wellington’s bizarre adjustments.

With no overlap of continuous temperature readings from both sites, there is no way to truly know how temperatures should be properly adjusted to compensate for the location shift.

Wratt told Investigate earlier there was international agreement on how to make temperature adjustments, and in the news release tonight he elaborates on that:

“Thus, if one measurement station is closed (or data missing for a period), it is acceptable to replace it with another nearby site provided an adjustment is made to the average temperature difference between the sites.”

Except, except, it all hinges on the quality of the reasoning that goes into making that adjustment. If it were me, I would have slung up a temperature station in the disused location again and worked out over a year the average offset between Thorndon and Kelburn. It’s not perfect, after all we are talking about a switch in 1928, but it would be something. But NIWA didn’t do that.

Instead, as their news release records, they simply guessed that the readings taken at Wellington Airport would be similar to Thorndon, simply because both sites are only a few metres above sea level.

Airport records temps about 0.79C above Kelburn on average, so NIWA simply said to themselves, “that’ll do” and made the Airport/Kelburn offset the official offset for Thorndon/Kelburn as well, even though no comparison study of the latter scenario has ever been done.

Here’s the raw data, from NIWA tonight, illustrating temp readings at their three Wellington locations since 1900:

What’s interesting is that if you leave Kelburn out of the equation, Thorndon in 1910 is not far below Airport in 2010. Perhaps that gave NIWA some confidence that the two locations were equivalent, but I’m betting Thorndon a hundred years ago was very different from an international airport now.

Nonetheless, NIWA took its one-size-fits-all “adjustment” and altered Thorndon and the Airport to match Kelburn for the sake of the data on their website and for official climate purposes.

In their own words, NIWA describe their logic thus.

Where there is an overlap in time between two records (such as Wellington Airport and Kelburn), it is a simple matter to calculate the average offset and adjust one site relative to the other.
Wellington Airport is +0.79°C warmer than Kelburn, which matches well with measurements in many parts of the world for how rapidly temperature decreases with altitude.
Thorndon (closed 31 Dec 1927) has no overlap with Kelburn (opened 1 Jan 1928). For the purpose of illustration, we have applied the same offset to Thorndon as was calculated for the Airport.
The final “adjusted” temperature curve is used to draw inferences about Wellington temperature change over the 20th century. The records must be adjusted for the change to a different Wellington location.
Now, it may be that there was a good and obvious reason to adjust Wellington temps. My question remains, however: is applying a temperature example from 15km away in a different climate zone a valid way of rearranging historical data?

And my other question to David Wratt also remains: we’d all like to see the methodology and reasoning behind the adjustments at all the other sites as well.

Exactly. Or as Cuba Gooding Jr. put it in Jerry Maguire… “show me the money!”

If there is any quasi-logic behind their adjustments, it should be disclosed. Using the “accepted” formulas (which have never been peer-reviewed, BTW) just doesn’t cut the truth-meter mustard.
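
For anyone who wants to see the mechanics, here’s a bare-bones sketch – made-up numbers, not NIWA’s actual records – of the overlap-offset logic described in the quoted release, including the contested step where the Airport/Kelburn offset is simply assumed to hold for Thorndon/Kelburn:

```python
# Bare-bones sketch of an overlap-offset adjustment (hypothetical values only).
import numpy as np

kelburn_overlap = np.array([13.2, 13.5, 13.1, 13.4])   # hypothetical months where both sites ran
airport_overlap = np.array([14.0, 14.3, 13.9, 14.2])

# Average offset where the two records overlap (NIWA cites roughly +0.79 C).
offset = (airport_overlap - kelburn_overlap).mean()

# Thorndon has no overlap with Kelburn, so -- per the quoted release -- the same
# offset is assumed for it. That assumption is exactly what is being questioned.
thorndon_raw = np.array([14.9, 15.1, 14.8])             # hypothetical pre-1928 values
thorndon_adjusted = thorndon_raw - offset

print(f"overlap offset: {offset:.2f} C")
print("Thorndon adjusted onto the Kelburn reference:", thorndon_adjusted)
```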

Sorry for the delayed response, trizzlor. Can you say ears and alligators? The chaos that constitutes my life of late…. LOL

I can understand that humoring a comment thread isn’t your top priority on Thanksgiving weekend so thanks for whatever time you can give me … it’s been very useful.

You seem to have a problem with the combined “adjustment” for the stations as one. That’s odd, as NIWA does the same. So it’s only a problem when NZCSC does it? I’m quite sure that’s not what you mean to convey.

This is cute 🙂 My problem is that Treadgold omits the fact that he’s concatenating three distributions into one to make a trend, while NIWA makes no such omission and explains how they’re doing the adjustment. Whatever we may come to think of the NIWA data, CSC is either purposefully misrepresenting the raw data or doesn’t know what the heck they’re doing.

Now, the rest of your links essentially accuse NIWA of keeping their adjustments an “enigma” and of releasing just the one toy example from Wellington. Salinger has authored dozens of papers, so it’s hard for me to believe that none of them detail the merging methodology … and, in fact, NIWA has a very thorough follow-up which explains that the methodology is explicitly detailed in two papers (1992, 1993). As luck would have it, I can’t get at the critical paper (Rhoades, D.A. and Salinger, M.J., 1993: Adjustment of temperature and rainfall measurements for site changes. International Journal of Climatology 13, 899–913) through Wiley, but I’ve put in a request. The paper I could get at discusses the results and summarizes the methodology as follows:

For the more detailed New Zealand study, data from 20 New Zealand sites were used. Station histories were prepared (Collen, 1992; Fouhy et al., 1992) from which the homogeneity of the temperature records could be assessed.

The next procedure was to carefully homogenise the temperature series that were as complete as possible from each of the selected climate sites. The methodology of Rhoades and Salinger (1993) was used as this provides a procedure for adjustment of temperature series for sites where no neighbour stations exist for comparisons. Many of the island sites in the South Pacific have no neighbour stations, especially in their earlier years of record. In all cases where adjustments have been made, the data from any earlier site was adjusted to that of the current temperature recording location.

To summarise the temporal temperature trends over such a vast area of the Pacific, two approaches were used to define areas that share coherent temperature anomalies. The first, cluster analysis using hierarchical agglomerative techniques (Willmott, 1978) was used to group stations into clusters based on degree of association from annual values of temperature. Principal component analysis was the second methodological approach employed (Salinger, 1980a, b). As the purpose was to investigate the spatial distribution of interannual temperature anomalies, the principal components were rotated orthogonally by the varimax criterion so as to produce components that delineate separate groups of highly intercorrelated stations.

This isn’t a trivial analysis, obviously, but the underlying techniques (hierarchical clustering & PCA) are not at all unusual, and they were applied here specifically to avoid the problems detailed by the Colorado paper. I’m not going to draw hard conclusions until I get the 1993 paper in the next few days, but the methodology certainly didn’t just come out of thin air. At the very least, real climatologists like Treadgold and Watts should be making their criticisms with regard to the literature, not CSC’s or NIWA’s exaggerated/simplified press releases.
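
For what it’s worth, neither technique is exotic. A bare-bones illustration – random numbers standing in for the real station anomalies, and leaving out the varimax rotation mentioned in the quote – looks something like this:

```python
# Bare-bones illustration of the two techniques named in the quoted methodology:
# hierarchical agglomerative clustering of stations and PCA of their anomalies.
# Random data only -- NOT the Pacific station records.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_stations, n_years = 20, 50
anomalies = rng.normal(size=(n_stations, n_years))         # stand-in anomaly matrix

# Cluster stations by how well their anomaly series correlate.
corr = np.corrcoef(anomalies)
condensed = 1.0 - corr[np.triu_indices(n_stations, k=1)]   # condensed distance vector
clusters = fcluster(linkage(condensed, method="average"), t=4, criterion="maxclust")

# PCA via SVD of the centered (station x year) matrix; the leading components
# capture the dominant shared patterns of interannual variability.
centered = anomalies - anomalies.mean(axis=1, keepdims=True)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()

print("cluster assignments:", clusters)
print("variance explained by first 3 components:", explained[:3].round(3))
```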

Lastly, I think the most interesting aspect of the NIWA release is the following:

(b) measurements from climate stations which have never been shifted

Dr Jim Salinger has identified from the NIWA climate archive a set of 11 stations with long records where there have been no significant site changes. When the annual temperatures from all of these sites are averaged to form a temperature series for New Zealand, the best-fit linear trend is a warming of 1°C from 1931 to 2008. We will be placing more information about this on the web later this week.

We’ll wait and see, though I’m sure the debate will just shift to how these stations were hand-picked…
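
For reference, the “best-fit linear trend” in that release is just an ordinary least-squares fit to the averaged annual series. A sketch with synthetic numbers – not the actual 11-station archive – would look like:

```python
# Least-squares trend over synthetic annual means (NOT the NIWA 11-station data).
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1931, 2009)
annual_mean = 12.5 + 0.013 * (years - 1931) + rng.normal(scale=0.3, size=years.size)

slope, intercept = np.polyfit(years, annual_mean, deg=1)
print(f"best-fit trend: {slope * (2008 - 1931):.2f} C over 1931-2008")
```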