The Intergovernmental Panel on Climate Change warned us we would have less snow and rain.

From The Times of London, December 3, 2003, in an article titled “Ski resorts face ruin as snow disappears”:

The report considers two possible scenarios for global warming over the next 50 years, both based on predictions made by the Intergovernmental Panel on Climate Change.

Under the Echam model, temperatures will rise by 1C to 3C (1.8F to 5.4F) by 2050, with few changes in precipitation.

The Canadian climate change (CCC) model assumes a similar increase in temperature, although it is weighted towards the top end of the range. It also involves a fall in overall precipitation.

The Echam scenario would raise the average level at which snowfall is reliable from 1,200m to 1,500m across the Alps.

The CCC predictions are even bleaker, with the snow line rising to 1,800m.

Either outcome would threaten a resort such as Kitzbühel, where the village stands at an altitude of 760m and the highest lifts reach just 2,000m.

A little snow would remain at the top of the mountain, but most slopes would be short and bare.

Dr Bürki said: “We could be talking about half the resorts in Europe.”
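The altitude figures quoted in the article can be turned into a rough back-of-the-envelope check on what each scenario would leave skiable. This is a toy sketch using only the numbers quoted above; the "reliable snow" logic is a deliberate simplification, not the study's method.

```python
# Toy arithmetic from the quoted figures: reliable-snow line today
# (1,200m) vs the Echam (1,500m) and CCC (1,800m) scenarios, against
# Kitzbühel's village (760m) and top lift (2,000m).

village, top_lift = 760, 2000
snowline = {"today": 1200, "Echam": 1500, "CCC": 1800}

for scenario, line in snowline.items():
    # Vertical metres between the reliable-snow line and the top lift.
    skiable = max(0, top_lift - max(line, village))
    print(f"{scenario}: {skiable} m of snow-sure vertical")
```

By this crude measure, of the roughly 1,240 vertical metres of lift-served terrain, about 800 m is snow-sure today, falling to 500 m under Echam and only 200 m under the CCC scenario, which matches the article's "short and bare slopes" description.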

The above are predictions from the Intergovernmental Panel on Climate Change. Now the Global Warmers are claiming the complete opposite of what they claimed in 2003. They are claiming Global Warming will cause more snow.

The Independent wrote in March 2000:

“Snowfalls are now just a thing of the past.”

But now Al Gore writes on his website about record snowfalls all over the world.

“As it turns out, the scientific community has been addressing this particular question for some time now and they say that increased heavy snowfalls are completely consistent with what they have been predicting as a consequence of man-made global warming.”

How quickly the Global Warmers change their claims.


About thickmudd

A proud conservative
This entry was posted in Climate Change, Environment, Global Warming, Green, science. Bookmark the permalink.

20 Responses to The Intergovernmental Panel on Climate Change warned us we would have less snow and rain.

  1. Ron Broberg says:

    If you want to know what the IPCC said, you should RTFM:

    … Snow cover is projected to decrease. Widespread increases in thaw depth are projected to occur over most permafrost regions. {10.3} …

    … Models suggest that changes in mean precipitation amount, even where robust, will rise above natural variability more slowly than the temperature signal. {10.3, 11.1}

    Available research indicates a tendency for an increase in heavy daily rainfall events in many regions, including some in which the mean rainfall is projected to decrease. In the latter cases, the rainfall decrease is often attributable to a reduction in the number of rain days rather than the intensity of rain when it occurs. {11.2–11.9}

    Numbers refer to sections of the IPCC AR4 report. Note that ‘snow cover’ is not equivalent to ‘snowfall’, just as area is not equivalent to volume.
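The rain-days-versus-intensity distinction in the quoted passage can be shown with a toy calculation. All numbers below are invented for illustration; the point is only that mean rainfall can fall while heavy events increase, if the number of rain days drops faster than per-event intensity rises.

```python
# Toy illustration of the quoted IPCC point: fewer rain days with
# heavier rain per event can still mean less total (mean) rainfall.
# All figures are made up for illustration.

baseline_days, baseline_intensity = 100, 10.0   # rain days/yr, mm/day
future_days, future_intensity = 70, 12.0        # fewer days, heavier rain

baseline_total = baseline_days * baseline_intensity   # mm/yr
future_total = future_days * future_intensity         # mm/yr

assert future_total < baseline_total          # mean rainfall decreases
assert future_intensity > baseline_intensity  # heavy events intensify
```

So "less mean rain" and "more heavy rain events" are not contradictory claims; they describe different statistics of the same distribution.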

    • thickmudd says:

      The sources I posted from are from 2000 and 2003. I will look for the 2000 study and post a link.
      But your source is guilty of calling it both ways. It calls for more rain in some regions and less rain in others, so no matter what happens they can claim to be correct.

  2. Christine says:

    Your comments betray a lack of scientific background, as with many others who attack the 1000 page IPCC document. Scientists don’t check the entrails of chickens to see whether or not there will be snowfall next year. They make an educated guess based on the best knowledge available at the time, and then continue to evaluate the data as it comes in.
    Only a fool, or a knave, would look at the IPCC document and conclude that it is pronouncing certainties rather than predicting trends. Which one are you?

    • thickmudd says:

      A lot of the claims made in the IPCC document have been proven false, and proven not to have even been fact-checked.
      Like the IPCC’s Himalayan glacier ‘mistake’, where they knew there was no data to back it up.

      And Mann’s hockey stick, which used a fake chart, and which used tree ring data before the 1960s and measured data after the 1960s because they couldn’t make the tree ring data work after 1960. Yet we are told tree ring data is accurate 2,000 years ago.

      • Ron Broberg says:

        You are confusing WMO cover art with Mann’s 1999 reconstruction. The latter was used in the IPCC TAR, but by 2007 multiple 1,000-year reconstructions were used. As I said, RTFM.

        Read the original sources. Depend less on other people’s framing of the ‘issues’. Dare to be an original thinker, not just a blog parrot.

      • thickmudd says:

        Directly from the leaked climate emails, showing how poorly the Global Warmers documented their research:

        In April 2003, we requested from Mann the FTP location of the dataset used in MBH98. Mann advised me that he was unable to recall the location of this dataset and referred the request to Rutherford. Rutherford eventually directed us to a file (pcproxy.txt) located at a URL at Mann’s FTP site. In using this data file, we noticed numerous problems with it, not least with the principal component series. We sought specific confirmation from Mann that this dataset was the one used in MBH98; Mann said that he was too busy to respond to this or any other inquiry. Because of the many problems in this data set, we undertook a complete new re-collation of the data, using the list of data sources in the SI to MBH98 and using original archived versions wherever possible.

        After publication of McIntyre and McKitrick [2003], Mann said that the dataset at his FTP site to which we had been referred was an incorrect version of the data and that this version had been prepared especially for me; through a blog, he provided a new URL which he now claimed to contain the correct data set. The file creation date of the incorrect version was in 2002, long prior to my first request for data, clearly disproving his assertion that it was prepared in response to my request. Mann and/or Rutherford then deleted this incorrect version with its date evidence from his FTP site.

        It is false and misleading for Rutherford et al. to now allege that we used the wrong dataset. We used the dataset they directed us to at their FTP site.
    • thickmudd says:

      The so-called scientists who made that report showed a lack of science when they tried to hide their data from those who were interested in checking their claims. Hiding data because someone might prove you wrong is not science.
      And you are aware that computer models are not data. The leaked emails show they had to tweak the computer models to force them to show warming.
      And the computer models are poorly written and documented. That is not science.

      • Ron Broberg says:

        “Hide the decline” referred to a piece of cover art on a WMO chart. It had jack-sh* to do with the IPCC reports.

        I doubt that you have looked at any modeling code in your life, much less climate models. You are simply parroting lines fed to you.

      • thickmudd says:

        The leaked climate emails show the programmers saying the code they use is poorly documented. “In addition to e-mail messages, the roughly 3,600 leaked documents posted online include computer code and a description of how an unfortunate programmer named “Harry” — possibly the CRU’s Ian “Harry” Harris — was tasked with resuscitating and updating a key temperature database that proved to be problematic. Some excerpts from what appear to be his notes, emphasis added:

        I am seriously worried that our flagship gridded data product is produced by Delaunay triangulation – apparently linear as well. As far as I can see, this renders the station counts totally meaningless. It also means that we cannot say exactly how the gridded data is arrived at from a statistical perspective – since we’re using an off-the-shelf product that isn’t documented sufficiently to say that. Why this wasn’t coded up in Fortran I don’t know – time pressures perhaps? Was too much effort expended on homogenisation, that there wasn’t enough time to write a gridding procedure? Of course, it’s too late for me to fix it too. Meh.

        I am very sorry to report that the rest of the databases seem to be in nearly as poor a state as Australia was. There are hundreds if not thousands of pairs of dummy stations, one with no WMO and one with, usually overlapping and with the same station name and very similar coordinates. I know it could be old and new stations, but why such large overlaps if that’s the case? Aarrggghhh! There truly is no end in sight… So, we can have a proper result, but only by including a load of garbage!

        One thing that’s unsettling is that many of the assigned WMO codes for Canadian stations do not return any hits with a web search. Usually the country’s met office, or at least the Weather Underground, show up – but for these stations, nothing at all. Makes me wonder if these are long-discontinued, or were even invented somewhere other than Canada!

        Knowing how long it takes to debug this suite – the experiment endeth here. The option (like all the anomdtb options) is totally undocumented so we’ll never know what we lost. 22. Right, time to stop pussyfooting around the niceties of Tim’s labyrinthine software suites – let’s have a go at producing CRU TS 3.0! since failing to do that will be the definitive failure of the entire project.

        Ulp! I am seriously close to giving up, again. The history of this is so complex that I can’t get far enough into it before my head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions, such that I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more. So what the hell can I do about all these duplicate stations?…

        As the leaked messages, and especially the HARRY_READ_ME.txt file, found their way around technical circles, two things happened: first, programmers unaffiliated with East Anglia started taking a close look at the quality of the CRU’s code, and second, they began to feel sympathetic for anyone who had to spend three years (including working weekends) trying to make sense of code that appeared to be undocumented and buggy, while representing the core of CRU’s climate model.

        One programmer highlighted the error of relying on computer code that, if it generates an error message, continues as if nothing untoward ever occurred. Another debugged the code by pointing out why the output of a calculation that should always generate a positive number was incorrectly generating a negative one. A third concluded: “I feel for this guy. He’s obviously spent years trying to get data from undocumented and completely messy sources.”

        Programmer-written comments inserted into CRU’s Fortran code have drawn fire as well. One file says: “Apply a VERY ARTIFICAL correction for decline!!” and “APPLY ARTIFICIAL CORRECTION.” Another says: “Low pass filtering at century and longer time scales never gets rid of the trend – so eventually I start to scale down the 120-yr low pass time series to mimic the effect of removing/adding longer time scales!”
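The "linear Delaunay triangulation" gridding that Harry's notes complain about can be sketched in miniature. Inside each triangle of station locations, linear interpolation is just a barycentric-weighted average of the three corner stations, which is also why the notes say station counts become meaningless: only three stations ever contribute to a given point. This is an illustrative sketch with made-up station data, not CRU's actual code (which the notes describe as an undocumented off-the-shelf product).

```python
# Minimal sketch of linear (Delaunay-style) interpolation over one
# triangle of stations, via barycentric coordinates.

def barycentric_interp(p, tri, values):
    """Linearly interpolate `values` at the corners of triangle `tri`
    to point `p` using barycentric coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * values[0] + w2 * values[1] + w3 * values[2]

# Three hypothetical stations (x, y) with temperature anomalies:
stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
anoms = [0.2, 0.6, 0.4]

# At the centroid, each station gets weight 1/3, so the result is the
# plain average of the three corner values.
centroid = (1.0 / 3.0, 1.0 / 3.0)
print(round(barycentric_interp(centroid, stations, anoms), 4))
```

Whether 3 or 300 stations fall inside a region, each interpolated point still depends on at most three of them, so the gridded field carries no information about the underlying station density.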

      • thickmudd says:

        Here is an email from Phil Jones himself saying the code he used is undocumented.
        Original Filename: 1114607213.txt

        From: Phil Jones

        Date: Wed Apr 27 09:06:xxx xxxx xxxx

        Presumably you’ve seen all this – the forwarded email from Tim. I got this email from McIntyre a few days ago. As far as I’m concerned he has the data – sent ages ago. I’ll tell him this, but that’s all – no code. If I can find it, it is likely to be hundreds of lines of uncommented Fortran! I recall the program did a lot more than just average the series. I know why he can’t replicate the results early on – it is because there was a variance correction for fewer series.
        See you in Bern.

      • thickmudd says:

        From the leaked climate emails:

        “X-Mailer: QUALCOMM Windows Eudora Version
        Date: Tue, 26 Apr 2005 13:28:53 +0100
        To: Phil Jones, “Keith Briffa”
        From: Tim Osborn

        Keith and Phil,
        you both feature in the latest issue of CCNet:

        Steve Verdon, Outside the Beltway, 25 April 2005

        A new paper ([3] xxxx xxxx.pdf) from the St. Louis Federal Reserve Bank has an interesting paper on how important it is to archive not only the data but the code for empirical papers. While the article looks mainly at economic research, there is also a lesson to be drawn from this paper about the current state of research for global warming/climate change. One of the hallmarks of scientific research is that the results can be replicable. Without this, the results shouldn’t be considered valid, let alone used for making policy.

        Ideally, investigators should be willing to share their data and programs so as to encourage other investigators to replicate and/or expand on their results. Such behavior allows science to move forward in a Kuhn-style linear fashion, with each generation seeing further from the shoulders of the previous generation. At a minimum, the results of an endeavor – if it is to be labeled “scientific” – should be replicable, i.e., another researcher using the same methods should be able to reach the same result. In the case of applied economics using econometric software, this means that another researcher using the same data and the same computer software should achieve the same results.

        However, this is precisely the problem that Steven McIntyre and Ross McKitrick have run into since looking into the methodology used by Mann, Hughes and Bradley (1998) (MBH98), the paper that came up with the famous “hockey stick” temperature reconstructions. For example, this post here shows that McIntyre was prevented from accessing Mann’s FTP site. This is supposedly a public site where interested researchers can download not only the source code, but also the data. This kind of behavior by Mann et al. is simply unscientific and also rather suspicious. Why lock out a researcher who is trying to verify your results… do you have something to hide, professors Mann, Bradley and Hughes?

        Not only has this been a problem for McIntyre with regards to MBH98, but with other studies as well. This post at Climate Audit shows that this problem is actually quite serious.

        Crowley and Lowery (2000)
        After nearly a year and over 25 emails, Crowley said in mid-October that he has misplaced the original data and could only find transformed and smoothed versions. This makes proper data checking impossible, but I’m planning to do what I can with what he sent. Do I need to comment on my attitude to the original data being “misplaced”?

        Briffa et al. (2001)
        There is no listing of sites in the article or SI (despite JGR policies requiring citations be limited to publicly archived data). Briffa has refused to respond to any requests for data. None of these guys have the least interest in someone going through their data and seem to be hoping that the demands wither away. I don’t see how any policy reliance can be made on this paper with no available data.

        Esper et al. (2002)
        This paper is usually thought to show much more variation than the hockey stick. Esper has listed the sites used, but most of them are not archived. Esper has not responded to any requests for data.

        Jones and Mann (2003); Mann and Jones (2004)
        Phil Jones sent me data for these studies in July 2004, but did not have the weights used in the calculations, which Mann had. Jones thought that the weights did not matter, but I have found differently. I’ve tried a few times to get the weights, but so far have been unsuccessful. My surmise is that the weighting in these papers is based on correlations to local temperature, as opposed to MBH98-MBH99, where the weightings are based on correlations to the temperature PC1 (but this is just speculation right now). The papers do not describe the methods in sufficient detail to permit replication.

        Jacoby and d’Arrigo (northern treeline)
        I’ve got something quite interesting in progress here. If you look at the original 1989 paper, you will see that Jacoby “cherry-picked” the 10 “most temperature-sensitive” sites from 36 studied. I’ve done simulations to emulate cherry-picking from persistent red noise and consistently get hockey stick shaped series, with the Jacoby northern treeline reconstruction being indistinguishable from simulated hockey sticks. The other 26 sites have not been archived. I’ve written to Climatic Change to get them to intervene in getting the data. Jacoby has refused to provide the data. He says that his research is “mission-oriented” and, as an ex-marine, he is only interested in a “few good” series.

        Jacoby has also carried out updated studies on the Gasp”

      • Ron Broberg says:

        “The leaked climate emails show the programmers sent emails saying the code they use is poorly documented.”

        None of the stolen emails had any comments about computer models. So now you are playing ‘move the goalposts.’

        You make a comment. I point out how your comment is wrong. You point to some evidence that has little or nothing to do with your original comment. You do not have a working knowledge of either the subject matter (climate science) or even of the critical talking points. Study harder.

      • thickmudd says:
        2. After considerable searching, identified the latest database files for


        (yes.. that is a directory beginning with ‘+’!)

        3. Successfully ran anomdtb.f90 to produce anomaly files (as per item 7
        in the ‘_READ_ME.txt’ file). Had to make some changes to allow for the
        move back to alphas (different field length from the ‘wc -l’ command).

        4. Successfully ran the IDL regridding routine
        (why IDL?! Why not F90?!) to produce ‘.glo’ files.

        5. Currently trying to convert .glo files to .grim files so that we can
        compare with previous output. However the program suite headed by
        globulk.f90 is not playing nicely – problems with it expecting a defunct
        file system (all path widths were 80ch, have been globally changed to 160ch)
        and also no guidance on which reference files to choose. It also doesn’t
        seem to like files being in any directory other than the current one!!

        6. Temporarily abandoned 5., getting closer but there’s always another
        problem to be evaded. Instead, will try using rawtogrim.f90 to convert
        straight to GRIM. This will include non-land cells but for comparison
        purposes that shouldn’t be a big problem… [edit] noo, that’s not gonna
        work either, it asks for a ‘template grim filepath’, no idea what it wants
        (as usual) and a search for files with ‘grim’ or ‘template’ in them does
        not bear useful fruit. As per usual. Giving up on this approach altogether.

        7. Removed 4-line header from a couple of .glo files and loaded them into
        Matlab. Reshaped to 360r x 720c and plotted; looks OK for global temp
        (anomalies) data. Deduce that .glo files, after the header, contain data
        taken row-by-row starting with the Northernmost, and presented as ‘8E12.4’.
        The grid is from -180 to +180 rather than 0 to 360.
        This should allow us to deduce the meaning of the co-ordinate pairs used to
        describe each cell in a .grim file (we know the first number is the lon or
        column, the second the lat or row – but which way up are the latitudes? And
        where do the longitudes break?
        There is another problem: the values are anomalies, whereas the ‘public’
        .grim files are actual values. So Tim’s explanations (in _READ_ME.txt) are

        8. Had a hunt and found an identically-named temperature database file which
        did include normals lines at the start of every station. How handy – naming
        two different files with exactly the same name and relying on their location
        to differentiate! Aaarrgghh!! Re-ran anomdtb:
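The ".glo" layout deduced in item 7 above (a 4-line header, then Fortran ‘8E12.4’ records row by row from the northernmost latitude, 360 rows by 720 columns) can be sketched as a small reader. This is only what the notes infer about the format, not a specification; the demo file here is a made-up 2×4 grid so the sketch is self-checking.

```python
# Hedged sketch of reading a ".glo" file per the layout deduced above:
# skip a 4-line header, then parse fixed-width '8E12.4' records
# (8 values per line, 12 characters each) into an nrows x ncols grid,
# northernmost row first. Illustrative only.

def read_glo(lines, nrows=360, ncols=720, header=4, width=12):
    vals = []
    for line in lines[header:]:
        # Fixed-width parse: E12.4 fields may run together without spaces.
        row = line.rstrip("\n")
        for i in range(0, len(row), width):
            field = row[i:i + width].strip()
            if field:
                vals.append(float(field))
    assert len(vals) == nrows * ncols, "unexpected value count"
    # Reshape row by row (northernmost latitude first).
    return [vals[r * ncols:(r + 1) * ncols] for r in range(nrows)]

# Tiny demo: 2 rows x 4 columns, one 8E12.4 record after a 4-line
# header (a real file would be 360 x 720).
demo = ["header\n"] * 4 + ["".join("%12.4E" % v for v in range(1, 9)) + "\n"]
grid = read_glo(demo, nrows=2, ncols=4)
print(grid[0])  # [1.0, 2.0, 3.0, 4.0]
```

Note the fixed-width parse rather than a whitespace split: with the E12.4 format, adjacent fields can abut with no separating space, which is exactly the kind of detail an undocumented format forces you to rediscover.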

      • thickmudd says:
        ..not good! Tried recompiling for uealogin1.. AARGGHHH!!! Tim’s
        code is not ‘good’ enough for bloody Sun!! Pages of warnings and
        27 errors! (full results in ‘anomdtb.uealogin1.compile.results’).

      • thickmudd says:
        19. Here is a little puzzle. If the latest precipitation database file
        contained a fatal data error (see 17. above), then surely it has been
        altered since Tim last used it to produce the precipitation grids? But
        if that’s the case, why is it dated so early? Here are the dates:

        – directory date is 23 Dec 2003

        – directory date is 22 Jan 2004 (original date not preserved in zipped file)
        – internal (header) date is also ‘22.01.2004 at 17:57’

        So what’s going on? I don’t see how the ‘final’ precip file can have been
        produced from the ‘final’ precipitation database, even though the dates
        imply that. The obvious conclusion is that the precip file must have been
        produced before 23 Dec 2003, and then redated (to match others?) in Jan 04.

      • thickmudd says:
        So, we face a situation where some synthetics are built with 0.5-degree
        normals, and others are built with 2.5-degree normals. I can find no
        documentation of this. There are ‘*’ versions of the frs and rd0
        programs, both of which use 2.5-degree normals, however they are dated
        Jan 2004, and Tim’s Read_Me (which refers to the ‘*’ 0.5-degree
        versions) is dated end March 2004, so we have to assume these are his
        best suggestions.

      • thickmudd says:
        Bear in mind that there is no working synthetic method for cloud, because Mark New
        lost the coefficients file and never found it again (despite searching on tape
        archives at UEA) and never recreated it. This hasn’t mattered too much, because
        the synthetic cloud grids had not been discarded for 1901-95, and after 1995
        sunshine data is used instead of cloud data anyway.
