UPDATE: Stewart, in the comments, makes an excellent point:
“There is a great analogy with the development of hydrological models in the 1960s (because we could automate computation) – in 2013, we are still unable to simulate process accurately – it doesn’t stop us from building the models with increasing complexity which many then blindly believe however the programmer has decided to represent individual processes…”
In 1979, personal computers looked like the TI 99: a machine with 16k of storage.
In 2013, you carry around a supercomputer in your pocket (a smartphone), with the processing power of a warehouse full of TI 99s, and millions of times the 16k storage capacity.
Such is the speed of progress in computer technology. How has climate science fared by comparison?
In climate, the one number that really matters is the climate sensitivity: the equilibrium warming expected from a doubling of atmospheric CO2. Normally, over a period of years, greater understanding, better modelling and greater computing power reduce the margin of error as a theory becomes more finely tuned.
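To put that number in arithmetic terms: under the usual assumption that CO2 forcing grows logarithmically with concentration, a sensitivity of S degrees per doubling implies a warming of S × log2(C/C0) for any rise from C0 to C. Here is a minimal sketch of that relationship (the `warming` helper is purely illustrative):

```python
import math

# A minimal sketch, assuming the standard logarithmic relationship
# between CO2 concentration and forcing. "Sensitivity" here means the
# equilibrium warming per doubling of CO2, as used in this post.
def warming(sensitivity_per_doubling, c0_ppm, c_ppm):
    """Warming implied by a given sensitivity for a rise from c0 to c."""
    return sensitivity_per_doubling * math.log2(c_ppm / c0_ppm)

# One doubling (e.g. ~280 ppm pre-industrial to 560 ppm) spans the
# whole 1.5-4.5 C range quoted since 1979: a factor-of-three spread.
for s in (1.5, 3.0, 4.5):
    print(f"S = {s:.1f} C/doubling -> {warming(s, 280, 560):.1f} C of warming")
```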
So how has the IPCC done, after 34 years and billions of taxpayer dollars? The following plot shows the range of climate sensitivity since the Charney Report of 1979, and then through the IPCC’s FAR, SAR, TAR, AR4 and AR5:
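A rough way to reproduce such a plot is to take the likely range published in each report and draw the ranges side by side. The values below are my reading of those reports (roughly 1.5 to 4.5 °C throughout, with AR4 narrowing the lower bound to 2 °C), so treat them as assumptions rather than the original figure’s data:

```python
import matplotlib.pyplot as plt

# Sketch of the sensitivity-range plot described above. The low/high
# values are the likely ranges (C per doubling of CO2) as read from
# each assessment -- assumptions, not data from the original figure.
reports = ["Charney\n1979", "FAR\n1990", "SAR\n1995",
           "TAR\n2001", "AR4\n2007", "AR5\n2013"]
low  = [1.5, 1.5, 1.5, 1.5, 2.0, 1.5]
high = [4.5, 4.5, 4.5, 4.5, 4.5, 4.5]

fig, ax = plt.subplots()
# Draw each range as a floating bar from its low to its high estimate.
ax.bar(reports, [h - l for l, h in zip(low, high)], bottom=low, width=0.5)
ax.set_ylabel("Climate sensitivity (C per CO2 doubling)")
ax.set_title("Published sensitivity ranges, 1979-2013")
plt.tight_layout()
plt.show()
```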
As you can see, despite a slight narrowing of the range in AR4, the precision of the sensitivity estimate hasn’t improved at all from 1979 to today. Not one bit. Nada. Zip. Zilch. Zero. Despite billions of dollars of taxpayers’ hard-earned cash, thousands of scientists and years of research, the entire climate science community has failed to improve on the original estimate of climate sensitivity made 34 years ago.
Prof Nir Shaviv writes:
if the basic premises of a theory are wrong, then there is no improved agreement as more data is collected. In fact, it is usually the opposite that takes place, the disagreement increases. In other words, the above behavior reflects the fact that the IPCC and alike are captives of a wrong conception.
Full story here.
(h/t Lubos)