Whither now, Ernesto? Global warming, anyone?
By John D. Turner
2 Sep 2006

First, it was going to strike Mexico. Then it looked like Houston was under the gun. Day by day it slipped further east. Perhaps New Orleans would once again be a target; with the one-year anniversary of Hurricane Katrina approaching, the media salivated over that one. More opportunity to bash on Bush.

Then it was projected to cross Cuba, enter the Gulf of Mexico, and strike Florida from the west. But wait, then it was projected to cross Cuba, strike the Florida Keys, and run up the east coast of the United States, just offshore. Finally, it was expected to cross the Keys, move up through eastern Florida, exit into the Atlantic, and cross through North and South Carolina before proceeding up the eastern seaboard.

Additionally, it was fully expected to strengthen back to at least a category 1 storm before hitting Florida, then restrengthen over the Atlantic to category 1, possibly even category 2, before hitting South Carolina.

Early Wednesday morning, now tropical storm Ernesto “slammed” into south Florida packing sustained winds of 45 mph – barely a tropical storm. By noon it had been downgraded to a tropical depression. It did build again in the Atlantic before hitting South Carolina – but only to tropical storm strength, just shy of becoming a weak Category 1 hurricane. The jury is still out on the rest of the predictions, but it is worthy of note that the end of the storm’s projected path has for days been whipping around up north like a beheaded snake.

All this is based on computer modeling.

It turns out that there are multiple computer models, all of them predicting different results. I guess that if you generate enough results, one of them might turn out to be correct.

As you might surmise, this is not an exact science.

So I ask: if, with the current state of computer modeling and our knowledge of atmospheric processes, we can’t reliably predict the path of a major storm from day to day, why should I be confident that we can predict conditions for the entire planet fifty years or so in the future?

That’s what those concerned about global warming are purporting to do.

According to many on the left, it isn’t a question of “if” any more; Global Warming is scientific fact. Along with all the dire predictions that accompany it.

Based on computer models, of course.

Now, I happen to be a computer scientist. I have been working with computers and writing programs for over 20 years. I even have some computer modeling experience. I understand the principle of GIGO – Garbage In, Garbage Out.

Many people do not. For many, if a result is output by a computer, it must be correct. Computers are also convenient whipping boys. If we do happen to notice that a piece of output is incorrect, the usual response is that “the computer made a mistake”.

Guess what. Computers do not make mistakes. People make mistakes. In general, errors in computer output can be traced to two sources: errors in the data input to the computer (GIGO errors), or errors in the program code the computer is running.

Programming errors come in various flavors. One kind is when a programmer mistypes something in the program code. This is known as a syntax error. This type of error is similar to what you see when your word processor “flags” a word as being improperly spelled. In computer programs that are compiled, these errors are caught early on and do not make it into the final program.
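To make this concrete, here is a toy Python illustration of my own (nothing to do with any real weather code). The missing colon below is a syntax error, and the interpreter rejects the program before a single line of it runs, just like the spell checker flagging a misspelled word:

```python
# A syntax error caught before the code ever runs. The missing colon
# after the if-condition makes this snippet invalid Python, so the
# compile step refuses it outright.
broken = """
def classify(wind_speed):
    if wind_speed >= 74
        return "hurricane"
    return "tropical storm"
"""

try:
    compile(broken, "<demo>", "exec")
except SyntaxError as err:
    print(f"rejected at line {err.lineno}: {err.msg}")
```

In a compiled language the compiler would likewise reject the file before any executable was ever produced, which is why these errors rarely make it into a finished program.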

Sometimes, though, the programmer mistypes something “correctly”; that is, they put something into the program code that should not be there, but it does not trigger a syntax error, because it is syntactically valid. This is like mistyping a word in a document when the result happens to be a real word: the spell checker never notices that you spelled the wrong word correctly. This type of error is a semantic error. These errors can be hard to find because they trigger no complaint at compile time. They only show up when the code is executed, by producing incorrect results. The programmer then has to figure out exactly why the results are incorrect, by going through the code manually to find the error. For a long program with thousands of lines of code, this can be a time-consuming process.
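Again, a toy Python example of my own invention: the programmer meant to square the radius but typed something else that is perfectly legal code. Nothing complains; the answer is just wrong.

```python
# A semantic error: "mistyping something correctly." The programmer
# meant radius ** 2 (radius squared) but typed radius * 2 (radius
# doubled). The code compiles and runs; it just quietly lies.
import math

def circle_area(radius):
    return math.pi * radius * 2   # bug: should be radius ** 2

print(circle_area(10))  # about 62.8, instead of the correct ~314.2
```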

Another type of semantic error is a logic error. These happen during the coding process when a programmer simply codes the wrong thing. They meant to do one thing if a condition was true, for example, but instead did it when the condition was false. Or perhaps they set up a condition that for whatever reason will not actually occur because they misunderstood the data or requirements. As you might imagine, logic errors are also very hard to find.
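A toy illustration of my own (not from any real system): the comparison below is backwards, so the warning fires on warm days and stays silent during an actual freeze. Every line is legal, and the logic is simply wrong.

```python
# A logic error: syntactically and semantically valid code that tests
# the wrong condition. The intent was to warn when the temperature
# drops BELOW freezing; the inverted test warns on warm days instead.
def freeze_warning(temp_f):
    if temp_f > 32:               # bug: should be temp_f < 32
        return "freeze warning"
    return "no warning"

print(freeze_warning(20))  # "no warning" -- exactly when we needed one
print(freeze_warning(80))  # "freeze warning" on a balmy day
```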

Other types of errors occur when dealing with numbers, such as precision errors and rounding errors. These stem partly from the fact that not every number that can be represented exactly in base 10 can be represented exactly in a fixed number of base 2, or binary, digits, which is what a computer uses to do its calculations. Such errors, while individually tiny, can become quite large over the course of millions of calculations. Fortunately, there are ways to correct for them.
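You can see the binary-representation problem for yourself in a few lines of Python; the compensated-summation routine math.fsum is one example of those “ways to correct for them”:

```python
# 0.1 has no exact binary representation; the computer stores the
# nearest base-2 fraction. The tiny error shows up immediately and
# accumulates as you keep adding.
import math

print(0.1 + 0.2)              # 0.30000000000000004, not 0.3
print(sum([0.1] * 10))        # 0.9999999999999999, not 1.0

# Compensated summation tracks the lost low-order bits and corrects:
print(math.fsum([0.1] * 10))  # 1.0
```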

There are many other kinds of errors that can occur in programming. In fact, there are so many different ways to make mistakes writing a program that it is guaranteed that any program of significant size contains many such errors. A large number of them are mere annoyances. Others are more significant. Some lie in wait to ambush you, but if you never take the path they lie on, you will never encounter them. Others are drastic enough to crash the entire program, computer system, or sometimes even the entire computer network. I attended a lecture once where the speaker stated that based on the size of the program code, empirical evidence suggested that Microsoft Windows contained over 65,000 unresolved programming errors. And this was back in 1996! One wag noted that “if builders built houses the way programmers write software, then the first woodpecker that came around would destroy civilization”.

Then, there is another kind of error. I will call this one a “lack of knowledge” error. This type of error is much more profound, because it strikes at the basic reliability of the computer model itself. It follows from the fact that like any computer program, a computer model does exactly what you tell it to do; no more, no less. These models are deterministic. If you put in the same input data, you get the same output data. Were this not so, they would be of little value, since the output would be meaningless as a prediction tool.

A computer program is a set of instructions to the computer on how to solve a particular problem. It isn’t magic. If you do not know how to solve the problem manually, you cannot program a computer to solve the problem for you. Only in Hollywood scriptwriters’ fervid imaginations can you sit in front of a computer, type in any question under the sun, and get a meaningful answer.

This means that, aside from all the potential errors that can occur during the writing of a computer model, you can only model what you know or what you think you know. As your knowledge of the subject changes, your computer model has to change as well. If there are things that affect what you are modeling of which you are unaware, then those things will not appear in your model, since if you don’t know what they are, you can’t program them in. Likewise, if there are things you think you understand, but you really don’t, your model will reflect your understanding of the process, not physical reality.

Obviously, if you can’t program what you don’t know about, your model will generate incorrect results whenever something you don’t know about exists in the real world and influences the process you are trying to model.

Finally, there is a whole class of what we call “intractable problems”. These are problems that have solutions, but which we cannot practically solve with the current state of the art in digital computer systems. Usually this is because the problem is extremely complex, and the time it would take to calculate the result, while finite, is so long as to make the entire exercise practically, economically, or physically infeasible.

Guess what? Global weather conditions tend to fall under the category of extremely complex and incompletely understood. There are literally thousands, perhaps millions of variables that affect global weather patterns; complex interactions that are incompletely understood or totally unknown. There are cycles that occur within our biosphere which we have not yet observed, primarily because 1) we don’t know what we are looking for, and 2) we haven’t been observing long enough yet to actually perceive the cycle.

How do we handle intractable problems? Two ways. First, ignore them until the technology available allows them to be addressed. Second, simplify the problem to the point where we can deal with it using available technology. Simplifying the problem means that while we won’t get the exact answer, we will get one that we believe is close enough that we can live with it for the time being.

This technique is used frequently in engineering. A friend of mine is a mathematician. This used to drive him crazy in college. I would be working on a diffy-Q problem (solving differential equations) and he would express an interest in what I was doing. I would show him the problem, and the response was typically “hey JT, that particular problem is unsolvable”. My response would be, no it isn’t. I would proceed to “solve” it by throwing away terms that were insignificant to the final solution, and then use approximations to obtain the final result. This, of course, would horrify my friend, as it was not “exact”. True, but it was “good enough” for the purpose of solving that particular engineering problem, where exactness down to the gnat’s whisker wasn’t required. Practical applications can be quite messy compared to theoretical results. You enjoy the sausage much more if you don’t watch it being made.
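For the curious, here is what that kind of term-throwing looks like in a few lines of Python. The classic example is the small-angle approximation sin(x) ≈ x, which turns the hard (nonlinear) pendulum equation into an easy (linear) one. The approximation is excellent at small angles and degrades as you push it:

```python
# How good is the "throw away the hard part" approximation sin(x) ~ x?
# Measure the relative error at a few angles.
import math

for degrees in (1, 5, 10, 30):
    x = math.radians(degrees)
    error = abs(math.sin(x) - x) / math.sin(x)
    print(f"{degrees:3d} deg: relative error {error:.4%}")
```

At one degree the error is a few thousandths of a percent, which is good enough for most engineering; by thirty degrees it has grown to several percent, and the simplification starts to bite.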

This is fine in an engineering application, where you are working with physical quantities whose properties are well understood. The calculations show that, despite the simplification you performed, the beam will indeed support the load required, and you can always over-engineer it with a safety factor if desired.

It’s different when, in addition to simplifying the problem, you don’t even understand all the parameters of the problem in the first place, or how simplifying things will affect the interactions between the variables. You are guessing. And that’s just the model. What about the input data?

Have I collected enough data? Have I collected the proper data? Is my data collection scheme valid? Have I collected data for a long enough period to assure a proper baseline? Are the assumptions I made concerning my input data correct? Have I addressed the issue of “bad” data? Did a sensor malfunction? Was there something occurring at the collection site while I was collecting that was out of the ordinary? Is the collection site I selected the proper one for collecting the type of data I am trying to collect, or did I accidentally (or on purpose) site it in a location where some unrelated external factor is going to interfere with my observations? (For example, did I park my SUV with the motor running over the sensor I am using to collect CO2 emissions downwind from that electrical plant?)
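In code, that kind of screening might look like the sketch below. Everything here (the field names, the thresholds, the flags) is hypothetical, purely to illustrate the idea; real quality-control pipelines are far more elaborate. But readings that slip past checks like these become exactly the garbage in GIGO.

```python
# A minimal sanity-check on one hypothetical sensor reading.
# Field names and thresholds are invented for illustration only.
def screen_reading(reading):
    """Return a list of reasons to distrust one sensor reading."""
    problems = []
    if reading.get("temp_f") is None:
        problems.append("sensor reported no value (malfunction?)")
    elif not -80.0 <= reading["temp_f"] <= 140.0:
        problems.append("temperature outside physically plausible range")
    if reading.get("site_flags"):
        problems.append(f"site anomalies noted: {reading['site_flags']}")
    return problems

print(screen_reading({"temp_f": 212.0,
                      "site_flags": ["idling SUV parked over sensor"]}))
```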

The principle of GIGO applies here as well. And if I put bad data (or even good data) through an imperfect model, how can I expect the result to be a valid prediction of what is going to occur 10, 50, or 100 years in the future?

The doomsayers predicting global warming and dire consequences to follow cite “computer models” as the basis for their conclusions. You never hear anything about the models themselves. I for one would like to know a bit more about these models. Why? Because the decisions that are apt to be made based on the information these models are outputting will have a profound effect on me, my children, and their children for years to come. Because while bad decisions can certainly be made given good data, it is unlikely in the extreme that good decisions will be made given bad data.

If the models reflect reality, then I should be able to take past data, run them through the models, and have a result that reflects reality today. Of course, there is a problem here, in that the data I need to feed my models may not be available, because we weren’t collecting the necessary data back then. This means that not only do I have the problem of all the potential modeling errors enumerated above, but I also really have no way to validate my model in the first place. I can only feed it data and accept on faith that the results it is outputting are correct.
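The test I am describing is sometimes called “hindcasting”, and in miniature it looks like this (the “model” and the numbers below are made up for illustration; real climate models are vastly more complicated, which is precisely the point):

```python
# Hindcasting in miniature: fit the "model" on data up to a cutoff,
# predict the next step, and compare against what actually happened.
# The model and the observations are hypothetical stand-ins.
def naive_trend_model(history):
    """Predict the next value by extending the last observed change."""
    return history[-1] + (history[-1] - history[-2])

observed = [14.1, 14.2, 14.2, 14.4, 14.5]   # invented "temperatures"
cutoff = 4
prediction = naive_trend_model(observed[:cutoff])
actual = observed[cutoff]
print(f"predicted {prediction:.1f}, observed {actual:.1f}")
```

If we never collected the historical data this check needs, the check cannot be run, and we are back to taking the model’s output on faith.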

Of course, if I have fine-tuned the model to produce the results that I expected to see in the first place, then I probably don’t really have any trouble believing what it is telling me, do I?

Actually, the people predicting global warming are probably correct, just as the ones predicting global cooling 20-30 years ago were. The point is, we don’t live in a steady-state environment. The planet has experienced periods of global warming and cooling in the past, some of them quite rapid, and will most probably continue to do so in the future. Despite what we like to think about our abilities to destroy the planet (and we can make some pretty awful messes), we are small potatoes when it comes to the forces of nature. A single volcano can spew out in hours what it would take decades for us to put into the atmosphere with our SUVs. And so far, our ability to predict, much less control or mitigate, the forces of nature is pretty poor.

Take a look at this link for a quick synopsis of what sort of climatic change has occurred on our planet over just the last 110,000 years or so; a mere drop in the bucket compared to its total age. And guess what? Unless our caveman ancestors were tooling around in their cars and trucks back then, they didn’t have much to do with it.

Please note: the article the link above points to does contain the obligatory politically-correct warning about how our activity might trigger some catastrophic climate change due to the “large quantities” of greenhouse gases we are putting into the atmosphere. My purpose for linking to it, however, is to illustrate the past history of climatic change, over which we had no influence. Likewise, future climate change is not proof of a direct causal link to current human activity, nor is changing our activity a guarantor that future climate change will not occur.

The only thing that is certain is that change is certain. Steady-state climates only exist in air-conditioned spaces. And not even then if the electricity fails.

And please, don’t believe everything that comes out of a computer. Computers are tools, not oracles (the name of a certain database manufacturer notwithstanding.) Do not blindly accept that because someone claims that a computer model predicts something that you can take it to the bank. Is your weatherman infallible? Mine sure isn’t!

And above all, use your brain! That’s what it’s there for, after all. If something doesn’t pass the smell test, then it’s probably a good idea to question it, at least until you can do enough research to settle the issue in your mind one way or the other. And even after it’s settled, keep an open mind to changes based on new information. Believing the Earth to be flat, lacking evidence to the contrary, makes you ignorant. Persisting in believing the Earth is flat, after it can be demonstrated beyond a reasonable doubt that it is round, does not make it flat; it just makes you look stupid. Ignorance is curable. Stupidity is not.