Many of the comments I've seen "debunking" the science behind climate change are based on criticism of "computer models." Such people need to stay off of bridges and out of buildings, cars, ships, and airplanes. All are now designed using computer models. So, just what is a computer model, what can one do, when is one useful, and how can one go wrong?
Let's start with "what is a model?" When a net force acts on an object, that object accelerates. We learn in high school (and I've used repeatedly in this blog) that "F=ma." This constitutes, in a broad sense, a model. After all, the universe is not a calculator and F=ma is a synthesis of the way people (starting before Newton) believe material objects behave. It's been found to be useful and successful but may need to be modified, for example, in situations where the general theory of relativity is applicable.
As it happens, simulations using this model can be run with pencil and paper, a slide rule, or a calculator. In any case, the equation takes our best understanding of the essentials of a physical principle and, given specific input, will provide predictions of the output. But here's the point: science is about modeling. The world is too complex to track and measure every degree of freedom.
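To make "given specific input, will provide predictions of the output" concrete, here is a minimal sketch (with made-up numbers, not from any post of mine) of using F = ma as a predictive model: apply a constant net force to an object and step the motion forward in time.

```python
# Sketch: F = m*a as a predictive model. Hypothetical numbers throughout.

def simulate(mass, force, dt, steps):
    """Return (position, velocity) after `steps` Euler steps under constant net force."""
    x, v = 0.0, 0.0
    a = force / mass          # the model itself: F = m*a, so a = F/m
    for _ in range(steps):
        v += a * dt           # update velocity from acceleration
        x += v * dt           # update position from velocity
    return x, v

# Example: a 1000 kg car under a 2000 N net force, stepped at 0.01 s for 10 s.
x, v = simulate(mass=1000.0, force=2000.0, dt=0.01, steps=1000)
# The analytic answers are v = a*t = 20 m/s and x = (1/2)*a*t^2 = 100 m;
# the numerical result lands very close, and closer still as dt shrinks.
```

The same calculation could be done by hand; the computer only makes the bookkeeping fast.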
The spreadsheet I used to analyze acceleration is also a model. Again, it involves assumptions, separation of what I believed to be essential vs. non-essential parameters, measurements, and estimates. Like all models, it's subject to error if I've made a mistake in any of these components.
How do I check? In such a simple model, it's fairly straightforward. Does it make sense? How do the orders of magnitude compare? Does it provide reasonable numbers in limiting cases? Does it make predictions that match measured results? In my case, I believe the answers to these questions are "yes, to within the accuracy that I can measure."
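Those checks can themselves be written down. Here is a sketch, applied to a hypothetical one-formula model (terminal velocity under air drag, with illustrative numbers): each assertion is one of the questions above turned into code.

```python
# Sketch of the sanity checks above, for a toy model of terminal velocity:
# v_t = sqrt(2*m*g / (rho*Cd*A)). All numbers are illustrative.
import math

def terminal_velocity(m, g=9.81, rho=1.2, Cd=0.3, A=2.2):
    return math.sqrt(2 * m * g / (rho * Cd * A))

v = terminal_velocity(m=1500.0)  # a ~1500 kg object

# Does it make sense? A heavier object should have a higher terminal velocity.
assert terminal_velocity(3000.0) > terminal_velocity(1500.0)
# Order of magnitude: tens to hundreds of m/s, not millimeters or kilometers per second.
assert 10 < v < 1000
# Limiting case: a huge drag area should drive terminal velocity toward zero.
assert terminal_velocity(1500.0, A=1e6) < 1.0
```

The final check, comparison against measured results, is the one that can't be done on paper alone.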
So what about computer modeling of climate? The models are built by doing what I did for the acceleration of my car, i.e., culling non-essential (or non-measurable) parameters, applying basic physical principles (Newton's Laws, conservation laws, thermodynamic laws, transport and transfer laws, etc.) to initial conditions, and evaluating the output. Like my model, it's an iterative process: the model is tested against input conditions for which the output is known, and adjustments are made where required.
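That test-and-adjust loop can be sketched in a few lines. This is a deliberately tiny, hypothetical one-parameter example (not how a climate model is actually tuned): a parameter k is adjusted until the model reproduces a known observation, and only then is the model used to predict new cases.

```python
# Sketch of calibrating a model against known conditions. Toy example:
# a one-parameter model, tuned by bisection until it matches an observation.

def model(x, k):
    return k * x * x          # toy model: output grows quadratically with input

observed_x, observed_y = 4.0, 8.0   # the "known initial and final conditions"

# Iteratively adjust k: narrow the interval until the model matches the observation.
lo, hi = 0.0, 10.0
for _ in range(60):
    k = (lo + hi) / 2
    if model(observed_x, k) < observed_y:
        lo = k
    else:
        hi = k

# The tuned model reproduces the known case...
assert abs(model(observed_x, k) - observed_y) < 1e-6
# ...and can then be run forward on inputs it has never seen.
prediction = model(6.0, k)
```

Real climate models have vastly more parameters and are constrained by physics rather than curve-fitting alone, but the logic of testing against known outcomes is the same.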
Of course, the more assumptions included and the larger the input data set, the more complex (and potentially, though not necessarily, more accurate) the model becomes. The so-called "global circulation models" (GCMs) are quite complex. But as with any model, confidence in their accuracy is gained by comparison with known initial and final conditions. Here is a summary of successes of the GCMs.
The point here is that naive criticism of the process of modeling is completely misguided. Such critics use the results of successful "computer modeling" every day. If one wishes to criticize the models, it must be based on the factors above: wrong choice of parameters, inaccurate measurement of initial conditions, and so on. "How can we believe a computer model?" is a question indicative of blissful ignorance. As stated (and demonstrated) in the post cited above, there are improvements to be made, but the current state of the art appears to be very good. And when it comes to physics, models are all we have.