Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Wednesday, August 3, 2011

The myth of certainty

Tropical Storm Emily is currently spinning in the Atlantic Ocean near the Grenadines, and in a few days may be a threat to the coastal United States.  The various computer models used to predict the formation and movement of storms show an uncertain forecast, both in trajectory and strengthening; landfall is predicted to be anywhere between North Carolina and south Florida, and in fact some models show it veering off into the open ocean and not hitting the mainland at all.  And depending on its path, it could weaken (especially if it makes a direct hit on the Dominican Republic) or strengthen (if it lingers over the warm waters of the Caribbean).

For some people, this kind of uncertainty is distressing.  A commenter on an online news story about Emily posted, "I should become a weather forecaster.  It's the only job where you can admit that you are as likely to be right as a flip of a coin (50% chance of rain) or talk on and on about the fact that you really don't know where a hurricane is going to go, and you still get paid."

Meteorology is especially open to these kinds of criticisms -- despite vast improvements in weather and climate modeling, the Earth's weather is a tremendously complex system, sensitive to a large number of initial conditions, and models are still fraught with inaccuracies.  However, you hear the same kind of accusations leveled against science in general.  I've had students ask me why we are bothering to learn science "when it could all be proven wrong ten years from now."  The findings of nutrition scientists are ridiculed as summing up to "everything you eat can kill you."  Evolutionary biologists are dismissed as not knowing what they're doing when a new discovery changes our understanding of the relationships between prehistoric species.  Physicists, especially those who study quantum phenomena, get it the worst; their models, so counterintuitive to what we see in the macroscopic world, have generated comments such as the one I saw appended to an article on the Large Hadron Collider, that "these guys spend billions of taxpayer dollars to play around and then write science fiction."

All of this comes, I think, from three problems with the public perception of science.

The first is its portrayal in the media, an issue with which I dealt in a recent post, and which I will not go into any further here.

The second is how science is taught in public school.  It is regrettably uncommon to see science taught as a process: as a cumulative, and changing, way of understanding based upon the total mass of data we have at present.  Too often, science is taught as lists of vocabulary words and mathematical equations -- neither of which portrays science accurately, as a fluid, responsive way of modeling the world.  Most people, therefore, grow up with the idea that scientific understanding shouldn't change, any more than the definition of "dog" should change, or the solution to an algebraic equation should change.

The third reason, however, is the one I want to look at more carefully.  It's the myth that science should provide certainty.  The resentment of people against weather forecasting comes, I think, from the idea that knowledge should be certain.  You either know something, or you don't, right?  Either Tropical Storm Emily will hit Charleston, South Carolina, or else it won't; and if you're smart enough, you should be able to figure that out.  And if you meteorologists can't figure that out, then what the hell are we paying you for?

It's this attitude that generates my student's frustration, that science could change enough that our current textbooks could be entirely wrong ten years from now.  And this brings me to the crux of the matter, which is that people don't understand the idea of "levels of confidence."

How confident are scientists in various models or theories?  Well, it varies, and it's not an either/or matter (either it's all right, or it's all wrong).  Some models have very high levels of confidence.  The atomic theory (the basis of chemistry) and evolutionary theory (the basis of biology) are supported by such vast mountains of data that their likelihood of being substantially wrong is nearly zero.  Any changes to be made to either of those models will be at the level of details.  Other models, such as climate modeling and weather forecasting, are still subject to considerable uncertainty even as to the rules by which the system interacts and responds; predictions made here are made with less confidence, and the rules of the science could well change as we gather more data.  Finally, some models, for example string theory, are still only interesting proposals, and there is not nearly enough data yet for a determination to be made.  In ten years, it could be that physics textbooks will include whole chapters on string theory and the studies that validate it, or it might have gone the way of the ether and be relegated to the scrap pile of ideas that went unsupported by the evidence.  It's simply too early to tell.

The problem is, that's not enough for a lot of people.  They want certainty, as if that's honestly even possible.  To them, even the uncertainties inherent in the best-supported models are unacceptable; if there are any questions left, then it means that "scientists don't really know."  And for the models lower on the confidence-level scale, the whole thing appears to be nothing more than guesswork.  Never mind that our improved ability to forecast hurricane trajectories has saved thousands of lives -- compare our current knowledge of storm tracks to what happened in Galveston in 1900, when a hurricane barreled into the coast, seemingly out of nowhere, costing more than 8,000 lives.

Uncertainty at some level is built into science as a way of knowing; there's no escaping the fact that new data can trash old theories.  But "uncertainty" doesn't imply that scientists don't know what they're doing, or that tomorrow we'll be throwing away all the chemistry texts because they suddenly decided that the alchemists were right, after all.  As more data is collected, and models and theories are refined, the uncertainty diminishes.  And even though it can never reach zero, it can reach low enough levels that a model becomes "robust" -- able to make accurate predictions in almost all cases.

And even if meteorology hasn't quite gotten there yet, it's still a damn sight better than it was a hundred years ago, when hurricanes could hit coastlines before warnings could be issued.  The people who believe in the myth of certainty in science might do well to consider the difference between our understanding now and our understanding a century ago -- before they make proclamations about scientists not deserving to get paid for what they do.

1 comment:

  1. A few important points to remember...First, as you pointed out, at least climatologists are able to make modeled predictions as to a hurricane's path, rather than not detecting them at all. Second, part of the uncertainty is intentional, as it comes from the results of multiple working models that are based on differing assumptions about natural mechanisms and factors. Such intentional uncertainty is a good thing in that it keeps climate scientists from falling into the trap of being biased by one model. Third, people often voice misguided complaints about climate prediction when, in fact, their complaint is actually about the shortcomings of political and civil responses to climate events.