Complexity, Causation, and Crucial Failures of Science
The confusion of correlation with causation is a common mistake among journalists, celebrities, academics, and political activists -- not to mention ordinary people. It is difficult to blame anyone for making this mistake, since modern media -- even much of "scientific media" -- is drowning in this error.
If you do not have a grip on the distinction between correlation and causation, then you do not have a prayer of understanding the deeper issues that will be touched on here. Therefore, we will take a look at "Hill's Criteria of Causation," which are applied to possible causal links in the field of medicine and public health.
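To make the distinction concrete before walking through the criteria, here is a minimal sketch (in Python, with invented numbers) of how a hidden common cause can manufacture a strong correlation between two variables, neither of which causes the other:

```python
# A hidden common cause Z drives both A and B; A and B end up strongly
# correlated even though neither causes the other. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)             # unobserved confounder
a = 2.0 * z + rng.normal(size=n)   # A depends on Z, not on B
b = 1.5 * z + rng.normal(size=n)   # B depends on Z, not on A

print("corr(A, B):", round(np.corrcoef(a, b)[0, 1], 2))   # strong
# Subtract the confounder's contribution and the correlation vanishes:
print("corr(A, B | Z):", round(np.corrcoef(a - 2.0 * z, b - 1.5 * z)[0, 1], 2))
```

Remove the confounder's contribution and the "relationship" evaporates -- which is exactly what naive correlation-hunting cannot tell you.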
These criteria are basic material that most basic and clinical scientists and physicians studied in the early stages of their training. But to judge by what they write, there is little evidence that many science journalists have given them any thought.
1. Temporal Relationship:
Exposure always precedes the outcome. If factor "A" is believed to cause a disease, then it is clear that factor "A" must necessarily always precede the occurrence of the disease. This is the only absolutely essential criterion. This criterion negates the validity of all functional explanations used in the social sciences, including the functionalist explanations that dominated British social anthropology for so many years and the ecological functionalism that pervades much American cultural ecology.
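Temporality is the one criterion that can at least be checked mechanically. A toy sketch, assuming hypothetical cohort records of exposure and outcome dates:

```python
# Temporality check on hypothetical cohort records: exposure must precede outcome.
from datetime import date

records = [  # (exposure_date, outcome_date), all invented
    (date(2010, 3, 1), date(2014, 6, 9)),
    (date(2011, 7, 15), date(2013, 1, 2)),
    (date(2015, 5, 20), date(2012, 8, 30)),  # exposure after outcome: violation
]

violations = [i for i, (exposure, outcome) in enumerate(records) if exposure >= outcome]
print("temporality holds:", not violations)
print("violating records:", violations)
```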
2. Strength:
This is defined by the size of the association as measured by appropriate statistical tests. The stronger the association, the more likely it is that the relation of "A" to "B" is causal. For example, the more highly correlated hypertension is with a high sodium diet, the stronger is the relation between sodium and hypertension. Similarly, the higher the correlation between patrilocal residence and the practice of male circumcision, the stronger is the relation between the two social practices.
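As a rough illustration of "strength", here is a sketch using simulated stand-in data (the numbers are invented, not real epidemiology) that measures the size of an association with a standard statistical test:

```python
# Simulated stand-in data: hypothetical sodium intake vs. systolic blood
# pressure. "Strength" is read off the size of the measured association.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sodium = rng.normal(3.5, 1.0, size=500)               # invented daily intake (g)
bp = 110 + 4.0 * sodium + rng.normal(0, 8, size=500)  # invented systolic BP (mmHg)

r, p = stats.pearsonr(sodium, bp)
print(f"Pearson r = {r:.2f}, p = {p:.1e}")  # larger |r| -> stronger association
```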
3. Dose-Response Relationship:
An increasing amount of exposure increases the risk. If a dose-response relationship is present, it is strong evidence for a causal relationship. However, as with specificity (see below), the absence of a dose-response relationship does not rule out a causal relationship: a threshold may exist above which the relationship develops. At the same time, if a specific factor is the cause of a disease, the incidence of the disease should decline when exposure to the factor is reduced or eliminated. An anthropological example of this would be the relationship between population growth and agricultural intensification. If population growth is a cause of agricultural intensification, then an increase in the size of a population within a given area should result in a commensurate increase in the amount of energy and resources invested in agricultural production. Conversely, when a population decrease occurs, we should see a commensurate reduction in the investment of energy and resources per acre. This is precisely what happened in Europe before and after the Black Death. The same reasoning can be applied to global temperatures: if increasing levels of CO2 in the atmosphere cause increasing global temperatures, then, other things being equal, we should see global temperatures rise and fall commensurately as atmospheric CO2 levels rise and fall.
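A dose-response check can be sketched in a few lines. The exposure levels and incidence rates below are hypothetical, chosen only to show the shape of the test:

```python
# Hypothetical exposure levels and disease rates, chosen to be monotonic.
from scipy import stats

dose = [0, 1, 2, 3, 4, 5]
incidence = [0.02, 0.03, 0.05, 0.09, 0.14, 0.21]

rho, p = stats.spearmanr(dose, incidence)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
# rho near +1 suggests a monotonic dose-response; per the text, its absence
# (e.g., a threshold effect) still would not rule out causation.
```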
4. Consistency:
The association is consistent when results are replicated in studies in different settings using different methods. That is, if a relationship is causal, we would expect to find it consistently in different studies and among different populations. This is why numerous studies have to be done before meaningful statements can be made about the causal relationship between two or more factors. For example, it required thousands of highly technical studies of the relationship between cigarette smoking and cancer before a definitive conclusion could be drawn that cigarette smoking increases the risk of (but does not cause) cancer. Similarly, it would require numerous studies of the difference between male and female performance of specific behaviors, by a number of different researchers and under a variety of different circumstances, before a conclusion could be drawn regarding whether a gender difference exists in the performance of such behaviors.
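Consistency across studies is commonly summarized with a meta-analysis. Here is a minimal fixed-effect (inverse-variance) pooling sketch over hypothetical per-study estimates -- not any published dataset:

```python
# Fixed-effect (inverse-variance) pooling of hypothetical per-study effects.
import numpy as np

effects = np.array([0.42, 0.38, 0.51, 0.45])  # invented log relative risks
ses = np.array([0.10, 0.12, 0.15, 0.09])      # invented standard errors

weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect: {pooled:.3f}, 95% CI: "
      f"({pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f})")
```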
5. Plausibility:
The association agrees with currently accepted understanding of pathological processes. In other words, there needs to be some theoretical basis for positing an association between a vector and disease, or one social phenomenon and another. One may, by chance, discover a correlation between the price of bananas and the election of dog catchers in a particular community, but there is not likely to be any logical connection between the two phenomena. On the other hand, the discovery of a correlation between population growth and the incidence of warfare among Yanomamo villages would fit well with ecological theories of conflict under conditions of increasing competition over resources. At the same time, research that disagrees with established theory is not necessarily false; it may, in fact, force a reconsideration of accepted beliefs and principles.
6. Consideration of Alternate Explanations:
In judging whether a reported association is causal, it is necessary to determine the extent to which researchers have taken other possible explanations into account and have effectively ruled out such alternate explanations. In other words, it is always necessary to consider multiple hypotheses before making conclusions about the causal relationship between any two items under investigation.
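One standard way to probe an alternate explanation is to stratify by a suspected confounder. The toy table below (numbers patterned on the classic Simpson's-paradox examples, not real data) shows a treatment that looks worse overall yet better within every stratum:

```python
# Toy stratified table: (recovered, total) per treatment group and severity.
# Numbers patterned on classic Simpson's-paradox examples, not real data.
table = {
    ("treated", "mild"): (81, 87),
    ("untreated", "mild"): (234, 270),
    ("treated", "severe"): (192, 263),
    ("untreated", "severe"): (55, 80),
}

for stratum in ("mild", "severe"):
    for group in ("treated", "untreated"):
        r, n = table[(group, stratum)]
        print(f"{stratum:>6} {group:>9}: {r / n:.2f}")  # treated wins both strata

for group in ("treated", "untreated"):
    r = sum(table[(group, s)][0] for s in ("mild", "severe"))
    n = sum(table[(group, s)][1] for s in ("mild", "severe"))
    print(f"overall {group:>9}: {r / n:.2f}")           # yet loses overall
```

Which comparison is the causal one depends entirely on whether severity is a confounder -- a question the numbers alone cannot settle.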
7. Experiment:
The condition can be altered (prevented or ameliorated) by an appropriate experimental regimen.
8. Specificity:
This is established when a single putative cause produces a specific effect. It is considered by some to be the weakest of all the criteria; the diseases attributed to cigarette smoking, for example, do not meet it. When specificity of an association is found, it provides additional support for a causal relationship. However, absence of specificity in no way negates a causal relationship. Because outcomes (be they the spread of a disease, the incidence of a specific human social behavior, or changes in global temperature) are likely to have multiple factors influencing them, it is highly unlikely that we will find a one-to-one cause-effect relationship between two phenomena. Causality is most often multiple. Therefore, it is necessary to examine specific causal relationships within a larger systemic perspective.
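Because causality is most often multiple, a single-factor correlation is usually the wrong tool. Here is a sketch, with invented factor names and effect sizes, of recovering several simultaneous influences at once via ordinary least squares:

```python
# An outcome driven by several factors at once; ordinary least squares
# recovers all three effects. Factor names and coefficients are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
smoking = rng.normal(size=n)
diet = rng.normal(size=n)
exercise = rng.normal(size=n)
risk = 0.6 * smoking + 0.3 * diet - 0.4 * exercise + rng.normal(0, 0.5, size=n)

X = np.column_stack([smoking, diet, exercise, np.ones(n)])
coefs, *_ = np.linalg.lstsq(X, risk, rcond=None)
print("estimated effects:", np.round(coefs[:3], 2))  # roughly [0.6, 0.3, -0.4]
```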
9. Coherence:
The association should be compatible with existing theory and knowledge. In other words, it is necessary to evaluate claims of causality within the context of the current state of knowledge within a given field and in related fields. What do we have to sacrifice about what we currently know in order to accept a particular claim of causality? What, for example, do we have to reject regarding our current knowledge in geography, physics, biology, and anthropology in order to accept the Creationist claim that the world was created as described in the Bible a few thousand years ago? Similarly, how consistent are racist and sexist theories of intelligence with our current understanding of how genes work and how they are inherited from one generation to the next? However, as with the issue of plausibility, research that disagrees with established theory and knowledge is not automatically false. It may, in fact, force a reconsideration of accepted beliefs and principles. All currently accepted theories, including Evolution, Relativity, and non-Malthusian population ecology, were at one time new ideas that challenged orthodoxy. Thomas Kuhn has referred to such changes in accepted theories as "Paradigm Shifts". _Hill's Criteria of Causation
All of the preceding is by way of introduction to the phenomenon whereby science gets bogged down by complexity and by a confusion of logical levels -- or a failure to recognise "emergent phenomena" (PDF).
An example of this type of science failure is presented in a Wired.com article about a cholesterol drug which ended up making heart disease in patients worse, rather than better -- even to the point of killing some of them. This happens sometimes in medicine: all logic and data suggest that a treatment is most likely to be highly beneficial, but it ends up being worthless or worse.
The story of torcetrapib is a tale of mistaken causation. Pfizer was operating on the assumption that raising levels of HDL cholesterol and lowering LDL would lead to a predictable outcome: Improved cardiovascular health. Less arterial plaque. Cleaner pipes. But that didn’t happen.
Such failures occur all the time in the drug industry. (According to one recent analysis, more than 40 percent of drugs fail Phase III clinical trials.) And yet there is something particularly disturbing about the failure of torcetrapib. After all, a bet on this compound wasn’t supposed to be risky. For Pfizer, torcetrapib was the payoff for decades of research. Little wonder that the company was so confident about its clinical trials, which involved a total of 25,000 volunteers. Pfizer invested more than $1 billion in the development of the drug and $90 million to expand the factory that would manufacture the compound. Because scientists understood the individual steps of the cholesterol pathway at such a precise level, they assumed they also understood how it worked as a whole.
This assumption—that understanding a system’s constituent parts means we also understand the causes within the system—is not limited to the pharmaceutical industry or even to biology. It defines modern science. In general, we believe that the so-called problem of causation can be cured by more information, by our ceaseless accumulation of facts. Scientists refer to this process as reductionism. By breaking down a process, we can see how everything fits together; the complex mystery is distilled into a list of ingredients. And so the question of cholesterol—what is its relationship to heart disease?—becomes a predictable loop of proteins tweaking proteins, acronyms altering one another. Modern medicine is particularly reliant on this approach. Every year, nearly $100 billion is invested in biomedical research in the US, all of it aimed at teasing apart the invisible bits of the body. We assume that these new details will finally reveal the causes of illness, pinning our maladies on small molecules and errant snippets of DNA. Once we find the cause, of course, we can begin working on a cure.
...The truth is, our stories about causation are shadowed by all sorts of mental shortcuts. Most of the time, these shortcuts work well enough. They allow us to hit fastballs, discover the law of gravity, and design wondrous technologies. However, when it comes to reasoning about complex systems—say, the human body—these shortcuts go from being slickly efficient to outright misleading.
Consider a set of classic experiments designed by Belgian psychologist Albert Michotte, first conducted in the 1940s. The research featured a series of short films about a blue ball and a red ball. In the first film, the red ball races across the screen, touches the blue ball, and then stops. The blue ball, meanwhile, begins moving in the same basic direction as the red ball. When Michotte asked people to describe the film, they automatically lapsed into the language of causation. The red ball hit the blue ball, which caused it to move.
This is known as the launching effect, and it’s a universal property of visual perception. Although there was nothing about causation in the two-second film—it was just a montage of animated images—people couldn’t help but tell a story about what had happened. They translated their perceptions into causal beliefs.
...There are two lessons to be learned from these experiments. The first is that our theories about a particular cause and effect are inherently perceptual, infected by all the sensory cheats of vision. (Michotte compared causal beliefs to color perception: We apprehend what we perceive as a cause as automatically as we identify that a ball is red.) While Hume was right that causes are never seen, only inferred, the blunt truth is that we can’t tell the difference. And so we look at moving balls and automatically see causes, a melodrama of taps and collisions, chasing and fleeing.
The second lesson is that causal explanations are oversimplifications. This is what makes them useful—they help us grasp the world at a glance. For instance, after watching the short films, people immediately settled on the most straightforward explanation for the ricocheting objects. Although this account felt true, the brain wasn’t seeking the literal truth—it just wanted a plausible story that didn’t contradict observation.
This mental approach to causality is often effective, which is why it’s so deeply embedded in the brain. However, those same shortcuts get us into serious trouble in the modern world when we use our perceptual habits to explain events that we can’t perceive or easily understand. Rather than accept the complexity of a situation—say, that snarl of causal interactions in the cholesterol pathway—we persist in pretending that we’re staring at a blue ball and a red ball bouncing off each other. There’s a fundamental mismatch between how the world works and how we think about the world.
...Although modern pharmaceuticals are supposed to represent the practical payoff of basic research, the R&D to discover a promising new compound now costs about 100 times more (in inflation-adjusted dollars) than it did in 1950. (It also takes nearly three times as long.) This trend shows no sign of letting up: Industry forecasts suggest that once failures are taken into account, the average cost per approved molecule will top $3.8 billion by 2015. What’s worse, even these “successful” compounds don’t seem to be worth the investment. According to one internal estimate, approximately 85 percent of new prescription drugs approved by European regulators provide little to no new benefit. We are witnessing Moore’s law in reverse.
...Given the increasing difficulty of identifying and treating the causes of illness, it’s not surprising that some companies have responded by abandoning entire fields of research. Most recently, two leading drug firms, AstraZeneca and GlaxoSmithKline, announced that they were scaling back research into the brain. The organ is simply too complicated, too full of networks we don’t comprehend. _Wired
In the full article, the author, Jonah Lehrer, provides other examples where medical science has foundered on the rocks of complexity. He also draws on philosophical theories of causation -- particularly the ideas of David Hume -- to give the reader a better sense of the scale of the problem.
Most people have not thought deeply about cause and effect. For most purposes, such deep thinking is completely unnecessary -- and probably counter-productive. But if one wants to better understand what is happening when science butts its head against the wall -- as in the examples given by Jonah Lehrer -- such thinking becomes unavoidable. Here are a couple of web-based overviews which you may wish to look at after browsing through the Wikipedia entry "Causality":
A brief overview of philosophical ideas about causality from informationphilosopher.com
A look at the metaphysics of causation from Stanford Encyclopedia of Philosophy
Here is the problem that human science faces, as I see it: We do not truly understand the mechanisms of what is happening within us and around us at any scale, but we wish to. In order to understand these underlying mechanisms -- in the absence of a valid overarching theory -- we are forced to collect a large amount of data, which we can only correlate in fairly crude ways.
Even as our computational machines improve along with our methods of correlation, we must still face up to the fact that "correlation is not causation." And even as we approach theories, hypotheses, and explanations which appear to be valid on one logical level, we are liable to be completely stymied when these explanations fail on higher and more emergent levels.
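Those "fairly crude ways" matter more than they might seem. Here is a quick sketch of the multiple-comparisons trap: correlate enough pure-noise variables and "significant" relationships appear on schedule, with no causation anywhere:

```python
# 40 mutually independent noise variables, 100 observations each. Around 5%
# of the 780 variable pairs will clear p < 0.05 despite zero causation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.normal(size=(100, 40))

false_hits = sum(
    1
    for i in range(40)
    for j in range(i + 1, 40)
    if stats.pearsonr(data[:, i], data[:, j])[1] < 0.05
)
print(f"spurious 'significant' pairs: {false_hits} of {40 * 39 // 2}")
```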
The intractability of many problems in science has forced a reluctant, back-door acceptance, by parts of mainstream science, of some of the ideas of complexity, chaos, and paradoxical causality.
But most "bad science" of today is merely the failure of scientists to scrupulously stick to the rules of the scientific method. In other words, modern climate science is not unreliable and untrustworthy due to the chaotic nature of climate. Modern climate science is untrustworthy because the most powerful and best-connected of the group are willing to lie, obscure, strong-arm, and cover up the many weaknesses of their arguments and theories in order to enlarge their influence and power. That has everything to do with human weakness, greed, and immorality, and nothing to do with deep level difficulties in science.
Many observers of science -- and even many scientists -- feel that philosophy has been superseded by the power of modern science. But that is not actually true. In fact, the more powerful the science, the more it needs a sound philosophical underpinning. But that is easier said than done.
More on this topic -- including an attempt to clarify many of the most critical ideas in simpler language -- at a later date.
Labels: paradigm, philosophy, science